Resilient Urbanization: A Systematic Review on Urban Discourse in Pakistan
Urbanization is a common phenomenon in the modern world. It has come with new challenges, especially for developing countries. Such countries, therefore, have to stay ahead in their preparedness efforts to meet these urban issues halfway. Unfortunately, urban residents in Pakistan are living in serious social, physical, and economic hardships. Despite being economic engines, cities in Pakistan suffer from stresses like climate change, haphazard and unregulated expansion, housing shortage, and a lack of basic civic amenities. While using systematic review methodology, we collected published and grey data from national and international sources. Literature shows that successive governments in Pakistan gave ample space to urban development in most of the policy documents. However, urban resilience and community engagement were given scant attention. This major gap, both in policy and practice, needs to be bridged to promote resilient and sustainable urbanization in Pakistan.
Introduction
"At no time in human history have so many people lived in cities. Poor land-use planning; environmental management will increase risk and exacerbate the effects of natural disasters." Kofi Annan.
Urbanization is a complex socio-economic process that transforms the built environment, converting formerly rural areas into urban settlements, while also shifting the spatial distribution of a population from rural to urban areas. The major consequences of urbanization are an increase in the population size of urban settlements and in the number and share of urban residents compared to rural dwellers [1,2].
Urban resilience is the capacity of cities to act efficiently so that their residents and workforce, especially the most vulnerable people, survive and thrive in spite of the stresses or shocks they encounter in their everyday lives [3,4]. Similarly, 100 Resilient Cities, a not-for-profit organization of the Rockefeller Foundation, defines the term resilient urbanism as surviving and thriving, regardless of the challenge. The word resilience means "the persistence of relationships within a system" and "the ability of these systems to absorb the changes of state variables, driving variables, and parameters, as well as persist" [5,6].
Two distinctly evident but intertwined trends of the 21st century are rapid urbanization and frequent natural disasters [7]. A densely populated and interconnected world requires new models of governance to manage rapid urban growth and mitigate stresses such as extreme weather events, the refugee crisis, disease pandemics, and cyber-attacks [8]. Acute or sudden shocks such as earthquakes, hurricanes, and terrorist attacks are further exacerbated by chronic stresses such as recurrent flooding, high unemployment, and overtaxed or inefficient public transportation [8].
For this study, a rigorous secondary review was conducted [19]. The technique is appropriate for an exhaustive topic like resilient urbanization and urban policy discourse in Pakistan.
Both published and grey literature were searched and included in the study for analysis. Published literature, including national and international regulations, guidelines, models, and laws on urban resilience, was considered. All plans and policies, regardless of their origin or impact, specific or general, urban-oriented or rural-urban mixed, were studied and incorporated according to their relevance to cities. Grey literature such as unpublished reports, dissertations, and abstracts was carefully considered to minimize research bias and maximize validity [20,21].
A study protocol was developed before starting the review. The protocol contained the inclusion/exclusion criteria of data to be analyzed. It also had a list of databases, the methodology of data extraction, and summary writing.
Inclusion and exclusion criteria were mainly based on the research objectives. We used separate procedures for extracting data from national and international urban resilience and planning discourse. For the former category, we studied all national-level documents. These included Vision 2025, the Framework for Economic Growth 2011, the National Climate Change Policy 2012, the Constitution of Pakistan for legislation on local government, Census 2017 data, the Urban Unit Lahore, and the Orangi Pilot Project. For the latter category, only those urban resilience models and protocols were included which have been ratified or voluntarily adopted by Pakistan. Such adopted and ratified documents include, but are not limited to, urban data from the United Nations, the Asian Development Bank (ADB), the United Nations International Strategy for Disaster Reduction (UNISDR), Munich Re, research journals on resilient urbanization, the Sendai Framework, the Paris Agreement, and the Sustainable Development Goals (SDGs). The remaining global urban resilience models and precedents were identified using the key terms on the topic, followed by the selection of appropriate models using the snowball sampling method, considering their context and applicability in Pakistan.
The key terms for the search included "Pakistan", "sustainability", "disaster planning", "disaster preparedness", "urbanization", "resilient urbanization", "urban policy", "land use planning", and "resilience". The terms were identified using the research topic, research objectives, and reviews of the research results.
The selected literature was extensively perused to extract relevant data. For instance, the national development discourse contains the development plans of several sectors; however, we analyzed only those sections which contained terms like urbanization, urban planning, or resilience building. The rest of the chapters and sections were excluded. Likewise, data from global models were extracted using the same technique. Extracted data were synthesized descriptively for drawing comparisons and suggesting policy recommendations.
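As an illustration of this keyword-based screening step, the sketch below is a minimal, purely illustrative reconstruction; it is not the authors' actual tooling, and the document titles and section texts are hypothetical.
```python
# Minimal sketch of keyword-based section screening (illustrative only).
KEY_TERMS = [
    "pakistan", "sustainability", "disaster planning", "disaster preparedness",
    "urbanization", "resilient urbanization", "urban policy",
    "land use planning", "resilience",
]

def relevant_sections(document_sections):
    """Keep only the sections that mention at least one key term."""
    kept = []
    for title, text in document_sections:
        lowered = text.lower()
        if any(term in lowered for term in KEY_TERMS):
            kept.append((title, text))
    return kept

# Hypothetical example document with one irrelevant and one relevant section.
sections = [
    ("Agriculture outlook", "Crop yields and irrigation targets for 2020."),
    ("Creative cities", "Urbanization and vertical growth as engines of productivity."),
]
for title, _ in relevant_sections(sections):
    print("Extracted:", title)   # only "Creative cities" is kept
```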
A comparative study was then conducted between the national urban discourse and international urban resilience models and frameworks, followed by recommendations for national urban policymakers.
Urbanization in Pakistan
With 210 million people today, Pakistan will be the fourth most populous nation in the world by 2030. In the 1998 census, the urban share of the total population in Pakistan was 32%, which is expected to be over 50% by 2025 under the administrative definition. However, Reza Ali, an eminent Pakistani urban scholar, while using satellite imaging, claims that 70% of Pakistan's population is non-rural [22]. He clarified that it does not mean that 70% is urban; rather this percentage of the population is living in concentrated areas in or around some urban core.
Research reveals that Pakistan will achieve rural and urban population parity around 2050. However, the majority of Pakistan's population is already living in real urban areas or highly dense agglomerations, which are factually not rural. In any case, Pakistan is predominantly urban.
Widespread floods across Pakistan in 2010 and 2011 forced the permanent migration of farmers from rural to urban areas of Pakistan [23], thus, further accelerating urbanization in the country.
However, rapid growth in cities has made them unable to absorb, comfortably accommodate, and meaningfully employ rural-to-urban migrants. It has therefore exacerbated the social/ethnic tensions in the cities. The immense challenge for Pakistan is to resolve the existing urban problems and plan for reaping the full economic potential of urbanization [24].
Urban Challenges in Pakistan
Pakistan faces a challenging urban environment. For example, Karachi, one of the ten largest cities in the world with a population of around 20 million, confronts frequent power failures, water shortages, transport woes, the heat island effect, rising sea levels, ever-expanding unregulated informal settlements, urban flooding, choked drains, and an extremely poor solid waste management system. Similarly, Lahore has recently witnessed the worst smog in its history. The issue has given the city the unwanted distinction of being the third most polluted city in the world, behind only its next-door neighbor Delhi and the Central Asian Uzbek capital Tashkent [18]. Karachi and Lahore are the two largest and best-funded urban municipalities in Pakistan. The urban environment in other cities, except the federal capital Islamabad, is even worse than one could imagine.
In spite of these serious challenges and the recurrence of disasters, there is no disaster monitoring and early warning system in Karachi. Human and financial resource constraints, limited institutional capacities and coordination, and non-existing emergency operation centers have further deteriorated the city's fragile urban sustainability. A senior official of the Sindh government shared these thoughts at an international forum in September 2018 [25].
From the collected data, we found the following major urban challenges in Pakistan:
(a) Climate change: climate change is not only a big threat to urban resilience in Pakistan but also affects almost the entire world. It has both direct and indirect effects. Direct ones include storms, typhoons, and heatwaves, the inundation of coastal areas due to sea-level rise, temperature increases, and disturbances in rainfall patterns. Indirect ones in urban areas include severe flooding resulting in road blockages, blackouts, the risk of water- or vector-borne diseases due to the accumulation of rainwater in streets and low-lying areas, and rising temperatures, which ultimately lead to health and infrastructure losses. Damages from climate change resulted in US$1.7 trillion in global losses from 2000 to 2012 [3]. Similarly, EM-DAT (the emergency disaster database) reports that the total number of natural disasters per year increased from 78 in 1970 to 348 in 2004 [26]. Floods and their impacts are likely to increase in the future due to urbanization, land-use change, lack of regulations, and poor preparedness efforts [27]. Undoubtedly, South Asian countries will bear the major brunt of climate change. People move to cities in Pakistan presuming them to be safer against climate-related natural disasters [28]. However, overpopulation, congestion, and haphazard urban growth make urban areas dangerous as compared to the countryside. Pakistan's Planning Commission has also acknowledged that rapid urbanization and climate change reinforce the negative impacts of each other [24].
(b) Unregulated urbanization: in the last decade alone, low- and middle-income countries faced 53% of global disasters yet suffered 93% of the fatalities [29]. Such significantly polarized impacts on the developing world are largely because of unsafe and unregulated urban development [30], which turns natural hazards into disasters. In fact, global economic losses from natural disasters were estimated at USD 232 billion from 2000 to 2020 [31]. The global urban population is expected to rise from the present 50% to 66% by 2050, and 90% of this urban growth is expected to take place in Africa and Asia [32], where South Asia will top the list with major capital investment to build new houses to accommodate the burgeoning population. This housing growth will take place in cities with poor capacity to ensure risk-sensitive construction, putting the lives of vulnerable and poor people at higher risk from natural hazards [30]. The referred literature re-emphasizes the fact that haphazard urbanization is a major challenge in Pakistan.
(c) Housing shortage: research suggests that globally 1.4 million people are moving into urban areas every week. To meet the rising housing demand, humankind will build 1 billion new residential units by 2050, more than the houses built in the entire history of mankind [33]. The State Bank of Pakistan estimated an urban housing shortage of 4.4 million units in 2015. The five largest cities in Pakistan will have 78% of the total housing shortage by 2035. The Framework for Economic Growth (FEG) and Vision2025 explicitly acknowledge the housing crisis in Pakistan, attributing it mainly to horizontal urban growth. For example, the FEG provides a comparative example of Dubai and Pakistan. Figure 1 depicts that in Dubai, 0.2 million people live in 1 km², whereas in Pakistan the corresponding figure is merely 6 thousand. This shows that Dubai's urban density is 27 times greater than that of Pakistan [34].
According to the FEG, the reason for low urban density in Pakistan is the adoption of the "garden city" approach in the early years of independence. The absence of tower cranes, strict land regulation, and zoning policies stifled vertical urban growth and the development of downtowns while allowing Pakistani cities to develop large suburban sprawls [22]. Unfortunately, this policy continues unabated. The absence of high-rises does not mean that they are unfeasible in Pakistani cities: mixed-use high-rise development was the norm until the sixties, when the "garden city" paradigm promoted single-unit housing development [22]. In fact, multi-story buildings are common in big cities, demonstrating their commercial and structural feasibility. While acknowledging the housing shortage in Pakistan, successive governments devised plans like the FEG and Vision2025 to address the shortage. Lately, the Pakistan Tehreek-e-Insaf (PTI) government has also made an ambitious strategy to increase the availability of residential units in the country.
(d) Diminishing social capital: social capital is defined as trust, connectedness, and teamwork in a community. Unfortunately, inadequate and dilapidated public spaces in Pakistani urban areas, such as town squares, community centers, theaters, playgrounds, forums, shopping centers, and libraries, are the reason for reduced social capital in the country. The FEG recognized this important need in urban development, as it desired the availability of more public spaces while duly considering the context of high-rise and mixed-use construction [34]. Regrettably, the plan could not be materialized after the change of government in 2013.
(e) Inadequate spatial planning: disproportionate and outdated zoning laws have impeded the rational use of urban land for residential, commercial, and industrial needs. For instance, the best-planned city of Pakistan, Islamabad, has 55% of its land reserved for residential use but only 5% for commercial activity, which leads to unplanned and haphazard urbanization [34]. Similarly, big cities in Pakistan like Karachi, Lahore, and Faisalabad face exponential growth in slums and katchi abadis (shanty towns) without any basic municipal facilities. Such unplanned growth will ultimately lead to unsustainable and stunted economic growth [24].
(f) Ineffective building by-laws: enforcing building codes and land use planning is a prerequisite for dealing with mass disasters [35]. Unfortunately, they are not implemented in many developing countries, including Pakistan. For instance, poorly built buildings were severely damaged during the 2005 earthquake in Pakistan, in which thousands of people lost their lives under collapsed buildings [36]. Similarly, a 7.8 magnitude earthquake in Nepal in April 2015 took 9000 lives and destroyed built infrastructure [33]. In developed countries like Japan, earthquakes of a similar magnitude normally cause less damage due to resilient building infrastructure. The country's well-developed national laws, based on scientific research, engineering analysis, a framework for certification and inspection, professional and workforce training, building finance, and insurance, have reduced the risk of natural hazards.
(g) Urban water scarcity: industrialization, urbanization, and population growth, coupled with inefficiencies in water use, lead to groundwater depletion and the declining quality of surface water. Climate change aggravates these pressures [37]. In fact, water scarcity is a global issue: 78% of the world population will be facing physical and economic water scarcity by 2025 [38]. Pakistan's per capita water availability has already fallen from 5300 m³ in 1947 to less than 1000 m³ in 2016 [39]. Approximately 120 million people in Pakistan face severe water scarcity during at least part of the year [40]. National estimates suggest that in Punjab, the groundwater table has gone down by 15 to 20 feet in the last five to six years, whereas in Khyber Pakhtunkhwa, it has been falling by 6 to 21 inches every year [41]. Droughts, unexpected water supply interruptions, or dilapidated networks may further jeopardize the water resilience of urban areas, which may trigger social and ethnic tensions, especially in socially and ethnically diverse cities like Karachi and Peshawar.
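The arithmetic behind these per capita figures can be made explicit. The sketch below is purely illustrative: it uses only the two cited per capita values and assumes a roughly fixed renewable freshwater endowment, which is a simplification rather than a claim from the cited sources.
```python
# Illustrative arithmetic only, based on the per capita figures cited above.
PER_CAPITA_1947_M3 = 5300   # cubic metres per person per year in 1947 (cited)
PER_CAPITA_2016_M3 = 1000   # cubic metres per person per year in 2016, upper bound (cited)

decline = 1 - PER_CAPITA_2016_M3 / PER_CAPITA_1947_M3
print(f"Decline in per capita availability since 1947: about {decline:.0%}")  # ~81%

# With a roughly fixed endowment W, per capita availability is W / population,
# so the fall from 5300 to 1000 m3 implies population grew by roughly this factor:
implied_growth = PER_CAPITA_1947_M3 / PER_CAPITA_2016_M3
print(f"Implied population growth factor: roughly {implied_growth:.1f}x")
```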
Considering the increasing population and water disputes with India and Afghanistan, the neighboring countries from which Pakistan receives most of its freshwater inflows, per capita water availability will shrink further.
In short, these are only some of the major challenges Pakistani cities face at present. They will keep mounting as cities absorb surrounding localities and permanent rural-to-urban migrants into their ever-expanding geographical boundaries.
Urban Resilience Discourse
For the sake of convenience, the discourse on urban policy and planning is divided into two broad categories. The first covers nationally or indigenously drafted plans, policies, and actions for achieving resilient urbanization, whereas the second covers global urban plans, policies, and actions that are binding upon member nations or that they have adopted voluntarily for attaining urban resilience.
National Urban Discourse
Local governments carry out urban governance and management in Pakistan. They are protected under Articles 32 and 140-A of the constitution. All four provinces of Pakistan have their respective local-government-enabling legislation and ministries responsible for the implementation of urban policies. There are 2055 urban councils in Pakistan, which include one city district, four metropolitan corporations, 24 municipal corporations, 280 municipal committees, 148 town councils, and 1598 urban union committees. Additionally, 43 federally administered Cantonment Boards also function as local government in urban areas of Pakistan by providing allied municipal services to their residents [42].
Except for an urban unit in the Punjab, established in 2006 and converted in 2012 into a private company, the Urban Sector Planning and Management Services Unit Ltd., there is no national urban policy institute in Pakistan [22,42]. Rapid urbanization, bereft of an overarching urban policy, has stifled economic growth in Pakistan. This is probably the reason that country- and municipal-level governments are left to fend for themselves to reduce the damages associated with catastrophes [7]. Nevertheless, the Pakistani government has made urban development plans and policies, either exclusively or in combination with the overall national development plan. These are discussed below, especially in relation to urban development planning.
(a) Framework for Economic Growth: in the year 2011, the Planning Commission of Pakistan prepared the Framework for Economic Growth. In this document, reasonable emphasis was placed on making cities creative. The FEG aimed to spur economic growth while considering cities as engines of economic productivity. The idea of creative cities involved promoting mixed-use activities, encouraging energy efficiency, facilitating vertical growth, privatizing unproductive state-owned land, encouraging foreign land developers to compete in the Pakistani real estate market, and focusing on research and development in low-cost, energy-efficient construction techniques [34]. The FEG also elucidated some elaborate provisions for promoting the housing sector in Pakistan. For instance, it advised modernizing the land registration system in a centralized database, establishing a housing database (such as a price index and an access index) with the assistance of national organizations like the National Database and Registration Authority (NADRA) and the Federal Board of Revenue (FBR), registering property dealers, releasing unproductive state land, curtailing the growth of slums, and encouraging high-density, mixed-use urban development [34]. Even though never implemented, the FEG remained a benchmark for policymakers in the following years.
(b) Vision2025: the next major planning document of the Planning Commission was Vision2025.
It was introduced in the year 2014. It acknowledged the serious urban challenges in Pakistan. Vision2025 listed many measures for transforming Pakistani urban areas into the most advanced and creative cities so that they can be on par with the cities of the developed world. For instance, Vision2025 proposed creative, eco-friendly, and sustainable cities. The government also envisioned the availability of efficient mass transit systems, better security, zoning laws for 'mixed-use' areas, vertical rather than horizontal growth, meeting the housing shortage, the provision of adequate municipal services, developing pedestrian-friendly streets, the digitization of the land registration system, the maintenance and protection of heritage sites, and digitally intra- and inter-connected cities for real-time data sharing, so that cities in Pakistan become smart and creative in the future [24]. All measures were listed without any detailed implementation planning, except for mentioning the presence of an Urban Planning Unit at the Ministry of Planning, Development & Reforms.
Since urban councils are a provincial subject, except for the 43 federally administered Cantonment Boards and the capital Islamabad, it is yet to be ascertained whether any real urban development work or consultation has been undertaken with the provincial governments with regard to the implementation of Vision2025.
In the National Climate Change Policy 2012, the ministry acknowledged the serious impacts of climate change on urban areas. It proposed several policy measures for climate change adaptation and mitigation. In urban areas, town planning was made a prerequisite for adaptation to climate change. The policy also desired low-carbon emissions from human settlements with properly managed fuel and energy consumption [43].
For mitigating climate change impacts on urban areas, the respective municipal governments will introduce changes in town planning and building systems. For this purpose, the municipal bodies will build wastewater treatment plants, modernize solid waste management, cut carbon footprints via updated town planning, design zero-emission buildings through renewable energy technology, ensure "land use planning", encourage vertical rather than horizontal urban expansion, undertake the mapping and zoning of land for industrial areas, and make the installation of solar water heaters mandatory for commercial and public buildings [43]. However, the implementation of the policy is a big challenge, which has already been acknowledged by the ADB in its report on the climate change profile of Pakistan in the year 2017 [44]. Further research is also needed to establish whether the urban policy measures from the climate change policy were ever given any serious thought, and whether municipal bodies of various cities from different provinces were ever taken on board or consulted for implementing the policy recommendations.
A separate provincial project in Sindh aims at building water resilience in the province. The project is for building flood embankments along major riverine canals, constructing small recharge dams for protecting communities from torrential and flash flooding, and developing the capacity of the Sindh Irrigation Department through equipment upgrading and river morphological studies. Unfortunately, in this project, the researchers could not find a single urban-specific investment in any city of the province for building urban resilience in the most urbanized province of Pakistan. The reason for mentioning this project in this urban-specific literature is that it is the only project in Pakistan which mentions the term "resilience building".
The Urban Unit's mandate is to give policy advice and provide services to public and private sector organizations in the fields of housing, urban planning, urban transport, solid waste management, water and sanitation, urban economics, municipal finance, institutional development, capacity building, and urban services delivery improvement. The unit has extensively published a database on urbanization and urban issues, especially in the largest province of Pakistan, i.e., Punjab. Urban discourse on the Urban Unit website covers a wide range of topics like green spaces, transportation, and Pakistan's urban growth data in recent decades. It also publishes an Urban Geographic journal and organizes consultative discussions with national stakeholders on the latest issues like smog in Lahore.
(g) Naya Pakistan Housing Project: the Pakistan Tehreek-e-Insaf (PTI) government has initiated an ambitious plan to provide housing facilities to urban residents at reduced cost. The initiative, if executed as planned, will certainly reduce the housing deficiency in urban areas. The project will not only increase the social resilience of urban residents but also reduce slums or katchi abadis. However, the housing plan table of the proposed scheme shows that it has chosen the existing model of horizontal urban expansion rather than opting for the vertical path. The table (Figure 2) reveals that one-unit houses will form the largest share, followed by additions to existing stories, ground-plus-three, and mid-rise construction.
Considering the quality of construction prevalent in Pakistan, where multi-story buildings keep collapsing every other day, adding further stories to existing buildings will not only jeopardize the lives of existing residents but also endanger future inhabitants. High-rise construction has been given the last priority in the plan.
Another notable initiative is the Orangi Pilot Project (OPP) in Karachi. The Orangi slum is a cluster of 113 low-income settlements, housing 1.5 million people. The self-supported organization aimed to uplift the lives of the slum residents through five programs: low-cost sanitation, housing, health, education, and credit for micro-enterprise. The organization has three branches. The first is the Research and Training Institute, which manages low-cost sanitation, housing support, education, water supply, and women's saving programs. From this platform, the project succeeded in securing land tenure for 1063 goths (villages) by mid-2010 and provided loans to 100 houses on an annual basis [47]. The second is the Orangi Charitable Trust, which manages a micro-enterprise credit program. The third is the Karachi Health and Social Development Association (KHASDA), which runs a health program. The project was immensely successful as it improved the lives of a million residents.
Institutional Framework against DRR
Municipal and local bodies are directly responsible for the management of urban areas. However, there are some other federal and provincial government institutions which supplement the functioning of these urban governance bodies in disaster risk reduction.
The Pakistan Meteorological Department (PMD), established in 1947, provides information and early warnings on natural hazards including drought, floods, tropical cyclones, tsunamis, and seismic activity, as well as advisory services in the fields of planning and development, town planning, and infrastructure. It has four divisions based in different cities of Pakistan, two of which can contribute directly to urban resilience: the Flood Forecasting Division at Lahore and the National Seismic Monitoring and Tsunami Warning Centre at Karachi [44].
The National Disaster Management Authority (NDMA) was set up under the National Disaster Management Act 2010 to lay down guidelines and formulate plans, strategies, and programs on disaster risk reduction (DRR). The entire NDMA structure is composed of national, provincial, and district level bodies for DRR at their respective levels. The Ministry of Climate Change is the apex body that creates linkages between Climate Change Adaptation (CCA) and DRR [48].
Civil Defense, established under an Act in 1952, is another organization that works under the Ministry of Interior for the provision of defense and emergency services, especially in times of enemy attack. It is a semi-trained workforce that is activated in times of crisis [49].
The Global Change Impact Studies Centre (GCISC) is an autonomous institution dedicated to climate change impact studies. However, its work on urban resilience is little known and understood [50].
These government bodies are working independently for the overall safety of the national population regardless of their location and specific objectives. They are not directly involved in building urban resilience.
Globally Adopted Urban Resilient Policies and Plans
As a responsible member of the international community, Pakistan has participated in, contributed to, and adopted almost all global plans and treaties on urban sustainability, regardless of whether these documents were drafted at the global level (e.g., the United Nations platform), the continental level (e.g., the ADB), or the regional level (e.g., the South Asian Association for Regional Cooperation (SAARC)). The Pakistani government has wholeheartedly accepted them for achieving all enshrined goals for uplifting the living standards of its citizens. The country adopted and implemented the following globally specified urban plans. Pakistan has adopted the Sendai Framework for Disaster Risk Reduction (SFDRR) to achieve global targets such as reducing global disaster mortality by 2030, reducing the number of affected people, reducing economic losses due to disasters, enhancing international cooperation with developing countries for disaster risk reduction, and installing multi-hazard early warning systems and disaster risk information.
Pakistan Disaster Management Authority (PDMA) is responsible for the coordination and implementation of the framework in consultation with all stakeholders. A national consultative conference for localizing SFDRR in Pakistan was held in Islamabad in collaboration with the United Nations Development Program (UNDP) and UK Aid in February 2017 [52].
Global Risk Reduction and Urban Resilience Models
We reviewed some popular resilient city models and frameworks from different research papers and reports of international organizations, assessing their global effectiveness and practical applicability in Pakistan. These frameworks and models can be good templates for countries and local governments facing urban challenges. Some of them are described below:
(a) Resilient qualities: resilient cities usually have some powerful characteristics, such as being reflective, robust, redundant, flexible, resourceful, integrated, and inclusive in their systems.
Each one of them is explained below. Reflective: institutions and their allied stakeholders keep learning from experience with an adaptive planning mindset so that they can minimize the impacts of catastrophes. They should have dynamic standards for adapting to emerging challenges rather than relying on static solutions to shocks and stresses. Robust: city systems are designed and managed in a way that prevents catastrophes and anticipates system failures, enhancing the predictability of challenges and the security of cities.
Redundant: this means extra capacity to meet the demands of city residents if one system fails. For example, a city can have multiple sources of water or electricity supply; if one system fails to deliver for any reason, the next should be on standby to prevent interruption. Flexible: a changing and evolving city will continue adopting alternative strategies, in both the short and the long term, to respond to changing conditions. Resourceful: city stakeholders and managers should anticipate future urban challenges so that they can prioritize, mobilize, and coordinate all kinds of resources in case of extreme events or needs. Inclusive: an approach in which all urban communities, especially the vulnerable segments of the urban population, are consulted and engaged in building city resilience so that they have a feeling of ownership. Integrated: investment, decision-making, and city systems should be supportive of each other toward a common goal. They should be built in sync with one another and have information and feedback mechanisms for times of urgency.
(b) The Rockefeller Foundation City Resilience Framework: Arup, in its report on the City Resilience Framework, has listed eight key city functions that sustain a city's resilience. These include delivering basic human needs, safeguarding human life, protecting, maintaining, and enhancing physical assets, facilitating identity and relationships among people, promoting knowledge and information, defending the rule of law, justice, and equity, supporting livelihoods, and stimulating economic prosperity [4]. On the contrary, if a city has an unsafe and degraded environment, conflicts, deprivations, insecurity, or ill-health, it is considered not resilient and extremely vulnerable to shocks. The Arup report on city resilience is a comprehensive document based on data collected from cities across continents with diverse capabilities and resources to cope with disasters, which have faced a catastrophe in recent years. The foundation has developed a City Resilience Index with four broad categories, divided into 12 goals, subdivided into 52 indicators, and further into 156 variables.
(c) "Crunch Model", commonly called the "Pressure and Release Model": this model was developed by Oxfam. It helps in understanding and reducing disaster risk. The model in Figure 3 indicates that vulnerability (pressure), which is endemic in socio-economic and political processes, has to be dealt with (released) so that disaster risk can be reduced and the resilience of urban areas amplified. According to the disaster crunch model, a hazard is an unexpected event which affects vulnerable people. When the two elements, i.e., hazard and vulnerability, join in tandem, they affect marginalized people by bringing disaster. A hazard does not become a disaster if it strikes a resilient population. Likewise, a highly vulnerable community can stay safe from disaster if a triggering hazard stays away from the population [53]. Hence, a vulnerability pressure, which has roots in socio-economic and political processes, has to be addressed and released so that the risk of disaster can be minimized. The original model is not much different from the latest one, which is reasonably brief. According to the original model, people are vulnerable if they cannot forecast, withstand, and recover from a disaster.
The two-dimensional model has a vulnerability progression and hazards as major components.
The root causes of vulnerability are limited access to power, structures, and resources, together with weak political and economic systems. Dynamic pressures such as the lack of effective local institutions, training, and investment, as well as population growth, rapid and haphazard urbanization, deforestation, and soil degradation, merge with dangerous locations, dilapidated buildings and infrastructure, a lack of disaster preparedness, and endemically prevalent diseases. These factors are then combined with hazards such as earthquakes, fast winds, cyclones, hurricanes, landslides, drought, and volcanic eruptions to damage lives and livelihoods [53] (a compact formalization of this relationship is sketched after this list).
(d) Three levels of city resilience: the Asian Development Bank proposed this model in its report on climate-change-resilient cities. According to the ADB, a city's functional systems can bear shocks and stresses, whereas non-functional systems cannot. In functional cities, frequent stresses do not impinge on people's and organizations' everyday decision-making. Moreover, people's and organizations' capacity to fulfill their aims is continually supported by the cities' institutional structures [3].
(e) Risk reduction (resilience) model: this was proposed by Mehrota [54] and amended by UK Aid by adding a resilience dimension. According to the model, when any stress or challenging situation arises in an urban area, resulting from natural disasters, drought, smog, food shortage, or a concomitant increase in refugees, crime, and criminals, it tests the residents' vulnerability and resilience. Urban machinery such as institutions, civil society, the public, and local action groups can reduce both acute and chronic urban challenges.
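The core relationship of the crunch model can be written compactly. This is the formulation commonly used in the disaster risk reduction literature for pressure-and-release thinking, rather than an equation reproduced from the report cited above:
```latex
R \;=\; H \times V
```
where \(R\) is disaster risk, \(H\) the hazard, and \(V\) the vulnerability built up through the progression from root causes to dynamic pressures to unsafe conditions; releasing any link in that progression lowers \(V\), and with it \(R\), even when the hazard itself cannot be controlled.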
Disaster Risk Reduction and Relief Projects-Global Examples
Many global organizations like the United Nations, USAID, and the Department for International Development (DFID) are engaged in relief and rescue operations all around the world. Their purpose is to help developing countries and communities recover from disasters. These organizations have accumulated valuable experience and research data while working in different geographical and ethnic areas. Their experiences and project models are highly useful as they can be implemented anywhere in the world with minor modifications as per local needs. Some useful examples of their relief projects are given below:
(a) The Katye Neighborhood Upgrading and Recovery Program: a devastating earthquake of 7.0 magnitude struck Port-au-Prince, the capital of Haiti, on 12 January 2010. The earthquake affected 3 million people [55]. Ravine Pintade was among the hundreds of informal settlements badly affected by the disaster. Almost 90% of the residents were affected, and infrastructure was badly damaged and became inaccessible [56]. A project funded by USAID and the Office of US Foreign Disaster Assistance (OFDA) was initiated to provide relief and recovery for the residents of Ravine Pintade. The goal was to meet the basic humanitarian needs of those affected and displaced by the earthquake and to provide safe and habitable neighborhoods along with essential services [57]. The Katye pilot program aimed to ensure expert engagement, community participation, and coordination among government agencies in facing natural disasters. The project was different from others in that it directly engaged with the affected households to rehabilitate their original neighborhoods rather than shifting them to camps and greenfield construction. Within ten months of starting in November 2010, it succeeded in providing health, shelter, livelihoods, debris removal, and water, sanitation, and hygiene (WASH) services. Evaluations found that the project generated substantial trust and mutual understanding between the communities and implementing agencies. Thus, Katye was a notable success [57].
(b) Barrio Mio: Barrio Mio (My Neighborhood) was also a disaster risk reduction (DRR) project.
It was funded and managed by Project Concern International (PCI) and OFDA, respectively.
The project was carried out in 17 vulnerable settlements of Mixco city, Guatemala, in three phases starting from 2012 [58]. The first phase ran from October 2012 to March 2015, the second from April 2015 to October 2017, and the third and last phase was expected to end in September 2020. The objectives of the project were to identify, pilot, and scale solutions to strengthen high-risk urban informal settlements, improve emergency response to disasters, and convert vulnerable areas into safer, healthier, and more resilient neighborhoods. It brought together 40 stakeholders ranging from national and local governments to academic institutions, the private sector, and community participants.
The strategy was to influence, advocate for, and effect change, reducing risk for people from the lower to the higher scales of governance.
The basic approaches of Barrio Mio for addressing the vulnerability of precarious urban settlements and their inhabitants included participatory risk and vulnerability mapping and enumeration, identifying and piloting innovative shelter and water and sanitation retro-fitting solutions, women's empowerment groups, disaster response, and supporting and strengthening the institutional capacity of local councils [58]. Initiated in 17 precarious settlements, the project will expand its operation by including more settlements to strengthen their resilience to disasters.
The foregoing sections comprehensively discussed the national urban discourse and international urban resilience models and practical examples. The following section will carry out a comparative analysis of the national and international urban discourse.
Comparative Analysis
Almost all of the mentioned national development plans desire urban development. The kind of development includes upgrading city infrastructure, the provision of civic and municipal services, developing and continuously updating urban zoning plans, and improving urban air quality. However, none of them sets resilient urbanization as a goal for mitigating urban challenges. The term urban resilience is scarcely used in any of these plans and policies. This is the biggest shortcoming in the urban literature, considering the serious and widespread urban challenges in Pakistan.
Nevertheless, the absence of the term in national development literature does not mean that the government has ignored building urban resilience. The federal and provincial governments have taken many urban development initiatives in recent years in various urban areas to modernize cities and improve the livelihoods of urban residents. For instance, the federal government, in collaboration with the Punjab government, has developed a metro bus system for the twin cities of Islamabad and Rawalpindi. Similarly, urban mass transport systems have also been made operational in Lahore and Peshawar, whereas in Karachi the Green Line urban transport system is in the final stages of construction. Another major milestone in mass transit, the Orange Line metro train, has also been made operational in Lahore in collaboration with the Chinese government. Regardless of these efforts, public mobility demand far outweighs supply. It shows that much more needs to be done on every front to combat urban poverty and improve the miserable living conditions of urban residents.
Moreover, the national urban discourse confirms that the government and municipal agencies certainly desired urban resilience even if they did not explicitly mention the term resilience in their development plans. However, cities could have benefited more if disaster resilience and risk reduction were included in development plans.
For example, the federal and provincial governments could have directed their respective municipal bodies to identify vulnerable communities and address their susceptibility. Most migrants in cities live in low-lying, dangerous localities around the cities, and these slums lack basic urban amenities like clean water, a sewerage system, paved streets, and health care facilities. By engaging with slum communities in line with the OPP, the Katye neighborhood project, and Barrio Mio, the municipal bodies could have lessened the burden on strained civic services.
Unfortunately, national development plans rarely desired or proposed community participation. The community is a strong stakeholder. Empowering and including the public in urban development plans will not only help the government to swiftly and efficiently implement its urban reform programs but also promote a sense of ownership of the towns. For instance, the Phnom Penh Water Supply Authority (PPWSA) shows that communities can be effective planners and regulators which is an essential aspect of successful and resilient urbanization [59]. The success of PPWSA in the provision of clean and safe water to more than a million people in Phnom Penh is a result of the combination of public sector activism and community-level participation [59].
Likewise, regular monitoring is necessary for the execution of planned projects. However, one can easily guess from the ubiquitous urban challenges in Pakistan that these rosy plans might have been ignored altogether or implemented piecemeal and haphazardly, leaving the cities to deteriorate further. In Pakistani cities, poorly executed projects like the Peshawar metro show that monitoring and evaluation mechanisms are missing.
Table 1 summarizes all the plans, policies, frameworks, and models discussed in the previous sections. The section ends with a discussion of the table.
It is apparent from the table that Pakistani urban discourse is mostly development oriented. Although it does promote creative and eco-friendly cities, energy efficiency, saving precious land by going vertical, empowering the community, developing mass transit systems, and addressing housing shortages, it drastically differs from the global urban resilience-building models cited in the table. For instance, global resilience discourse uses terms like disaster anticipation and risk reduction. It also proposes developing alternatives to urban civic services such as water supply networks, electricity, and gas. Likewise, releasing socio-economic pressures and taking risk reduction measures like identifying and removing dangerous buildings, eradicating endemic diseases, and controlling population growth are common themes in global urban resilience-building literature. Such efforts are lacking in national urban development initiatives despite Pakistani cities facing frequent climate-related and other disasters.
Lastly, disaster risk reduction jargon is completely missing from the national urban discourse. On the contrary, international discourse is replete with terms like disaster risk reduction, disaster anticipation, risk identification, and mapping vulnerable communities. The Pakistani government must address this lacuna in future urban planning discourse.
In short, the national urban literature uses a top-down and passive approach, whereas the global resilience-building discourse conforms to a bottom-up and active approach. For instance, the former uses the term community empowerment while the latter desires community engagement. A passive approach cannot be sustainable in the long run as it has to rely on federal or provincial governments for financial and technical support. In an active approach, the cities and their allied institutions have to develop their own intrinsic and inherent abilities to run urban affairs. Because strong and dynamic institutions are vital for resilient and sustainable urbanization, it is important to minimize the impacts of the extensive human footprint on the planet and environmental degradation so that our cities can be amenable to productive living [60].
The key terms summarized in Table 1 for the global resilience discourse include: reflective cities, adaptive planning, innovative solutions against shocks and stresses, disaster anticipation and prevention, urban security, additional water and electricity supply capacities, mobilization and coordination of existing resources, community engagement, promoting a sense of ownership, delivering human needs, knowledge promotion, releasing socio-economic and political pressures, reforestation, identification of dangerous buildings, eradicating endemic diseases, increasing income levels, controlling population growth, dynamic and strong urban institutions, risk identification, participative mapping, piloting innovative shelter, water and sanitation retro-fitting, and women's empowerment groups.
Recommendations
Pakistani cities are beset with immense urban challenges. Therefore, developing urban resilience is paramount to avert disasters. Among the reviewed models, the researchers have found the "Pressure and Release Model" most relevant for the country. The model mentions the majority of urban issues Pakistani cities face at the moment. For example, limited access to power, weak political and economic systems, high population growth, ineffective local institutions, haphazard urbanization, deforestation, and endemically prevalent diseases are common urban issues in most Pakistani cities. These pressures and their ilk are either specifically mentioned in the "Pressure and Release Model" or indirectly referred to for their resolution. Indeed, all these issues seriously afflict the marginalized segment of urban residents. Once urban pressures are released, fringe urban residents will be able to bear shocks and recover quickly from extreme events.
However, singling out the "Pressure and Release Model" does not mean that other resilience models do not relate to Pakistan or that the researchers have found them irrelevant in this context. For instance, achieving resilience qualities such as cities being reflective, redundant, flexible, resourceful, robust, integrated, and inclusive is equally important. Urban planners and municipal managers must consider these aspects in the development and management of cities. Similarly, the risk reduction model of UK Aid would, in effect, be implemented in urban areas if the "Pressure and Release Model" were adopted in its entirety. With the implementation of the risk reduction model, cities would also achieve most of the resilient qualities mentioned above. In fact, most of these models are interlinked in the sense that they aim to build sustainability, reduce risk, and minimize the impacts of extreme events. Achieving the objectives of one model means that the goals of other models have also been realized.
Furthermore, several risk reduction examples and precedents have been found to be extremely useful in addressing community vulnerability in urban areas around the world. These precedents are mentioned in the paper and recommended for implementation in Pakistan at suitable places, depending upon the available financial resources and physical modalities. For example, the negative consequences of haphazard urbanization, natural hazards, as well as physical and psychological insecurities can be prevented by strengthening urban residents' capacity to anticipate, resist, cope with, and recover from natural hazards [61]. Risk management in urban areas may be considered a high priority for the government, considering the role of cities in the national economy and as centers of intellectual learning, business, and financial activity [62]. It can be achieved by building risk-reducing infrastructure and services, such as drainage systems, waste collection, sanitation, emergency services, and health care facilities [35]. Enhancing individual capabilities, expanding service networks, and connecting communities with national institutions, along with taking soft measures such as new regulations, technology and information systems, and social networks, are equally important. Engaging multiple stakeholders from various socio-economic groups and economic interests across different sectors, i.e., government, businesses, civil society, and academia, for transformative changes may also be considered a real action plan. Understanding the connected systems located beyond city boundaries, which interact economically, physically, ecologically, and politically, is also necessary for an effective and coordinated response in building urban resilience.
Furthermore, inclusive future planning around today's needs, such as water supply and urban drainage, will bring future scenarios into current decision-making. In addition, tapping local expertise is equally important. For instance, the involvement of academics and key informants in bringing quality engagement and long-term adaptive planning capacity can also be considered. Likewise, focusing on vulnerable communities by identifying, empowering, educating, and engaging them in decision-making will help them and their cities to be more resilient [3].
The World Bank Group and the Global Facility for Disaster Risk Reduction (GFDRR) are partnering with governments, the private sector, and civil society for building resilient buildings. The program studies the best global building construction practices such as those of Japan while emphasizing the cultural, economic, and social factors for developing region-specific building codes for a resilient future.
For example, in Northern Pakistan, the dhajji dewari approach (a timber and stone earth construction practice developed over centuries) can be a cost-effective and resilience-building practice for local needs. Many similar construction techniques can be studied and implemented at the local level for resilient homes [33]. Such affordable and resilient homes will minimize the impacts of disasters on urban areas.
In addition, when it comes to owning a house, affordability is the major issue for low-income families. For helping such households, private developers should allocate at least 50% of the land for poor communities. The remaining land can be equally divided between medium-income and high-income groups at actual and inflated prices to compensate for the loss of providing land to poor families at discounted rates [46].
Many well-documented resilience-promoting tools have been developed over the years, such as facilitating preparedness, planning effectively against challenges, and raising public awareness. The role of international, national, and local governments, civil society, and the private sector is crucial for achieving urban resilience. The public availability and effective communication of data increase the prospects for planning and preparedness. Financial readiness is equally vital, along with political commitment: micro-insurance and micro-finance instruments such as catastrophe bonds, which provide liquidity in times of crisis, together with country-level financing, will help reduce public sector stress [35].
Last but not the least, cities should adopt an iterative, inclusive, and integrated planning process for urban analysis. Understanding how the city works and analyzing its present population pressure and future population and economic growth projections, and vulnerability analysis are key for building robust and resilient cities. Identifying the most vulnerable segment of the urban population especially those residing in low-lying areas, flood plains, and highly congested slums, who will be exposed to urban pressures and have limited coping capacity with which to weather the impending impacts, is critical for their survival in times of crisis [3].
Cities are diverse and dynamic, and these recommendations have geographical and chronological constraints: a measure that is relevant today in a certain city might be outdated tomorrow. Therefore, urban planners and managers must keep abreast of ever-emerging challenges and their allied solutions.
Conclusions
In spite of witnessing rapid urbanization and the economic potential of cities, Pakistan's economic and development policies are still fixated on agriculture. Soon, the majority of the population will begin to live in cities. Without efficient, effective, and functional systems, cities in Pakistan will be a liability, jeopardizing the economic sustainability of the country. It is therefore imperative for national leaders to prioritize resilient urban development for sustainable economic growth.
The discussed urban literature reveals diverging trends between global and Pakistani urban policies. For instance, global urban resilience models are focused on reducing disaster risk, strengthening institutions, building alternatives to civic services, and promoting disaster preparedness. On the contrary, Pakistani urban discourse is focused on the urban development paradigm. Development without sustainability is short-lived. The country's urban landscape is yet to develop a sound disaster risk reduction plan, which is extremely important considering the urban challenges and rapidly unfolding climate change impacts on South Asia.
Empowering the community and engaging with it have different connotations: the former is an end-product, whereas the latter is a self-sustaining process for building urban resilience. In Pakistan, Dr. Akhtar Hameed pioneered this innovative method of community engagement in self-development. Unfortunately, the lack of state patronage for his model deprived urban residents of better civic services. The same approach was implemented by international aid agencies in the Barrio Mio and Katye neighborhood upgrading and recovery projects with astounding success, which suggests that such projects could have been replicated in other Pakistani cities with government backing.
Finally, releasing urban pressures is paramount for minimizing the impacts of extreme events. People migrate towards cities to make their dreams come true, yet many end up living a nightmarish life. Government and municipal bodies must therefore help urban residents live a life full of capabilities, and this cannot be achieved without residents' active involvement. | 2020-12-17T09:13:58.391Z | 2020-12-14T00:00:00.000 | {
"year": 2020,
"sha1": "f76552a573fda326e3883270b49da9312fc0c0cf",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2413-8851/4/4/76/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "49b472261ab674e21fb201b5c2960bd214ba9444",
"s2fieldsofstudy": [
"Sociology",
"Geography",
"Environmental Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
13869356 | pes2o/s2orc | v3-fos-license | Integral Relations for Twist 2 and Twist 3 Contributions to Polarized Structure Functions
We discuss the relations between the twist 2 and twist 3 contributions to polarized deep-inelastic scattering structure functions both for neutral and charged current interactions which are predicted by the operator product expansion in lowest order in QCD.
INTRODUCTION
In the case of polarized deep inelastic scattering the cross sections depend on (up to) three unpolarized structure functions $\{F_i(x,Q^2)\}_{i=1}^{3}$ and five polarized structure functions $\{g_j(x,Q^2)\}_{j=1}^{5}$ in the limit of vanishing fermion masses. The lowest twist contributions are those of twist 2 for the structure functions $F_i(x,Q^2)$ and $g_{1,4,5}(x,Q^2)$. The structure functions $g_2(x,Q^2)$ and $g_3(x,Q^2)$ contain twist 3 terms as well [1,2]. In this note we give a summary of the relations between the twist 2 and twist 3 contributions to the different structure functions in lowest order in QCD. We also comment on a sum rule which has been derived recently in ref. [3].
TWIST 2
For the twist 2 contributions to the structure functions one may seek a partonic interpretation. However, in lowest order in QCD only two generic parton combinations exist to describe the polarized structure functions, Eq. (1). [1) To appear in: Proc. of the 5th Int. Conference on Deep Inelastic Scattering, Chicago, April 1997.] The remaining three structure functions are therefore related to this basis by three linear operators. Two of the corresponding relations have been known for several years already, the Dicus relation [4] and the Wandzura-Wilczek relation [5]; the third relation has been found only recently in ref. [6]. Eqs. (2)-(4) can be obtained either by analyzing the polarized structure functions by means of the operator product expansion [6,2] or by applying the covariant parton model [7], cf. [6] 2) . In the operator product expansion they result from equating different expressions for the matrix element $a_n$ of the symmetric part of the quark operators in lowest order QCD, see e.g. [2] 3) . These relations between the different contributions to the longitudinal and transverse spin projections of the hadronic tensor are illustrated in Figure 1. Whereas the corresponding parts of $W_{\mu\nu}$ and $W^{\perp}_{\mu\nu}$ are connected by integral relations, those acting within either part are just multiplications by a factor. For the valence parts this holds as well for the two contributions to $W^{\perp}_{\mu\nu}$.
2) A derivation of eq. (3) using the latter method was given in refs. [8] before.
3) The lowest moments of $g_i(x)|_{i=1}^{5}$ were studied in ref. [9] and agree with the corresponding relations derived directly from eqs. (2)-(4).
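For orientation, the Dicus relation and the Wandzura-Wilczek relation referred to above are commonly quoted in the following forms; these are the standard literature expressions, reproduced here for convenience rather than copied from Eqs. (2)-(3) of this note:

$$ g_4(x,Q^2) = 2x\, g_5(x,Q^2), \qquad g_2^{\tau=2}(x,Q^2) = -\,g_1(x,Q^2) + \int_x^1 \frac{dy}{y}\, g_1(y,Q^2). $$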
TWIST 3
For the twist 3 contributions to the structure functions $g_2(x,Q^2)$ and $g_3(x,Q^2)$, which emerge in the different neutral and charged current reactions, the operator product expansion [2] implies the relation given in Eq. (5). It results from equations between differences of the matrix elements $d_n$ of the non-symmetric part of the quark operators in lowest order QCD. Since one may express the twist 3 contributions to $g_2$ and $g_3$ through eqs. (6,7), the relation can be rewritten, by analytic continuation in $n$, as a relation between twist 3 contributions only.
Recently, a sum rule for the valence parts of the structure functions $g_1(x,Q^2)$ and $g_2(x,Q^2)$ was discussed in ref. [3] 4) . We would like to investigate the relation of eq. (9) to the operator product expansion. Here one first meets the problem that the valence parts $g_1^V(x)$ and $g_2^V(x)$ cannot be isolated from the complete structure functions for electromagnetic interactions, and a formulation of eq. (9) with the help of the local operator product expansion is thus not straightforward. On the other hand, one may consider the relation given in eq. (10). [4) This sum rule was found first in ref. [10] for a specific flavor combination.]
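For orientation only, a sum rule for the valence parts of $g_1$ and $g_2$ of the type discussed in ref. [3] is usually written in the literature in the Efremov-Leader-Teryaev form; the expression below is quoted from that general literature and is not necessarily identical to eq. (9) of this note:

$$ \int_0^1 dx\, x \left[ g_1^V(x,Q^2) + 2\, g_2^V(x,Q^2) \right] = 0. $$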
Eq. (10) results from eqs. (61,62) of ref. [2] in the charged current case. It is easily seen that the left-hand side of eq. (10) includes only valence quark contributions, and one may even rewrite eq. (10) for individual quark flavors separately, denoting the valence parts of the corresponding matrix elements by $a_n^{Vq}$ and $d_n^{Vq}$, respectively. For the first moment one obtains eq. (11). The right-hand side of eq. (11) vanishes in the case of massless quarks because of eq. (12); for the latter equation, see [11]. Here $h_1^q(x)$ and $\bar{h}_1^q(x)$ are the quark and antiquark transversity functions, respectively, which can be measured in the Drell-Yan process. The right-hand side of eq. (12) vanishes in the limit $m_q \to 0$, which yields $d_1^{Vq} = 0$. Due to this, eq. (12), similar to the case of the Burkhardt-Cottingham sum rule [12], is not described by the operator product expansion, but is formally consistent with it. | 2014-10-01T00:00:00.000Z | 1997-06-01T00:00:00.000 | {
"year": 1997,
"sha1": "813bf859f927927d8c924aab7db3d9a842acfa91",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "4a69ff59986aadd25f6217ca6e479f8fed7b03fc",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
25026340 | pes2o/s2orc | v3-fos-license | Functional RsaI/PstI Polymorphism in Cytochrome P450 2E1 Contributes to Bladder Cancer Susceptibility: Evidence from a Meta-analysis
Bladder cancer was the ninth most common malignancy and the thirteenth most common cancer-related cause of mortality in the world, which was a complex disorder with both environmental and genetic influences (Parkin, 2008). The major environmental factors included tobacco smoking and occupational exposures (Clapp et al., 2008; Strope and Montie, 2008), which could cause DNA damage, such as cross-links, bulky adducts and single or double strand breaks resulting in unregulated cell growth and even cancer (Johansson et al., 1990; Hoeijmakers, 2001; Yue et al., 2009). Nevertheless, only a small proportion of the individuals exposed to these environmental factors eventually developed bladder cancer, indicating that host genetic factors may play an important role in bladder carcinogenesis (Taioli and Raimondi, 2005). A few gene polymorphisms associated with bladder cancer risk have been identified. Metabolizing enzymes were involved in the bioactivation and detoxification of xenobiotics,
Introduction
Bladder cancer was the ninth most common malignancy and the thirteenth most common cancer-related cause of mortality in the world, which was a complex disorder with both environmental and genetic influences (Parkin, 2008). The major environmental factors included tobacco smoking and occupational exposures (Clapp et al., 2008;Strope and Montie, 2008), which could cause DNA damage, such as cross-links, bulky adducts and single or double strand breaks resulting in unregulated cell growth and even cancer (Johansson et al., 1990;Hoeijmakers, 2001;Yue et al., 2009). Nevertheless, only a small proportion of the individuals exposed to these environmental factors eventually developed bladder cancer, indicating that host genetic factors may play an important role in bladder carcinogenesis (Taioli and Raimondi, 2005). A few gene polymorphisms associated with bladder cancer risk have been identified. Metabolizing enzymes were involved in the bioactivation and detoxification of xenobiotics, particularly the cytochrome P450 2E1 (CYP2E1), and its polymorphisms might be associated with bladder cancer risk (Gonzalez, 2005).
The CYP2E1 gene, located on chromosome 10q26.3, is a member of the CYP450 superfamily and is constitutively expressed in various organs and tissues, including urothelial cells (Sheweita et al., 2001). The enzyme it encodes is a key ethanol-inducible enzyme in the metabolic activation of many low-molecular-weight carcinogens, such as vinyl chloride, benzene, and tobacco-specific nitrosamines (Guengerich et al., 1991; Yamazaki et al., 1992). Of the many known CYP2E1 genetic polymorphisms, the RsaI and PstI polymorphisms in the 5'-flanking region are in close linkage disequilibrium and affect the transcriptional activation of the gene (Hayashi et al., 1991). The wild-type allele (C1) and/or the less common mutant allele (C2) of the CYP2E1 RsaI/PstI polymorphism have been reported by meta-analyses as conferring higher risk of developing liver, esophageal, and lung cancer (Wang et al., 2009; Leng et al., 2012; Tian et al., 2012). Therefore, the CYP2E1 RsaI/PstI polymorphism was also considered a candidate risk factor for bladder cancer.
The association of the CYP2E1 RsaI/PstI polymorphism with bladder cancer susceptibility has been extensively studied (Brockmoller et al., 1996; Choi et al., 2003; Mittal et al., 2005; Shao et al., 2008; Cantor et al., 2010; Basma et al., 2013). However, these studies yielded contradictory results, with some showing a significant association (Choi et al., 2003; Cantor et al., 2010; Basma et al., 2013), while others did not, even in the same population (Brockmoller et al., 1996; Mittal et al., 2005; Shao et al., 2008). The inconsistent results might stem from single studies with relatively small sample sizes, which have lower statistical power to detect the overall effects. Therefore, a quantitative synthesis of the combined data from different studies was necessary to estimate the association between the CYP2E1 RsaI/PstI polymorphism and bladder cancer risk. In our study, we performed a systematic review and meta-analysis of the currently available literature to clarify the relationship between the CYP2E1 RsaI/PstI polymorphism and bladder cancer risk.
Identification and eligibility of relevant studies
We conducted a comprehensive search in the PubMed, Medline, Embase, and Web of Science databases for all literatures about the association between CYP2E1 RsaI/ PstI polymorphism and bladder cancer (updated on Feb, 2014). Search term combinations were as follows: (Cytochrome P4502E1 or CYP2E1), (polymorphisms or SNPs or mutation or variant or variation) and (bladder cancer or bladder neoplasm or bladder tumor). All reference lists from the main literatures and relevant reviews were hand searched for additional eligible studies. Only those studies assessing the association between the CYP2E1 RsaI/PstI polymorphism and bladder cancer risk were included in this meta-analysis: (1) Case-control studies (retrospective or nested case-control); (2) Only English language articles reporting human studies were considered; (3) Studies with available data for estimating odds ratios (ORs) and the 95% confidence interval (CI); (4) For duplicated publications, only the study with the largest sample numbers was included; (5) We did not define a minimum number of cases or controls in the meta-analysis.
Data extraction
Information was independently extracted from all eligible publications by two investigators (Deng XD and Qin Gao) according to the inclusion criteria. The original extraction data were checked by Ma Y, and in case of disagreement, an agreement was reached after a discussion. For each of the eligible case-control studies, the following data were recorded: first author's last name, year of publication, ethnicity, country, number of cases and controls, number of different genotypes in cases and controls, Hardy-Weinberg equilibrium (HWE), genotyping methods, matching criteria. The main data of eligible studies are presented in Table 1. Different ethnicity descents were categorized as Asian, Caucasian, and African.
Statistical analysis
HWE was assessed by Fisher's exact test, and a P value less than 0.05 was considered significant. Summary ORs with corresponding 95% CIs were used to evaluate the strength of the association between the CYP2E1 RsaI/PstI polymorphism and bladder cancer risk under different genetic models, including the heterozygote (C1C2 vs C1C1), homozygote (C2C2 vs C1C1), dominant (C2C2/C1C2 vs C1C1), and recessive (C2C2 vs C1C1/C1C2) models. Cochran's Q test and the I² statistic were used to assess statistical heterogeneity among studies (Higgins and Thompson, 2002). Pooled OR estimates were calculated with the fixed-effects model (Mantel and Haenszel, 1959) or the random-effects model (DerSimonian and Laird, 1986), depending on the heterogeneity: the fixed-effects model was adopted when the studies were found to be homogeneous (P_Q > 0.1 and I² < 50%); otherwise, the random-effects model was applied. Subgroup analysis was conducted by ethnicity. Sensitivity analysis was carried out to assess the stability of the results. Publication bias was assessed by Begg's funnel plot and Egger's regression asymmetry test. All statistical tests were performed with Stata software, version 11.0 (STATA Corp, College Station, TX).
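As an illustration of this pooling procedure (a sketch, not the authors' Stata workflow; it uses inverse-variance fixed-effects weighting rather than the Mantel-Haenszel weights cited above, and the study counts are hypothetical), the snippet below computes study-level log ORs from 2x2 counts, derives Cochran's Q and I², and switches to a DerSimonian-Laird random-effects estimate when heterogeneity is substantial.

```python
import numpy as np
from scipy import stats

# Hypothetical per-study 2x2 counts:
# [cases_exposed, cases_unexposed, controls_exposed, controls_unexposed]
studies = np.array([
    [45, 120, 60, 150],
    [30,  90, 25, 110],
    [55, 200, 70, 240],
], dtype=float)

a, b, c, d = studies.T
log_or = np.log((a * d) / (b * c))       # per-study log odds ratio
var = 1 / a + 1 / b + 1 / c + 1 / d      # Woolf variance of the log OR
w = 1 / var                              # inverse-variance weights

# Fixed-effects pooled estimate
fe = np.sum(w * log_or) / np.sum(w)
fe_se = np.sqrt(1 / np.sum(w))

# Heterogeneity: Cochran's Q and I^2
Q = np.sum(w * (log_or - fe) ** 2)
df = len(log_or) - 1
p_q = 1 - stats.chi2.cdf(Q, df)
i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# DerSimonian-Laird between-study variance and random-effects estimate
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_re = 1 / (var + tau2)
re = np.sum(w_re * log_or) / np.sum(w_re)
re_se = np.sqrt(1 / np.sum(w_re))

# Model choice mirroring the rule in the text: fixed if P_Q > 0.1 and I^2 < 50%
est, se = (fe, fe_se) if (p_q > 0.1 and i2 < 50) else (re, re_se)
ci = np.exp([est - 1.96 * se, est + 1.96 * se])
print(f"pooled OR = {np.exp(est):.2f}, 95% CI = {ci[0]:.2f}-{ci[1]:.2f}, "
      f"Q = {Q:.2f} (P = {p_q:.3f}), I^2 = {i2:.1f}%")
```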
Main results of meta-analysis
The main results of the meta-analysis and heterogeneity tests are listed in Table 2. A significant association between the C2 carrier genotypes and a decreased risk of bladder cancer was found in Caucasians (Figure 2), but not in Asians.
All I² values for heterogeneity were less than 50% and all P_Q values were greater than 0.10 in the overall analysis; however, significant heterogeneity was found in the Caucasian subgroup under the dominant genetic model (I² = 57.6%, P_Q = 0.094, P_OR = 0.077, OR = 0.598, 95% CI = 0.338-1.058) (Table 2). Because only 3 studies involved Caucasians, we were unable to explore the sources of this heterogeneity.
Sensitivity analysis and Publication bias
Sensitivity analysis was carried out by sequential omission of individual studies. The significance of the summary ORs was not excessively influenced by omitting any single study under the different genetic models, indicating that the present meta-analysis results are statistically reliable. The funnel plots did not show any evidence of obvious asymmetry under any genetic model (Figure 3), and the results of Egger's test did not reveal any evidence of publication bias (P > 0.361).
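For illustration (again outside the authors' Stata workflow), Egger's regression asymmetry test can be sketched as an ordinary regression of the standardized effect against precision, testing whether the intercept differs from zero; the study-level log ORs and variances below are hypothetical placeholders.

```python
import numpy as np
from scipy import stats

def egger_test(log_or, var):
    """Egger's test: regress (effect/SE) on (1/SE) and t-test the intercept."""
    se = np.sqrt(var)
    y = log_or / se                      # standardized effect sizes
    x = 1.0 / se                         # precisions
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)         # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)
    t_int = beta[0] / np.sqrt(cov[0, 0])
    p_value = 2 * stats.t.sf(abs(t_int), df=n - 2)
    return beta[0], p_value

# hypothetical per-study log ORs and variances (e.g. from the previous sketch)
log_or = np.array([-0.41, -0.22, -0.55, -0.10])
var = np.array([0.05, 0.08, 0.04, 0.10])
intercept, p = egger_test(log_or, var)
print(f"Egger intercept = {intercept:.2f}, P = {p:.3f}")
```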
Discussion
Bladder cancer is an increasingly common cancer that is likely caused by multiple factors, including environmental factors, genetic factors, and their interactions (Parkin, 2008). Environmental factors alone cannot explain the phenomenon completely, so genetic factors may play an important role. In recent years, genetic susceptibility has been used to evaluate the risk of bladder cancer, but the results have been inconsistent. In our study, we conducted a systematic review and meta-analysis to clarify the relationship between the CYP2E1 RsaI/PstI polymorphism and bladder cancer risk.
In the current meta-analysis, which included 6 case-control studies, we found that the CYP2E1 RsaI/PstI polymorphism was significantly associated with bladder cancer susceptibility, especially in Caucasians (Table 2). This indicates that the C2 carrier genotypes of the CYP2E1 RsaI/PstI polymorphism might be a protective factor that decreases the risk of bladder cancer. As the urine-collecting organ, the bladder is prone to exposure to carcinogens that are known to induce DNA strand breaks in bladder epithelium cells (Johansson et al., 1990; Hoeijmakers, 2001; Yue et al., 2009). CYP2E1 plays an important role in the metabolic activation of low-molecular-weight compounds and pro-carcinogens such as benzene, N-nitrosamines, and halogenated hydrocarbons, which might be involved in bladder cancer development (Guengerich et al., 1991; Yamazaki et al., 1992). Population and molecular biological studies indicated that the C2 allele or the C2 carrier genotypes of the CYP2E1 RsaI/PstI polymorphism have lower ethanol-induced and basal CYP2E1 activity, because the CYP2E1 PstI and RsaI restriction sites located in the transcription-regulation region might affect transcriptional activity and decrease or abolish inducibility by pro-carcinogens (Uematsu et al., 1991; Lucas et al., 1995; Carriere et al., 1996; Kim et al., 1996). Some studies showed that the C2/C2 genotype produced higher enzyme activity than the C1/C1 genotype in vitro (Hayashi et al., 1991; Ladero et al., 1996); however, this finding could not be verified in several in vivo and in vitro phenotyping studies (Kim and O'Shea, 1995; Lucas et al., 1995; Carriere et al., 1996; Kim et al., 1996). Additionally, a number of studies have suggested that individuals with the C2 allele have a lower risk of developing cancers of the lung, liver, and esophagus (Persson et al., 1993; Yu et al., 1995; Le Marchand et al., 1998; Lin et al., 1998). Thus, in view of the role of CYP2E1 in the metabolic activation of pro-carcinogens and our results suggesting a protective effect of the C2/C2 genotype against bladder cancer, we consider that this genotype may result in lower CYP2E1 activity/inducibility toward bladder epithelial cell pro-carcinogens than the corresponding C1/C1 genotype. Other studies, however, have not found such a relation (Hirvonen et al., 1993; London et al., 1996). Although the explanation for this discordance is unknown, power limitations, ethnic differences, and other contributors to CYP2E1 variability that were not adjusted for, such as sex, diet, age, and smoking, may provide a mechanistic explanation (Zgheib et al., 2010).
Subgroup analysis based on ethnicity showed that Caucasians carrying the C2 genotypes had a decreased risk of bladder cancer, whereas no such association was seen in Asians. This indicates that genetic diversity and variants among different ethnicities or populations might contribute to cancer risk (Shahriary et al., 2012; Lakkakula et al., 2013). Although the underlying mechanisms are not clear, ethnic diversity might affect bladder cancer risk; for example, the C2 allele frequency in Asians (~25-50%) has been reported to be significantly higher than in Caucasians (~5-10%) (Stephens et al., 1994). The subgroup findings might also reflect a weak effect or some selection bias due to the small sample size. Further investigations are needed to confirm the possible effects of the CYP2E1 RsaI/PstI polymorphism on bladder cancer risk, including gene-gene and gene-environment interactions across different genetic backgrounds and lifestyles.
Although meta-analysis is recognized as a more precise and systematic method to evaluate the effect of selected genetic polymorphisms on disease risk than single case-control or cohort studies (Munafo and Flint, 2004), some limitations of the present meta-analysis should be acknowledged. Firstly, bladder cancer is considered a multi-factorial disease shaped by the interaction of environmental factors and many genetic factors. A major route by which most drugs and chemical carcinogens are metabolized and eliminated from the body is constituted by drug-metabolizing enzymes (DMEs), comprising phase I oxidation enzymes and the phase II enzyme system. Cytochrome P450 (CYP) is the most important phase I enzyme system and is usually involved in the activation of carcinogens, whereas phase II enzymes, particularly N-acetyltransferases (NAT) and glutathione S-transferases (GSTs), mostly detoxify the products to be excreted in the urine and possibly play an important role in cancer etiology (Steck and Hebert, 2009; Zgheib et al., 2010). Studies have suggested that CYP2E1 genetic polymorphisms have an impact on the incidence of cancer (Danko and Chaschin, 2005), and epidemiological studies suggest that the NAT1, NAT2, GSTM1, and GSTP1 polymorphisms modify the risk of developing cancers of the urinary bladder (Pandith et al., 2013; Zabost et al., 2013). Therefore, not only is CYP2E1 suspected to be involved in the development of bladder cancer, but other DME genetic factors may also be associated with it. However, the lack of original data on gene-environment and gene-gene interactions limited a more precise analysis. Secondly, only six studies were collected in the current meta-analysis, so the statistical power to assess the effects was limited. Thus, the results should be interpreted with caution. In summary, the present meta-analysis suggested that the CYP2E1 RsaI/PstI polymorphism might be associated with bladder cancer risk in Caucasians. However, further studies with larger sample sizes and well-designed studies in various ethnicities are needed to verify this association comprehensively. | 2017-06-08T05:33:19.463Z | 2014-06-30T00:00:00.000 | {
"year": 2014,
"sha1": "be5055f5e33bf60be477549c8b8f6a6bd8d5764c",
"oa_license": "CCBY",
"oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO201424635095876&method=download",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "086611db92e83163f50ae87d435edf5181597065",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
204950402 | pes2o/s2orc | v3-fos-license | Quantum phase transitions and critical behaviors in the two-mode three-level quantum Rabi model
We explore an extended quantum Rabi model describing the interaction between a two-mode bosonic field and a three-level atom. Quantum phase transitions of this few degree of freedom model is found when the ratio $\eta$ of the atom energy scale to the bosonic field frequency approaches infinity. An analytical solution is provided when the two lowest-energy levels are degenerate. According to it, we recognize that the phase diagram of the model consists of three regions: one normal phase and two superradiant phases. The quantum phase transitions between the normal phase and the two superradiant phases are of second order relating to the spontaneous breaking of the discrete $Z_{2}$ symmetry. On the other hand, the quantum phase transition between the two different superradiant phases is discontinuous with a phase boundary line relating to the continuous $U(1)$ symmetry. For a large enough but finite $\eta$, the scaling function and critical exponents are derived analytically and verified numerically, from which the universality class of the model is identified.
I. INTRODUCTION
The quantum Rabi model describes the interaction between a photon field and a two-level system [1,2], which is the simplest model for studying the light-matter interaction and plays a significant role in quantum optics [3], condensed-matter physics [4], and quantum information [5]. With the rapid experimental progress in accessing the strong [6,7], the ultrastrong [8][9][10], and the deep strong coupling regimes [11,12], the quantum Rabi model has received much attention since the rotating-wave approximation fails [13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31]. Recently, phase transitions and critical phenomena have been surprisingly found in the quantum Rabi model although only a single atom is involved [32], which requires the infinite frequency ratio of the atom to the photon rather than the thermodynamic limit required by traditional phase transitions. Further study on the scaling behaviors of the Rabi and the Dicke models has revealed that these two models belong to the same universality class [33]. These progresses bring a new insight for the quantum phase transition without the thermodynamic limit.
In parallel the interaction between a two-mode cavity field and a three-level system leads to many important phenomena, such as electromagnetically induced transparency [34] and dark state [35], which are profitable in the precise control of coherent population trapping and transfer [36]. The three-level system is also important in quantum information, referred as qutrit. Compared with the two-level scheme, the quantum key distribution based on qutrits is more resistant to attack [37,38], and the quantum computation using qutrits shows a faster speed and a lower error rate [39,40]. A qutrit quantum computer with trapped ions has been proposed [41]. In addition, the three-level system is used to construct a quantum heat engine [42,43]. To identify the possible quantum phases and quantum phase transitions involved in a two-mode three-level model is helpful in further understanding these light-matter interaction models and extending their applications.
The two-mode three-level interaction model in the thermodynamic limit has received much attention. Hayn et al. studied quantum phase transitions by a generalized Holstein-Primakoff transformation and revealed that it exhibits two superradiant quantum phase transitions, which can be both first and second order [44]. Cordero et al. found that the polychromatic ground-state phase diagram can be divided into monochromatic regions by a variational analysis [45]. Here, we report an analytical calculation of the ground-state phase diagram, scaling function, and critical exponents for the two-mode three-level quantum Rabi model by taking a single Λ-type three-level atom as a prototype. The analytical results are further verified by an exact numerical diagonalization.
The paper is organized as follows. In Sec. II, an effective model is derived when the ratio between the atom frequency and the photon frequency approaches infinity. In Sec. III, a ground-state phase diagram is extracted analytically. In Sec. IV, the mean photon number in the ground state is analytically derived. In Sec. V, the scaling function and critical exponents are analytically derived for finite frequency ratios, and also the numerical diagonalization is used to verify the analytical results. Finally, a brief summary is presented in Sec. VI.
II. MODEL HAMILTONIAN
Our two-mode three-level quantum Rabi model describes the interaction between a two-mode quantized field and a single Λ-type three-level atom, given in Eq. (1) (with $\hbar = 1$), where $\hat{a}_1^\dagger$ ($\hat{a}_2^\dagger$) and $\hat{a}_1$ ($\hat{a}_2$) are the creation and annihilation operators of photon mode 1 (mode 2), $\omega_1$ ($\omega_2$) is the frequency of photon mode 1 (mode 2), and $g_1$ ($g_2$) is the coupling strength between the transition $|1\rangle \leftrightarrow |3\rangle$ ($|2\rangle \leftrightarrow |3\rangle$) and photon mode 1 (mode 2). Note that the transition between state $|1\rangle$ and state $|2\rangle$ in the Λ-type configuration is forbidden. For convenience, we define several variables to make the Hamiltonian dimensionless: $\Delta = \varepsilon_3 - \varepsilon_1$, $\delta = (\varepsilon_2 - \varepsilon_1)/\Delta$, $\alpha = \omega_2/\omega_1$, $\beta = g_2/g_1$, $R = 2g_1/\sqrt{\omega_1\Delta}$. Also, we set $\varepsilon_1 = 0$ and $\eta = \Delta/\omega_1$; the dimensionless Hamiltonian rescaled by $\Delta$ then takes the form of Eq. (2). We rewrite the Hamiltonian in terms of the position and momentum operators of the two modes and, further rescaling the position operators by $\hat{y}_i = \eta^{-1/2}\hat{x}_i$, obtain the Hamiltonian of Eq. (3). Supposing that $\hat{p}_{y_1}$ and $\hat{p}_{y_2}$ remain finite, the contributions from the momentum terms disappear in the limit $\eta \to \infty$, and the effective Hamiltonian reduces to Eq. (4). Because of the absence of momentum operators, $\hat{y}_1$ and $\hat{y}_2$ can be replaced with their eigenvalues $y_1$ and $y_2$ in Eq. (4). In consequence, the ground-state phase diagram and the mean photon number are obtained.
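For orientation, a Λ-type two-mode three-level quantum Rabi Hamiltonian consistent with the operators and couplings defined above can be written as follows; this is a generic form given here for illustration, and the paper's Eq. (1) may differ in details:

$$ \hat{H} = \omega_1 \hat{a}_1^\dagger \hat{a}_1 + \omega_2 \hat{a}_2^\dagger \hat{a}_2 + \sum_{i=1}^{3} \varepsilon_i |i\rangle\langle i| + g_1 (\hat{a}_1 + \hat{a}_1^\dagger)\left(|1\rangle\langle 3| + |3\rangle\langle 1|\right) + g_2 (\hat{a}_2 + \hat{a}_2^\dagger)\left(|2\rangle\langle 3| + |3\rangle\langle 2|\right), $$

with $\hbar = 1$ and no direct $|1\rangle \leftrightarrow |2\rangle$ coupling, as stated in the text.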
III. PHASE DIAGRAM
The analytical diagonalization of the effective Hamiltonian (4) is equivalent to solving a third-order algebraic equation. An explicit form of the ground-state energy, Eq. (5), can be readily obtained for the special case in which the states $|1\rangle$ and $|2\rangle$ are degenerate (i.e., $\delta = 0$); by minimizing Eq. (5) with respect to $y_1$ and $y_2$, the ground-state energy $E_0$ is further determined. The detailed derivation of the ground-state energy is given in Appendix A. The obtained ground-state phase diagram is depicted in Fig. 1. One can see that the phase diagram shows several distinct regimes.
In the regime of $\alpha < \beta^2$, the critical point of the phase transition is $R_{<,c} = \sqrt{\alpha}/\beta$. When $R$ is less than or equal to $R_{<,c}$, $y_1 = 0$ and $y_2 = 0$, and the ground state stays in a normal phase with $E_0 = 0$. When $R$ is larger than $R_{<,c}$, the normal phase becomes unstable and bifurcates into two degenerate stable solutions with the ground-state energy $E_0 = -\frac{1}{4}\left(\frac{\alpha}{\beta^2 R^2} + \frac{\beta^2 R^2}{\alpha}\right) + \frac{1}{2}$, which implies that the ground state enters the so-called superradiant phase. While $E_0$ and $\partial E_0/\partial R$ are continuous, $\partial^2 E_0/\partial R^2$ is discontinuous at $R = R_c$, as shown in Fig. 2(a), which reveals that the phase transition from the normal phase to the superradiant phase is of second order.
In the regime of $\alpha > \beta^2$, the critical point of the phase transition is $R_{>,c} = 1$. When $R$ is less than or equal to $R_{>,c}$, the ground state corresponds to $E_0 = 0$ with $y_1 = 0$ and $y_2 = 0$, which is the normal phase. When $R$ is larger than $R_{>,c}$, the ground state bifurcates into two degenerate stable solutions with the ground-state energy $E_0 = -\frac{1}{4}\left(\frac{1}{R^2} + R^2\right) + \frac{1}{2}$, which is independent of $\alpha$ and $\beta$. Likewise, this normal-superradiant quantum phase transition is also of second order. When $\alpha = \beta^2$, both $y_1$ and $y_2$ are nonzero in the ground state once $R$ exceeds the critical point $R_{=,c} = 1$; they satisfy a fixed relation (see Appendix A), and the ground-state energy is again $E_0 = -\frac{1}{4}\left(\frac{1}{R^2} + R^2\right) + \frac{1}{2}$. This quantum phase transition still has a second-order nature. However, across the boundary line ($\alpha = \beta^2$) between the two superradiant phases, the model undergoes a first-order quantum phase transition, because the first-order partial derivative $\partial E_0/\partial\gamma$ (with $\gamma = \alpha/\beta^2$) is discontinuous, as shown in Fig. 2(b).
[Fig. 2 caption: (a) $\partial^2 E_0/\partial R^2$, the second-order derivative of the ground-state energy with respect to the coupling strength $R$, reflecting the second-order normal-superradiant phase transition at the critical points. (b) $\partial E_0/\partial\gamma$, the first-order derivative of the ground-state energy with respect to the parameter ratio $\gamma = \alpha/\beta^2$, reflecting the first-order transition between the two superradiant phases of $\alpha < \beta^2$ and $\alpha > \beta^2$.]
Here, the two-mode three-level quantum Rabi model presents two kinds of typical spontaneous symmetry breaking, which are separately characterized by the order parameters $y_1$ and $y_2$. One can see that the normal phase possesses a discrete $Z_2$ symmetry, but the ground-state energy functional, either $E(y_2)$ or $E(y_1)$, shows a double-well structure when the coupling strength exceeds the critical points, which implies a breaking of the $Z_2$ symmetry. In the case of $\alpha = \beta^2$, the ground-state energy functional $E(y_1, y_2)$ presents a continuous $U(1)$ symmetry. When the coupling strength is above the critical point, both $y_1$ and $y_2$ become nonzero, which reflects a breaking of the continuous symmetry. The $U(1)$ symmetry was also found in the two-level system [46], in which the light-atom interaction, however, requires the consideration of both the electric and the magnetic components of the electromagnetic field. Such continuous symmetry breaking is related to the Nambu-Goldstone mode [47].
IV. MEAN PHOTON NUMBER
The photon number is a common observable in experiments and is usually used to characterize the states of light-matter interaction systems. In the Dicke quantum phase transition experiments, an abrupt increase of the mean intracavity photon number marks the onset of the normal-superradiant phase transition [48,49]. To calculate the mean photon number of the ground state, we rewrite the Hamiltonian in Eq. (4) (with $\delta = 0$) in terms of a matrix $\hat{M}$ acting on the atomic states. When $\alpha < \beta^2$, the lowest eigenvalue of $\hat{M}$ is $1 - \beta^2R^2/\alpha$ for $y_1 = 0$ and $y_{2,\pm} = \pm\sqrt{(\beta^4R^4 - \alpha^2)/(2\alpha^2\beta^2R^2)}$, with the corresponding normalized eigenstate given in Eq. (16); the ground-state wavefunction of $\hat{H}_{\rm eff}$ is constructed from it, and, based on this wavefunction $\psi_2$, the mean photon numbers of the two modes above the critical point follow. When $\alpha > \beta^2$, the lowest eigenvalue of $\hat{M}$ is $1 - R^2$ for $y_{1,\pm} = \pm\sqrt{(R^4 - 1)/(2R^2)}$ and $y_2 = 0$, with the corresponding normalized eigenstate given in Eq. (20); the ground-state wavefunction of $\hat{H}_{\rm eff}$ and the mean photon numbers of the two modes above the critical point follow in the same way. In the normal phase below the critical points, the mean photon numbers of both modes are zero. Above the critical points, for $\alpha/\beta^2 < 1$ the superradiant phase at $R > \sqrt{\alpha}/\beta$ is characterized by the mode-2 mean photon number $\langle\hat{a}_2^\dagger\hat{a}_2\rangle/\eta$ (namely, the $y_2$-type phase), while for $\alpha/\beta^2 > 1$ the superradiant phase at $R > 1$ is characterized by the mode-1 mean photon number $\langle\hat{a}_1^\dagger\hat{a}_1\rangle/\eta$ (the $y_1$-type phase). [Fig. 3 parameters: $\beta = 1.2$.]
We choose $\langle\hat{a}_2^\dagger\hat{a}_2\rangle/\eta$ and $\langle\hat{a}_1^\dagger\hat{a}_1\rangle/\eta$ as the order parameters for $\alpha < \beta^2$ and $\alpha > \beta^2$, respectively. Figure 3 presents the mean photon numbers of the two photon modes. One can see that the photon field is in the vacuum in the normal phase while the atom sits in $|2\rangle$ for $\alpha < \beta^2$ and in $|1\rangle$ for $\alpha > \beta^2$, as inferred from Eqs. (16) and (20). Above the critical points, both the photon field and the atom are excited, and the so-called superradiant phase transition takes place. In each of the regimes $\alpha < \beta^2$ and $\alpha > \beta^2$, only one coupling is involved: only the coupling between photon mode 2 and the transition $|2\rangle \leftrightarrow |3\rangle$ contributes to the ground state when $\alpha < \beta^2$, whereas only the coupling between photon mode 1 and the transition $|1\rangle \leftrightarrow |3\rangle$ contributes when $\alpha > \beta^2$. In effect, the two-mode three-level quantum Rabi model degenerates into the quantum Rabi model in these two parameter regimes. In each parameter regime there exists a dark state, which is state $|1\rangle$ for $\alpha < \beta^2$ and $|2\rangle$ for $\alpha > \beta^2$, as seen in Eqs. (16) and (20). The dark state was also studied in a semi-classical two-mode three-level quantum Rabi model [50].
The critical exponent can be obtained by expanding the mean photon numbers in the reduced coupling strength $r = (R - R_c)/R_c$, which gives Eqs. (24) and (25). They indicate that the critical exponent of the mean photon numbers is 1, the same as in the quantum Rabi model [33,51]. This suggests that the present model belongs to the same universality class as the quantum Rabi model.
V. SCALING BEHAVIOR
The finite-size scaling behavior of continuous phase transitions is important near the critical point (the size here is characterized by the frequency ratio $\eta$). We derive the scaling function and critical exponents of this model. For finite $\eta$, after diagonalizing the atomic part, the Hamiltonian in Eq. (3) becomes Eq. (26) (for $\delta = 0$). In the case of $\alpha < \beta^2$, the wavefunction near the critical point $R_{<,c}$ is very localized around $y_2 = 0$ when $\eta$ is very large. Thus, the Hamiltonian may be approximated by a second-order expansion in the vicinity of $y_2 = 0$, Eq. (27). Defining $\hat{y}_2 = \eta^{-1/3}\hat{z}_2$ and $r = \eta^{-2/3} r'$, the Hamiltonian can be rewritten in a form that admits a rescaled ground-state wavefunction $\phi(\hat{z}_2, r')$ independent of $\eta$; based on this wavefunction, the mean photon number takes an $\eta$-scaled form. In the case of $\alpha > \beta^2$, the wavefunction near $R_{>,c}$ is also very localized around $y_1 = 0$ for very large $\eta$. Likewise, we expand the Hamiltonian to second order in the vicinity of $y_1 = 0$, Eq. (30), and, analogously defining $\hat{y}_1 = \eta^{-1/3}\hat{z}_1$ and $r = \eta^{-2/3} r'$, we obtain the corresponding rescaled Hamiltonian and mean photon number. The detailed derivation of the Hamiltonian from Eq. (26) to Eqs. (27) and (30) is given in Appendix B. Based on the above analytical derivation, the scaling function for the mean photon number of the two-mode three-level quantum Rabi model is obtained as Eq. (33), where $N(\eta, r)$ stands for the photon numbers $\langle\hat{a}_1^\dagger\hat{a}_1\rangle/\eta$ and $\langle\hat{a}_2^\dagger\hat{a}_2\rangle/\eta$. According to the standard finite-size scaling law [52], the divergent correlation length at critical points enables the scaling behavior of a physical quantity $P$ at different finite $\eta$, Eq. (34): $P(\eta, r) = \eta^{-\kappa/\nu} f(\eta^{1/\nu} r)$, where $\kappa$ is the critical exponent of $P$ ($P \propto r^{\kappa}$) and $\nu$ is the critical exponent of the correlation length ($\xi \propto r^{-\nu}$). Comparing Eq. (33) with Eq. (34), we infer that $\kappa$ and $\nu$ of the mean photon number for the two-mode three-level quantum Rabi model are 1 and 3/2, respectively. The finite-size critical scaling exponent $\kappa$ of the mean photon numbers is the same as that obtained in the limit $\eta \to \infty$, as concluded from Eqs. (24) and (25).
To check our analytical prediction, we numerically diagonalize the two-mode three-level quantum Rabi model and calculate the mean photon numbers of the two photon modes in the case of $\delta = 0$. To determine critical points and critical exponents numerically, we take the logarithm of Eq. (34), which gives $\ln P(\eta, r) = (-\kappa/\nu)\ln\eta + \ln f(\eta^{1/\nu} r)$.
At the critical point $r = 0$, $\ln P(\eta, r)$ and $\ln\eta$ are linearly related, $\ln P(\eta, 0) = (-\kappa/\nu)\ln\eta + \mathrm{const}$, so the scaling parameter $\kappa/\nu$ can be determined by a linear fit. The remaining parameter $\nu$ can be obtained by collapsing the data points for different $\eta$ values onto a single scaled curve. Figures 4(a) and 5(a) show the logarithm of the mean photon number as a function of the logarithm of $\eta$ for the two regimes $\alpha < \beta^2$ and $\alpha > \beta^2$, respectively. The red-square line represents the linear behavior, and the determined critical points are $R_{<,c} = 0.7454$ and $R_{>,c} = 1.0000$, which confirm the analytical critical points $R_{<,c} = \sqrt{\alpha}/\beta$ and $R_{>,c} = 1$, respectively. The fitted values of $\kappa/\nu$ are 0.624 and 0.616, and $\nu$ is correspondingly obtained to be 1.582 and 1.592, which reproduce the analytical values $\kappa/\nu = 2/3$ and $\nu = 3/2$. One can see from Figs. 4(b) and 5(b) that the mean photon numbers for different $\eta$ collapse onto a well-defined single curve. This indicates that the analytical solution reveals the correct ground-state phase diagram of the two-mode three-level quantum Rabi model and captures its scaling invariance near the critical points.
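A minimal numerical illustration of this fitting procedure is sketched below. It builds the Hamiltonian in a truncated Fock basis using the generic Λ-type form assumed earlier (so the numbers are only indicative of the procedure behind Figs. 4-5, not a reproduction of them), computes the ground-state mean photon number of mode 2 at the critical coupling for several η, and extracts κ/ν from the log-log slope.

```python
import numpy as np

def hamiltonian(eta, R, alpha, beta, delta=0.0, n_cut=30):
    """Truncated two-mode three-level Rabi Hamiltonian in units of omega_1
    (generic Lambda-type form; eta = Delta/omega_1, R = 2 g_1/sqrt(omega_1 Delta))."""
    a = np.diag(np.sqrt(np.arange(1, n_cut)), k=1)   # annihilation operator
    n_op = np.diag(np.arange(n_cut, dtype=float))    # number operator
    I_f, I_a = np.eye(n_cut), np.eye(3)

    def ket_bra(i, j):                               # |i><j| in the basis {|1>,|2>,|3>}
        m = np.zeros((3, 3))
        m[i, j] = 1.0
        return m

    g1 = R * np.sqrt(eta) / 2.0                      # from R = 2 g_1 / sqrt(omega_1 Delta)
    g2 = beta * g1
    eps2, eps3 = delta * eta, eta                    # epsilon_1 = 0

    kron3 = lambda A, B, C: np.kron(np.kron(A, B), C)
    H = (kron3(n_op, I_f, I_a) + alpha * kron3(I_f, n_op, I_a)
         + eps2 * kron3(I_f, I_f, ket_bra(1, 1)) + eps3 * kron3(I_f, I_f, ket_bra(2, 2))
         + g1 * kron3(a + a.T, I_f, ket_bra(0, 2) + ket_bra(2, 0))
         + g2 * kron3(I_f, a + a.T, ket_bra(1, 2) + ket_bra(2, 1)))
    return H, kron3(n_op, I_f, I_a), kron3(I_f, n_op, I_a)

def mean_photons(eta, R, alpha, beta, n_cut=30):
    H, N1, N2 = hamiltonian(eta, R, alpha, beta, n_cut=n_cut)
    _, v = np.linalg.eigh(H)
    gs = v[:, 0]                                     # ground state
    return gs @ N1 @ gs / eta, gs @ N2 @ gs / eta

alpha, beta = 0.8, 1.2
Rc = np.sqrt(alpha) / beta                           # analytical critical point for alpha < beta^2
etas = np.array([10.0, 20.0, 40.0, 80.0])
n2 = np.array([mean_photons(e, Rc, alpha, beta)[1] for e in etas])
slope = np.polyfit(np.log(etas), np.log(n2), 1)[0]   # expected to approach -kappa/nu = -2/3
print(f"kappa/nu ~ {-slope:.3f} (analytical value 2/3)")
```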
VI. CONCLUSIONS
Based on the analytical solution and the numerical diagonalization, we obtain the ground-state phase diagram, scaling function, and critical exponents of the two-mode three-level quantum Rabi model. The phase diagram is divided into three regions: one normal phase and two superradiant phases. The phase transitions can be driven by adjusting the frequency ratio of the two photon modes and the relative strength of the two photon-atom couplings. The normal-superradiant quantum phase transitions are found to be of second order and related to the spontaneous breaking of the $Z_2$ symmetry. In addition, the model undergoes a first-order phase transition across the boundary line between the two superradiant phases, where a spontaneous breaking of the continuous $U(1)$ symmetry is found. Different from traditional phase transitions in the thermodynamic limit, here the quantum phase transition is instead realized when the frequency ratio $\eta$ of the atomic transition to the photon field approaches infinity. The finite-$\eta$ scaling function is derived, and the obtained critical exponent of the mean photon number is the same as that determined in the limit $\eta \to \infty$; based on this, the universality class of the model is identified. This work is helpful for further understanding quantum phase transitions and for exploring potential applications of such single-atom systems. The present results are obtained for the degenerate case of the two lowest atomic states; the general case of the two-mode three-level quantum Rabi model remains to be explored.
Appendix A: To determine the ground-state energy of Eq. (5), we calculate its first derivatives with respect to $y_1$ and $y_2$ and set both equal to zero. The solutions fall into four cases. Case 1: $y_1 = 0$ and $y_2 = 0$, with the corresponding ground-state energy $E = 0$. Case 2: $y_1 = 0$ and $y_2 \neq 0$; this solution exists only when $R \geq \sqrt{\alpha}/\beta$, and the corresponding energy is $E = -\frac{1}{4}\left(\frac{\alpha}{\beta^2R^2} + \frac{\beta^2R^2}{\alpha}\right) + \frac{1}{2} \leq 0$. Case 3: $y_1 \neq 0$ and $y_2 = 0$ (A6); it exists only when $R \geq 1$, and the corresponding energy is $E = -\frac{1}{4}\left(\frac{1}{R^2} + R^2\right) + \frac{1}{2} \leq 0$. Case 4 ($\alpha = \beta^2$): setting the first-order derivative $dE(S)/dS = 0$ indicates a continuous set of $y_1$ and $y_2$ with the degenerate energy $E = -\frac{1}{4}\left(\frac{1}{R^2} + R^2\right) + \frac{1}{2} \leq 0$, which also exists only when $R \geq 1$. To summarize, the ground-state energies listed above are verified to be minima by the positive second-order derivatives of $E(y_1, y_2)$ (B8). | 2019-10-29T02:25:50.000Z | 2019-10-29T00:00:00.000 | {
"year": 2019,
"sha1": "92fd29c5baeaa1274007ff874559d983077111b5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1910.13043",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "92fd29c5baeaa1274007ff874559d983077111b5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
235606475 | pes2o/s2orc | v3-fos-license | Hunting for the nature of the enigmatic narrow-line Seyfert 1 galaxy PKS 2004-447
Narrow-line Seyfert 1 (NLS1) galaxies are a class of active galactic nuclei (AGN) that, in some cases, can harbor powerful relativistic jets. One of them, PKS 2004-447, shows gamma-ray emission, and underwent its first recorded multifrequency flare in 2019. However, past studies revealed that in radio this source can be classified as a compact steep-spectrum source (CSS), suggesting that, unlike other gamma-ray sources, the relativistic jets of PKS 2004-447 have a large inclination with respect to the line of sight. We present here a set of spectroscopic observations of this object, aimed at carefully measuring its black hole mass and Eddington ratio, determining the properties of its emission lines, and characterizing its long term variability. We find that the black hole mass is $(1.5\pm0.2)\times10^7$ M$_\odot$, and the Eddington ratio is 0.08. Both values are within the typical range of NLS1s. The spectra also suggest that the 2019 flare was caused mainly by the relativistic jet, while the accretion disk played a minor role during the event. In conclusion, we confirm that PKS 2004-447 is one of the rare examples of gamma-ray emitting CSS/NLS1s hybrid, and that these two classes of objects are likely connected in the framework of AGN evolution.
Introduction
Since the launch of the Fermi Satellite, narrow-line Seyfert 1 galaxies (NLS1s) have been identified as the third class of active galactic nuclei (AGN) that can harbor powerful beamed relativistic jets and produce γ-ray emission, beside the two well-known classes of blazars, BL Lacertae objects (BL Lacs) and flat-spectrum radio quasars (FSRQs, Abdo et al. 2009a,b,c; Foschini et al. 2010). NLS1s are characterized by a relatively low full-width at half maximum (FWHM) of Hβ, which by definition must be lower than 2000 km s$^{-1}$, by a flux ratio [O III]/Hβ < 3, and by two bumps of Fe II multiplets, which indicate that these objects are type 1 AGN with an unobscured view of their central engine (Osterbrock & Pogge 1985; Goodrich 1989).
The narrowness of the permitted lines observed in NLS1s is typically interpreted as a sign of low rotational velocity around a relatively low-mass black hole (10 6 − 10 8 M , Boller et al. 1996;Peterson et al. 2000;Peterson 2011;Cracco et al. 2016;Rakshit et al. 2017;Chen et al. 2018). The black hole is accreting close to or above the Eddington limit (Boroson & Green 1992;Sulentic et al. 2000), especially in the strongest Fe II emitters (Du et al. 2016). Non-jetted NLS1s are typically hosted by a spiral galaxy with a pseudobulge (Crenshaw et al. 2003;Deo et al. 2006;Orban de Xivry et al. 2011;Mathur et al. 2012). This ensemble of properties has led several authors to hypothesize that these NLS1s may represent an early evolutionary stage in the life of AGN, that will eventually grow into classical broad-line Seyfert 1 galaxies (Grupe 2000;Mathur 2000;Sulentic et al. 2000;Komossa et al. 2006;Fraix-Burnet et al. 2017). Jetted NLS1s seem to behave like their non-jetted counterparts. They are the lowmass tail of the FSRQ distribution (Abdo et al. 2009a,c;Foschini et al. 2015;Berton et al. 2016b), which in turn suggests that they may be the progenitors of FSRQs (Berton et al. 2016c). NLS1s with misaligned relativistic jets may eventually evolve to form the parent population of FSRQs, i.e. high-excitation radio galaxies (HERG, Berton et al. 2016cBerton et al. , 2017Foschini 2017). Several authors pointed out that a possible link between jetted NLS1s and other classes of young radio galaxies (see O'Dea & Saikia 2021, for a review), such as compact steep-spectrum sources (CSS) and gigahertz-peaked sources (GPS), may exist (Oshlack et al. 2001;Gallo et al. 2006;Komossa et al. 2006;Yuan et al. 2008;Wu 2009;Caccianiga et al. 2014Caccianiga et al. , 2017Schulz et al. 2016;Gu et al. 2015;Liao & Gu 2020;Zhang et al. 2020;O'Dea & Saikia 2021;Yao & Komossa 2021). It is also worth noting that in the X-rays there is even evidence that the corona in some non-jetted NLS1s exhibit jet-like properties like collimation and outflow. This behavior is remarkable, as it could indicate potential jet-like behavior even among nonjetted AGN (see Gallo 2018, for a review).
NLS1s are located on the horizontal branch of the so-called quasar main sequence (MS, Marziani et al. 2018a). The MS is the locus on the plane defined by the flux ratio between the Fe II multiplets and Hβ, known as R4570, and the FWHM(Hβ), where all type 1 AGN lie. This sequence was originally identified by means of principal component analysis (Boroson & Green 1992). The MS can be roughly divided into two distinct populations of sources, called population A and B. Population A forms a horizontal branch in the MS, since its sources have different values of R4570 but they all have FWHM≤4000 km s −1 . Population A can be binned into four groups, from A1 to A4, with values of R4570 increasing by 0.5 from group to group. Population B, instead, forms a vertical branch, since all of them have R4570<0.5 but FWHM>4000 km s −1 (Sulentic et al. 2000). All NLS1s, because of their definition, belong to population A.
The main driver of the MS may be a decreasing Eddington ratio from population A to B (Boroson & Green 1992). Some authors even hypothesized that the MS could be an evolutionary path for AGN (Fraix-Burnet et al. 2017). However, also some other factors seem to play a role in the MS, such as metallicity and inclination with respect to the line of sight (Shen & Ho 2014;Panda et al. 2019;Śniegowska et al. 2021). This last parameter can be particularly confusing, since in the presence of a flattened broad-line region (BLR) observed pole-on, a low inclination can produce narrow permitted lines even in the presence of large black hole masses due to the lack of Doppler broadening (Decarli et al. 2008). Some authors suggested that this may be the case for jetted NLS1s (e.g., Calderone et al. 2013;Baldi et al. 2016; D'Ammando 2019). However, recent observations dedicated to the host galaxies of these objects showed that jetted NLS1s are typically hosted in disk galaxies similarly to their non-jetted counterparts, and confirmed that their black hole mass is lower than that of FSRQs (Antón et al. 2008;Orban de Xivry et al. 2011;Mathur et al. 2012;León Tavares et al. 2014;Kotilainen et al. 2016;Olguín-Iglesias et al. 2017;Järvelä et al. 2018;Berton et al. 2019c;Olguín-Iglesias et al. 2020;Hamilton et al. 2020, but see D'Ammando et al. 2017. The number of currently known γ-ray emitting NLS1s is rather limited, with approximately twenty sources classified to date (e.g., Romano et al. 2018;Paliya 2019;Järvelä et al. 2020;Rakshit et al. 2021, see the review by Komossa 2018). Because of their relatively high redshift (all but 3 have z>0.2), γ-NLS1s are rather faint in optical bands, and very few studies have been dedicated specifically to their optical spectra (e.g., Komossa et al. 2018;Kynoch et al. 2018Kynoch et al. , 2019Yao & Komossa 2021). Here we present new spectroscopic data for the southernmost (to date) γ-NLS1, PKS 2004-447. This NLS1 was identified as a γ-ray source soon after the launch of the Fermi Satellite (Abdo et al. 2009c), and at the end of 2019 it underwent its first γ-ray flare ever recorded (Gokus 2019;Gokus et al. 2021). The goal of this work is to study its long-term behavior and the nature of this flare by means of optical spectroscopy. We also accurately measure some of its most important physical parameters such as the black hole mass, the Eddington ratio, and the emission line properties, and we determine its role in the family of γ-ray emitting NLS1s. In Sect. 2 we provide a brief review about the target of this paper, in Sect. 3 we describe the process of data reduction, in Sect. 4 we analyze the profiles of the most prominent emission lines, in Sect. 5 we determine its black hole mass, in Sect. 6 we study its time variability, and finally in Sect. 7 we provide a summary of our results. Although we are aware of the tension in the value of the Hubble constant (Riess et al. 2019), which would require the use of H 0 ∼ 74 km s −1 Mpc −1 for the sources in the nearby Universe, throughout this work we adopt a standard ΛCDM cosmology, with a Hubble constant H 0 = 70 km s −1 Mpc −1 , and Ω Λ = 0.73 (Komatsu et al. 2011) to allow an easier comparison with previous works.
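For reference, the adopted cosmology can be set up as below to convert observed fluxes into luminosities at z = 0.240; the use of astropy is an assumption about tooling rather than the paper's stated software, and the flux value is just the [O III] reference quoted later in the text.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Flat LCDM with H0 = 70 km/s/Mpc and Omega_Lambda = 0.73 (so Omega_m = 0.27)
cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.27)
z = 0.240
d_l = cosmo.luminosity_distance(z).to(u.cm)

flux = 4.84e-15 * u.erg / u.s / u.cm**2      # e.g. the adopted [O III] reference flux
luminosity = 4 * np.pi * d_l**2 * flux
print(f"D_L = {d_l.to(u.Mpc):.0f}, L = {luminosity.to(u.erg / u.s):.2e}")
```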
PKS 2004-447
The target of this study is the γ-ray emitting NLS1 PKS 2004-447 (R.A. 20h 07m 55s, Dec. -44d 34m 44s, z = 0.240). Like most NLS1s, it is hosted in a spiral galaxy with a pseudobulge (Kotilainen et al. 2016). The source was noticed early on due to its prominent radio emission, and it was originally included in the Parkes Half-Jansky Flat-Spectrum Sample because of its flat radio spectrum between 2.7 and 5.0 GHz ($F_{2.7\,\rm GHz} = 0.81$ Jy, $\alpha_\nu = 0.36$, with $F_\nu \propto \nu^{-\alpha}$, Drinkwater et al. 1997). However, this radio spectral measurement was carried out on non-simultaneous data. Later observations revealed instead a steep spectral index of $\alpha_\nu = 0.67$, suggestive of a radio classification as a CSS (Oshlack et al. 2001; Gallo et al. 2006). Such a result was confirmed more recently with new high-resolution radio observations, which additionally found a core-jet morphology and a flattening of the spectrum below 2 GHz. This may be interpreted as a turnover, which is a defining property of CSS and GPS sources (O'Dea 1998). Also the linear projected jet size of ∼2 kpc, derived from the turnover frequency, is consistent with CSS sources. Its radio luminosity at 5 GHz is $\sim 3.8\times10^{42}$ erg s$^{-1}$ ($7.4\times10^{25}$ W Hz$^{-1}$, Schulz et al. 2016), which lies at the lower end of the CSS/GPS luminosity distribution (O'Dea 1998), but within the typical range of jetted NLS1s (Berton et al. 2018a).
The history of its optical classification has been somewhat troubled. The source was originally included in the NLS1 class by Oshlack et al. (2001), who estimated the black hole mass for the first time as $5.4\times10^{6}$ M$_\odot$. However, the spectrum they analyzed was derived from Drinkwater et al. (1997), and it unfortunately showed an issue in the y-axis of their Fig. 2, where the flux was underestimated by a factor 200 with respect to the original paper. This error propagated throughout their work and in the subsequent literature based on it. The low signal-to-noise ratio of the spectrum, furthermore, hampered a fully reliable classification of the source as an NLS1. Indeed, due to its seemingly weak Fe II emission, PKS 2004-447 was considered to be a possible narrow-line radio galaxy (NLRG) or a Type 2 AGN (Zhou et al. 2003; Sulentic et al. 2003; Komossa et al. 2006), although Gallo et al. (2006) noted that the presence of strong Fe II multiplets is not always included in the definition of NLS1, and therefore preferred an NLS1 classification. More recently, a new estimate of the black hole mass from optical spectroscopy is $7\times10^{7}$ M$_\odot$ (see Foschini et al. 2015, arXiv:1409, footnote 10), close to the value $9\times10^{7}$ M$_\odot$ derived from the K-band bulge luminosity (Kotilainen et al. 2016). A significantly higher value, $6\times10^{8}$ M$_\odot$, was instead derived using optical spectropolarimetry (Baldi et al. 2016). PKS 2004-447 was eventually included in the NLS1 class due to its multiwavelength properties, which are similar to those of other γ-ray NLS1s. Its X-ray spectrum, in particular, is well described by a single power law, likely due to the non-thermal emission of the relativistic jet. A soft X-ray excess was detected in XMM-Newton observations (Gallo et al. 2006; Foschini et al. 2009), although its presence is not confirmed in all observations and is correlated with a high flux level (Gokus et al. 2021). It seems to be the high-energy tail of the synchrotron emission (Foschini et al. 2009; Foschini 2020). It is worth noting that, unlike what is seen in other γ-NLS1s (e.g., see PMN J0948+0022, Foschini 2012b), PKS 2004-447 shows less prominent variability both in X-rays and in radio (Kreikenbohm et al. 2016).
Observations and data reduction
The calibrated spectrum observed with the Anglo-Australian Telescope (AAT) on 1984-05-02 was extracted from the paper by Drinkwater et al. (1997). The spectra from the FOcal Reducer and low dispersion Spectrograph 2 (FORS2) mounted on the Very Large Telescope (VLT) were originally obtained to get a reliable optical classification of the source, and study its optical variability (program ESO/096.B-0256, P.I. Kreikenbohm). The data from the Las Campanas Observatory (Clay and Dupont telescopes) were collected to monitor the low state of the source and its γ-ray flare. Finally, the spectrum in the flaring state was observed in the framework of a larger NLS1 study carried out with the ESO Faint Object Spectrograph and Camera 2 (EFOSC2) at the New Technology Telescope (NTT, program ESO/0104.B-0587, P.I. Berton). The technical details of each spectrum are reported in Table 1. For all these new observations, we carried out the standard data reduction with IRAF, with bias and flat-field correction, and wavelength and flux calibration. In all cases, we corrected the spectrum for Galactic absorption by using A(V)=0.091 (Schlafly & Finkbeiner 2011) and assuming a reddening law with R v = 3.1 (Cardelli et al. 1989). After this step we corrected the spectrum for redshift, z=0.240, and subtracted the AGN continuum by fitting it with a power law. We did not try to model the host galaxy contribution, because no absorption lines from the host are visible in the spectra. Furthermore, at the relatively high redshift of PKS 2004-447 the host contribution is expected to be negligible (Letawe et al. 2007).
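As an illustration of these correction steps, the sketch below applies a Galactic extinction correction and shifts the spectrum to the rest frame. It is not the actual IRAF-based reduction used here; it assumes the availability of the `extinction` Python package for the Cardelli et al. (1989) curve, and the wavelength and flux arrays are hypothetical inputs.

```python
import numpy as np
import extinction  # assumed available; provides the Cardelli et al. (1989) curve

Z = 0.240               # redshift of PKS 2004-447
A_V, R_V = 0.091, 3.1   # Galactic extinction (Schlafly & Finkbeiner 2011) and reddening law

def deredden_and_rest_frame(wave_obs, flux_obs):
    """Remove Galactic extinction from an observed spectrum and shift it to the rest frame.
    wave_obs: observed wavelengths in Angstrom (float array); flux_obs: flux densities."""
    a_lambda = extinction.ccm89(np.asarray(wave_obs, dtype=float), A_V, R_V)  # mag at each wavelength
    flux_corr = flux_obs * 10 ** (0.4 * a_lambda)    # undo the dimming
    wave_rest = np.asarray(wave_obs) / (1.0 + Z)     # rest-frame wavelength grid
    return wave_rest, flux_corr
```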
To account for the different observing facilities and conditions among our spectra, we decided to use the total flux of the [O III]λ5007 line as a reference to rescale the spectra. Since this line originates in the narrow-line region (NLR), which is significantly larger than the BLR and much farther away from the nucleus, its flux is expected to remain constant over several years (Peterson et al. 2004). Since the VLT spectra are those with the highest signal-to-noise ratio (S/N), we decided to use their median [O III] flux as a reference, excluding the noisiest spectra. The first (2015-10-17) and the fifth observing nights (2016-03-29), indeed, were likely affected by some passing clouds, so they were not used. The reference flux we adopted is 4.84×10 −15 erg s −1 cm −2 , based on fitting the line profile with a double Gaussian. The exact measurement procedures are described in Sect. 4.2 and in Sect. 6.
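A minimal sketch of the rescaling step, assuming the [O III]λ5007 flux of each spectrum has already been measured; variable names are illustrative.

```python
import numpy as np

F_OIII_REF = 4.84e-15   # reference [O III]λ5007 flux in erg s^-1 cm^-2 (median of the VLT spectra)

def rescale_to_oiii(flux, f_oiii_measured):
    """Rescale a spectrum so that its measured [O III]λ5007 flux matches the reference value."""
    return np.asarray(flux) * (F_OIII_REF / f_oiii_measured)
```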
Line profiles
To carry out an analysis of the line profiles, it is crucial to have a spectrum with a high S/N. Indeed, this parameter is crucial to examine in detail the wings of profiles, and to obtain an accurate decomposition (e.g., Järvelä et al. 2020). Therefore, we combined together all the spectra obtained by the VLT, reaching a S/N∼90 in the continuum around 5100Å. Although we may lose some information on the variability in the continuum and permitted lines over the six months of observations, this operation is necessary to ensure a good quality spectrum that can be studied in detail. The combined spectrum is shown in Fig. 1. All of the fitting procedures that follow have been performed with our own Python code (Harris et al. 2020). An analysis of the variability will instead be carried out in Sect. 6. In our line profile analysis, we focused on the most prominent lines: Hβ, [O III]λλ4959,5007, the [S II]λλ6716, 6731 doublet, and the Hα+[N II]λλ6548,6584 complex.
Fe II multiplets
The Hβ+[O III] region of NLS1s, between 4000 and 5500Å, is typically characterized by the presence of Fe II multiplets. These lines lie very close to the other emission features, and they are often blended with them. In particular, they can affect significantly the red wings of the [O III]λ5007 line and of the Hβ line. To reproduce the multiplets, we used the templates based on photoionization models provided by Marziani et al. (2009), adopting a FWHM for the Fe II of ∼1500 km s −1 (comparable to that of Hβ broad, see Sect. 4.3) and rescaling it to match the observed spectrum. The best fit is shown in Fig. 2. The typical errors produced by Fe II subtraction were already estimated by Cracco et al. (2016), and they are of the order of 10% of the flux. The template we used yields a flux of Fe II on the blue side of Hβ of (3.1±0.3)×10 −15 erg s −1 cm −2 . This value will be later used to calculate the R4570 parameter.
[O III] lines
After subtracting the Fe II multiplets, we modeled the [O III]λλ4959, 5007 lines. Usually these lines show two separate components. The first one is a narrow core component, which is associated with the gas of the NLR and typically has the same redshift as the host galaxy. The second is a broad wing usually interpreted as a sign of outflowing gas. Therefore, the two lines should be modeled with four Gaussian functions. To reduce the number of free parameters, we introduced some constraints on the λ4959 line. The flux ratio between the components was fixed to the theoretical value of 1/3 (Dimitrijević et al. 2007), the FWHM of the core and the wing were forced to be the same in both lines, and the relative shift between the two components was also fixed to be the same in both lines. The measurements were performed by using a Monte Carlo method. We repeated the fit one thousand times while adding every time to the line profiles a different Gaussian noise proportional to the noise in the continuum. The latter is also used to estimate the χ 2 ν . We finally used the median value for each parameter and its standard deviation. The result of the fit is shown in Fig. 3, where the χ 2 ν ∼ 6.7 is also reported. The total flux we obtained for the two lines is 6.45×10 −15 erg s −1 cm −2 , with one quarter of the flux in the λ4959 line and the rest in the λ5007 line. The fit indicates that the core component is unresolved. This result may not be real, but only a product of the limited spectral resolution. Therefore, no robust conclusion can be obtained on the [O III] line components.
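The constrained fit and the Monte Carlo error estimate can be sketched as follows. This is an illustrative re-implementation rather than the code actually used: the model ties the λ4959 components to 1/3 of the λ5007 flux, forces equal widths and equal wing shifts in the two lines, and the fit is repeated while adding Gaussian noise scaled to the continuum rms.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, cen, sig):
    return amp * np.exp(-0.5 * ((x - cen) / sig) ** 2)

def oiii_doublet(wave, a_core, c_core, s_core, a_wing, dv_wing, s_wing):
    """[O III]λλ4959,5007: core + wing for each line; the λ4959 components are tied to
    1/3 of the λ5007 flux, with the same widths and the same wing shift."""
    sep = 5006.84 - 4958.91                      # doublet separation in Angstrom
    model = gauss(wave, a_core, c_core, s_core)
    model += gauss(wave, a_wing, c_core + dv_wing, s_wing)
    model += gauss(wave, a_core / 3.0, c_core - sep, s_core)
    model += gauss(wave, a_wing / 3.0, c_core - sep + dv_wing, s_wing)
    return model

def fit_with_mc(wave, flux, noise_rms, p0, n_trials=1000):
    """Repeat the fit adding Gaussian noise scaled to the continuum rms; return the
    median and standard deviation of each parameter over the trials."""
    trials = []
    for _ in range(n_trials):
        perturbed = flux + np.random.normal(0.0, noise_rms, size=len(flux))
        popt, _ = curve_fit(oiii_doublet, wave, perturbed, p0=p0, maxfev=10000)
        trials.append(popt)
    trials = np.array(trials)
    return np.median(trials, axis=0), np.std(trials, axis=0)
```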
We also tried to reproduce the [O III] lines by modeling them with a skewed Gaussian function. The function can be expressed as f(λ) = A 0 exp[−(λ − λ 0 ) 2 /(2σ s 2 )] {1 + erf[α(λ − λ 0 )/(√2 σ s )]}, where λ 0 represents the central wavelength, σ s represents the width of the skewed Gaussian, α is the skewness parameter, and A 0 is a constant. Physically, this model seems to indicate that the ionized gas producing the [O III] lines is distributed in a bipolar outflow, with one of the outflows partially obscured by an intervening medium (i.e., the central engine). We fixed the parameters of the λ4959 line as described for the previous model. This new function provides a slightly worse representation of the line profile than the double Gaussian (χ 2 ν = 13.4, ∆χ 2 ν = −6.7). However, it is worth noting that the skewness parameter is α = −1.07 ± 0.07, suggesting a slight asymmetry on the blue side. This may indicate that the receding outflow could be partially obscured, as we would expect in a system observed not too far from its symmetry axis as a type 1 AGN.
Hβ line
The modeling of Hβ, shown in Fig. 4, was carried out after the subtraction of the Fe II multiplets. At first we attempted simple fits with a single Gaussian profile or a single Lorentzian profile. The single Gaussian profile provides the worst fit (χ 2 ν = 11.28), because it cannot correctly reproduce either the core or the wings of the line. A single Lorentzian provides a slight improvement (χ 2 ν = 9.06), but it shows that the line profile is not perfectly symmetric, since the Lorentzian function fits the red side of the line well, but fails to reproduce the blue side and the peak.
Since neither function is good enough to fully reproduce the whole profile, we first added to the Lorentzian function a single Gaussian to represent the narrow component of Hβ (LG model). Its flux was fixed at 1/10 of the total flux of the [O III]λ5007 line, that is 4.8×10 −15 erg s −1 cm −2 , since this ratio is often observed in type 1 AGN (Véron-Cetty et al. 2001 ). The [O I]λ6300 line would also be a good indicator but, as seen in Fig. 1, it is weaker than [O II] and therefore less apt for this. Nevertheless, it is worth noting that its FWHM∼830 km s −1 (corrected for instrumental resolution) is not very different from that of [O II]. The addition of the narrow component leads to a significant statistical improvement of the fit (χ 2 ν = 5.34). However, the blue side of Hβ is still not perfectly reproduced, and the same is true for the peak of the line. We therefore tried to reproduce the line using three Gaussians (3G model), two representing the broad component and one the narrow component. The latter had the same characteristics as before, while all the parameters of the two broad Gaussians were left free to vary. Statistically this produces the best fit for the line (χ 2 ν = 0.5), since it can reproduce the asymmetry of the profile by shifting the center of the very broad component. It is worth noting that such shift is consistent with the outflowing component that may be present in [O III]. Using this model, we can derive the fundamental parameters of Hβ, that is FWHM(Hβ) = 1617 ± 8 km s −1 , and total integrated flux F = (3.17 ± 0.01) × 10 −15 erg s −1 cm −2 .
Using the LG model, we find that the width of the Hβ broad component is FWHM(Hβ b ) = 1577 ± 10 km s −1 , while the 3G model provides FWHM(Hβ b ) = 1804 ± 62 km s −1 . For the latter, the second-order moment of the line, defined as σ 2 = ∫ (λ − λ 0 ) 2 F(λ) dλ / ∫ F(λ) dλ, (2) where λ 0 is the flux-weighted centroid and F(λ) the line profile, is σ(Hβ b ) = 1007 ± 20 km s −1 . The σ of the LG model cannot be estimated because the second-order moment of a Lorentzian function is, by construction, infinite. The total flux of the broad component of the line estimated by the LG model is (2.76 ± 0.01) × 10 −15 erg s −1 cm −2 , while the 3G model provides (2.67 ± 0.02) × 10 −15 erg s −1 cm −2 . Using the mean of these two values, we calculated R4570 = F (Fe II) / F (Hβ b ) = 1.14±0.14. According to the classification by Marziani et al. (2018b), PKS 2004-447 therefore belongs to population A3 of the quasar MS.
Hα region
We modeled the Hα profile by fixing it to the Hβ parameters. Specifically, we fixed the flux ratio of the different components in the LG and 3G models, the velocity shifts between the components, and the velocity associated with the FWHM of each component. This essentially leaves only one free parameter to be determined, that is the height of the narrow component. Finally, we added two more Gaussians to reproduce the [N II]λλ6548,6584 lines, which are blended with Hα due to its width and to the low resolution. Their FWHM is identical in both lines but free to vary, their flux ratio was fixed to the theoretical value of 2.95, and their positions were fixed to rest frame. Therefore, the [N II] lines add two more free parameters. This led to the results shown in Fig. 5. Neither fit is perfect, because the Hβ profile used as reference seems to have a more prominent red wing, which is not observed in Hα. Nevertheless, the adoption of the double Gaussian instead of the Lorentzian to reproduce the broad component leads to a very significant improvement in the χ 2 ν , with ∆χ 2 ν = 25.55 (χ 2 ν values are reported in Fig. 5). This seems to indicate that the double Gaussian is better at reproducing the broad component of both Hα and Hβ. The reason for the asymmetry observed in Hβ but not in Hα could be some residual Fe II in the former that the template could not account for, or a real physical difference due to different kinematics of the emitting gas. Only better data will allow us to disentangle these two possibilities.
After fitting Hα, we estimated the internal reddening due to dust by studying the Balmer decrement. We calculated the ratio R between the fluxes of the narrow components of Hα and Hβ, which is R ∼ 4.86. Assuming a theoretical ratio of 2.86, and following Cardelli et al. (1989), we found an internal extinction A(V) = 1.66 mag. This result is in agreement with what was found by Gallo et al. (2006), who derived A(V) = 1.9 ± 1.5 from the AAT spectrum. As they already pointed out, this extinction is significantly higher than what can be estimated from the X-ray spectra, which instead show negligible absorption, as confirmed by more recent observations (Berton et al. 2019a; Gokus et al. 2021). They also suggested that a possible explanation for this is a gas/dust ratio very different from that seen in the Milky Way, and the jet may play a role in this by transferring material from the nucleus into the NLR. However, given the significant uncertainty on the A(V) value, we decided not to apply an additional correction for internal extinction in the calculations that follow. Using the measured line ratios, we can also provide an estimate of the electron density and temperature. We used the temden task of IRAF, which is based on the 5-level atomic model described by De Robertis et al. (1987). The observed line ratios are reproduced if the electron density is n e = 7.6 × 10 2 cm −3 , and the electron temperature is T e = 3.1 × 10 4 K. While the density value we derived is rather typical for AGN (Congiu et al. 2017b), it is worth noting that, if some internal absorption is present as suggested above, the temperature value we obtained should be treated as a lower limit.
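The extinction estimate from the Balmer decrement can be reproduced with a few lines; the extinction-curve values at Hα and Hβ below are approximate numbers for a Cardelli et al. (1989) curve with R_V = 3.1 and are assumptions of this sketch.

```python
import numpy as np

def av_from_balmer_decrement(r_obs, r_int=2.86, r_v=3.1, k_hbeta=3.61, k_halpha=2.53):
    """Internal extinction A(V) from the narrow-line Balmer decrement R = F(Halpha)/F(Hbeta).
    k_hbeta and k_halpha are approximate extinction-curve values at Hbeta and Halpha for R_V = 3.1."""
    ebv = 2.5 / (k_hbeta - k_halpha) * np.log10(r_obs / r_int)
    return r_v * ebv

print(av_from_balmer_decrement(4.86))   # ~1.65 mag, consistent with the value quoted above
```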
Black hole mass
To estimate the black hole mass, we used different techniques. All of them are based on the assumption that the gas orbiting the black hole is virialized. In this case, the black hole mass can be calculated using the virial theorem, M BH = f R BLR v 2 / G, (4) where R BLR is the radius of the BLR, v is the rotational velocity of the gas, G is the gravitational constant, and finally f is the so-called scaling factor. The value of this factor is still largely unknown, therefore we will for now fix f = 1, and later discuss the results under different assumptions.
Dependence on line width and velocity
The two key parameters to determine are the radius of the BLR and the rotational velocity. To obtain the former, we used two relations calibrated in the literature. Both of them are based on the assumption of photoionization equilibrium. The accretion disk radiation is what causes the formation of the BLR lines and pushes away the clouds via radiation pressure. If the ionizing continuum coming from the disk is strong, the BLR will have a large radius. Indeed, there is a relation which connects the BLR radius measured via the reverberation mapping technique and the luminosity of the continuum at λ5100Å, log (R BLR /l.d.) = K + α log [λL λ (5100 Å)/10 44 erg s −1 ], (5) where the BLR radius is expressed in light days and the coefficients K and α were estimated by Bentz et al. (2013). However, as pointed out before, the ionizing continuum produced by the accretion disk is also responsible for the formation of the emission lines. Therefore, the intensity of the lines is also proportional to the accretion disk luminosity, and the BLR radius depends on the line luminosity. The relation between these quantities was derived by Greene et al. (2010), log (R BLR /l.d.) = (1.85±0.05) + (0.53±0.04) log [L(Hβ)/10 43 erg s −1 ], (6) where L(Hβ) is the integrated luminosity of the line. In principle, this second method should be less contaminated by the jet contribution, which is a non-negligible factor in the λ5100 luminosity.
To derive the gas velocity, we used the Hβ line decomposition previously described. The best results were obtained when the broad component was fitted by either two Gaussians or a Lorentzian function. For both of the profiles, we adopted the FWHM of the broad component as a proxy of the rotational velocity of the gas. In the case of the double Gaussian, we also estimated the second-order moment σ of the broad component, defined in 2. The use of σ instead of FWHM provides generally better results, especially in low-contrast lines, and it is less affected by inclination effects and BLR geometry (Peterson et al. 2004;Peterson 2011;Peterson & Dalla Bontà 2018). We did not use this method for the Lorentzian profile because the σ of a Lorentzian function is, by definition, infinite.
In total, we had six different combinations to use for the calculation of the virial product, whose results are shown in Table 2. The errors on the mass were estimated in two ways. The first source of error is the fitting procedure, and we evaluated it using a Monte Carlo technique. We calculated the virial product one thousand times by varying the Hβ profile, adding a Gaussian noise proportional to the root mean square measured in the λ5100 continuum. This source of error is relatively small, typically around 0.03 dex, likely because of the high S/N of our spectrum. Another source of error is the uncertainty on the coefficients of equations 5 and 6. We estimated the BLR radius one thousand times in both ways by applying to the coefficients a Gaussian noise proportional to their errors, and used these different values in the final calculation. Finally, we applied the normal propagation of errors to the virial theorem of equation 4. We calculated the weighted average of all the estimates obtained in Table 2, and the resulting product is (6.0±0.4)×10 6 M . In conclusion, Table 2 shows that different methods can lead to statistically significant differences in the black hole mass calculation. Given the high S/N of the spectrum we used for the fitting procedure, the main source of error is not the fit itself, but the propagation of the errors on all the uncertain quantities, such as the BLR radius.
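A sketch of the Monte Carlo propagation for one of the combinations (Hβ-luminosity-based radius and FWHM-based velocity) is shown below. The input luminosity is illustrative and the output is not meant to reproduce Table 2 exactly.

```python
import numpy as np

G = 6.674e-8           # cgs gravitational constant
M_SUN = 1.989e33       # g
LIGHT_DAY = 2.59e15    # cm

def virial_product_mc(log_l_hbeta_43, fwhm_kms, fwhm_err_kms, n_trials=1000):
    """Virial product R_BLR v^2 / G (f = 1, in solar masses) propagating the uncertainties
    on the Greene et al. (2010) radius-luminosity coefficients via Monte Carlo."""
    a = np.random.normal(1.85, 0.05, n_trials)
    b = np.random.normal(0.53, 0.04, n_trials)
    r_cm = 10 ** (a + b * log_l_hbeta_43) * LIGHT_DAY     # BLR radius in cm
    v_cms = np.random.normal(fwhm_kms, fwhm_err_kms, n_trials) * 1e5
    vp = r_cm * v_cms ** 2 / G / M_SUN
    return np.median(vp), np.std(vp)

# Illustrative call for L(Hbeta) ~ 5e41 erg/s, i.e. log[L(Hbeta)/1e43] ~ -1.3:
print(virial_product_mc(-1.3, 1804.0, 62.0))
```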
Dependence on the f factor
To this point we neglected the scaling factor f which appears in Equation 4, which is another major source of uncertainty in the black hole mass estimate. The scaling factor accounts for the difference between the mass obtained with the product R BLR v 2 /G and the actual black hole mass. This difference strongly depends on the geometry and inclination of the BLR. If the BLR has a flattened geometry, it is clear that when observed pole-on there would be no velocity component parallel to the line of sight, thus causing a severe underestimate of the gas rotational velocity and, as a consequence, of the black hole mass (e.g., Decarli et al. 2008). In case of a more sphere-like geometry this effect would be less evident. Our knowledge of the structure of the BLR is still relatively limited. While it is clear that Keplerian motion of the clouds is present (Peterson & Wandel 1999; Gravity Collaboration et al. 2018), there may be some additional components, such as turbulent vertical motion (Kollatschny & Zetzl 2011, 2013b). In this case, the effect of inclination may not be so prominent in NLS1s and population A objects (Vietri et al. 2018; Berton et al. 2020).
Several estimates of the f factor, both f σ and f FWHM , exist in the literature, and they are mostly based on reverberation mapping observations. Typical values span between ∼0.8 and 5.0 (Mandal et al. 2021). In the case of PKS 2004-447, this means that its black hole mass, using the weighted average estimated above, ranges between (4.8-30.0)×10 6 M , within the typical range of NLS1s (Peterson 2011). A weak dependence on the FWHM(Hβ), and also on the ratio between the FWHM of the line and its dispersion, seems to be present. Collin et al. (2006) calculated different values of f depending on how the rotational velocity is estimated (FWHM, σ), on the ratio between FWHM and σ, and on the FWHM(Hβ) itself. In the case of PKS 2004-447, we decided to use their values, that is f σ = 3.93 for σ-based measurements, and f FWHM = 2.12 for FWHM-based measurements. By applying these values to the virial products calculated in Table 2, and taking the weighted average, we obtained a mass value of (1.5±0.2)×10 7 M . In the following, we will use this value for our calculations.
Lines and continuum
Our goal is to study the long-term variability of the PKS 2004-447 spectra. As previously mentioned, we accounted for the variability induced by different observing conditions by rescaling our spectra using the [O III]λ5007 line as a reference. All the spectra were rescaled to the value derived in Sect. 4.2, that is 4.84×10 −15 erg s −1 cm −2 . The two parameters we measured are the continuum flux at 5100Å, and the Hβ total flux. The errors on the line fluxes were estimated as previously described, while the uncertainty on the continuum at 1 σ confidence level is provided by the standard deviation of a spectral region dominated by continuum emission, with no (or very little) contamination by strong emission lines, between 5075 and 5125 Å. The results are shown in Fig. 6. The most apparent result is the major flux increase observed in the continuum on 2019-10-31, which is remarkably followed by the highest measured Hβ flux one month later (2019-11-29). This is due to the flaring activity of PKS 2004-447 measured in those days. The flare was detected in γ-rays by the Fermi satellite on 2019-10-25 (Gokus 2019) and by AGILE until 2019-10-27 (Verrecchia et al. 2019), with enhanced activity measured at other frequencies as well (Berton et al. 2019b; Blaufuss 2019). Even if PKS 2004-447 has been included in the γ-ray catalogs since its very first detection soon after the launch of Fermi (Abdo et al. 2009c), this was its first recorded major flare (Gokus 2019). Indeed, our long-term spectroscopy shows that the continuum flux has been remarkably constant during the last forty years. The values we measured at each epoch are reported in Table 3. Beside the flare, the continuum varied between minimum and maximum values of (5.3±2.4) and (9.1±0.6)×10 −17 erg s −1 cm −2 Å −1 , recorded on 2016-03-29 and 2016-03-18, respectively. The Hβ flux is instead more variable. A local maximum is seen in the 1984-05-02 spectrum. The Hβ flux also seems to be increased in the three spectra following the maximum continuum value measured on 2016-03-18. Furthermore, we investigated the presence of a correlation between the continuum flux and the equivalent width of Hβ (which we define as positive in emission lines). The result is shown in Fig. 7. A strong anticorrelation was found, with a Pearson coefficient of -0.81 and a p-value of 0.008. This effect is well known to depend on the jet activity (the higher the jet flux, the lower the equivalent width), and has been observed in other sources (Corbett et al. 2000; Foschini 2012a). In particular, it is interesting to compare what we observed with what was seen during a flare of the FSRQ 3C 345, in which the emission lines reacted to a significant flux increase in the continuum (Berton et al. 2018b). In that case, the source was deviating from the best-fit relation between the continuum flux and the equivalent width, and that was interpreted as a sign of prominent disk contribution to the typically jet-dominated continuum.
In this case, we do not observe any significant deviation during the flare. Therefore, it seems that the jet emission dominates over the disk at all epochs. The flux of Hβ may be responding to the continuum variation. The two spectra showing the maximum continuum flux and Hβ flux were observed 29 days apart. As reported in Table 4, the BLR size is ∼15.2 light days. Assuming that this distance is akin to the light crossing time from the central engine to the BLR clouds, it is reasonable to assume that the enhanced Hβ flux we observe in the last spectrum is the echo of the enhanced continuum produced in the nucleus during the flare. However, while the continuum increased by a factor of ∼4, the line flux did so only by a factor of ∼1.3. Due to the sparse sampling of our observations, neither of these measurements probably reflects the behavior of the source during the flare, but they can still provide some insights. Cracco et al. (2016), in a large sample of NLS1s, found that the continuum and Hβ luminosity are related as L(Hβ) ∝ λL(5100) 1.203 . If this relation applies to our source as well, the ionizing continuum should have increased only by a factor of 1.24 to account for the observed variation in Hβ, much lower than what we actually observe. This may indicate that the vast majority of the flux increase was due to an enhanced activity of the relativistic jet, while the accretion disk, which produces most of the ionizing photons and later affects Hβ, played a significantly smaller, although non-negligible, role. This may reflect what is seen in the X-ray spectrum of this source, which is dominated by a power-law component coming from the relativistic jet, while the thermal Comptonization component coming from the disk corona accounts for only 2% of the total flux (Gallo et al. 2006; Kreikenbohm et al. 2016; Berton et al. 2019a). It is worth mentioning that Gokus et al. (2021) did not detect any component associated with the accretion disk in their analysis of the PKS 2004-447 X-ray spectrum observed after the flare, and this is in good agreement with our conclusions. If the jet flux is strongly enhanced, while the corona contribution does not change significantly, the latter will likely become too weak to be detected. Conversely, as the soft X-ray excess is associated with a high X-ray flux, hence with high jet activity, it is likely to be the high-energy tail of the synchrotron emission, as happens for low-frequency peaked BL Lac objects (Foschini et al. 2009; Foschini 2020).
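The factor quoted above follows directly from inverting the Cracco et al. (2016) relation:

```python
# L(Hbeta) ∝ λL(5100)^1.203 (Cracco et al. 2016): a factor ~1.3 increase in the line
# flux requires the ionizing continuum to grow only by
factor_continuum = 1.3 ** (1.0 / 1.203)
print(factor_continuum)   # ~1.24, far below the observed continuum increase of ~4
```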
Eddington ratio
The Eddington ratio is defined as the ratio L bol /L Edd between the bolometric luminosity L bol and the Eddington luminosity L Edd . This parameter is often believed to be the driver of the quasar MS. Sources with high Eddington ratio show the most prominent Fe II multiplets and the narrowest Hβ, while low Eddington sources are characterized by large FWHM(Hβ) and little or no Fe II. In NLS1s, the Eddington ratio is typically between 0.1 and 1 (Boroson & Green 1992; Williams et al. 2002, 2004; Grupe et al. 2010; Xu et al. 2012), but even super-Eddington accretion can be observed in a handful of sources (Chen et al. 2018). The bolometric luminosity is usually roughly estimated by adopting a simple linear relation that connects it to the continuum luminosity at 5100Å, L bol = 9λL λ (5100 Å) (Kaspi et al. 2000). However, in the light of what was shown in Sect. 6.1, this approach can be misleading in jetted sources, especially in those like PKS 2004-447 where the jet component dominates the continuum emission. An alternative way is using instead the emission line luminosity as a proxy for the disk luminosity. As shown in Equation 6, the Hβ luminosity has a linear relation in the log-log plane with the BLR radius. Assuming a photoionization regime, the latter directly depends on the disk luminosity, as R BLR /10 17 cm = (L disk /10 45 erg s −1 )^1/2 , (8) (Koratkar & Gaskell 1991; Ghisellini & Tavecchio 2009). The median disk luminosity we obtain from this relation is 2.0 × 10 44 erg s −1 . Under the reasonable hypothesis that the disk luminosity is comparable to the bolometric luminosity without the jet emission, we can use it to estimate the Eddington ratio. The results of both techniques are shown in Fig. 8. On the y-axis a logarithmic scale was adopted to enhance the visibility of the variations. Regardless of the method we used, the Eddington ratio lies within the typical range of NLS1s, with median values of 0.29 and 0.09 for the continuum- and Hβ-based estimates, respectively. While the continuum-based estimate varies from 0.22 on 2015-10-22 to 1.23 during the 2019 flare, the Hβ-based estimate has a more limited range, going from 0.07 on 2015-10-17 to 0.11 in the last spectrum. It is worth noting that the estimate based on the Hβ luminosity allows us to measure the Eddington ratio as it was 15.2 days (i.e., the BLR light crossing time) before the observation date. Considering this, both estimates agree that the Eddington ratio increased during the flare. This may suggest a possible connection between the accretion disk and the jet, as seen in other AGN flares (e.g., Grandi & Palumbo 2004).
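A back-of-the-envelope version of the Hβ-based estimate, using the adopted black hole mass and the BLR radius from Table 4; the Eddington luminosity coefficient is the usual value for ionized hydrogen and the exact numbers are only indicative.

```python
M_BH = 1.5e7                       # adopted black hole mass, in solar masses
L_EDD = 1.26e38 * M_BH             # Eddington luminosity in erg/s (~1.9e45)

r_blr_cm = 15.2 * 2.59e15          # BLR radius from Table 4, converted to cm
l_disk = 1e45 * (r_blr_cm / 1e17) ** 2   # disk luminosity from the R_BLR relation

print(l_disk, l_disk / L_EDD)      # ~1.5e44 erg/s and an Eddington ratio of ~0.08
```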
The 2019 flare
By using the lines and continuum luminosities of Fig. 1, we tried to derive the scales involved in the inner structure of PKS 2004-447. The results are reported in Table 4. The BLR radius was derived using Equation 6, and the disk luminosity following Equation 8. For the dust sublimation radius and the outer radius of the torus we used the scaling relations between these quantities and the disk luminosity (Elitzur 2008), while to estimate the maximum extension of the NLR we used its correlation with the [O III] luminosity (Fischer et al. 2018). All the values we found are comparable to those derived in the other γ-ray emitting NLS1 1H 0323+342 (Foschini et al. 2019), suggesting that their inner structure is rather similar.
These values can give us an idea of the region where the γ-ray photons were produced during the 2019 flare. It is known that the minimum Doppler factor needed to account for the observed variability of a source is δ ≥ (1 + z) r / (c τ), where r is the radius of the emitting region, and τ the observed time scale. Using the 6-hour binned γ-ray light curve derived by Gokus et al. (2021), the variability between adjacent points indicates that the doubling time is of the order of ∼ 2 − 4 hours, consistent with the values observed at hard X-rays by Berton et al. (2019a). If we assume that the jet structure is self-similar, and that its semi-opening angle is 0.1 radians, the size of the emitting region depends on the distance of the dissipation region from the central source. Therefore, if the production of γ-rays is located relatively close to the jet base, at ∼ 10 3 r g , the minimum Doppler factor is δ ≥ 0.6 − 1.3, which is a rather weak constraint that can be achieved also with a large viewing angle (Schulz et al. 2016 estimated θ < 50°). This is consistent with what is typically observed in high-energy emitting radio galaxies (e.g. NGC 6251, θ < 47°, δ ∼ 3.2, or Cen A, θ < 80°, δ ∼ 1.2, see Chiaberge et al. 2001, 2003; Foschini et al. 2005). Furthermore, the γ-ray spectrum they measured is rather steep. If the production of γ-rays had occurred close to the black hole, we would indeed expect to see a rather steep spectrum, because of the absorption of the most energetic photons by the BLR gas (e.g., Romano et al. 2020). Gokus et al. (2021) argued that the dissipation region is instead close to the molecular torus, but to calculate its radius they assumed a disk luminosity which is one order of magnitude lower than what we derived from Hβ. Using our estimate for the torus inner radius, that is the dust sublimation radius based on the observed disk luminosity, the minimum Doppler factor required would be δ > 177 − 354, which is clearly unrealistic. Therefore, we believe that a dissipation region closer to the central source is, in this case, more likely.
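The numbers above can be checked with a short calculation; the gravitational radius follows from the adopted black hole mass, and the emitting-region size from the assumed 0.1 rad semi-opening angle.

```python
G, C, M_SUN = 6.674e-8, 2.998e10, 1.989e33   # cgs units
Z = 0.240
r_g = G * (1.5e7 * M_SUN) / C ** 2           # gravitational radius, ~2.2e12 cm

def min_doppler(dist_rg, tau_hours, half_opening=0.1):
    """delta >= (1+z) r / (c tau) for an emitting region of radius r = half_opening * distance,
    located dist_rg gravitational radii from the black hole."""
    r = half_opening * dist_rg * r_g
    return (1.0 + Z) * r / (C * tau_hours * 3600.0)

print(min_doppler(1e3, 4.0), min_doppler(1e3, 2.0))   # ~0.6 and ~1.3, as quoted in the text
```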
Optical classification
Our new, high-quality spectral data can finally confirm with certainty the classification of PKS 2004-447 as an NLS1, as its spectrum respects all the fundamental criteria. As calculated in 4.3, FWHM(Hβ) = 1617 km s −1 , lower than the 2000 km s −1 threshold. The ratio R5007 = F([O III])/F(Hβ) = 1.53, also complies with the R5007<3 limit. Finally, Fe II multiplets are definitely present and rather strong. Specifically, the value of R4570 = 1.14 ± 0.14 we found is above the median value of 0.49 found for NLS1s (Cracco et al. 2016), but well in agreement with what is often observed in these objects and, in general, in population A sources. We remark here that the 2000 km s −1 threshold is an artificially imposed value that does not reflect any physical difference between sources above and below it. As long as they can be classified as type 1 AGN, all sources within population A roughly share the same physical properties (Marziani et al. 2018a). The Hβ profile of PKS 2004-447, when reproduced with a single function, is better described by a Lorentzian profile. However, the best fit is obtained with three Gaussian functions, one representing the narrow component, and two reproducing the broad component. This result supports the view that the BLR is not homogeneous, but rather stratified depending on its chemical composition (Peterson & Wandel 1999;Kovačević et al. 2010). This double Gaussian approach to model the BLR can also reproduce the Hα profile better than a Lorentzian profile, possibly indicating that the structure of the region where the Balmer lines are produced is similar. As we already pointed out, Hβ may have a slight redward asymmetry that is not seen in Hα, but only better data will allow us to determine whether this difference is real or just a residual left after the Fe II multiplets subtraction.
Another noteworthy fact that can be derived from the optical spectrum is the remarkable width of the forbidden lines. All of them have FWHM of the order of 900 km s −1 after correcting for instrumental resolution, which is significantly broader than what is typically seen in AGN (Peterson 1997). Assuming that, as hypothesized by Nelson & Whittle (1996), there is a correlation between the forbidden lines width and the stellar velocity dispersion σ * , we can derive σ * ∼330 km s −1 , using its scaling relation with the FWHM([O II]) derived by Greene & Ho (2005). Such a value would be high even for a large elliptical galaxy (Forbes & Ponman 1999), but the host galaxy of PKS 2004-447 is a spiral galaxy with pseudobulge (Kotilainen et al. 2016). This suggests that, at least in our source, the forbidden lines width does not obviously correlate with the stellar velocity dispersion. A possible alternative origin for the observed large width may instead be interaction through shocks between the NLR gas and the relativistic jet. Shocks can indeed produce turbulent motion in the gas, which in turn causes the lines to become broader (e.g., Whittle 1985;Nesvadba et al. 2008;Morganti 2017). The presence of jet/NLR interaction may also explain the rather high temperature we have derived in Sect. 4.5, which is above the typical value observed in AGN (Osterbrock & Ferland 2006). Further investigation, especially by means of integral-field spectroscopy, can clarify this aspect.
High mass or low mass?
We have verified that the black hole mass of PKS 2004-447 derived from the optical spectrum is well within the typical range of NLS1s (Peterson 2011). However, it is interesting to compare our value with other results in the literature. In particular, we tried to verify how the f factor changes if we use mass indicators which are independent of the inclination. For instance, Kotilainen et al. (2016) used the bulge infrared magnitude to estimate a black hole mass of 9×10 7 M , higher than the value we derived. If we assume that this host-based value is the correct one, and using the weighted average of all our σ- and FWHM-based measurements, we get f σ = 24.7 and f FWHM = 8.5. Both these values are higher than the average correction needed for type 1 AGN, and possibly suggest that inclination and BLR geometry do play a significant role in this source.
Another value that can be found in the literature is derived from spectropolarimetric observations of PKS 2004-447 (Baldi et al. 2016). These observations are based on the concept that spectropolarimetry can offer a periscopic view of the nucleus, that in this case should be seen edge-on in polarized light. The mass they obtained is much higher than both our estimate and that derived from the host galaxy, 6×10 8 M . In this case, the scaling factors should be f σ = 165.0 and f FWHM = 56.7, which are both extremely high. It is however worth noting that similar observations of other sources revealed that PKS 2004-447 seems to be a unique case among type 1 AGN (Capetti et al. 2021). Indeed, even among their sources, PKS 2004-447 is the only one which would require an unrealistically high value for the scaling factor. This could possibly indicate that the polarization results obtained for PKS 2004-447 are not fully reliable, likely because of the insufficient statistics. More observing time is required to strengthen or to reject these hypotheses.
In fact, there are multiple powerful physics-based arguments against such high mass values in the class of jetted NLS1s. First, as argued by Komossa et al. (2006), if NLS1s as a class had the highest inclination correction factors and therefore much higher black hole masses, then they should show a much higher fraction of beamed systems than comparison samples of broad-line Seyfert 1s (BLS1s). However, the opposite is the case: beaming and radio loudness are systematically less frequent in NLS1s as a class than in BLS1s. A second argument was given by Foschini (2017). According to the theory of the blazar sequence (Ghisellini et al. 1998; Ghisellini 2016), both classical blazar classes, FSRQs and BL Lacs, have a double-humped spectral energy distribution (SED). The first hump originates from synchrotron radiation, while the second one from inverse Compton. Essentially, the blazar sequence can be interpreted in terms of electron cooling and nuclear environments. FSRQs have a photon- and gas-rich environment, and the electron cooling process occurs efficiently via inverse Compton, in particular external Compton, where the seed photons come from the accretion disk, the torus, or the ISM. BL Lacs instead are the opposite. Their environment is photon-starved, and the only cooling mechanism is synchrotron self-Compton. The SED of γ-NLS1s has a very similar shape to that of FSRQs, but the total emitted power is much lower (cf. Fig. 1 in Foschini et al. 2010). This is the case also for PKS 2004-447 (see Paliya et al. 2013, their Fig. 5, left panel). This is in agreement with the findings of Foschini et al. (2015), who calculated the jet power of NLS1s, FSRQs, and BL Lacs. They found that jetted NLS1s have systematically lower jet power than FSRQs, but comparable to BL Lacs. This can be interpreted in terms of a mass difference since, as predicted theoretically by Heinz & Sunyaev (2003), the jet power scales non-linearly with the black hole mass. The physics behind jetted NLS1s and FSRQs is exactly the same, hence the similar SED, but NLS1s are scaled-down versions of FSRQs because of their lower black hole mass. This result is confirmed by the observations of their radio luminosity function, which clearly showed how jetted NLS1s are the low-luminosity tail of FSRQs (Berton et al. 2016a). Vice versa, in BL Lacs the mass is similar to that of FSRQs, but the cooling mechanism is different because of the different environment, and this translates into the lower jet power and the different SED. It is worth noting that, if NLS1s had high-mass black holes, they would have a low jet power in a high-density environment. This would indicate that the relativistic electrons of their jets are not cooling despite the photon-rich environment, and this is clearly unphysical (Foschini 2017). Therefore, the only reasonable explanation for the different jet power in FSRQs and NLS1s is the black hole mass, and NLS1s do not fit in the classical blazar sequence because their behavior is not regulated only by electron cooling. The blazar sequence is missing an ingredient, that is the black hole mass or, if NLS1s are the progenitors of FSRQs, the evolution of AGN (Berton et al. 2017).
A rare γ-ray NLS1/CSS hybrid
We confirmed that the optical spectrum of PKS 2004-447 is definitely that of a rather typical NLS1. The spectral energy distribution of PKS 2004-447, along with its γ-ray emission, confirms its blazar-like nature (Foschini et al. 2009; Abdo et al. 2009c; Paliya et al. 2013). The one-sided morphology of the relativistic jet and the high brightness temperature also show that relativistic beaming in this source is not negligible. Its radio properties, however, clearly indicate that the source is also a CSS of relatively low luminosity, with a turnover below 1 GHz. This result is not new as, historically, PKS 2004-447 was the first source identified as a potential link between NLS1s and CSS (Oshlack et al. 2001; Gallo et al. 2006). The detection of γ-ray emission from a CSS is instead quite rare, as only five objects with this classification have been included in the 4FGL catalog, along with six more classified as young radio galaxies (Abdollahi et al. 2020).
In the case of PKS 2004-447, possible emission from a counter-jet has been found in VLBA observations at 1.5 GHz . If this is the case, the jet of PKS 2004-447 may have a non-negligible inclination with respect to the line of sight (θ < 50 • ). The minimum Doppler factor we have estimated from the observed variability at X-and γ rays (Berton et al. 2019b;Gokus et al. 2021) (see Sect. 6.3), requires that the dissipation occurs close to the black hole. This is also consistent with what is observed in some γ-ray emitting radio galaxies (Foschini et al. 2005).
An alternative option is that the source of γ-ray emission is not located close to the black hole, but farther away from it. In their VLBA map at 1.5 GHz, Schulz et al. (2016) found signs of coherent emission coming from a hotspot ∼45 mas away from the nucleus (projected size ∼170 pc), which corresponds to a change in the jet position angle. This potentially indicates ongoing interaction between the relativistic jet plasma and the ISM. However, according to the jet model by Blandford & Königl (1979), to have a dominant contribution from curvature, ultrarelativistic speed and small viewing angle are required. This is not verified in the present case, as clearly shown by radio maps.
Another interesting possibility is that the inclination of the relativistic jet in the core is different from that at larger scales. If the jet axis has changed its position with time, it is possible that in the past it had a large inclination with respect to the line of sight, while nowadays it is closely aligned with it. An episode of jet realignment from radio galaxy to blazar has already been observed directly (Hernández-García et al. 2017), and precession of the jet axis was also invoked to explain the properties of the jetted NLS1 Mrk 783 (Congiu et al. 2017a, 2020). PKS 2004-447 is not the only example of a CSS/NLS1 with γ-ray emission, as the strong radio source 3C 286 can also be classified as an NLS1 (Berton et al. 2017; Liao & Gu 2020; Yao & Komossa 2021), and it is included in the Fermi catalog (Abdollahi et al. 2020). For 3C 286, radio observations were able to obtain a rather high inclination value of ∼ 48°, and alternative mechanisms to produce its γ-ray emission, such as jet/ISM interaction, have been suggested. However, a number of jetted NLS1s do actually show a compact morphology at kpc scales and a steep radio spectrum (Berton et al. 2018a). Most of them, however, show lower luminosities when compared to typical CSS, and they mostly belong to the class of low-luminosity compact sources (Kunert-Bajraszewska et al. 2010). Given all these similarities between NLS1s and CSS, it has been suggested that some CSS, especially those of relatively low luminosity and with high-excitation emission lines, may represent the parent population of γ-ray NLS1s (Berton et al. 2016a). In other words, some of them may be the exact same kind of source seen at different angles. This tentative unification has not been fully confirmed yet, but the existence of hybrid sources such as PKS 2004-447 may be an indication that this idea is at least in part correct (see also Sec. 7.1.3 of O'Dea & Saikia 2021).
It is also worth noting that at least a fraction of luminous CSS are also characterised by the moderate to high Eddington ratios typical of Population A sources (Wu 2009). For instance, the high R4570 and other properties of 3C 57 suggest that this luminous CSS source is optically young (or, more likely, rejuvenated) in addition to being radio young, as CSS have been suggested to be (e.g., Fanti et al. 2011). Considering that luminous CSS have relatively large black hole masses, it seems at least conceivable that they host already massive black holes and are rather evolved sources, rejuvenated by a new accretion episode yielding a moderate-to-high Eddington ratio. Conversely, low-luminosity CSS may be their younger counterparts at their first accretion episodes, and could be directly linked to NLS1s.
Summary
In this paper we analyzed a set of optical spectra of the γ-ray emitting source PKS 2004-447. Thanks to our new data, we can confirm its classification as an NLS1 galaxy, belonging to population A3 of the quasar main sequence. From a detailed spectral analysis, we found that the source probably has some absorption along the line of sight, although this result is not consistent with past X-ray observations. The temperature and density values we derived, along with the width of the narrow lines, may suggest that some interaction between the NLR gas and the relativistic jet is present. The black hole mass we derived from the optical spectrum using the Hβ line is (1.5 ± 0.2) × 10 7 M . Despite the uncertainties related to this estimate, several arguments allow us to rule out that PKS 2004-447, and NLS1s in general, are powered by black holes with very large masses.
The long-term variability of the spectra indicates that the source continuum emission is rather stable, with the noteworthy exception of the flare, seen also in γ- and X-rays, of October 2019. We do not know how and if the Hβ line responded to the flare, since the reverberation time is shorter than the gap between our observations, but we can speculate that most of the continuum variation is produced by the relativistic jet and not by the accretion disk. The Eddington ratio is within the typical range of NLS1s (0.09-0.29, depending on how it is measured), and we found a significant increase during the 2019 flare.
We finally discussed the role of PKS 2004-447 in the unification between NLS1s and CSS. The radio properties of our source clearly suggest that it belongs to both classes, and that its relativistic jet may have a relatively high inclination with respect to the line of sight, given the possible detection of its counterjet in past radio observations. PKS 2004-447 is then the second hybrid CSS/NLS1 with γ-ray emission, after 3C 286. Continuous multiwavelength monitoring of this object, particularly by means of optical spectroscopy, may help us to finally solve the puzzle of NLS1s and find their link with CSS and other young radio galaxies. | 2021-06-24T01:16:20.482Z | 2021-06-23T00:00:00.000 | {
"year": 2021,
"sha1": "05913d336b14631bf96c277cbc4a112b5cbd788b",
"oa_license": null,
"oa_url": "https://research.aalto.fi/files/74604994/Hunting_for_the_nature_of_the_enigmatic_narrow_line_Seyfert_1_galaxy_PKS_2004_447.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "05913d336b14631bf96c277cbc4a112b5cbd788b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
235025834 | pes2o/s2orc | v3-fos-license | Using Expert Knowledge to Generate Data for Broadband Line Prognostics Under Limited Failure Data Availability (cid:63)
: Due to exposure to the driving rain, water ingress can cause faults in electrical joints, junctions and distribution points in broadband lines. Over time, faulting behaviour may grow in magnitude eroding the electrical capability of these lines causing degradation of broadband service. Developing effective data-driven models for broadband line prognostics remains a challenge due to the limited failure data availability in the telecommunications industry. In order to address this problem, we present a technique for generating failure data that realistically reflect the behaviour of degrading broadband lines. To this end, we use the conditional generative adversarial network and more importantly, we control and direct the failure data generation process using expert knowledge on the water ingress failure cause. The proposed technique is evaluated using a real-world case study involving the time-to-failure prediction of two types of broadband lines in a south-west city in England. The prognostics performance is measured using the Kappa statistic and F-score. Benchmark performance is obtained using Random Oversampling, Synthetic Minority Oversampling and Adaptive Synthesis which can be used to oversample failure data by duplicating existing failure data or randomly generating data. Among these techniques, Random Oversampling achieved the best prognostics performance. It is shown that the proposed technique outperforms Random Oversampling technique by a large margin. More specifically, it increased the prognostics performance by 33% (Kappa statistic) and 25% (F-score) for Asymmetric Digital Subscriber Lines, and 17% (Kappa statistic) and 13% (F-score) for Very High Bitrate Digital Subscriber Lines compared to the Random Oversampling technique.
INTRODUCTION
Broadband lines provide a signalling method for transporting multiple signals through coaxial cables, twisted pair and optical fibre transmission mediums. One of their main applications in the telecommunications industry is high-speed internet (Sundaresan et al., 2011). Although the number of consumers who adopt broadband internet delivered completely over optical fibre is increasing, many broadband lines continue to be served in part by metallic paths (i.e. paired wires). Paired wires serving each consumer pass through a variety of underground and overhead electrical junctions and joints, and typically end at distribution points (DPs) located at the top of telegraph poles. DPs subsequently connect to the consumer premises via a drop wire to provide broadband service.
Water ingress is a dominant cause of a variety of faults (e.g. corrosion and electrical shorts) in overhead electrical junctions, joints and DPs in broadband lines due to their exposure to the driving rain (Tencer and Moss, 2002). As this can be a gradual process, faulting behaviour may grow in magnitude, eroding the electrical capability of a broadband line and causing degradation of broadband service (Tencer and Moss, 2002). This degradation may result in the consumer experiencing dropping connection, poor speed or the complete failure of broadband service (i.e. broadband line failure).
The objective of this paper is to present a technique for predicting the time-to-failure (TTF) of telecommunications broadband lines under the conditions of limited failure data availability. Predicting the TTF of broadband lines with minimal uncertainty would enable telecommunications service providers to identify line degradation and failure before consumers experience them. Consequently, the appropriate proactive interventions can be undertaken to prevent unplanned downtime of broadband service, and hence reduce consumer dissatisfaction whilst reducing maintenance costs.
Data-driven prognostics have become popular for electronic equipment prognostics since they can estimate prognostics model parameters from degradation patterns contained within condition monitoring and/or event data relating to past failures (Wang et al., 2020). However, the long-lasting problem with data-driven prognostics is that they rely on large amounts of historical failure data to estimate model parameters effectively (Wang et al., 2020). Nevertheless, historical failure data are limited due to two major reasons: (i) over-protective maintenance and replacement regimes; (ii) highly reliable equipment (Wang et al., 2020). This causes failures to be rare, and leads to the problem of limited failure data availability for data-driven prognostics of broadband lines, which causes prognostics predictions to be associated with high uncertainty (Louzada et al., 2019). Thus, telecommunications service providers are affected by unsatisfied consumers due to unplanned downtime of broadband service and additional costs due to under maintenance, over maintenance and false alarms.
In Ranasinghe et al. (2019), we presented a methodology for generating failure data that realistically reflect the behaviour of degrading equipment (i.e. real-valued failure data) for prognostics under the conditions of limited failure data availability. It allows training datasets used for data-driven prognostics to be augmented so that an increased number of failure data samples is available for prognostics modelling. The methodology generates real-valued failure data by controlling and directing the failure data generation process using auxiliary information pertaining to the failure mode that needs predicting. More specifically, the noise being added to the newly generated failure data samples is conditioned on auxiliary information to prevent different modes of data being generated. Auxiliary information is additional information that adds value to the understanding of the failure dynamics of the equipment of interest (e.g. equipment similarity information, expert knowledge on failure causes and failure modes, and quality of equipment use). However, the current version of the methodology only provides a way to utilise equipment similarity information as auxiliary information. Hence, the use of other kinds of auxiliary information to generate real-valued failure data remains to be exploited.
In this paper, we present a technique for predicting the TTF of telecommunications broadband lines under the conditions of limited failure data availability. To this end, we extend the aforementioned methodology so that expert knowledge on broadband line failure causes (e.g. water ingress into electrical junctions, joints and DPs) can be used to generate real-valued broadband line failure data. Whilst we discuss all the aspects of the proposed technique (i.e. broadband line data preprocessing, the approach to the TTF prediction of broadband lines and real-valued broadband line failure data generation), the key contributions of this paper are as follows: (i) empirical results obtained using existing oversampling techniques for broadband line prognostics under the conditions of limited failure data availability; (ii) an extension to the real-valued failure data generation methodology which allows utilising expert knowledge on broadband line failure causes to control and direct the failure data generation process; (iii) empirical results which show that the proposed technique increases prognostics performance by a large margin compared to existing oversampling techniques.
Following the problem formulation presented in our previous paper (see Ranasinghe et al. (2019)), this paper commences by introducing the historical datasets used in this work for prognostics modelling and the process followed for data preprocessing (Sec. 2). The approach to the TTF prediction of broadband lines and the benchmark prognostics performance obtained using existing oversampling techniques are presented in Sec. 3. The extension to the real-valued failure data generation methodology is presented in Sec. 4. The results which show the improved prognostics performance are discussed in Sec. 5. The paper is concluded in Sec. 6.
DATA PREPROCESSING
Broadband lines provide internet using two line modes: Asymmetric Digital Subscriber Line (ADSL) and Very High Bitrate Digital Subscriber Line (VDSL). Their key difference is the download and upload speeds of internet service (Sundaresan et al., 2011). ADSL provides a maximum of 8 and 1 Megabits per second (Mbps) download and upload speeds respectively. VDSL is an improved version of ADSL and it provides 52 and 16 Mbps download and upload speeds respectively. We used time series data sampled from ADSLs and VDSLs that had a broadband connection failure due to faults in electrical junctions, joints and DPs. These data are sampled from real-world consumer broadband lines in a south-west city in England. Historically, broadband connection failures occurring in this area are due to faulting behaviour that is strongly correlated with extreme driving rain.
A flowchart of the process followed for preprocessing the ADSL and VDSL datasets is shown in Fig. 1. First, low variance features are removed due to their low predictive power. Then the datasets are converted into run-to-failure datasets by removing the parts of the time series that belong to the time before the start of equipment degradation and after the failure.
(1) Remove low variance features from the datasets.
(2) Convert the datasets into run-to-failure datasets.
(3) Split run-to-failure datasets into training, validation and test sets.
(4) Impute missing data in the training set using time interpolation.
Fig. 1. Flowchart of the data preprocessing process.
The TTF of broadband lines is predicted using the fixed time window approach, which requires labelling pre-failure time series into segments (see also Fink et al. (2015)). In this study, run-to-failure data are segmented into 5 time windows and each of them has a fixed length of 1 day. Thus, the segments are 1 day before the failure, 2 days before the failure, 3 days before the failure, 4 days before the failure and 5 days before the failure. Then all the data samples are labelled with the corresponding time window identity. That is, data samples belonging to the 1 day before the failure segment are labelled with 1, data samples belonging to the 2 days before the failure segment are labelled with 2, data samples belonging to the 3 days before the failure segment are labelled with 3 and so on.
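A minimal sketch of this labelling step is given below; it is pandas-based and the column and variable names are hypothetical.

```python
import numpy as np
import pandas as pd

def label_time_windows(df, failure_time, n_windows=5, window_days=1.0):
    """Label each row of a run-to-failure time series with its time window before failure:
    rows within 1 day of the failure get label 1, rows between 1 and 2 days before get
    label 2, and so on. Rows earlier than n_windows days before the failure are dropped."""
    days_to_failure = np.asarray((failure_time - df.index) / pd.Timedelta(days=1))
    labels = np.ceil(days_to_failure / window_days).astype(int)
    mask = (labels >= 1) & (labels <= n_windows)
    out = df.loc[mask].copy()
    out["ttf_label"] = labels[mask]
    return out
```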
The labelled run-to-failure datasets are then split into training, validation and test sets containing 60%, 20% and 20% of the data samples in the original datasets, respectively. The training set is used to train prognostics models, the validation set is used for hyperparameter tuning and the test set is used to evaluate prognostics models on previously unseen data. Missing data in the training ADSL and VDSL sets are imputed using time interpolation.
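A minimal sketch of the split and imputation steps, assuming the labelled data live in a pandas DataFrame with a DatetimeIndex; the 60/20/20 proportions follow the text, whereas stratifying the split by the window label is an assumption made here, not something stated in the paper.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

def split_and_impute(df: pd.DataFrame, label_col: str = "ttf_label"):
    """60/20/20 train/validation/test split, then time interpolation of
    missing values in the training set only."""
    train, rest = train_test_split(df, test_size=0.4, stratify=df[label_col], random_state=0)
    val, test = train_test_split(rest, test_size=0.5, stratify=rest[label_col], random_state=0)
    train = train.sort_index().interpolate(method="time")  # requires a DatetimeIndex
    return train, val, test
```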
Prior to performing principal component analysis (PCA), the data are normalised in order to transform all the features onto a comparable scale. PCA is then used to reduce the dimensionality of the datasets from 25 features to 7 principal components (PCs). The reduced feature space allows predictive models to improve their learning rates and reduces computation costs. The cumulative explained variance ratio obtained by the first 7 PCs is 66% for the ADSL dataset and 77% for the VDSL dataset. These PCA-transformed datasets are used to develop and evaluate the prognostics models in the next section.
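The normalisation and dimensionality-reduction step can be reproduced with scikit-learn as sketched below; fitting the scaler and PCA on the training set only is an assumption made here to avoid information leakage, not a detail stated in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_pca_pipeline(X_train: np.ndarray, n_components: int = 7):
    """Normalise features to a comparable scale, then reduce to n_components PCs."""
    pipe = make_pipeline(StandardScaler(), PCA(n_components=n_components))
    X_train_pcs = pipe.fit_transform(X_train)  # e.g. 25 features -> 7 PCs
    # Cumulative explained variance of the retained PCs (reported as ~66% ADSL, ~77% VDSL).
    cumulative_var = pipe.named_steps["pca"].explained_variance_ratio_.sum()
    return pipe, X_train_pcs, cumulative_var
```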
PROGNOSTICS MODELLING
In this section, the approach to the TTF prediction of broadband lines is presented first. Prognostics performance evaluation methods and the benchmark prognostics performance used to evaluate the proposed technique are discussed next.
Time-to-failure Prediction of Broadband Lines
The TTF prediction of broadband lines is modelled as a multi-class classification problem as follows: given a data sample x ∈ X and labels y ∈ Y (i.e. the labels 1 to 5 created for the pre-failure time series segments in the previous section), calculate the conditional probability Pr(y | x).
The label with the highest Pr(y | x) is the estimated label for the data sample x. Thus, the time series segment indicated by the estimated label is the TTF of the broadband line. For example, if the segment indicated by the estimated label is 3, then the TTF is 3 days.
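A sketch of this formulation using a random forest classifier (one of the model families discussed later in the paper); the estimated label is the class with the highest predicted probability. The specific estimator and hyperparameters are placeholders, not the authors' configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def predict_ttf_days(X_train, y_train, X_new):
    """Train a multi-class classifier on window labels (1-5 days before failure)
    and return the estimated TTF in days for each new sample."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    proba = clf.predict_proba(X_new)               # Pr(y | x) for each window label
    return clf.classes_[np.argmax(proba, axis=1)]  # label with the highest probability
```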
Evaluation Methods
The prognostics performance produced by the classifier-based prognostics models is measured using the F-score and Cohen's Kappa statistic. The F-score is the weighted harmonic mean of precision and recall, normalised between 0 (worst value) and 1 (best value). However, the F-score can be affected by statistical fluke (Powers, 2015). Hence, when measuring prognostics performance we also employ the Kappa statistic, which can be used as a statistical method for identifying whether a classifier simply guesses at random (Powers, 2015). The Kappa statistic is always less than or equal to 1; values of 0 or less indicate a poor classifier, whereas 1 indicates a classifier that does not guess at random. A widely accepted schema for interpreting the Kappa statistic is shown in Table 1 (Landis and Koch, 1977). The null hypothesis (H0) used in this schema is that the classifier performance is not due to random chance. Thus, when measuring prognostics performance for each prognostics model, we first observe the value of the Kappa statistic to identify whether the classifier performance is affected by statistical fluke. If it is not (i.e. the Kappa statistic is in almost perfect agreement with H0), we use the F-score of the classifier to quantify the prognostics performance.
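Both metrics are available in scikit-learn; using that library is an assumption for this sketch, which computes the weighted F-score and Cohen's Kappa for a set of predictions.

```python
from sklearn.metrics import cohen_kappa_score, f1_score

def prognostics_scores(y_true, y_pred):
    """Weighted F-score plus Cohen's Kappa as a guard against statistical fluke."""
    return {
        "f_score": f1_score(y_true, y_pred, average="weighted"),
        "kappa": cohen_kappa_score(y_true, y_pred),
    }
```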
Benchmark Prognostics Performance
The proposed technique for predicting the TTF of broadband lines under the conditions of limited failure data availability is evaluated against the following benchmarks. Benchmark 1: performance obtained when prognostics models are trained on the original training dataset (i.e. the training dataset that is not augmented) and evaluated on the test dataset. Benchmark 2: performance obtained when prognostics models are trained on the training dataset that is augmented using existing oversampling techniques and evaluated on the test dataset.
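Benchmark 2 can be reproduced with the imbalanced-learn package, as sketched below; the paper names the three oversampling techniques but not the implementation, so the library choice and the decision to resample only the training set are assumptions.

```python
from imblearn.over_sampling import ADASYN, SMOTE, RandomOverSampler

def oversample_training_set(X_train, y_train, method: str = "random"):
    """Augment the training data with one of the existing oversampling techniques."""
    samplers = {
        "random": RandomOverSampler(random_state=0),
        "smote": SMOTE(random_state=0),
        "adasyn": ADASYN(random_state=0),
    }
    return samplers[method].fit_resample(X_train, y_train)
```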
Fig. 2 shows the Kappa statistic, confusion matrices and F-scores obtained by the prognostics models for Benchmark 1. It can be observed that the RF classifier-based prognostics model obtained the best Kappa statistic value for the ADSL (0.62) and VDSL (0.77) datasets. This means there is substantial agreement that the prognostics model performance (i.e. the F-scores obtained by the classifier) is not due to random chance. The F-scores are 0.7 and 0.81 for the ADSL and VDSL datasets respectively.
Fig. 3 shows the Kappa statistic, confusion matrices and F-scores obtained by the prognostics models for Benchmark 2. In contrast to Benchmark 1, the training datasets are now augmented using the following oversampling techniques: Random Oversampling, Synthetic Minority Oversampling Technique (SMOTE) and Adaptive Synthesis (ADASYN). The kNN classifier-based prognostics model combined with the Random Oversampling technique obtained the best Kappa statistic value for the ADSL (0.67) and VDSL (0.78) datasets. However, this is only a marginal increase in the Kappa statistic compared to Benchmark 1 (i.e. an 8% increase for ADSL and a 1% increase for VDSL). Hence, the Kappa statistic is still only in substantial agreement with the prognostics model performance. The F-scores are also only marginally improved compared to Benchmark 1 (i.e. a 4% increase for ADSL and a 1% increase for VDSL). This marginal increase in prognostics performance arises because Random Oversampling, SMOTE and ADASYN either duplicate existing failure data or randomly generate data (Weiss, 2004). Therefore, they do not introduce new and realistic failure data samples to augment the training datasets (Weiss, 2004), and the fundamental problem of limited failure data availability is not addressed sufficiently.
It can be concluded that Benchmarks 1 and 2 failed to obtain almost perfect agreement for the Kappa statistic. Hence, there is low confidence in the F-scores produced by the prognostics models. In the following section, we show that the proposed technique increases confidence in prognostics model performance by obtaining almost perfect agreement for the Kappa statistic. Moreover, it improves the F-scores by a large margin compared to the benchmarks.
GENERATING REAL-VALUED BROADBAND LINE FAILURE DATA
The methodology for generating real-valued failure data consists of three phases (see Fig. 4); a detailed description of the three phases is given in our previous paper (Ranasinghe et al., 2019). The limitation of the current version of the methodology is that it only provides a way to utilise equipment similarity information as auxiliary information. In this section, we extend the methodology so that expert knowledge on broadband line failure causes can be used as auxiliary information, and hence generate real-valued failure data for predicting the TTF of broadband lines under the conditions of limited failure data availability.
The extension involves following the process outlined in Fig. 5 for Phase 1 (i.e. identification and conversion of auxiliary information). Phases 2 and 3 of the methodology remain unchanged.
Phase 1: Identifying auxiliary information pertaining to the failure mode and converting it into a form suitable for integration into the failure data generation process.
Phase 2: Estimating a generative model that captures the semantic features of the failure mode and evaluating the convergence during training.
Phase 3: Generating real-valued failure data using the estimated generative model, assessing overfitting and evaluating prognostics performance.
(1) Identify failure causes.
(2) Identify auxiliary information using expert knowledge.
(3) Validate the auxiliary information.
(4) Identify critical thresholds of failure causes.
(5) Convert critical thresholds into vector representations.
Fig. 5. Diagram outlining the steps for identifying and converting expert knowledge on failure causes.
(1) Identify failure causes: Fault tree analysis and historical maintenance records are used to identify the failure causes of the failure modes that need predicting (i.e. corrosion and electrical shorts in electrical joints, junctions and DPs). We identified that water ingress is a dominant failure cause of broadband line connection failures.
(2) Identify auxiliary information using expert knowledge on failure causes: To reiterate, the data used in this study are sampled from broadband lines in a south-west city in England. Once water ingress is identified as a dominant failure cause, expert knowledge acquired from maintenance engineers is used to identify auxiliary information related to the failure cause. The maintenance engineers provided two pieces of auxiliary information based on their experience of historical broadband line failures that occurred in the south-west city in England: (i) an increase in broadband line failures is expected when it is raining; (ii) an increase in broadband line failures is expected when it is raining and when prevailing winds are easterly. This is because, anecdotally, engineering practice favoured placing overhead joints and DPs on the east side of telegraph poles, as prevailing winds (and consequently driving rain) are typically westerly or south-westerly.
(3) Validate auxiliary information: When observing the number of failures that occurred during different weather conditions (i.e. rain, drizzle, clouds and clear), it was identified that 84% of failures occurred when it was raining for the majority of the week in which the failure occurred. In order to conduct a more robust experiment, the auxiliary information identified from expert knowledge is validated using statistical hypothesis testing. To this end, we used historical maintenance records and weather reports (obtained from the OpenWeatherMap API). Whilst the former provide the date and time of failures, the latter provide rainfall levels and the direction of the wind when it was raining.
Two statistical hypothesis tests are developed using the following null hypotheses: (i) there is an increase in broadband line failures when it is raining; (ii) there is an increase in broadband line failures when it is raining and when prevailing winds are easterly. The objective of the statistical tests is to identify whether the corresponding null hypothesis can be rejected. For the first test, probability values (p-values) of 0.92 (for ADSL) and 0.9 (for VDSL) are obtained. This means there is weak evidence against the null hypothesis, so it is retained. For the second test, p-values of 0.03 (for ADSL) and 0.01 (for VDSL) are obtained. This means there is strong evidence against the null hypothesis, so it is rejected.
To conclude, the increase in broadband line failures when it is raining is identified as a valid piece of auxiliary information. However, there is no strong evidence to support the increase in broadband line failures when it is raining and when prevailing winds are easterly.
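The paper does not specify which test statistic was used, so purely as an illustration, one generic way to check whether failures are disproportionately associated with rainy weeks is a chi-square test on a contingency table of failure counts versus weather condition; the counts below are placeholders, not values from the study.

```python
from scipy.stats import chi2_contingency

# Rows: failure occurred / no failure; columns: rainy week / non-rainy week.
# The counts are illustrative placeholders only.
table = [[84, 16],
         [40, 60]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")  # a small p suggests an association between rain and failures
```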
(4) Identify critical thresholds: In this step, we identify which thresholds of rainfall impact broadband line failures the most. First, weather data are used to categorise rainfall into the following thresholds: light rain, moderate rain, shower rain and heavy rain. Then each failure is tagged based on which threshold of rainfall occurred for the majority of the week in which the failure occurred. Shower and heavy rainfall thresholds produce the highest number of failures per unit (i.e. per day) compared to light and moderate rainfall thresholds. Thus, shower and heavy rainfall thresholds are identified as the critical thresholds of rainfall causing water ingress into electrical joints, junctions and DPs in broadband lines. These critical thresholds are then integrated as auxiliary information into the failure data generation process.
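The threshold identification step can be sketched as a small aggregation over the failure records; the column name `rain_threshold` and the per-threshold day counts are illustrative assumptions, not fields described in the paper.

```python
import pandas as pd

def failures_per_day_by_threshold(failures: pd.DataFrame,
                                  weather_days: pd.Series) -> pd.Series:
    """Failures per day for each rainfall threshold.

    `failures` has one row per failure with a 'rain_threshold' tag;
    `weather_days` maps each threshold to the number of days it occurred.
    """
    counts = failures["rain_threshold"].value_counts()
    return (counts / weather_days).sort_values(ascending=False)  # e.g. heavy/shower rain on top
```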
(5) Convert critical thresholds into vector representations: In order to integrate the auxiliary information into the failure data generation process, we first convert it into an abstract form. This allows broadband line-specific information to be generalised to all the broadband lines that have failed under the failure modes that need predicting. For instance, if the rainfall at the locations of broadband lines A, B and C (during the degradation period of the electrical joints, junctions and DPs) increased from moderate to shower rain, once converted into the abstract form this information becomes "some variable X increases". Thus, specific terms such as broadband lines A, B and C, rainfall and numerical thresholds are ignored. The abstracted information is then converted into statistical form by representing it as some continuous variable C. The continuous variable C can be converted into a distribution between some values y0 and y1. Finally, this distribution can be represented as a vector Y containing some values {y ∈ Y | y0 < y < y1, and y increases}.
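As a concrete, purely illustrative reading of this abstraction, the sketch below draws values between y0 and y1 and orders them so that the resulting vector increases, mimicking "some variable X increases"; the bounds, vector length and random draw are arbitrary choices, not the authors' procedure.

```python
import numpy as np

def increasing_condition_vector(y0: float, y1: float, length: int = 100,
                                seed: int = 0) -> np.ndarray:
    """Vector Y with values strictly between y0 and y1, ordered so that y increases."""
    rng = np.random.default_rng(seed)
    values = rng.uniform(y0, y1, size=length)
    return np.sort(values)
```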
As mentioned at the beginning of this section, Phases 2 and 3 of the methodology remain unchanged and are used directly to generate real-valued broadband line failure data from the converted auxiliary information. We generated 10,000 ADSL and 10,000 VDSL real-valued failure data samples and then augmented the original ADSL and VDSL training datasets.
RESULTS AND DISCUSSION
The prognostics performance obtained when the prognostics models are trained on the augmented training datasets and evaluated on the test datasets is shown in Fig. 6. The RF-based prognostics model obtained the best value for the Kappa statistic for ADSL and VDSL. It can be observed that the Kappa statistic for ADSL is increased by 33% compared to the previous best performance (i.e. kNN and Random Oversampling). The Kappa statistic for VDSL is increased by 17% compared to the previous best performance (i.e. kNN and Random Oversampling). This means the proposed technique achieved almost perfect agreement for the Kappa statistic, outperforming Benchmark 2 by a large margin, and hence improved the confidence in the prognostics model performance.
As shown in Fig. 6, the confusion matrices and F-scores are also significantly improved. More specifically, the F-score for ADSL is increased by 25% compared to the previous best performance (i.e. kNN and Random Oversampling) and the F-score for VDSL is increased by 13% compared to the previous best performance (i.e. kNN and Random Oversampling).
CONCLUSION
In this paper, a technique for predicting the time-to-failure of telecommunications broadband lines under the conditions of limited failure data availability is presented. This technique extends the methodology presented in our previous paper (Ranasinghe et al., 2019) so that real-valued broadband line failure data can be generated using expert knowledge on the water ingress failure cause. The impact of the research presented in this paper is that the proposed technique allows real-world broadband line failures to be predicted with minimal uncertainty when real broadband line failure data are limited. This enables telecommunications service providers to proactively undertake the appropriate interventions to prevent unplanned downtime of broadband service, and hence reduce consumer dissatisfaction whilst preventing costs due to over-maintenance and false alarms.
Fig. 2. Performance obtained for Benchmark 1: (a) Kappa statistic of all prognostics models for ADSL; (b) Kappa statistic of all prognostics models for VDSL; (c) confusion matrix, Kappa statistic and F-score of the best prognostics model for ADSL (left) and VDSL (right).
Fig. 3. Performance obtained for Benchmark 2 (training datasets augmented using Random Oversampling, SMOTE and ADASYN): (a) Kappa statistic of all prognostics models for ADSL; (b) Kappa statistic of all prognostics models for VDSL; (c) confusion matrix, Kappa statistic and F-score of the best prognostics model for ADSL (left) and VDSL (right).
Fig. 4. Diagram outlining the three phases of the methodology for generating real-valued failure data.
Table 1. Schema for Cohen's Kappa statistic (Landis and Koch, 1977).
"year": 2020,
"sha1": "377fe65172422937b0cd0496972c1ed2bb474e47",
"oa_license": null,
"oa_url": "https://www.repository.cam.ac.uk/bitstream/1810/304661/1/paper.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "58216d76fa373fc0e82bb4e5bd103c83adf2ea94",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Poly(lactic acid) (PLA)/lignin-containing cellulose nanofibrils (LCNFs) composite films with different lignin contents were produced by the solution casting method. The effect of the lignin content on the mechanical, thermal, and crystallinity properties, and the PLA/LCNFs interfacial adhesion, was investigated by tensile tests, thermogravimetric analysis, differential scanning calorimetry (DSC), dynamic mechanical analysis, Fourier transform infrared spectroscopy (FTIR), and scanning electron microscopy (SEM). The tensile strength and modulus of the PLA/9-LCNFs (9 wt % lignin LCNFs) composites are 37% and 61% higher than those of pure PLA, respectively. The glass transition temperature (Tg) decreases from 61.2 °C for pure PLA to 52.6 °C for the PLA/14-LCNFs (14 wt % lignin LCNFs) composite, and the composites have higher thermal stability below 380 °C than pure PLA. The DSC results indicate that the LCNFs, containing different lignin contents, act as a nucleating agent to increase the degree of crystallinity of PLA. The effect of the LCNFs lignin content on the PLA/LCNFs compatibility/adhesion was confirmed by the FTIR, SEM, and Tg results. Increasing the LCNFs lignin content increases the storage modulus of the PLA/LCNFs composites, up to a maximum for the PLA/9-LCNFs composite. This study shows that the lignin content has a considerable effect on the strength and flexibility of PLA/LCNFs composites.
Introduction
As a renewable bio-based polymer, poly(lactic acid) (PLA) is considered to be a promising alternative to petroleum-based plastics because of its excellent physical properties and thermal processability [1,2]. However, its disadvantages are inherent brittleness, a low degree of crystallization, and relatively poor thermal stability, which limit its wide application in the packaging, automotive, and biomedical fields [3,4]. Incorporating nanoparticle fillers as a reinforcing agent has been shown to improve the physical, thermal, and crystallization properties of PLA [5]. Cellulose nanocrystals (CNCs) have attracted academic and industrial interest as a reinforcing agent for PLA composites owing to their excellent strength, biodegradability, high specific surface area, and crystallinity [6,7]. CNCs are mainly derived from the cellulose of lignin-free wood by acid hydrolysis. The high hydrophilicity of natural CNCs is because of the large amount of hydroxyl groups on their surface, which leads to poor interfacial interactions and compatibility between CNCs and hydrophobic PLA [8,9]. The tendency to form agglomerates upon incorporation into the PLA matrix prevents realization of the full potential of CNCs as a reinforcing phase [10].
Preparation of the LCNFs Suspension
The LCNFs were prepared by sulfuric acid hydrolysis and high-pressure homogenization of lignin-containing cellulose pulp boards, according to a previously reported method [24]. In a typical procedure, 5 g of pulp board was mixed with 100 mL of 15 wt % sulfuric acid at a constant speed for 4 h at 85 °C. The suspension was diluted with deionized water and then centrifuged at 4000 rpm for 10 min to concentrate the residue and to remove H⁺ and SO₄²⁻ ions. The resultant precipitate was dispersed in DMAc by the solvent exchange method. The suspension was then further dispersed by homogenization treatment (GEA Niro Soavi homogenizer, Parma, Italy, diameter of 10 mm, and process volume of 100 mL) at a pressure of 100 MPa for 10 cycles. A well-dispersed LCNFs suspension was then obtained. LCNFs with different lignin contents were prepared by the same procedure; the lignin content of the resulting LCNFs was measured, and the samples are referred to as CNFs (0-LCNFs), 5-LCNFs, 9-LCNFs, and 14-LCNFs for lignin contents of ~0, 5, 9, and 14 wt %, respectively. The yield of the LCNFs from the pulp boards after acid hydrolysis and homogenization was about 60%.
Preparation of PLA/LCNFs Composite Films
The desired amount of a 16 wt % DMAc solution of PLA was mixed with the LCNFs suspension by vigorous stirring with a magnetic stirrer (Shanghai, China). The mixture was sonicated and further stirred at 70 °C for 2 h. The suspension was then cast on glass with a scraper and dried at 80 °C for 30 min on an electric heating board. The obtained composite films were heated at 40 °C under vacuum for 24 h to ensure that the solvent completely evaporated. By varying the added type of LCNFs, composite films with different lignin contents were prepared. The weight percentage of the LCNFs solid in the PLA composite films was 3 wt %.
Characterization
The lignin content was measured by the half-scale kappa test method, which is based on the AS/NZS 1301.201.2002 method, Papro 1.106 kappa number, and TAPPI T236 standard [25]. The morphologies of the LCNFs samples were observed by TEM. A drop of a 0.01 wt % water suspension of LCNFs was deposited on a carbon-coated grid and negatively stained with 2% phosphotungstic acid. The images of the specimens were obtained with a Hitachi H-600 transmission electron microscope (Hitachi Limited, Tokyo, Japan) operated at an acceleration voltage of 80 kV. The initial contact angle (CA) measurements of the LCNFs were performed with three pure liquids with different polarities, namely, water, formamide, and ethylene glycol. The LCNFs-water suspensions were diluted to a 1 wt % solid content in DMAc and sonicated for 1 h at room temperature. Each suspension was poured into an over-pressurized filtration device equipped with a qualitative filter paper with particle retention of 12-15 µm to remove water and retain the fibrils. The CA was determined by a Harke-Space CA instrument (Beijing, China). The surface free energy of the LCNFs was determined by the calculation method provided in the Supporting Information. The FTIR spectra of the PLA/LCNFs composite films were obtained with a FTIR spectrometer (VERTES TOV, Bruker, Germany) over the wavenumber range 400-4000 cm⁻¹. All of the samples were directly detected. The tensile properties of the composites were measured by a mechanical tester (ZB-WL300, Beijing, China) with a cross-head speed of 20 mm/min, a gauge length of 50 mm, and a 300 N load cell. Rectangular specimen strips 100 mm long, 15 mm wide, and 100 µm thick were tested. At least five measurements were performed for each sample and the data were averaged to obtain a mean value. TGA of the LCNFs and PLA/LCNFs composites was performed with a TG analyzer (TGA Q5000 IR, TA Instruments, New Castle, DE, USA) under a 100 mL/min nitrogen gas flow. Samples of about 6.0 mg were heated from room temperature (25 °C) to 500 °C at a heating rate of 10 °C/min. The cross-sections of the PLA/LCNFs composite films were observed with a Hitachi S-3400 scanning electron microscope (Tokyo, Japan) under an accelerating voltage of 15 kV. All of the samples were tensile broken in order to expose the internal structure (tensile specimens) before the examination, and the entire surface was sputtered with gold. The thermomechanical properties of PLA and the PLA/LCNFs composites were determined with a DMA instrument (Q800, TA Instruments, New Castle, DE, USA). The tensile storage modulus and tan delta were determined at a frequency of 1 Hz, a strain rate of 0.05%, and a heating rate of 5 °C/min over the temperature range 0-100 °C. The test samples were prepared by cutting strips with a width of 10 mm and a length of 40 mm from the films. A TA Instruments Q2000 differential scanning calorimeter (New Castle, DE, USA) was used to record the DSC scans under N₂. The sample (about 6-8 mg) was first heated from 20 to 200 °C at a heating rate of 10 °C/min and then kept at 200 °C for 5 min to remove the thermal history of the material. The sample was then cooled to 20 °C at 5 °C/min and kept at 20 °C for 5 min before heating to 200 °C again at the same heating rate.
The degree of crystallinity Xc was calculated with the following equation:

Xc (%) = ΔHm / (ΔH0m × ΦPLA) × 100

where ΔHm (J/g) is the enthalpy of fusion of the polymer composite, ΔH0m is the enthalpy of fusion of a PLA crystal of infinite size (assumed to be 93.6 J/g), and ΦPLA is the fraction of PLA in the composite.
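As a quick numerical illustration of this relation: the composite films contain 3 wt % LCNFs, so ΦPLA is about 0.97, while the melting enthalpy used below is a made-up placeholder rather than a value from Table 3.

```python
def degree_of_crystallinity(delta_h_m: float, phi_pla: float,
                            delta_h_m0: float = 93.6) -> float:
    """Degree of crystallinity Xc (%) from the melting enthalpy of the composite."""
    return delta_h_m / (delta_h_m0 * phi_pla) * 100.0

# Hypothetical example: a composite with 97 wt % PLA and a melting enthalpy of 17 J/g.
print(degree_of_crystallinity(delta_h_m=17.0, phi_pla=0.97))  # roughly 18.7 %
```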
Properties of LCNFs with Different Lignin Content
To investigate the stability of the LCNFs suspensions, DMAc suspensions with 0.01 g/mL CNFs, 5-LCNFs, 9-LCNFs, and 14-LCNFs were kept in plastic centrifuge tubes for one week at room temperature (Figure 1A). The LCNFs containing lignin were well-dispersed in DMAc, and the color of the suspensions was darker with increasing lignin content. The CNFs without lignin showed stratification through the flocculation process. This is because the high hydrophilicity of the CNFs without lignin means that they easily form agglomerates in DMAc. Furthermore, increasing the hydrophobicity enhances the dispersion stability of the LCNFs. The morphologies of the dispersed LCNFs suspensions with different lignin contents were evaluated by TEM analysis (Figure 1B). CNFs and LCNFs with widths of 50 nm and lengths of several hundred nanometers are clearly observed in the images. The research of Rojo showed that AFM images of lignin-containing fibrils appear quite similar to CNFs; however, some small, globular-shaped particles could be identified in addition to the fibrils in the lignin-containing sample. These particles are predominantly located between the cellulosic nanofibers, forming complex composite structures with the fibrils [26]. These observations are consistent with the role of lignin in the native wood cell wall, where it exists as a stiff phase between cellulosic fibers. Thus, in our work, the LCNFs with different lignin contents show the same morphology by TEM, while the globular-shaped particles (lignin) could not be obviously observed. A high lignin content could form complex composite structures in the LCNFs; thus, the LCNFs are more likely to form a cross-linked network structure with increasing lignin content. In addition, the thermal stability of the LCNFs was characterized by TGA, and the thermal degradation curves of the CNFs, 5-LCNFs, 9-LCNFs, and 14-LCNFs are shown in Figure 1C. The results reveal that the thermal stability is highly affected by the lignin content. The thermal degradation onset temperature (Tonset) of the LCNFs decreases with increasing lignin content (14-LCNFs < 9-LCNFs < 5-LCNFs < CNFs), while the residue weight of the LCNFs increases, which can be attributed to the high thermal stability of lignin [27,28]. However, the weight loss of lignin is greater than that of cellulose at low temperature (T < 300 °C) [29]. When the temperature is above 400 °C, cellulose is almost completely degraded, whereas lignin shows low weight loss. The presence of aromatic char originating from the lignin is responsible for the beneficial effect on the thermal stability of the CNFs. The average CAs for the three probing liquids were measured on LCNFs with different lignin contents and used in the surface energy evaluation, as shown in Figure 1D and summarized in Table 1. The highest CAs were obtained with water, whereas ethylene glycol gave the lowest CA in most cases. For a given liquid, the CA increases with increasing lignin content. Acid-base theory can be used to evaluate the surface energy components from the CA value. Using the acid-base framework to describe the surface energy has been found to be suitable to explain the properties of the wood surface, and it gives detailed information about the surface chemistry [26,30]. As a characteristic parameter, the surface free energy has a large effect on many interfacial processes, such as absorption, wetting, and adhesion.
The literature surface free energy parameters for the test liquids and the surface energy components of the LCNFs, calculated by the acid-base approach, are listed in Table 1. The total surface free energy of the LCNFs with different lignin contents ranges from 47.6 to 53.6 mJ/m², which is consistent with the range of 43.1-53.7 mJ/m² previously reported by Peng et al. [31]. The total surface energy decreases with increasing lignin content, from 51.7 mJ/m² for the 5-LCNFs sample to 46.7 mJ/m² for the 14-LCNFs sample. Therefore, lignin decreases the surface energy of the nano-fibrils, which is expected because of the large percentage of C-C and C-H bonds and the lower ratio compared with cellulose [32]. The increasing hydrophobicity of the LCNFs with increasing lignin content contributes to the compatibility with the hydrophobic polymer matrix [33,34].
FTIR Spectroscopy
The FTIR spectra of PLA and the PLA/CNFs and PLA/LCNFs composites are shown in Figure 2. The PLA/CNFs composite has the same peaks as pure PLA except for a stronger and wider peak around 3343-3432 cm⁻¹, which can be attributed to the -OH groups of the pyranose rings of the CNFs [35]. This is because of the physical blending mode of PLA and CNFs. Furthermore, the C=O stretching vibration peak at 1749 cm⁻¹ for PLA shifts towards lower wavenumber by 2 cm⁻¹ for the PLA/CNFs composite. This can be ascribed to the weak interaction between PLA and the CNFs, which is not sufficient to form strong interfacial interactions. Comparing the spectra of the PLA/5-LCNFs, PLA/9-LCNFs, and PLA/14-LCNFs composites, the intensity of the -OH stretching vibration peak decreases with increasing lignin content. The C=O stretching vibration peak at 1749 cm⁻¹ for PLA shifts towards lower wavenumber by 10 cm⁻¹ for the PLA/5-LCNFs composite (1739 cm⁻¹), 16 cm⁻¹ for the PLA/9-LCNFs composite (1733 cm⁻¹), and 17 cm⁻¹ for the PLA/14-LCNFs composite (1732 cm⁻¹). The intensity of the C=O peak also increases with increasing lignin content. This suggests that there is a strong interaction between the LCNFs and PLA, which indicates that lignin improves the compatibility between the CNFs and PLA. Furthermore, the interaction increases with increasing lignin content.
Mechanical Properties
Tensile strength evaluation is important to investigate the fracture flexibility of polymer composites. As shown in Figure 3a, the tensile strength and modulus of the PLA/LCNFs composites tend to increase with increasing lignin content at low lignin content (0 to 9 wt %), but they slightly decrease at high lignin content (14 wt %). The tensile strength and modulus decrease from 35.7 MPa and 1.8 GPa for PLA to 31.6 MPa and 1.5 GPa for the PLA/CNFs composite, respectively. For the PLA/LCNFs composites, there are statistically significant changes in the tensile strength and modulus for lignin contents of five and nine wt % compared with the PLA/CNFs composite. Compared with pure PLA, the PLA/5-LCNFs and PLA/9-LCNFs composites show statistically significant improvements in the tensile strength of 28% (45.7 MPa) and 37% (48.9 MPa), and in the tensile modulus of 44.4% (2.6 GPa) and 61.1% (2.9 GPa). These observations can be attributed to the better compatibility of the LCNFs than the CNFs with PLA. Moreover, the strong hydrogen-bonding interaction between lignin and cellulose, as well as the van der Waals forces and polar interaction of lignin with PLA, makes good adhesion between the LCNFs and PLA possible. However, the tensile strength and modulus of the PLA/14-LCNFs composite are slightly lower (by 5.5% and 17.2%, respectively) than those of the PLA/9-LCNFs composite. This is probably because, even though the interface interaction between the LCNFs and PLA improves with increasing lignin content in the LCNFs, the mechanical properties of the LCNFs themselves are also affected by the lignin content.
Morphological Analysis of Fracture Surface
The effect of the lignin content on the miscibility between the LCNFs and PLA can be investigated by morphology analysis. The solubility parameter can be used to estimate the miscibility of polymer blends. The solubility parameters of PLA and lignin are 20.2 and 19.02 MPa^1/2, respectively [36], which indicates that there should be good miscibility between PLA and lignin. When LCNFs are blended with PLA, lignin can act as a compatibilizer to form a uniform blend system.
To better understand the results obtained by the tensile tests, the fractured samples were observed by SEM. Figure 4a-e show representative SEM images of the cross-sectional regions of PLA and the PLA/CNFs, PLA/5-LCNFs, PLA/9-LCNFs, and PLA/14-LCNFs composites, respectively. PLA has relatively smooth fracture surfaces, which indicates that PLA is prone to brittle fracture [37]. After CNFs are added to PLA, the fracture surface of the PLA/CNFs composite becomes irregular because of plastic deformation. The brittle rupture of PLA changes to ductile rupture with the addition of CNFs, whereas the poor dispersion state of the CNFs in the PLA matrix results in poor interfacial compatibility and decreases the mechanical properties of the PLA/CNFs composite compared with pure PLA. When LCNFs with different lignin contents are added to the PLA matrix, the fracture surfaces of the PLA/LCNFs composites (Figure 4c-e) show a different microstructure: at lower lignin content (5 wt %), the samples show roughness and little wire drawing (Figure 4c), while at higher lignin content (14 wt %), the samples show an obvious tough character and a pronounced wire drawing phenomenon [38]. The results indicate that increasing the lignin content of the LCNFs promotes ductile rupture of PLA. From the SEM results, we can conclude that lignin and the lignin content play important roles in improving the performance of PLA.
Thermal Properties
In general, the thermal stability of polymers can be improved by blending with nano-fillers. The dispersion state of the nano-filler in the polymer matrix and the interaction between them significantly affect the thermal stability. The TG and derivative TG (DTG) curves of PLA and the PLA/LCNFs composites are shown in Figure 5, and the degradation parameters are summarized in Table 2. The onset temperature (10% weight loss, T10%) of thermal degradation decreases after blending with CNFs, while it increases by blending with LCNFs. This can be attributed to formation of a cross-linked structure, which reduces the chain mobility and inhibits chain unzipping during propagation of the degradation process [39]. As shown in Figure 5, the lignin content has a clear effect on the Tmax (The temperature of the maximum weight loss) value of the PLA/LCNFs composites. The Tmax value of the PLA/9-LCNFs composite is 5 °C higher than that of PLA. The lignin content affects the Tmax value, and the thermal stability of the PLA/LCNFs composite improves with increasing lignin content in the considered temperature region. The thermal stability of the LCNFs decreases with increasing lignin content, so the PLA/14-LCNFs composite has lower thermal stability than the PLA/9-LCNFs composite. It can be concluded that lignin has a positive effect on the miscibility between PLA and the LCNFs, and increasing the lignin content also affects the thermal resistance of the LCNFs. In addition, the residue mass of the PLA/LCNFs composite in the thermal degradation process increases with increasing lignin content. The glass transition is a complex phenomenon that depends on a number of factors, such as the chain flexibility, the molecular weight, branching, cross-linking, intermolecular interactions, and steric effects [40]. The thermal behavior of PLA and the PLA/CNFs, PLA/5-LCNFs, PLA/9-LCNFs, and PLA/14-LCNFs composites were determined by DSC measurements (Figure 6). The thermal properties of these materials are given in Table 3. The glass transition temperature (T g ) of pure PLA is 61.2 • C, and the PLA/CNFs composite has a slightly lower T g of 60.8 • C. This can be ascribed to the fact that the CNFs and PLA do not form strong interfacial interactions. However, the PLA/LCNFs composites have considerably lower T g values than PLA. The T g value of the PLA/14-LCNFs composite is 52.6 • C. This probably indicates that a relatively small amount of LCNFs is sufficient to change the PLA chain mobility within the glass transition region. This observation shows that there are strong interfacial forces between the LCNFs and PLA, and the lignin content has a positive effect on compatibility. The glass transition is a complex phenomenon that depends on a number of factors, such as the chain flexibility, the molecular weight, branching, cross-linking, intermolecular interactions, and steric effects [40]. The thermal behavior of PLA and the PLA/CNFs, PLA/5-LCNFs, PLA/9-LCNFs, and PLA/14-LCNFs composites were determined by DSC measurements (Figure 6). The thermal properties of these materials are given in Table 3. The glass transition temperature (Tg) of pure PLA is 61.2 °C, and the PLA/CNFs composite has a slightly lower Tg of 60.8 °C. This can be ascribed to the fact that the CNFs and PLA do not form strong interfacial interactions. However, the PLA/LCNFs composites have considerably lower Tg values than PLA. The Tg value of the PLA/14-LCNFs composite is 52.6 °C. 
This probably indicates that a relatively small amount of LCNFs is sufficient to change the PLA chain mobility within the glass transition region. This observation shows that there are strong interfacial forces between the LCNFs and PLA, and the lignin content has a positive effect on compatibility.
Blending with CNFs and LCNFs decreases the crystallization temperature (Tc) of PLA, and Tc decreases with increasing lignin content, as shown in Table 3. The lower Tc in heat flow indicates that the crystallization rate of PLA markedly increases. The results show that the CNFs in the LCNFs can be used as a nucleating agent in PLA composites.
The melting temperature (Tm) of all of the PLA composites is about 148 °C (Table 3). This indicates that the lignin content plays an important role in increasing the melting enthalpy (ΔHm) of the LCNFs-reinforced PLA matrix (Table 3). The degree of crystallinity increases from 12.1% for pure PLA to 14.7% for the PLA/CNFs composite, 17.9% for the PLA/5-LCNFs composite, 18.7% for the PLA/9-LCNFs composite, and 17.5% for the PLA/14-LCNFs composite. The increase in the degree of crystallinity of PLA is less pronounced when high lignin content LCNFs are used, which is because the concentration of the nucleating agent (the cellulose nanofibrils) in the LCNFs is lower at high lignin content (14-LCNFs).
Table 3. DSC data of PLA and PLA/LCNFs composites with different lignin contents.
Dynamic Mechanical Properties
DMA is a useful technique to investigate the relationship between the structure and viscoelastic behavior of polymers and polymer-nano-filler composites. The storage modulus (E′) indicates the tendency and ability of energy storage in materials, and it is also directly related to the Young's modulus. Figure 7a shows the plots of E′ versus temperature for pure PLA and the PLA/LCNFs composites with various lignin contents. When LCNFs are added, E′ is significantly higher in the whole temperature range studied. For example, E′ at 20 °C increases from 1800 MPa for pure PLA to 3650 MPa for the PLA/9-LCNFs composite. The increase in E′ can be explained by the reinforcing effect provided by lignin of the LCNFs on the PLA matrix. The thickness of the interface is affected by the lignin content and it affects the transition of stress, so the PLA/14-LCNFs composite has a lower E′ than the PLA/9-LCNFs composite, which is consistent with the mechanical strength results.
The effect of LCNFs with different lignin contents on the damping behavior of PLA was investigated by plotting tan Δ against temperature (Figure 7b). The tan Δ peak shifts to lower temperature with increasing lignin content in the LCNFs. The tan Δ peaks for the PLA/CNFs composite, PLA, and the PLA/5-LCNFs, PLA/9-LCNFs, and PLA/14-LCNFs composites are at 61, 60, 57, 55, and 54 °C, respectively. Moreover, the intensities of the tan Δ peaks for the PLA/LCNFs composites are considerably higher than the peak for pure PLA. This can be attributed to the liberation effect of the LCNFs enhancing the chain mobility of the amorphous region for the PLA/LCNFs composites with higher lignin content.
Mechanism Analysis of the Interfacial Interaction
The mechanism of the interfacial interaction between the LCNFs and PLA is shown in Figure 8. Lignin can be used as an additive to improve the compatibility between CNFs and PLA. Considering the LCNFs as an ensemble, lignin combines well with cellulose through hydrogen bonds and dipole-dipole interactions, and van der Waals interactions occur among the non-polar groups of lignin and PLA. Moreover, the polar and polarizable groups of lignin, such as hydroxyl and phenyl groups, form hydrogen bonds or dispersion interactions with the ester groups of the PLA matrix, which makes it possible for lignin to be in a well-dispersed state in the PLA matrix. As a result, the interfacial compatibility between LCNFs and PLA is enhanced.
Figure 8. Reaction mechanism of PLA and LCNFs.
Figure 9 shows the mechanism of the effect of the lignin content on the compatibility between PLA and the LCNFs. It can be concluded that increasing the lignin content has a positive effect on the compatibility, and the interfacial layer thickness increases with increasing lignin content, which results in increased flexibility of the PLA/LCNFs composite.
Conclusions
Addition of LCNFs with different lignin contents to PLA changes the mechanical properties, thermo-mechanical properties, thermal behavior, and morphology. The tensile strength and modulus are the highest (48.9 MPa and 2.9 GPa) when the lignin content is 9 wt %. The elongation at break increases with increasing lignin content, and the thermo-mechanical properties show the same trend as the tensile strength. FTIR spectroscopy shows that there is a clear interaction between the LCNFs and the carbonyl groups of PLA. The Tg decreases from 61.2 °C for pure PLA to 52.6 °C for the PLA/14-LCNFs composite, indicating increased flexibility with LCNFs addition and increasing lignin content. The crystallization temperature of the PLA/LCNFs composite decreases with increasing lignin content in the LCNFs, indicating that the crystallization efficiency and degree of crystallinity improve. The thermal stabilities of the PLA/LCNFs composites are higher than that of PLA, which was below 380 °C, and the thermal stability increases with increasing lignin content in the LCNFs. The PLA/9-LCNFs composite has the highest Tmax value (4.4 °C higher than that of pure PLA). The cross-sectional morphologies of PLA and the PLA/LCNFs composites show the strengthening and toughening effect of LCNFs, and the lignin content has a considerable effect on the flexibility of the PLA/LCNFs composite. | 2019-04-05T00:37:53.531Z | 2018-09-01T00:00:00.000 | {
"year": 2018,
"sha1": "f7eef6e70a8cb9333e5e46ac1156b2bf5817fc76",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/10/9/1013/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f7eef6e70a8cb9333e5e46ac1156b2bf5817fc76",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
262024866 | pes2o/s2orc | v3-fos-license | Network Information Platform Construction Based on Computer Data Mining and Processing
: The network information platform contains a large amount of data of various types. However, owing to limitations in data organization, fragmentation, and related problems, the application value of these data is difficult to mine fully. Computer data mining and processing technology can, on the basis of well-defined rules, dig deep into massive and scattered data, identify the regular parts, and reasonably classify, aggregate, and summarize the data. Building a network information platform on computer data mining and processing can therefore not only improve the effectiveness and quality of data management but also raise the rate of data utilization and maximize the application value of the data. In view of this, this paper analyzes the network information platform in depth, describes the specific application of data mining and processing technology in platform construction, and puts forward feasible countermeasures for the safe operation of the platform.
Computer data mining processing technology and algorithm
Computer-based data mining and processing technology refers to the in-depth analysis and sorting of fragmented and scattered data through the rational use of smartphones, terminal computers, and other electronic devices, so as to dig out valuable information. During computer data mining and processing, strengthening the use of network technology and paying attention to data sorting and mining can provide technical support for the development of various fields and industries. Used in this way, the technology not only serves as a source of information but also promotes the role and value of the network information platform. During the construction of the network information platform, in order to standardize data processing and highlight the advantages of data mining and processing technology, the use of various algorithms should be strengthened. At the present stage, the common algorithms are mainly as follows:
Cluster algorithm
Clustering is a typical form of unsupervised learning: the categories of the samples are unknown, while the algorithm model is known. The purpose is to build the physical or logical internal correlation between the data points and finally form data clusters. Within a cluster the data are consistent, their characteristics are basically the same, and their similarity is high; between different clusters, the differences among the internal data are significant.
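As an illustration only (not code from the paper), the following minimal Python sketch shows the kind of clustering step described above, using a simple k-means loop; the data points, the number of clusters k, and the feature layout are all hypothetical.

# Minimal k-means sketch: groups 2-D points into k clusters.
# All data and parameters here are illustrative, not from the paper.
import random
import math

def kmeans(points, k, iterations=100):
    # Start from k randomly chosen points as initial centroids.
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            distances = [math.dist(p, c) for c in centroids]
            clusters[distances.index(min(distances))].append(p)
        # Update step: move each centroid to the mean of its cluster.
        new_centroids = []
        for idx, cluster in enumerate(clusters):
            if cluster:
                new_centroids.append(tuple(sum(x) / len(cluster) for x in zip(*cluster)))
            else:
                new_centroids.append(centroids[idx])  # keep the centroid of an empty cluster
        if new_centroids == centroids:
            break
        centroids = new_centroids
    return centroids, clusters

# Hypothetical usage: user-behaviour features such as (online hours, posts per day).
data = [(1.0, 0.5), (1.2, 0.7), (8.0, 6.5), (8.3, 7.0), (0.9, 0.4)]
centroids, clusters = kmeans(data, k=2)
print(centroids)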
Association algorithm
In the information age, massive data must be stored with the help of databases, such as relational databases and transactional databases, and the connections between individual data items are close. During actual use, most of these connections present specific patterns. Association algorithms make it possible to quickly discover and refine such correlation patterns [1].
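The short Python sketch below (illustrative only; the transactions are made up) shows the basic support/confidence counting that association-rule mining relies on; a full algorithm such as Apriori would generate candidate itemsets iteratively on top of this counting step.

# Count support and confidence for simple association rules over transactions.
# Transactions and thresholds are hypothetical examples, not data from the paper.
from itertools import combinations

transactions = [
    {"login", "search", "download"},
    {"login", "search"},
    {"login", "download"},
    {"search", "download"},
]

def support(itemset, transactions):
    # Fraction of transactions that contain every item of the itemset.
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(antecedent, consequent, transactions):
    # Estimated P(consequent | antecedent) over the transaction list.
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

# Enumerate simple one-item -> one-item rules and keep the strong ones.
items = set().union(*transactions)
for a, b in combinations(sorted(items), 2):
    rule_support = support({a, b}, transactions)
    rule_conf = confidence({a}, {b}, transactions)
    if rule_support >= 0.5 and rule_conf >= 0.6:
        print(f"{a} -> {b}: support={rule_support:.2f}, confidence={rule_conf:.2f}")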
Classification algorithm
A classification algorithm is trained, using a batch of samples that carry classification labels, into a classifier, and the classifier should be supported by the regularities found in the data. When applying such algorithms, classification models can be built before the data are classified, leaving the data organized into different concept sets.
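A minimal sketch of the train-then-classify workflow follows, assuming scikit-learn is available; the features, labels, and class names are invented placeholders, not taken from the paper.

# Train a classifier on labelled samples, then use it to label new data.
# Features and labels below are invented placeholders for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row: hypothetical platform-usage features (e.g. requests per hour, error rate).
X_train = [[120, 0.01], [5, 0.30], [200, 0.02], [3, 0.45]]
y_train = ["normal", "suspicious", "normal", "suspicious"]

clf = DecisionTreeClassifier(max_depth=3)   # simple, interpretable model
clf.fit(X_train, y_train)

# Classify previously unseen samples.
X_new = [[150, 0.015], [4, 0.40]]
print(clf.predict(X_new))                   # e.g. ['normal' 'suspicious']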
Construction of the overall platform architecture
In the construction of the network information platform, a sound overall platform architecture must be built first. During actual operation, the platform has to store and analyze massive data and information efficiently. Through the flexible use of servers and the continuous optimization of the data mining procedures, the front-end interface should adopt human-computer interaction to make the whole operation process more convenient. When the overall architecture of the platform is constructed, the design of the database is the difficult part, and it directly determines whether the database runs stably. Therefore, in order to improve the efficiency of data mining and improve the reliability and security of the platform, servers, and database, improvements should be made in the following aspects:
Server
All kinds of requests sent by the browser client are processed and responded to by the server, and the data mining algorithms are deployed on the server, so a feasible and reasonable form should be adopted during construction. In server design, the scale of requests and the frequency of data processing should be considered together. If only a single server is deployed, its response capacity and speed can hardly meet expectations. Therefore, in the engineering practice stage, the construction of server clusters needs to be strengthened to ensure that the responsiveness of the service improves overall. During the operation of the network information platform, the number of users is large, and the number of concurrent requests and the scale of data to be processed are large as well. Therefore, during platform construction, the establishment of server clusters should be emphasized to improve the efficiency of data calculation and processing in the system [2].
The database
In the process of building the network information platform, the volume of stored data is large, and only data at this scale has real mining value. In addition, data security, storage capacity, fault recovery, and response speed are all closely related to database design. Therefore, during the establishment of the network information platform, attention should be paid to database design in order to make the overall architecture sound and to avoid blocking problems.
Front-end design
In the process of building the network information platform, once the overall architecture is clear, the front-end design work should be done well, specifically in the following two aspects.
Functional requirements
The front-end design has to provide the necessary entry points for user operation and management. Because industries differ, data characteristics often differ considerably, so different approaches should be adopted in data analysis. In order to improve the rationality of the front-end design and keep the platform running stably later on, the design of the front-end interface should take the different needs of users into account and set different types of display windows and function buttons on the interface. When operating, users only need to click the corresponding button on the interface to complete the classification and aggregation of data through quick calls to the database. The front-end interface can offer many functions, such as automatic customization. However, it should be noted that the design of the front-end interface must stay consistent with the database table structure and the underlying data code, highlighting the systematic character of the design [3].
Correct selection and application of the front-end development framework
On the earlier technical route, pages were compiled with front-end development languages such as HTML and CSS to achieve layout and color rendering. With the continuous development of technology, more powerful front-end JavaScript frameworks, such as jQuery and Zepto, are widely used and can be combined with front-end UI frameworks, making the available options rich and diverse. In application, this kind of framework separates front-end and back-end work efficiently, makes the whole development mode more advanced, and lets the front end and back end exchange data through interfaces. The network information platform is developed and applied further on this web-page basis.
Database construction
Whether the database is constructed reasonably has a direct impact on the processing and application of the subsequent network data and information. Therefore, attention should be paid to the selection, functionality, and security of the database, and the database should be constructed flexibly according to the requirements of the platform.
Database selection
Currently, for large-scale storage, the main type used is the relational database, and MySQL is the most commonly used product in China. This kind of technology is mature, and the databases developed with it have large storage volumes. With the reasonable application of cloud storage technology, large-scale commercial cloud storage services in China will become increasingly mature, and the range of choices available to users will broaden.
Database query capability
After long-term use of a database, the number of stored items keeps growing. Once a certain threshold is reached, retrieval performance can no longer stay at its original level. This situation is related to the query pattern of the relational database, and once the responsiveness of the database drops, the user experience suffers. At the technical level, these problems can be addressed through sub-tables, partitioning, distribution, and appropriate storage methods. During the operation of the network information platform there is usually a large amount of data, so to ensure the best carrying capacity of the database system, advanced techniques such as distribution and database clustering, with deployment on different physical machines, should be used so that the databases work together and the maximum storage capacity is guaranteed [4].
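As a toy illustration of the sub-table/partition idea mentioned above (not code from the paper), the sketch below routes a record key to one of several shards by hashing; the shard names and the modulo scheme are assumptions made for the example.

# Route records to database shards by hashing the key.
# Shard names and the number of shards are illustrative assumptions.
import hashlib

SHARDS = ["db_node_0", "db_node_1", "db_node_2"]

def shard_for(key: str) -> str:
    # Stable hash so the same key always maps to the same shard.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

for user_id in ["u1001", "u1002", "u1003", "u1004"]:
    print(user_id, "->", shard_for(user_id))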
Security of the database
During the use of a database, the probability of damage to the physical storage area is not negligible; if it cannot be prevented effectively, a prevention plan should be formulated in advance. Once problems occur, they inevitably cause massive data loss, leakage, and similar failures, lowering the data security of the network information platform. Therefore, in the process of database construction, the backup mechanism should be strengthened to ensure that, in case of database damage or data loss, the backup library can respond immediately, restore the original data to the database, and recover the data within a short time.
Specific application of the network information platform
Association rules and cluster analysis should be combined to ensure the orderly construction of the network information platform. Through data mining and processing technology, different user groups are analyzed in depth, and a reasonable analysis is finally produced according to the characteristics of each group. At the same time, different groups are classified accurately, so that management capability improves and the application of the network information platform becomes more targeted and feasible. Using software, cluster analysis is carried out for each group, the differences between groups are observed, and the characteristics and rules of network use are mined from aspects such as online frequency and voice frequency; the content of interest around the network information platform and the specific application tools are identified, and subsequent behavior is predicted through summarization and induction. Starting from analysis at the objective level, the behavior of a given group can be analyzed in depth through the network information platform so as to understand the correlations and make targeted adjustments.
In practice, the functions on the front-end interface are displayed as text. The user clicks a button and sends a request according to their own needs. After receiving the request, the server forwards it to the database by applying the data mining code and algorithms, obtains the data, and processes it into a specific form.
Strengthen the prevention of network viruses
Network viruses are diverse in type and attack mode and spread quickly; once the platform is invaded by a network virus, the damage and impact can be imagined. In network virus prevention, ordinary computer users have no advanced technology to rely on, and their awareness urgently needs to be strengthened. Although some technical personnel fully understand the attack modes, transmission routes, and other characteristics and rules of viruses, and their overall level is high, viruses keep emerging, and simple prevention can no longer keep the network information platform in an absolutely safe environment. To solve such problems, during the operation of the network information platform the use of data mining and processing technology should be strengthened to mine new virus types and ensure that network viruses can be prevented in advance [5].
Strengthen the collection and processing of basic data
The types and characteristics of different viruses differ clearly, for example Trojan viruses and worm viruses, and the formation of these viruses usually follows certain rules. Therefore, during prevention, data mining and processing technology should be applied to dig deep into risky code, increase the investigation of network viruses, improve the efficiency and accuracy of the investigation, and kill network viruses at an early stage.
Data processing
After potentially malicious code is found, it needs to be processed and verified further. With the help of data mining and processing technology, network virus handling becomes automated and precise, and by applying scientific processing methods the code is turned into a non-executable format so that it no longer has the ability to attack the network information platform. During the application of computer data mining and processing technology, new types of viruses can also be mined accurately and targeted prevention countermeasures formulated, so that the network information platform remains in a safe environment.
Strengthen the optimization of the network information security environment
In data mining and network information management, a safe network operating environment is very important, and it bears directly on whether the platform runs stably. Therefore, in order to avoid security problems during the operation of the platform, the security maintenance of the network environment should be strengthened. From both physical and logical perspectives, user authorization for network access should be strictly managed and controlled to ensure timely basic network security management. Attention should be paid to the application of network intrusion detection technology and to the scientific analysis of bad behavior; against malicious sabotage, detection and early-warning mechanisms should be strengthened, and regular security monitoring of the internal state of computer networks should be carried out to deal with risks in computer systems. The security and reliability of the computer network operating environment can be improved by using intrusion detection technology. In the operation of the network information platform, anti-virus technology should be used, network security risks should be analyzed quantitatively, the virus protection and early-warning architecture should be improved continuously, and the audit analysis mode should be optimized. In addition, system access rights should be analyzed in depth from the actual operating state of the platform to reduce the probability of malicious attacks and to facilitate the in-depth development of data mining. If a malicious attack does occur, network backup and disaster recovery should be applied so that the platform can resume normal operation within a short period of time.
Strengthen network information security precautions
Relying on computer data mining and processing, the network information platform still faces many potential security problems during operation. Therefore, we should not only pay attention to optimizing the security environment but also strengthen the prevention of network information security risks. The specific analysis is as follows:
Formulate reasonable safety precautions
The emergence of network viruses seriously threatens network information security; with the development and application of modern technology, virus types are diverse and the difficulty of mining them increases. Anti-virus software installed on a computer has difficulty finding viruses quickly, and the prevention effect cannot meet expectations. To solve this problem, the use of web technology should be strengthened, the depth and breadth of information mining should be enhanced, network virus risks should be warned of and prevented in time, and viruses should be tracked to ensure that hidden network virus threats do not spread.
Strengthen the mining of server data
During web browsing, the web server automatically saves and records the information viewed by the user. Therefore, with the help of data mining and processing technology, network regulators can dig deep into the server's log files and accurately judge the security of the network. At the same time, prevention against network intrusion should be strengthened so that network information security is guaranteed.
Conclusion
In general, the purpose of building a network information platform is to enable users to obtain reliable, true, and accurate data services, yet without further processing it is difficult to give full play to the potential application value of the data. Through data mining and processing technology, different algorithms can be integrated, data relations can be analyzed in depth, and the application value of the data can be mined at different levels, showing many advantages. Therefore, during the construction of the network information platform, the use of data mining and processing technology should be strengthened, the data service functions should be optimized continuously, and the stability and security of the platform's operation should be enhanced.
"year": 2023,
"sha1": "f6074103a9257bfe3ff98b9fefa0778fa2740a59",
"oa_license": "CCBY",
"oa_url": "http://www.clausiuspress.com/assets/default/article/2023/07/20/article_1689846835.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "222ecdc3f0fe8e624709b4fdb9538034dcba8390",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
222272247 | pes2o/s2orc | v3-fos-license | On nonnegative solutions for the Functionalized Cahn-Hilliard equation with degenerate mobility
The Functionalized Cahn-Hilliard equation has been proposed as a model for the interfacial energy of phase-separated mixtures of amphiphilic molecules. We study the existence of nonnegative weak solutions of a gradient flow of the Functionalized Cahn-Hilliard equation subject to a degenerate mobility M(u) that is zero for u<=0. Assuming the initial data u0(x) is positive, we construct a weak solution as the limit of solutions corresponding to nondegenerate mobilities and verify that it satisfies an energy dissipation inequality.
Introduction
The Functionalized Cahn-Hilliard (FCH) free energy was introduced in [12]. It is an extension of the model of Gompper and Goos [13], proposed to describe the free energy of microemulsions of amphiphilic molecules and solvent. Amphiphilic molecules are formed by chemically bonding two components whose individual interactions with the solvent are energetically favorable and unfavorable, respectively. When blended with the solvent, amphiphilic molecules have a propensity to phase separate, forming amphiphile-rich domains that are thin, generically the thickness of two molecules, in at least one direction. For a binary mixture with composition described by u on Ω ⊂ R^d, the FCH free energy takes the form (1), where Ω is a bounded domain in R^d with boundary ∂Ω. The FCH equation, which is the gradient flow of the FCH energy functional, is written as (2), where T > 0 is a given number. This equation is often subject to periodic or zero-flux boundary conditions on ∂Ω. We also prescribe initial values u(x, 0) = u0(x) for all x ∈ Ω, where u0 ∈ H^2(Ω) is a given function. The function µ is the chemical potential defined by the first variational derivative of the FCH energy functional (1). The diffusion mobility M : R → [0, ∞) is nonnegative and continuous. The double-well potential W : R → R is smooth enough and has two unequal-depth local minima at 0 and b+ > 0, for which W(0) = 0 > W(b+), and W' has exactly three zeroes at 0, b+, and b0 ∈ (0, b+). The parameter η > 0 characterizes key structural properties of the amphiphilic molecules. The function u is the order parameter, representing the relative volume fraction of amphiphilic material, with u ≡ 0 being pure solvent and u ≡ b+ being pure amphiphile. Highly amphiphilic lipids have long hydrophobic tails. The energy of low concentrations of highly amphiphilic materials grows exponentially with the tail length [2]. This leads to models in which the concavity of the left well, W''(0), is large [3]. Indeed, arguing formally, a perturbation v of a u ≡ 0 distribution satisfies a linear diffusion equation. To prevent spuriously high diffusivity at low concentrations, it is natural to take the mobility of lipids so that the product M(0)(W''(0))^2 remains bounded. To compensate for the high energy of dispersed amphiphilic molecules requires mobilities that are zero or asymptotically zero. In this paper we fix the low-density energy and establish the existence and energy dissipation of weak solutions with vanishing mobility.
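Since the displayed equations did not survive extraction, the following LaTeX sketch records only the gradient-flow structure that the text itself describes (a mass-conserving flow driven by the chemical potential µ = δF/δu); the exact form of the FCH energy F and the equation labels (1)-(4) should be checked against the original paper.

% Gradient-flow structure described in the text (schematic; see the original
% paper for the precise form of the FCH energy F and the equation labels).
\begin{align}
  \partial_t u &= \nabla \cdot \big( M(u)\, \nabla \mu \big)
      && \text{in } \Omega \times (0,T), \\
  \mu &= \frac{\delta \mathcal{F}}{\delta u}
      && \text{(first variation of the FCH energy)}, \\
  u(x,0) &= u_0(x)
      && \text{for all } x \in \Omega,
\end{align}
with periodic or zero-flux boundary conditions on $\partial\Omega$.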
In [8] it was proved that, for the mobility (6), there exists a weak solution for this class of degenerate FCH equation; here 0 < m < ∞ if the spatial dimension d ≤ 4. The degenerate mobility (6) is an appropriate choice for the degenerate Cahn-Hilliard (CH) equation, which models phase separation in composite materials [4,5,6,7,15]. It has been shown by numerical simulations and by asymptotic analysis that for d ≥ 2 the degenerate mobility (6) does not guarantee that the weak solution of the degenerate CH equation remains positive, even if the initial value is positive [4,5,6,15]. This is a consequence of the Gibbs-Thomson effect (see also [1,14,16,17] for discussions on the role of degenerate diffusions, and [18] for the one-dimensional case). The situation is different for the degenerate FCH equation, since there is no Gibbs-Thomson effect [9,10]. Instead, a more important feature we need to guarantee is that the relative volume fraction of amphiphilic material u remains nonnegative. It is the purpose of this paper to show that, with the specific form of degenerate mobility (7), there exists a nonnegative weak solution to the FCH equation (2)-(4) when the initial data u0(x) is positive for all x ∈ Ω. Since the FCH equation is a gradient flow for the FCH functional, it is natural to expect the weak solution to satisfy an energy dissipation inequality for F(u). In [8] this energy dissipation property for F was stated informally and without a proof.
In this paper we establish that the weak solutions dissipate the FCH energy.
Main result
In this paper, we assume the dimension d = 1, 2, 3 and the set Ω = (0, 2π)^d, and consider periodic boundary conditions on the boundary ∂Ω. We choose the mobility M(u) to be (7), which is degenerate at u = 0. The degeneracy of the mobility at u = 0 presents the technical difficulties. We also assume that W ∈ C^4(R) and that there exist positive constants C1, C2, C3, C4 such that the growth conditions (8)-(12) hold for all z ∈ R, for a constant 1 < p < ∞ if d = 1, 2, and for a correspondingly restricted p if d = 3. One example of such a potential W is given with p = 2. Under these assumptions, along with the cut-off degenerate mobility (7), we will prove that the FCH equation (2)-(4) has a nonnegative weak solution that is not zero everywhere in Ω_T, assuming that the initial data u0(x) is positive in Ω.
Our analysis follows the same strategy as in [7,8,11] and involves two steps. The first step is to approximate the degenerate mobility M(u) by a non-degenerate mobility M_θ(u), defined for θ ∈ (0, 1) by (13). The positive lower bound of M_θ(u) allows us to find a sufficiently regular weak solution to (2)-(4) with the positive mobility M_θ(u).
The estimate (16) in part (v) of Theorem 1.1 is essential. It is the key to proving the existence of a nonnegative weak solution to the equation (2)-(4) with the degenerate mobility (7).
The second step is to consider the limit of u_θ as θ → 0. The limiting function u of u_θ does exist and, in the weak sense, solves the FCH equation (2)-(4) with the mobility M(u) defined by (7). It can be interpreted that u solves the FCH equation in the open set U_T = U × (0, T) ⊂ Ω_T, where U is any open subset of Ω with ∇∆²u ∈ L^q(U_T) for some q > 1. As for the set where u does not have enough regularity, that set is contained in the set where M(u) is degenerate together with another set of Lebesgue measure zero. Moreover, if the initial data u0(x) is positive in Ω, we obtain a nonnegative weak solution to the equation (2)-(4) that is not constantly zero in Ω_T.
Theorem 1.2. Let u0 ∈ H^2(Ω). With the potential W(u) satisfying (8)-(12) and the mobility M(u) defined by (7), for any given constant T > 0, there exists a function u that satisfies the following conditions. (iv) u can be considered as a weak solution of the FCH equation (2)-(4) in the following weak sense: (a) Let P be the set where M(u) is not degenerate. There exists a set B ⊂ Ω_T with |Ω_T \ B| = 0 and a function ζ such that the weak formulation holds for all φ ∈ L^2(0, T; H^2(Ω)). (b) Let ∇∆²u be the generalized derivative of u in the sense of distributions. If, for some open set U ⊂ Ω, ∇∆²u ∈ L^q(U_T) for some q > 1, where U_T = U × (0, T), then the corresponding identity holds with ω = −∆u + W'(u). (c) In addition, for any t ≥ 0, the energy inequality (18) holds. (v) If u0(x) > 0 for all x ∈ Ω, then u(x, t) ≥ 0 for all (x, t) ∈ Ω_T, and u(x, t) is not constantly zero in Ω_T.
Notation
In this paper, we use C to denote a generic positive constant that may depend on d, T, Ω, η, u0 and Cj (j = 1, 2, 3, 4) but on nothing else, in particular not on θ. We also use C_θ to denote a generic positive constant that may depend on d, T, Ω, η, u0, Cj (j = 1, 2, 3, 4) and θ.
This paper is organized as follows. In Section 2 we prove Theorem 1.1 using the Galerkin method. In Section 3 we prove Theorem 1.2, which establishes the existence of a weak solution to the equation (2)-(4) with degenerate mobility.
Weak solution for the positive mobility case
In this section we prove Theorem 1.1. The proof of the existence of a weak solution u_θ can be found in Section 3 of [8] and is based on Galerkin approximations. We just sketch the idea of that proof, as well as state the convergences and estimates that are necessary for later parts. The main purpose of this section is to prove the energy inequality (15) and the estimate (16) when the initial data u0(x) is positive in Ω.
Galerkin approximation and Weak solution
Let {φj : j = 1, 2, ...} be the normalized eigenfunctions, in the sense that ||φj||_{L^2(Ω)} = 1, of the eigenvalue problem −∆u = λu in Ω subject to periodic boundary conditions on ∂Ω. The eigenfunctions φj are orthogonal in the H^2(Ω) and L^2(Ω) scalar products. Without loss of generality, we assume that λ1 = 0. We consider the Galerkin approximation (19)-(21) for the equation (2)-(4), where ωN = −∆uN + W'(uN). This gives an initial value problem (22)-(24) for a system of ordinary differential equations for cN1, ..., cNN. Since the right-hand side of (22) depends continuously on cN1, ..., cNN, the initial value problem (22)-(24) has a local solution. From Subsections 2.2 and 3.1 in [8], we have the estimates for uN and µN stated in Lemma 2.1 for any solution uN of the system (19)-(21). The estimates in Lemma 2.1 give a uniform bound for cN1, ..., cNN; therefore a global solution of the initial value problem (22)-(24) exists. With the specific form (13) of the positive mobility M_θ(u), we obtain the bound (34) for ∇µN, stated in Lemma 2.2 for any solution uN of the system (19)-(21). Proof. By the definition of M_θ(u) in (13), M_θ(u) ≥ θ. Combining this with (26), we obtain the estimate (34).
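Since the displayed formulas for the Galerkin system were lost in extraction, the following LaTeX sketch records only the standard ansatz implied by the text (expansion of uN in the eigenfunctions φj with time-dependent coefficients cNj); the precise weak formulation (19)-(21) should be taken from the original paper.

% Standard Galerkin ansatz implied by the text (schematic; labels as in the paper).
\begin{equation}
  u^N(x,t) \;=\; \sum_{j=1}^{N} c_j^N(t)\, \varphi_j(x),
  \qquad
  \omega^N \;=\; -\Delta u^N + W'(u^N),
\end{equation}
where the coefficients $c_1^N,\dots,c_N^N$ solve the ODE system obtained by testing the equation against each $\varphi_j$.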
Then by taking the limit as N → ∞ in (39), we get the energy inequality (15).
Weak solutions for the degenerate mobility case
In this section we prove the main theorem, Theorem 1.2. We now consider the FCH equation (2)-(4) with the degenerate mobility M(u) defined by (7). The proof of the existence of a weak solution u and a function ζ satisfying (17) is similar to that in Section 4.4 of [8]. Again, we just sketch the idea of that proof, as well as state the convergences and estimates that are necessary for later parts. In this section we provide a more detailed proof of the relation between u and ζ, as well as prove the energy inequality (18). Moreover, we prove the existence of a nonnegative solution that is not constantly zero in Ω_T when the initial data u0 is positive in Ω.
The relation between ζ and u
Since u ∈ L^∞(0, T; H^2(Ω)), the function ω := −∆u + W'(u) is completely defined, and ω ∈ L^∞(0, T; L^2(Ω)). The desired relation between ζ and u involves the terms ∇ω and ∇∆ω, which are only defined in the sense of distributions and may not even be functions, so we need some higher regularity conditions on u.
Claim. For any open set U ⊂ Ω such that ∇∆²u ∈ L^q(U_T) for some q > 1 that may depend on U, where U_T = U × (0, T), the stated identity holds. The argument uses equation (71) in [8], with q > 1 and 0 < α < 1/2, for the values of q and α indicated. By (65), there exists a convergent subsequence, and by (68) the corresponding limit relation holds. We see that the bounds on the right-hand side of (30) and (31) depend only on d, T, Ω, η, u0 and Cj (j = 1, 2, 3, 4) but not on θ, so a θ-independent constant exists. By (77), (72) and (75), the required convergence follows. Since |Ω_T \ B| = 0 and M(u) = 0 in Ω_T \ P, the value of ζ outside of (B ∩ P) ∪ ΩT does not contribute to the integral on the right-hand side of (70), so we may just let ζ = 0 outside of (B ∩ P) ∪ ΩT.
Nonnegative weak solution with positive initial data
Assume, in addition, that the initial data satisfies u0(x) > 0 for all x ∈ Ω. By (16), there exists a constant C, independent of {θi} ⊂ (0, 1), such that the bound (81) holds for each i = 1, 2, .... By the convergence in (67) and (68), and since u is continuous in Ω_T, passing to the limit as i → ∞ in (81) yields u(x, t) ≥ 0 for all (x, t) ∈ Ω_T. Furthermore, since u0 > 0 in Ω, the continuity of u in Ω_T implies that u is not constantly zero in Ω_T. This completes the proof of Theorem 1.2.
"year": 2020,
"sha1": "f1641ad0aad226d4c3432bbde36d7122ff19f494",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.rinam.2021.100195",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "f1641ad0aad226d4c3432bbde36d7122ff19f494",
"s2fieldsofstudy": [
"Mathematics",
"Chemistry"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
54054108 | pes2o/s2orc | v3-fos-license | Studies for the effect of the positive Q-value neutron transfers on sub-barrier heavy-ion fusion
This contribution reports the recent experimental studies of the coupling effect of positive Q-value neutron transfer (PQNT) channels on near-barrier fusion with heavy ions, measured with an electrostatic deflector setup. This effect is expected to be important in some situations. In this presentation, the experimental studies of the near-barrier fusion of 32S + 90,94,96Zr, 16O + 76Ge, 18O + 74Ge, and 16,18O + 58Ni are reported. A method, based on CCFULL calculations with suitable inelastic couplings, to sort out the PQNT effect is also presented.
Introduction
Near-barrier fusion with heavy ions offers a good platform to study the two basic mechanisms of tunneling and coupling [1,2] in the quantum world. The involvement of the couplings may lead to strong sub-barrier fusion enhancement in some situations. So far, couplings to collective excitation states can be described well in near-barrier fusion by using the coupled-channels (CC) theory [2], while couplings to nucleon transfers are more troublesome and have not yet been well understood. For transfer reactions, neutron transfers might be more important than proton transfers, especially in the sub-barrier energy region. The possible effect of positive Q-value neutron transfers (PQNT) on near-barrier fusion was first discovered by Beckerman et al. [3] in the experimental comparison of the fusion excitation functions of 58,64Ni + 58,64Ni and was first explained by Broglia et al. [4] by considering the kinematic energy gained in the (virtual) intermediate states as an important doorway to fusion [5]. Afterwards, many studies [1,6] have been devoted to the PQNT effect, but the relevant reaction dynamics have still not been clarified.
This contribution reports the recent experimental studies of the coupling effect of PQNT on near-barrier fusion with heavy ions, measured with an electrostatic deflector setup. A method, based on CCFULL [7] calculations with suitable inelastic couplings, to sort out the PQNT effect is also presented.
Recent near-barrier fusion experiments at CIAE
The experiments were performed at the HI-13 tandem accelerator of CIAE, Beijing. Fusion evaporation residual (ER) cross sections have been measured by using an electrostatic deflector (see Ref. [8] for more details).
32S + 90,94,96Zr
The first studied case is 32S + 90,94,96Zr [9]. The idea is to further search for a correlation of sub-barrier fusion enhancement with PQNT channels. The fusion excitation functions measured for 32S + 90,94,96Zr are shown in Fig. 1, where the energy scale E_c.m. (energy in the center-of-mass system) was corrected for the target thickness and the error bars represent purely the statistical uncertainties. It can be seen that the experimental fusion of 32S + 94,96Zr indeed shows a strong relation between sub-barrier fusion enhancement and the PQNT channels. Fusion cross-section enhancement reflects the whole effect of couplings to all the relevant channels. For understanding the PQNT effect, the key is to exclude the impact of the collective excitation states. To sort out the effect related to PQNT, the residual enhancement (RE) is defined as the ratio of the experimental fusion data (σ_Exp) to the CC calculation result (σ_CC) [7] including the major inelastic couplings, that is, RE = σ_Exp/σ_CC; this excludes the inelastic coupling effect, given the good description of inelastic couplings by the CC theory [2].
The corresponding RE for 32S + 90,94,96Zr is shown in Fig. 2. It can be seen that the experimental fusion excitation function of 32S + 90Zr is reproduced well, while the RE deviates from unity with decreasing energy for 32S + 94,96Zr. Notably, a larger RE appears for 32S + 94Zr than for 32S + 96Zr at the same energy, despite the smaller neutron-transfer Qgg values of the former system and the larger neutron separation energies of 94Zr. The larger RE obtained for 32S + 94Zr is reasonable considering the similar experimental fusion cross sections for the two systems but the larger inelastic (3−) coupling effect for 96Zr. This suggests that more must be considered for a full clarification of the relevant dynamic reaction processes. To simplify the problem, systems with only one positive Q-2n-value neutron transfer channel were selected for study. At first, the fusion of 18O + 74Ge, with Q-2n = +3.75 MeV and a lower inelastic coupling (ZpZt) effect, was studied experimentally [6]. Fusion of 16O + 76Ge was also measured as a reference. The experimental fusion excitation functions are shown in Fig. 3 (left). Usually, a reduction method is used to remove the trivial geometric effect when comparing the coupling effect between different systems. The right panel shows the reduced fusion excitation functions for the two systems on a reduced energy scale, where V_B is the Coulomb barrier energy and R_B is the fusion barrier radius. One can see that the two reduced fusion excitation functions almost overlap over the whole energy region. This means no visible sub-barrier fusion enhancement related to the positive Q-2n-value neutron transfer channel for 18O + 74Ge in the measured energy region. This conclusion is consistent with the results obtained from the experimental studies of 36S + 58Ni [10] and 18O + ASn [11] with positive Q-2n-value neutron stripping channels. For the moment, we have tried to complete the near-barrier experimental fusion of 18O + 58Ni with rather good accuracy by extending the measurement to the sub-barrier energy region. The fusion of 16O + 58Ni was also measured as a reference system. The preliminary experimental fusion excitation functions of 16,18O + 58Ni are shown in Fig. 4 (left), where the energy scale E_c.m. is corrected for the carbon backing (facing the beam) and the target thickness. The right panel shows the fusion excitation functions on a reduced energy scale. It can be seen that the two reduced fusion excitation functions deviate from each other in the near- and sub-barrier energy region. This indicates the appearance of the PQNT effect on fusion, considering the minor coupling effect of the collective inelastic excitation states for the two lighter systems.
Summary
The PQNT effect has been studied experimentally. These experimental results further reveal a complicated PQNT effect, which should be studied further for a full understanding of the underlying physics in nuclear reactions with heavy ions.
Figure 3: Experimental fusion excitation functions of 16O + 76Ge and 18O + 74Ge (circles and squares, respectively) in absolute (left) and reduced (right) scales.
The role of the neutron orbital populations, such as the shell closure of 90Zr and the sub-shell closure of 96Zr, should be considered theoretically. Certainly, experimental measurements of the transfer reactions for the 94,96Zr-involved systems should give a meaningful clue to this quantitative correlation.
Figure 4: Preliminary experimental fusion excitation functions of 16,18O + 58Ni (circles and squares) in absolute (left) and reduced (right) scales.
For further study of the effect of neutron transfer channels with positive Q-2n values, another ideal system, 18O + 58Ni, with a larger Q-2n value and smaller ZpZt, was studied. The fusion excitation function of 18O + 58Ni was measured once before [12], but only at near-barrier energies and with lower data quality. Fusion of 18O + 58Ni is interesting considering the larger Q-2n/V_B and has therefore attracted many studies, but precise fusion data down to the sub-barrier energy region were still absent.
"year": 2016,
"sha1": "9f8418e102f4f0a6b5129d80d1fb17aaeea75397",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2016/12/epjconf_nn2016_08014.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9f8418e102f4f0a6b5129d80d1fb17aaeea75397",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
96450045 | pes2o/s2orc | v3-fos-license | Re-sequencing and transcriptome analysis reveal rich DNA variations and differential expressions of fertility-related genes in neo-tetraploid rice
: Autotetraploid rice is a useful germplasm for polyploid rice breeding; however, low seed setting is the major barrier to the commercial utilization of autotetraploid rice. Our research group has developed neo-tetraploid rice lines, which have the characteristics of high fertility and heterosis when crossed with autotetraploid rice. In the present study, re-sequencing and RNA-seq were employed to detect global DNA variations and differentially expressed genes (DEGs) at the meiosis stage in three neo-tetraploid rice lines compared to their parents. Here, a total of 4109881 SNPs and 640592 InDels were detected in the neo-tetraploid lines compared to the reference genome, and 1805 specific presence/absence variations (PAVs) were detected in the three lines. Approximately 12% and 0.5% of the total SNPs and InDels identified in the three lines were located in genic regions, respectively. A total of 28 genes, each harboring at least one large-effect SNP and/or InDel that affects the integrity of the encoded protein, were identified in the three lines. Together, 324 specific mutation genes, including 52 meiosis-related genes and 8 epigenetics-related genes, were detected in the neo-tetraploid rice compared to their parents. Of these 324 genes, five meiosis-related and three epigenetics-related genes displayed differential expression during the meiosis stage. Notably, 498 specific transcripts, 48 differentially expressed transposons and 245 differentially expressed ncRNAs were also detected in the neo-tetraploid rice. Our results suggest that genomic structural reprogramming, DNA variations and differential expression of some important meiosis- and epigenetics-related genes might be associated with high fertility in neo-tetraploid rice.
Introduction
Polyploidy plays an important role in plant evolution and could be an important resource for plant breeders in the future [1,2]. Over 70% of all angiosperm species have experienced whole-genome duplication during their evolutionary history [3,4,5]. Polyploids offer many advantages over their diploid progenitors, such as increased variation in the expression of dosage-regulated genes that evolve new biological functions, larger vegetative organs, longer panicles, and high levels of heterosis [6,7,8,9,10].
Autotetraploid rice is a useful germplasm resource obtained by colchicine treatment, which shows higher genetic variation, greater resistance to abiotic and biotic stresses, and higher biomass production than diploid rice [10,11,12]. Intersubspecific hybrids (indica × japonica) of autotetraploid rice have powerful biological and yield potential and are expected to become a new way to breed rice in the future [13]. However, low seed setting is the major barrier to the commercial utilization of polyploid rice [14,15]. Partial pollen sterility is one of the most important reasons for low fertility in autotetraploid rice, and it is caused by different factors, such as abnormal microtubule distribution patterns and chromosome behavior [16,17,18]. These abnormalities might be caused by abrupt changes in the expression patterns of genes and miRNAs associated with meiosis [2,19,20]. Polyploidy could increase the interactions between pollen sterility loci and cause frequent meiotic abnormalities that lead to high pollen sterility in autotetraploid rice hybrids [18]. Another study revealed that the pollen sterility mechanism is very complex and that sequence variation, differential levels of methylation and differentially expressed genes have a strong influence on the fertility of autotetraploid rice [21]. Recently, the breeding procedure for neo-tetraploid rice, which can overcome the sterility of autotetraploid rice hybrids, has been reported, together with a transcriptome analysis of the neo-tetraploid rice and their hybrids that revealed differential expression patterns of genes associated with fertility [13].
Technological advances allow sequencing to be performed more economically and efficiently than ever before, providing excellent opportunities to investigate biological problems. Re-sequencing technology has been utilized successfully in diploid rice and has revealed huge genome-wide DNA variations involved in various agronomic traits [22,23,24,25,26]. However, little is known about genome-wide DNA structural variations and gene expression in neo-tetraploid rice compared to its autotetraploid parents. Therefore, we performed whole-genome re-sequencing to detect the genome-wide DNA variations between neo-tetraploid rice and their autotetraploid parents in the present study. Meanwhile, mRNA-seq was also employed to identify differentially expressed genes during meiosis and to detect genes that might be associated with high seed setting in neo-tetraploid rice. The results of this study may help to explain the molecular mechanism of high fertility in neo-tetraploid rice.
Ethics statement
No specific permissions were required for these locations/activities because all cultivars/lines were grown at our research station (farm of South China Agricultural University). We have been doing research on these cultivars for more than two decades, and our research group generated these lines by crossing. We also confirm that the field studies did not involve any endangered or protected species. The three neo-tetraploid rice lines were developed from the same parents, Jackson-4x (maternal, T45) and 96025 (paternal, T44). A total of 20 plants from each autotetraploid rice parent (T44 and T45) and three neo-tetraploid rice lines were harvested from the field at maturity. Agronomic traits, including plant height, effective number of panicles per plant, filled grains per plant, empty grains per plant, total grains per plant, and grain yield per plant, were measured. These traits were selected from the Descriptors and Data Standard for Rice (Oryza sativa L.) to describe the genetic variations between the autotetraploid rice parents and the three neo-tetraploid rice lines [27]. Single-factor analysis of variance for each trait (different combinations) was done with SPSS 16.0. Multiple comparisons were done by Duncan's New Multiple-Range Test (DMRT), using the α = 0.05 significance level.
Classification of chromosome behavior
Spikelets were collected from rice plants with -2 to 2 cm between their flag leaf cushion and the second-to-last leaf cushion and fixed in Carnoy solution (ethanol : acetic acid, 3:1, v/v) for at least 24 h. The samples were washed three times using 50% (v/v) ethanol and then stored in 70% (v/v) ethanol at 4˚C. Anthers were removed from the floret using forceps and a dissecting needle and placed in a drop of 1% (w/v) acetocarmine on a glass slide. After 3 to 5 min, the glass slide was covered with a coverslip and examined under a microscope (Motic BA200). Meiotic stages were classified according to Wu et al. (2014) [19].
Investigation of pollen and embryo sac fertility and seed setting
Five mature spikelets were collected from each line, and all of them were fixed in Carnoy solution for 24 h to investigate pollen fertility; potassium iodide solution (I2-KI, 1%) was used to stain the pollen grains, which were observed under a microscope. Pollen fertility was divided into four categories based on the color and morphology of the pollen grain, i.e., normal fertile pollen, stained abortive, spherical abortive, and typical abortive pollen [28]. WE-CLSM was used to observe the embryo-sac structure, and embryo sac fertility was investigated according to Shahid et al. (2010) [9]. Seed setting was counted according to the method of Shahid et al. (2013b) [28].
Genome re-sequencing
Young leaves of the neo-tetraploid rice lines and autotetraploid rice parents were collected and stored at -80˚C for DNA isolation. Genomic DNA was extracted using a modified CTAB method [29]. The sequencing library was prepared according to the standard Illumina protocol. Paired-end sequencing was then conducted on Illumina HiSeq 2500 and HiSeq X Ten platforms (BioMarkers, Beijing, China). The quality of the generated FASTQ files was evaluated using FastQC (http://www.bioinformatics.babraham.ac.uk/projects/fastqc/). Low-quality reads, including reads with sequencing adapters, reads with more than 10% N content, and reads with more than 50% low-quality bases (<10), were filtered out. After filtration, clean data were aligned to the Nipponbare reference genome (http://plants.ensembl.org/Oryza_sativa/Info/Index) with the Burrows-Wheeler Aligner (BWA) software [30]. MarkDuplicates in Picard (https://sourceforge.net/projects/picard/) was used to eliminate PCR duplicates. We used the Genome Analysis Toolkit (GATK) for base recalibration and realignment near insertion or deletion regions. SAMtools was used to estimate reference genome coverage [31].
Identification and analysis of variations
The filtered alignment files were used for the identification of SNPs and InDels. The following SNPs and InDels were filtered out: two or more SNPs in a 5 bp or shorter window, SNPs near (5 bp or less) an InDel, and two or more InDels in a 10 bp or shorter window. We further retained the SNPs and InDels with a coverage depth ranging from 5 to 100. Presence/absence variations (PAVs) were identified using the BreakDancer software, and structural variations (SVs) with a coverage depth ranging from 6 to 100 were retained [32]. The SnpEff software was used to annotate SNPs and InDels, and PAVs were annotated based on the GFF file of the Nipponbare reference genome [33].
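To make the filtering rules above concrete, here is an illustrative Python sketch (not the authors' pipeline) that applies the stated depth cut-off and the 5 bp SNP-clustering rule to a toy variant list; the field names and example records are assumptions.

# Illustrative variant filter (not the authors' pipeline):
# keep variants with depth 5-100 and drop SNPs that cluster within 5 bp.
# Each variant is a dict with assumed fields: chrom, pos, type ('SNP'/'InDel'), depth.
variants = [
    {"chrom": "chr1", "pos": 1050, "type": "SNP", "depth": 34},
    {"chrom": "chr1", "pos": 1053, "type": "SNP", "depth": 40},   # within 5 bp of previous SNP
    {"chrom": "chr1", "pos": 2000, "type": "InDel", "depth": 12},
    {"chrom": "chr1", "pos": 5000, "type": "SNP", "depth": 3},    # depth too low
]

def depth_ok(v, lo=5, hi=100):
    return lo <= v["depth"] <= hi

def drop_clustered_snps(snps, window=5):
    # Remove any SNP that has another SNP within `window` bp on the same chromosome.
    keep = []
    for i, v in enumerate(snps):
        close = [w for j, w in enumerate(snps)
                 if j != i and w["chrom"] == v["chrom"] and abs(w["pos"] - v["pos"]) <= window]
        if not close:
            keep.append(v)
    return keep

by_depth = [v for v in variants if depth_ok(v)]
snps = drop_clustered_snps([v for v in by_depth if v["type"] == "SNP"])
indels = [v for v in by_depth if v["type"] == "InDel"]
print(len(snps), "SNPs and", len(indels), "InDels retained")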
RNA-seq analysis
Anthers were collected from the autotetraploid rice parents and the three neo-tetraploid rice lines at the meiosis stage. Floret lengths at the meiosis stage in the autotetraploid rice (T44 and T45) and the neo-tetraploid rice (134, 66 and H3) were 4.8-5.3 mm, 5.5-6.0 mm, 5.4-5.8 mm, 5.3-5.7 mm and 5.3-5.7 mm, respectively. All samples were collected in three biological replicates and stored at -80˚C for RNA isolation. Total RNA from each sample was extracted from the anthers, ovaries and leaves according to the manual of the TRIzol Reagent (Life Technologies, USA). The anther samples of each biological replicate were pooled for RNA extraction. The quantity and quality of each RNA sample were assessed using a 1% agarose gel and a NanoDrop 1000 spectrophotometer (NanoDrop, USA). RNA integrity number and concentration were checked using an Agilent 2100 Bioanalyzer (Agilent Technologies, USA). The mRNA was isolated with the NEBNext Poly(A) mRNA Magnetic Isolation Module (NEB). The enriched and purified mRNA was broken into approximately 200 nt short RNA inserts, which were used to synthesize the first-strand cDNA and then the second-strand cDNA. The double-stranded cDNA was used to perform end repair/dA-tailing and adaptor ligation. Suitable fragments were isolated with Agencourt AMPure XP beads (Beckman Coulter, Inc.) and enriched by PCR amplification. Finally, the constructed cDNA libraries of the samples were sequenced on a flow cell using an Illumina HiSeq 2500 sequencing platform.
Transcriptome analysis was done using reference genome-based read mapping. Low-quality reads, such as reads with adaptor sequences, unknown nucleotides >5%, or Q20 <20% (percentage of sequences with sequencing error rates <1%), were removed with the NGSQC software [34]. The clean reads were then mapped onto the Nipponbare (IRGSP-1.0 pseudomolecule/MSU7) reference genome using the Cufflinks and Cuffmerge software [35,36]. Gene expression levels were estimated as FPKM values (Fragments Per Kilobase of transcript per Million fragments mapped) by the Cufflinks and Cuffmerge software [36,37]. The false discovery rate (FDR) control method was used to identify the threshold of the P-value in multiple tests in order to compute the significance of the differences. Here, only genes with an absolute fold change ≥2 and an FDR significance score <0.01 were used for the subsequent analysis.
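As a small illustration of the thresholds just stated (fold change ≥ 2 and FDR < 0.01), the Python sketch below filters a toy table of FPKM-based comparisons; the gene names, FPKM values and FDR values are invented, not taken from the study.

# Toy differential-expression filter using the thresholds stated in the text:
# |fold change| >= 2 (i.e. |log2 fold change| >= 1) and FDR < 0.01.
# Gene names, FPKM values and FDR values below are invented for illustration.
import math

comparisons = [
    {"gene": "geneA", "fpkm_parent": 10.0, "fpkm_neo": 45.0, "fdr": 0.001},
    {"gene": "geneB", "fpkm_parent": 20.0, "fpkm_neo": 25.0, "fdr": 0.0005},
    {"gene": "geneC", "fpkm_parent": 30.0, "fpkm_neo": 5.0,  "fdr": 0.02},
]

degs = []
for row in comparisons:
    log2fc = math.log2((row["fpkm_neo"] + 1e-6) / (row["fpkm_parent"] + 1e-6))
    if abs(log2fc) >= 1 and row["fdr"] < 0.01:
        degs.append((row["gene"], round(log2fc, 2)))

print(degs)   # e.g. [('geneA', 2.17)] in this invented example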
Verification analysis
Primer 5 software was used to design the primers, and PCR was performed using PrimeSTAR Max DNA polymerase (TaKaRa) according to the manufacturer's instructions. Amplification reactions (20 μl) contained 30 ng DNA template, 0.15 μmol/L of each primer (S1 Table), and 1× PrimeSTAR Max Premix. The PCR reaction was programmed as follows: 94˚C for 3 min; 35 cycles of 94˚C for 15 s, 60˚C for 5 s and 72˚C for 30 s; and a final extension at 72˚C for 5 min. PCR products were examined by agarose gel electrophoresis and sequenced by the Beijing Genomics Institute (Guangzhou, China). The sequencing results were assembled with DNAMAN software and further aligned to the reference genome sequences to validate the variations at polymorphic loci using BioEdit software. Total RNA obtained from rice anthers was reverse transcribed using the PrimeScript RT reagent kit with gDNA Eraser (TaKaRa). The qRT-PCR was performed using the SYBR Premix Ex Taq II kit (TaKaRa) according to the manufacturer's instructions. Amplification reactions (20 μl) contained 1 μl of cDNA sample, 10 μl of SYBR Premix Ex Taq (2×), and 0.2 μM of each primer (S2 Table). Actin was used as a reference gene. The PCR cycling conditions comprised one denaturation cycle at 94˚C for 90 s, followed by 40 amplification cycles (94˚C for 10 s, 61˚C for 15 s, and 72˚C for 20 s). All qRT-PCR amplifications were carried out in triplicate, and the results are presented as means ± standard deviations. The relative expression levels of genes were calculated by the 2^-ΔΔCT method [38].
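For readers unfamiliar with the 2^-ΔΔCT (Livak) calculation cited above, the following Python sketch shows the arithmetic on made-up Ct values; the sample names and numbers are purely illustrative.

# 2^-ddCt (Livak) relative expression from qRT-PCR Ct values.
# Ct values below are made up; actin serves as the reference gene, as in the text.
def relative_expression(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    d_ct_sample = ct_target_sample - ct_ref_sample        # normalize to the reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                     # compare sample to control
    return 2 ** (-dd_ct)

# Hypothetical example: a target gene in neo-tetraploid anthers vs. parent anthers.
fold = relative_expression(ct_target_sample=24.1, ct_ref_sample=18.0,
                           ct_target_control=26.3, ct_ref_control=18.2)
print(round(fold, 2))   # ~4.0-fold higher expression in this invented example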
Morphological and cytological observations of neo-tetraploid rice
A total of 20 plants from each autotetraploid rice parent and the three neo-tetraploid rice lines were harvested from the field at maturity. The three neo-tetraploid rice lines displayed significant differences in agronomic traits, including plant height, number of filled grains per plant, 1000-grain weight and seed setting (Fig 1, Table 1). Importantly, the seed setting and pollen fertility of the neo-tetraploid rice lines were significantly higher than those of their parents. The three neo-tetraploid rice lines produced 68.3%, 68.1% and 71.8% seed set, while it was only 31.5% and 25.4% in the autotetraploid parents (Table 1). Pollen fertility of the neo-tetraploid rice lines was more than 92%, while the parents produced 65.83% and 76.75% pollen fertility (Fig 2, Table 1). Non-significant differences were detected in embryo sac fertility between the neo-tetraploid rice lines and the autotetraploid parents (Table 1), and the normal embryo sacs of the parents and neo-tetraploid lines are shown in Fig 2. The main embryo sac abnormalities in the autotetraploid parents and the three neo-tetraploid rice lines were abnormal numbers and locations of polar nuclei, degenerated embryo sacs, degradation of the female reproductive unit, and degradation of the egg apparatus. The seed setting of hybrid lines generated by crossing with various autotetraploid lines was significantly higher than that of the autotetraploid parents (S3 Table).
Discovery of SNPs and InDels in neo-tetraploid rice
A total of 447 million reads were generated for the neo-tetraploid rice lines and their autotetraploid parents. After the removal of low-quality reads, about 98% of the reads were retained as clean data and used for further investigation. The high-quality reads were then mapped onto the reference genome (MSU7.0) using BWA software. Overall, almost 92% of these reads were uniquely mapped and covered about 90% of the reference genome with at least 10× coverage depth; the average coverage was approximately 90% in the three neo-tetraploid rice lines and their autotetraploid parents (S4 Table).
Compared with the reference genome sequence, a total of 4750473 polymorphic sites, including 4109881 SNPs and 640592 InDels, were discovered in the neo-tetraploid rice lines. We applied the following two criteria to decrease the rate of false-positive SNPs and InDels: a read depth between 5 and 100, and a sequencing quality score of 30 (Q30), which corresponds to an error rate of one per 1000 bases. Based on these filter conditions, the total numbers of DNA polymorphisms ranged from 1104015 to 1631038 in the three neo-tetraploid rice lines (134, 66 and H3) and the two parents, and the percentages of heterozygous DNA polymorphisms were 13.4%, 10.8%, 16.7%, 11.8% and 6.8%, respectively (S5 Table). Further, we identified SNPs and InDels between the three neo-tetraploid rice lines and their autotetraploid parents. The numbers of DNA polymorphisms were about 1.38, 1.36 and 1.19 times higher for 134 vs T44, 66 vs T44 and H3 vs T44 than for 134 vs T45, 66 vs T45 and H3 vs T45, respectively. The total numbers of SNPs were 133187, 107978 and 167136 for the neo-tetraploid lines (134, 66 and H3) compared to the two parents, and the percentages of heterozygous SNPs were 73.54%, 51.96% and 79.11%, respectively. In total, 79259, 8085 and 26236 InDels were detected in the neo-tetraploid lines (134, 66 and H3) compared to the two parents, and the percentages of heterozygous InDels were 9.6%, 52.1% and 38.52%, respectively (S6 Table). The sequencing results were further validated by PCR amplification, and 76 randomly selected variation sites were sequenced. The DNA variation sites identified by Sanger sequencing were consistent with the re-sequencing data (S7 Table).
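A minimal sketch of the depth and quality filters described above is given below; the variant records are hypothetical, and a real pipeline would apply these thresholds to the VCF output of the variant caller rather than to hand-written dictionaries:

```python
# Minimal sketch of the variant-filtering criteria described above:
# keep calls with read depth between 5 and 100 and quality >= Q30.
variants = [
    {"chrom": "chr1", "pos": 10523, "type": "SNP",   "depth": 34,  "qual": 48.0},
    {"chrom": "chr1", "pos": 20817, "type": "InDel", "depth": 3,   "qual": 55.0},
    {"chrom": "chr7", "pos": 99120, "type": "SNP",   "depth": 250, "qual": 60.0},
]

MIN_DEPTH, MAX_DEPTH, MIN_QUAL = 5, 100, 30.0

passed = [v for v in variants
          if MIN_DEPTH < v["depth"] < MAX_DEPTH and v["qual"] >= MIN_QUAL]
print(f"{len(passed)} of {len(variants)} variant calls pass the filters")
```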
Genomic distribution and analysis of SNPs and InDels in neo-tetraploid rice
The distribution of DNA polymorphisms between the neo-tetraploid rice lines and the autotetraploid parents was analyzed across the twelve rice chromosomes, and the results indicated that the total numbers of SNPs and InDels on a chromosome were proportional to chromosome length (S8 Table and S9 Table). In the neo-tetraploid rice lines vs T44, the largest number of SNPs was detected on chr7, while chr5 had the smallest number of SNPs. Similarly, the highest numbers of InDels were observed on chr7, chr1 and chr7, while chr5, chr9 and chr9 had the smallest numbers of InDels in 134, 66 and H3, respectively. The highest SNP density was found on chr7. In the neo-tetraploid rice lines relative to T45, the largest numbers of SNPs were detected on chr11, while chr10 had the smallest number of SNPs. Similarly, the highest numbers of InDels were observed on chr11, chr11 and chr1 in 134, 66 and H3, while chr10 had the smallest number of InDels in all three neo-tetraploid rice lines compared to T45. The highest SNP frequencies were detected on chr7, chr9 and chr10 in 134, 66 and H3 compared to T45, respectively (Fig 4).
Moreover, we observed that SNPs were not uniformly distributed across the chromosomes. In the neo-tetraploid rice lines (134, 66 and H3) compared to T44, a total of 1302, 905 and 1162 high-density (>250) SNP regions of 100 kb were identified. Similarly, a total of 243, 476 and 320 low-density (<5) SNP regions of 100 kb were detected in H3, 66 and 134, respectively. In the neo-tetraploid rice lines compared to T45, 931, 1529 and 1138 high-density (>250) SNP regions and 462, 78 and 362 low-density (<5) SNP regions of 100 kb were detected. Genomic regions with no SNPs were also detected in the neo-tetraploid rice lines. The frequency of transitions (A/G and C/T; Ts) was much higher than that of transversions (A/C, A/T, G/C, and G/T; Tv), and the Ts/Tv ratio was 2.58, 2.60 and 2.58 in the neo-tetraploid rice lines (134, 66 and H3) compared to T44, and 2.62, 2.64 and 2.65 compared to T45 (Fig A in S1 File). The frequencies of the A/G and C/T transitions were similar. However, among transversions, the frequency of A/T was higher than that of G/C. Further analysis of the length distribution of InDels detected between the neo-tetraploid rice lines and the autotetraploid parents showed that about half of the InDels (48.9%) were 1 bp (mononucleotide insertions-deletions), 32% were 2-5 bp and 20% were ≥6 bp (Fig A in S1 File).
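The Ts/Tv ratio reported above can be computed directly from the list of SNP ref/alt allele pairs; a small sketch (with toy alleles, not the actual SNP set) is shown below:

```python
TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def ts_tv_ratio(snps):
    """snps: list of (ref, alt) allele pairs; returns transitions / transversions."""
    ts = sum((ref, alt) in TRANSITIONS for ref, alt in snps)
    tv = len(snps) - ts
    return ts / tv if tv else float("inf")

# Toy example; real input would be the SNP list from the re-sequencing pipeline.
snps = [("A", "G"), ("C", "T"), ("A", "T"), ("G", "A"), ("G", "C"), ("T", "C")]
print(round(ts_tv_ratio(snps), 2))  # 4 transitions / 2 transversions = 2.0
```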
Presence/absence variations (PAVs; >100 bp) are a major source of genome structural variation and have profound effects on phenotypic and genomic variation in plants, so we further analyzed PAVs in neo-tetraploid rice. A total of 596, 644 and 565 specific PAVs were detected in the neo-tetraploid rice lines compared to the autotetraploid parents, and these PAVs influenced chromosome length to varying degrees. The highest numbers of PAVs were detected on chr1, chr1 and chr7, while the smaller chromosomes, such as chr12, chr10 and chr10, had the lowest numbers of PAVs in H3, 66 and 134, respectively. However, the largest influence on chromosome size was detected on chr9, and the smallest on chr10, chr3 and chr4, in H3, 66 and 134, respectively (S10 Table).
Annotation and effect of SNPs and InDels on amino acid substitution in neo-tetraploid rice
The annotation of the rice genome revealed the distribution of SNPs and InDels within various genomic regions, such as intergenic and intragenic regions. Overall, a similar distribution pattern of SNPs and InDels was observed between the neo-tetraploid rice lines and the autotetraploid parents (Fig B-D in S1 File). Approximately 50% of the SNPs were identified in intergenic regions. About 12% of the total SNPs were detected in genic regions, and significant proportions of SNPs were detected in the 2 kb upstream and 1 kb downstream regions. Within the genic region, more than 6% of the SNPs were present in introns. The 3'UTR and 5'UTR regions also contained SNPs (0.5-1.0%). Similarly, about 42% of the InDels were identified in intergenic regions in both types of rice. Only 0.5% of the InDels were present in exonic regions, whereas the upstream and downstream regions contained about 20% of the InDels. Within the genic region, almost 7% of the InDels were present in introns. Similar to the SNPs, InDels (0.3-0.8%) were also observed in the 3'UTR and 5'UTR regions.

We reasoned that the high seed setting shared by the three neo-tetraploid rice lines might be explained by mutations at common sites, so we further analyzed genes mutated at the same sites that might be associated with fertility in the three neo-tetraploid rice lines. A total of 9397 and 6980 genes, harboring at least one SNP and/or InDel, exhibited mutations in the neo-tetraploid rice lines compared to T44 and T45, respectively, and 1362 genes showed variations in the neo-tetraploid rice lines compared to their parents, with variation sites present in upstream, downstream, intron, and coding regions.
We analyzed the effect of SNPs on amino acid substitutions, and a high proportion of the SNPs in CDS regions were found to be non-synonymous in the neo-tetraploid rice lines and autotetraploid parents. These non-synonymous substitutions were present in 3622 genes in the neo-tetraploid rice lines compared to T44 and in 2804 genes compared to T45. Of the mutated genes, 181 were present in both groups, i.e., neo-tetraploid rice lines vs T44 and neo-tetraploid rice lines vs T45 (S11 Table).
We further analyzed the distribution of large-effect SNPs and InDels, which are predicted to have a pronounced effect on the loss of gene function. A total of 305 and 224 large-effect SNP loci were detected in the neo-tetraploid rice lines compared to T44 and T45, respectively. Of these, 291 and 208 genes affected the integrity of the encoded proteins in the neo-tetraploid rice lines (Table 4). The large-effect SNPs included disruption of splice sites, loss of the translation initiation codon, introduction of a premature stop codon and loss of the stop codon. Similarly, we identified 441 and 392 InDels in 315 and 269 genes that cause frame shifts, disruption of splice sites or introduction of premature stop codons (Table 4). Overall, 545 and 441 genes harbored at least one large-effect SNP and/or InDel. Among these genes, 28 commonly mutated genes were found in the neo-tetraploid rice lines compared to T44 and T45 (S12 Table). These genes were not found to be involved in any known biological process.
New mutations in neo-tetraploid rice
New mutations, i.e., sites at which the neo-tetraploid rice lines differ from both parents, might be associated with fertility in neo-tetraploid rice. We therefore investigated the existence of new mutations in the neo-tetraploid rice lines, and many new SNPs and InDels were detected in the neo-tetraploid rice genome. Of the 1362 mutated genes, 324 harbored at least one new specific variation site in the neo-tetraploid rice lines (S13 Table), and we focused on these 324 genes. GO analysis revealed that these 324 genes were significantly enriched in polar nucleus fusion, RNA 3'-end processing, cellular protein modification process, phosphorylation, and protein ubiquitination (S14 Table). Co-expression analysis revealed that 19 of the 324 specifically mutated genes were co-expressed. Of these 19 putatively co-expressed genes, the biological functions of Os05g0209000, Os06g0558900, Os11g0513700 and Os11g0513900 are still unknown. The other genes were Os01g0715600 (auxin efflux carrier component), Os05g0208550 (gibberellin 2-beta-dioxygenase 1), Os05g0212200 (leucine-rich repeat family protein), Os05g0211100 (cytochrome P450), Os05g0519700 (heat shock protein), Os06g0549900 (reticuline oxidase-like protein precursor), Os06g0552900 (FT-Like12, homologous to the Flowering Locus T gene), Os06g0553800 (plastocyanin-like domain containing protein), Os06g0603600 (SPX domain containing protein), Os06g0604000 (AP2 domain containing protein), Os11g0514500 (brassinosteroid insensitive 1-associated receptor kinase 1 precursor), Os11g0562100 (cycloartenol synthase), Os11g0565300 (OsWAK receptor-like protein kinase), Os12g0553200 (RGH1A), and Os12g0559200 (lipoxygenase 2.1).

Meiosis is a vital process during pollen development, and low pollen fertility and abnormal chromosome behaviors have been observed in autotetraploid rice. We therefore focused on the polymorphic genes that could be associated with meiosis by comparing them with meiosis-related and stage-specific genes reported in rice and other plants [18,38,39]. Of the 324 genes, we found 52 meiosis-related genes (S15 Table), but their functions during meiosis are unknown. Moreover, we detected eight epigenetics-related genes (Table 5). Of these genes, one codon insertion and one synonymous variation were detected in Os06g0535200, and one intron variation and one codon deletion were detected in Os06g0537500. Two non-synonymous SNP variations and one frame-shift mutation were identified in Os01g0719100. All three of these genes are annotated as E3 ubiquitin-protein ligases. Os05g0392400 is annotated as an SNF2 domain-containing protein and carried three mutations in intron regions. One non-synonymous SNP was identified in Os08g0289400, which is annotated as serine/arginine-rich splicing factor SR45. Two synonymous SNP variations were detected in Os10g0357800, which is annotated as an N-dimethylguanosine tRNA methyltransferase, and one synonymous SNP variation was detected in Os12g0211400, which is annotated as an adenine DNA glycosylase. Two non-synonymous SNP variations and one frame-shift mutation were detected in Os04g0572600, which encodes DNA-directed RNA polymerase IV subunit 1. Tissue-specific analysis indicated that Os06g0535200 was not expressed in the anther and was specifically expressed in the root, whereas Os05g0392400 was specifically expressed in the anther. The highest levels of Os01g0719100 transcripts were detected in the embryo and anther, and Os04g0572600, Os10g0357800 and Os12g0211400 displayed high expression levels in the anther, panicle and inflorescence, respectively.
The changes in gene expression patterns detected by RNA-seq during meiosis
To further investigate the influence of the mutations and transposon elements on gene expression, transcriptome sequencing was used to examine the putative and meiosis-related genes during meiosis. Genes showing more than two-fold up- or down-regulation between the neo-tetraploid rice lines and their parents were classified as differentially expressed genes (DEGs). In total, 3471, 3117 and 3794 genes showed differential expression between the three neo-tetraploid rice lines and T44. Of these genes, 1905, 1969 and 2030 were up-regulated and 1566, 1148 and 1764 were down-regulated in 134, 66 and H3, respectively. In the neo-tetraploid rice lines compared to T45, 2371, 1929 and 4448 genes showed differential expression in 134, 66 and H3, respectively. Of these genes, 924, 746 and 2376 were up-regulated and 1447, 1183 and 2072 were down-regulated, respectively. We reasoned that the high seed setting of all neo-tetraploid rice lines might be explained by common differentially expressed genes during meiosis, so we further analyzed the common differentially expressed genes in the three neo-tetraploid rice lines by Venn analysis. A total of 1473 and 766 genes were common and differentially expressed in the neo-tetraploid rice lines compared to T44 and T45, respectively (S16 Table, S17 Table). Of the 1473 DEGs, 132 were noncoding RNAs (ncRNAs) and 41 were transposon elements (S18 Table, S19 Table). In addition, 129 genes were expressed only in neo-tetraploid rice, while 177 genes were expressed only in T44 (S20 Table). The mRNA levels of 23 specifically mutated genes exhibited significant changes in neo-tetraploid rice (S21 Table). Among these 23 genes, 8 meiosis-related genes were differentially expressed, including Os01g0716200, Os01g0719100 (E3 ubiquitin-protein ligase), Os05g0519300, Os05g0527700, Os06g0556300, Os06g0559400, Os11g0513900 and Os11g0558400, all of which are uncharacterized at the meiosis stage. Meanwhile, three specifically mutated epigenetics-related genes were also differentially expressed in the neo-tetraploid rice lines compared to both parents: Os01g0719100 was down-regulated, while Os04g0572600 and Os05g0392400 were up-regulated in the neo-tetraploid rice lines.
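The "common DEG" step, i.e., calling genes with more than a two-fold change per line and then intersecting the per-line DEG sets, can be sketched as follows (toy FPKM values and gene names; the FDR filter is omitted for brevity):

```python
import numpy as np

genes = ["OsA", "OsB", "OsC", "OsD"]
fpkm_parent = np.array([10.0, 5.0, 80.0, 3.0])

def call_degs(fpkm_line, min_fold=2.0):
    """Genes with >= min_fold up- or down-regulation vs the parent (FDR filter omitted)."""
    fold = (fpkm_line + 0.1) / (fpkm_parent + 0.1)
    hits = np.maximum(fold, 1.0 / fold) >= min_fold
    return {g for g, h in zip(genes, hits) if h}

degs_134 = call_degs(np.array([25.0, 11.0, 75.0, 3.1]))
degs_66  = call_degs(np.array([30.0, 12.0, 20.0, 2.9]))
degs_H3  = call_degs(np.array([28.0,  4.8, 30.0, 9.5]))

# "Common DEGs": genes called differentially expressed in all three neo-tetraploid lines.
print(degs_134 & degs_66 & degs_H3)  # -> {'OsA'}
```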
Moreover, 57 genes were differentially expressed in the neo-tetraploid rice lines compared to their parents, including 7 specifically mutated genes. Of these 7 genes, 5 meiosis-related genes were differentially expressed, namely Os01g0716200, Os01g0719100, Os05g0527700, Os06g0556300 and Os11g0513900. Among the epigenetics-related genes, Os04g0572600 (DNA-directed RNA polymerase IV subunit 1) and Os05g0392400 (SNF2 domain-containing protein) were up-regulated, while Os01g0719100 (E3 ubiquitin-protein ligase) was down-regulated (S26 Table). These results suggest that the specific mutations affect gene expression and function, which might be associated with fertility in neo-tetraploid rice.
To confirm the expression levels of differentially expressed genes in neo-tetraploid rice and autotetraploid rice, 61 genes were selected for qRT-PCR analysis at meiosis stage, including nine genes only expressed in autotetraploid rice, nine genes only expressed in neo-tetraploid rice, 18 up-regulated genes in neo-tetraploid rice, 10 down-regulated genes in neo-tetraploid rice, seven up-regulated ncRNA in neo-tetraploid rice, and eight down-regulated ncRNA in autotetraploid rice. Based on the qRT-PCR analysis, the expression patterns of all these genes were consistent with RNA-seq data (Fig 6).
Mutations in novel meiosis- and epigenetics-related genes may be associated with fertility in neo-tetraploid rice
Meiosis plays a significant role in the life cycle of all sexually reproducing eukaryotes, and a number of key meiotic genes have been identified and functionally studied in rice and other plants [39,40]. Here, four genes, Os01g0716200, Os05g0527700, Os06g0556300 and Os11g0513900, showed specific mutations and were differentially expressed between neo-tetraploid rice and autotetraploid rice during meiosis, but their functions are unknown. Moreover, these genes were also found to be differentially expressed during meiosis in previous studies [13,18,19], indicating that Os01g0716200, Os05g0527700, Os06g0556300 and Os11g0513900 might be related to fertility in neo-tetraploid rice. Protein ubiquitination is a post-translational modification, and components of the ubiquitin system have been demonstrated to be involved in the regulation of the degradation of specific proteins [41,42]. E3 ubiquitin-protein ligase, a multiprotein complex, is responsible for targeting ubiquitination to specific substrate proteins [43]. Ubiquitination has also been demonstrated to be involved in chromosome segregation and polar body extrusion [44,45,46]. Os01g0719100, annotated as an E3 ubiquitin-protein ligase, displayed specific mutations and was differentially expressed between neo-tetraploid rice and autotetraploid rice, which indicates that Os01g0719100 may play a key role during meiosis in neo-tetraploid rice. These mutations may have altered the expression level, and even the function, of Os01g0719100, thereby affecting the modification of meiosis-related proteins.
The SNF2 domain-containing protein, RDR2 and NRPD1a are required for the production of endogenous 24-nucleotide short interfering RNAs (24nt-siRNAs) in Arabidopsis thaliana [47]. The 24nt-siRNAs regulate epigenetic silencing by directing DNA methylation through the RNA-directed DNA methylation pathway [5,48,49]. Recent studies revealed that 24nt-siRNAs related to the DNA methylation of class II transposable elements suppressed the expression of nearby genes involved in pollen and embryo sac fertility in autotetraploid rice [2,5]. The highest levels of Os05g0392400 (SNF2 domain-containing protein) transcripts were detected in the anther, which suggests that Os05g0392400 plays an important role in anther development. Three specific mutations were found in its introns, and the expression of Os05g0392400 was up-regulated in the three neo-tetraploid rice lines. This result suggests that the three intron variations may affect the methylation level of Os05g0392400 and change its expression, which in turn may affect the expression levels of some fertility-related 24nt-siRNAs in neo-tetraploid rice.
The components of RNA polymerase IV mediate short-interfering RNA (siRNA) accumulation and the subsequent RNA-directed DNA methylation-dependent transcriptional gene silencing of target sequences [50,51,52]. Some studies have suggested that siRNAs are essential for regulating gene expression and play a crucial role in male meiosis and pollen development [2,5]. Os04g0572600, annotated as DNA-directed RNA polymerase IV subunit 1, had two non-synonymous SNP variations and one frame-shift mutation, and the mRNA level of Os04g0572600 changed markedly in neo-tetraploid rice. We speculate that these mutations altered the expression pattern, and possibly the function, of Os04g0572600, leading to changes in fertility-related 24nt-siRNAs in neo-tetraploid rice.

[Fig 6 legend fragment: ... the genes expressed only in the autotetraploid parents; the lower lane shows the expression of the reference gene in the neo-tetraploid rice lines and autotetraploid parents. (B) and (C): differentially expressed ncRNAs in neo-tetraploid rice compared to autotetraploid rice, (B) compared to T44 and (C) compared to T45. (D) and (E): differentially expressed genes in the neo-tetraploid rice lines compared to autotetraploid rice, (D) compared to T44 and (E) compared to T45.]
Genomic structural reprogramming may be associated with fertility in neo-tetraploid rice
Genetic diploidization following whole-genome duplication in plants may have occurred quite frequently during organismal evolution [53]. Earlier reports suggested that chromosomal rearrangements, through processes such as neo-functionalization, sub-functionalization or loss of duplicated segments, recombination, transposable elements and genetic drift, cause differences between formerly homologous chromosomes [54,55]. In the present study, more than 100 genes were specifically expressed in neo-tetraploid rice or autotetraploid rice, and the 596, 644 and 565 specific PAVs in the neo-tetraploid rice lines influenced chromosome length to varying degrees. These results indicate that chromosome breakage, illegitimate recombination and genome rearrangement have altered the genomic structure of neo-tetraploid rice, which may affect the transcriptome, proteins and even the phenotype. Meanwhile, the large numbers of SNPs and InDels in neo-tetraploid rice increased genomic polymorphism. This might result in some homologous chromosomes failing to pair during meiosis, which may reduce homologous recombination rates. It is clear that greater chiasma and multivalent frequencies cause low fertility in neo-autopolyploids, and that high levels of aneuploidy are associated with high numbers of multivalents at metaphase I [56,57]. Hence, a low multivalent frequency may be associated with high fertility in polyploids. The multivalent frequency in neo-tetraploid rice was significantly lower than in autotetraploid rice, while the bivalent frequency was significantly higher in neo-tetraploid rice than in autotetraploid rice. We therefore infer that genomic structural reprogramming may lead to high fertility in neo-tetraploid rice.
Epigenetic reprogramming may be associated with fertility in neo-tetraploid rice
Epigenetics plays a crucial role in various aspects of plant biology, including development, the silencing of transposable elements and the maintenance of genome stability. In plants, epigenetic regulation involves histone and DNA modifications as well as ncRNAs [58]. Inter-species hybridization in rice has been shown to be associated with changes in the expression levels of genes involved in epigenetic mechanisms [59]. Neo-tetraploid rice is an intersubspecific hybrid (indica × japonica) of autotetraploid rice, so we inferred that the epigenome of neo-tetraploid rice has been reprogrammed. In fact, transposon elements and ncRNAs displayed specific or differential expression in neo-tetraploid rice or autotetraploid rice, indicating epigenetic changes in neo-tetraploid rice. Transposon elements are the target of small interfering RNA-mediated silencing [60]. Some studies suggested that siRNAs regulate gene expression and play a crucial role in male meiosis and pollen development [2,20]. Many ncRNAs are functional and are involved in regulating gene expression at the transcriptional and post-transcriptional levels [58]. Therefore, the expression levels of many genes may change under the influence of ncRNAs and TE-siRNA-triggered methylation, particularly during the crucial stage of meiosis, which may lead to high fertility in neo-tetraploid rice.
Supporting information S1 File. Additional figures about distribution and annotation of SNPs and InDels in this study. Frequency of substitution types in SNPs and length distribution of InDels in Neo-tetraploid rice lines vs autotetraploid rice lines (T44 & T45) (Fig A). Annotation of SNPs and InDels between autotetraploid rice and Neo-tetraploid rice lines (Fig B, Fig C and Fig D). (PDF) S1 | 2019-04-07T13:03:33.010Z | 2019-04-05T00:00:00.000 | {
"year": 2019,
"sha1": "c84bdba777f2ec1c6c4c5af0d624d35aaeb023f5",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0214953&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c84bdba777f2ec1c6c4c5af0d624d35aaeb023f5",
"s2fieldsofstudy": [
"Biology",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
225040271 | pes2o/s2orc | v3-fos-license | In Search of Robust Measures of Generalization
One of the principal scientific challenges in deep learning is explaining generalization, i.e., why the particular way the community now trains networks to achieve small training error also leads to small error on held-out data from the same population. It is widely appreciated that some worst-case theories -- such as those based on the VC dimension of the class of predictors induced by modern neural network architectures -- are unable to explain empirical performance. A large volume of work aims to close this gap, primarily by developing bounds on generalization error, optimization error, and excess risk. When evaluated empirically, however, most of these bounds are numerically vacuous. Focusing on generalization bounds, this work addresses the question of how to evaluate such bounds empirically. Jiang et al. (2020) recently described a large-scale empirical study aimed at uncovering potential causal relationships between bounds/measures and generalization. Building on their study, we highlight where their proposed methods can obscure failures and successes of generalization measures in explaining generalization. We argue that generalization measures should instead be evaluated within the framework of distributional robustness.
Introduction
Despite tremendous attention, a satisfying theory of generalization in deep learning remains elusive. In light of so many claims about explaining generalization in deep learning, this statement is somewhat controversial. It also raises an important question: What does it mean to explain generalization in deep learning?
In this work, we propose empirical methodology to aid in the search of a precise mathematical theory, allowing us to leverage large-scale empirical studies of generalization, like those in recent work [8,9]. Unlike earlier work, however, our proposal rests on the foundation of robust prediction, in order to catch out, rather than average out, failures.
The dominant approach to studying generalization is the frequentist framework of statistical learning theory. We focus our attention on the simplest setting within supervised classification, where the training data, S, are modeled as a sequence of n random variables, drawn i.i.d. from a distribution D on labeled examples (x, y). In supervised classification, learning algorithms choose a classifier h_S based on the training data S. Ignoring important considerations such as fairness, robustness, etc., the key property of a classifier h is its probability of error, or (classification) risk, R_D(h) = P_{(x,y)∼D}[h(x) ≠ y]. One of the key questions is why deep learning often produces classifiers with human-level risk in domains that stymied researchers for decades. In this work, we take an empirical perspective and judge theories of generalization by the predictions they provide when tested. In the other direction, any systematic rule for predicting generalization -- whether learned or invented -- can be thought of as a theory that can be tested.
We consider families of environments, defined by data distributions, architectural choices, train set sizes, learning algorithms and their tuning parameters, etc. Given a particular family of environments, a strong theory achieves a desired level of precision for its predictions, while depending as little as possible on the particular details of the environments. At one extreme, explanations based on the VC dimension of the zero-one loss class of neural networks would pin the success of deep learning on empirical risk minimization. In practice, these explanations are poor, not just because the ensuing bounds are numerically vacuous for the size of networks and datasets used in practice, but because they fail to correctly predict the effect of changes to network width, depth, etc.
At the other extreme, average risk on held-out data (i.e., a test-set bound) provides a sharp estimate of risk, yet the computation to produce this estimate is inextricably tied to every detail of the learned weights and data distribution. Viewing predictors as theories, the test-set bound is essentially silent. Any satisfactory theory of generalization in deep learning must therefore lie between these two extremes. We must necessarily exploit properties of the data distribution and/or learning algorithm, but we must also be willing to trade precision to decrease our dependence on irrelevant details.
What dependence on the data distribution and learning algorithm is necessary to explain deep learning? Even taking the data distribution into consideration, the fact that stochastic gradient descent (SGD) and its cousins often perform empirical risk minimization cannot explain generalization [28]. There is, however, a picture emerging of overparametrization and SGD conspiring to perform capacity control. In some theoretical scenarios, this control can be expressed in terms of norms. At the same time, great strides have been made towards identifying notions of capacity that can be shown to formally control generalization error, R_D(h) − R̂_S(h), uniformly over h belonging to specially defined classes. (Here R̂_S(h) denotes the empirical risk, as estimated by the training data.) Despite this progress, there is still a wide gulf between performance as measured empirically via held-out data and performance as predicted by existing theory. No expert would be surprised to discover that a published bound yields predictions for risk that are numerically vacuous when evaluated empirically. A standard retort is that the constants in most bounds are suboptimal or that the purpose of bounds is to identify qualitative phenomena or inspire the development of new algorithms. Even ignoring the issue of numerically vacuous bounds, many bounds demonstrably fail to account for the right dependencies. As a case in point, recent empirical work [17] identifies state-of-the-art bounds on generalization error that grow with training set size, while the generalization error actually shrinks. Indeed, many bounds are distribution- or data-dependent and so the question of whether they explain generalization in practice is an empirical question.
Large-scale empirical studies
Recent work proposes large-scale empirical investigations to study generalization [9]. (See also [8].) While it is becoming more common for theoretical work to present empirical evaluations, among recent empirical studies [2, 3, 5, 11, 12, 19, etc.], most are limited. One motivation for large-scale empirical studies is to leverage massive computing resources in the pursuit of a scientific challenge that has largely been approached mathematically. Another motivation is to go beyond simply measuring correlation towards measuring aspects of causation. (Several authors of this work-Dziugaite, Neal, and Roy-have each advocated for this publicly.) Given how influential these proposals by Jiang et al. [9] may be, we believe they deserve critical attention. (Indeed, recent preprints have already started to integrate their methodology.) Jiang et al. [9] propose to use Kendall correlation coefficients and independence tests to evaluate a suite of so-called generalization measures. Many of these generalization measures are frequentist bounds, though with missing constants or lower-order terms. Others are only loosely inspired by bounds.
Jiang et al. [9] propose to average evaluation metrics over a range of experimental settings. In contrast, we argue that average performance is not a suitable way to measure the strength of a generalization measure as a theory of generalization. In particular, a satisfying theory of generalization should admit a generalization measure that offers reasonable predictions of generalization across a range of experimental settings, e.g., those arising from different hyperparameter choices, datasets, etc. A theory-as realized by a generalization measure-is as strong as its weakest component: a satisfying theory cannot simply predict well on average.
The study of prediction across a range of environments is the subject of distributional robustness [1,4]. An extreme form of robustness is obtained when one seeks to predict well in all environments that may arise from all possible interventions to an experimental setting. This extreme form of robustness can be linked to a weak form of causality [4,16].
Crucially, we do not aim for robustness over all possible environments. To achieve some level of generality, useful theories must necessarily have limited scope. As we demonstrate in Section 2, frequentist generalization bounds can exploit noncausal correlations that can be seen to stand in for unknown properties of the data distribution because of properties of the learning algorithm. Such bounds have an important role to play in our search for a theory of generalization in deep learning, but we cannot expect them to explain generalization under interventions that upset these noncausal correlations. Bounds that depend on properties of the data distribution have an important role to play, though one hindered by the statistical barriers of unknown distributions, accessible only through a limited pool of data. More general theories (that minimize this dependence) can pinpoint key data properties.
Contributions. Theories of generalization yield predictions: How should we evaluate these predictions empirically? In this work, we adopt the proposal of [9] to exploit large-scale empirical studies, but critique the paper's guidance that we should evaluate the predictions of these theories in much the same way that we evaluate typical ML benchmarks. Based on the specific scientific goals of understanding generalization, we propose that the framework of distributional robustness is more appropriate, and suggest how to use it to evaluate generalization measures.
Besides theoretical contributions, we make empirical contributions: We collect data from thousands of experiments on CIFAR-10 and SVHN, with various values for width, depth, learning rate, and training set size. We adopt the ranking task and sign-error loss introduced by [9], but use the collected data to perform a robust ranking evaluation, across the 24 candidate generalization measures on over 1,600,000 pairs of experiments.
We find that no existing complexity measure has better robust sign-error than a coin flip. Even though some measures perform well on average, every single measure suffers from 100% failure in predicting the sign change in generalization error under some intervention. This observation is not the end of the evaluation, but the beginning.
To better understand the measures, we evaluate them in families of environments defined by interventions to a single hyperparameter. We find: (i) most, though not all, measures are good at robustly predicting changes due to training set size; (ii) robustly predicting changes due to width and depth is hard for all measures, though some PAC Bayes-based measures are robust across a large fraction of the environments tested; (iii) norm-based measures outperform other measures at learning rate interventions.
By focusing on robust evaluation, we force ourselves to dig into the data to uncover the cause of failures-failures which might otherwise go undiscovered by looking at average performance. As such, robustness provides better guidance to the scientific challenge of explaining generalization. The rest of this paper is organized as follows. In Section 2, we present a concrete example of a frequentist analysis of a learning algorithm, which reiterates some of the high-level points above. We then introduce distributional robustness in Section 3 and describe how the framework can be used to analyze large-scale empirical studies of generalization measures in Section 4. We detail our experimental setting in Section 5 and summarize our experimental findings in Section 6 before ending with a discussion.
A Motivating Example: SVMs, Norm-based Capacity Control, and the Role of Causality

In this section, we study support vector machines (SVMs) to demonstrate some of the challenges in understanding and explaining generalization error. This section owes much to [27]. The intuition extracted from this simple model motivates our methodological choices for the rest of this paper. In particular, we see that frequentist generalization bounds derived under one set of conditions may rely on quantities that do not have a direct causal relationship with the generalization error under other conditions. This highlights that frequentist bounds can be expected to have limits to their predictive powers under intervention, but also that asking for causal measures of generalization may rule out measures that nonetheless work well in a range of scenarios.
Consider linear prediction, based on an embedding of inputs into R^p. As usual, we index the space of linear predictors by nonzero vectors w ∈ H = R^p, where the decision boundary associated to w is the tangent hyperplane {x ∈ R^p : ⟨w, x⟩ = 0}, passing through the origin. Assuming labels take values in {±1}, the zero-one classification loss of the predictor w on a labeled input (x, y) is ℓ(w, (x, y)) = ½(1 − y sgn(⟨x, w⟩)). Note that the loss is invariant to the magnitude of the vector w, and so the set of hyperplanes can be put into correspondence with the unit vectors w̄ := w/‖w‖. We focus on the realizable setting, i.e., data are assumed to be labeled according to some hyperplane. In this case, every finite data set admits a positive cone of empirical risk minimizers.
Let the training data be S = ((x_1, y_1), ..., (x_n, y_n)), and let w_S be chosen according to the SVM rule: minimize ‖w‖², subject to the constraint that y_i⟨x_i, w⟩ ≥ 1 for all i ∈ [n]. The constraint demands that, for each data point, the functional margin, y_i⟨x_i, w_S⟩, be at least one. Thus the hyperplane w_S indeed separates the data and achieves zero empirical risk. However, among the vectors that satisfy the margin constraint, w_S has the smallest L2 norm. Geometrically, the hyperplane w_S is that with the largest geometric margin, min_i y_i⟨x_i, w̄_S⟩.
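A small numerical illustration of the SVM rule may help. The sketch below uses scikit-learn with a large regularization constant C to approximate the hard-margin solution, on synthetic separable data labeled by a ground-truth hyperplane; unlike the through-the-origin formulation above, scikit-learn's SVC also fits an intercept, so this is only an approximation of the setup in the text:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic, linearly separable data labeled by a ground-truth hyperplane w*.
p, n = 20, 200
w_star = rng.normal(size=p)
X = rng.normal(size=(n, p))
y = np.sign(X @ w_star)

# Large C approximates the hard-margin SVM: min ||w||^2 s.t. y_i <x_i, w> >= 1.
clf = SVC(kernel="linear", C=1e6).fit(X, y)
w_S = clf.coef_.ravel()

margins = y * clf.decision_function(X)  # functional margins y_i (<x_i, w_S> + b)
print("min functional margin:", margins.min())                 # ~1 for separable data
print("geometric margin:", margins.min() / np.linalg.norm(w_S))
```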
Why does the SVM classifier generalize? The best explanation may depend on the situation. The VC dimension of the space of p-dimensional linear predictors is p, and so, with high probability over the sample S, uniformly over all separating hyperplanes w, the difference between the empirical risk and risk is Õ(p/n). If n ≫ p, then this reason alone suffices to explain strong performance. The details of the SVM rule are irrelevant beyond it returning an empirical risk minimizer.
Suppose that we consider a family of embeddings of growing dimensionality and find that the SVM rule generalizes equally well across this family. The VC theory cannot explain this. A theory based on the maximum-margin property of the SVM rule may. To that end, assume there exists a hyperplane w* such that y⟨w*, x⟩ ≥ 1 with probability one over the pair (x, y). To fix a scale, assume ‖x‖ ≤ ρ with probability one. By exploiting strong convexity, and the fact that ‖w_S‖ ≤ ‖w*‖, one can show that the risk of w_S is bounded by Õ(ρ‖w*‖/n). Note that this bound has no explicit dependence on the dimension p. Instead, it depends on the quantity ρ‖w*‖, whose reciprocal has a geometric interpretation: the distance between the separating hyperplane w* and the nearest data point, normalized by the radius of the data. Therefore, this analysis trades dependence on dimension for dependence on the data distribution's density near the decision boundary. When ρ‖w*‖ ≪ n, SVM's inductive bias is sufficient to explain generalization, even if p ≫ n.
In fact, we can always build a bound based on the norm of the learned weights: with high probability, for every ERM w_S, the risk is bounded by Õ(ρ‖w_S‖/n). One might prefer such a bound since w* is often presumed unknown. Even if this bound matches risk empirically, it has a strange property: the bound depends on the norm ‖w_S‖ even though the risk is independent of the norm. Thus, we cannot expect the bound to remain valid if we intervene on the norm after training, e.g., to test for a causal relationship between norms and risk. Norms are the effect of the data and SVM interacting.
This example highlights that there may be multiple overlapping explanations depending on the range of environments in which one wants to understand generalization. We cannot, however, expect a theory to be robust to arbitrary interventions. Identifying a theory with limitations may lead us to more general ones, once we understand those limitations. All of this motivates a careful design of experimental methodology, in order to navigate these tradeoffs. In particular, we demand that a theory is robustly predictive of generalization over a carefully designed family of environments.
Preliminaries on Robust Prediction
In this section, we introduce the framework of robust prediction, borrowing heavily from Bühlmann [4], Peters, Bühlmann, and Meinshausen [23], and Rothenhäusler et al. [25]. In the next section, we cast the problem of studying generalization into this framework.
Consider samples collected in a family F of different environments. In particular, let (Ω, A) denote a common (measurable) sample space and, in each of these environments e ∈ F, assume the data we collect are drawn i.i.d. from a distribution P^e on Ω. We will think of environments as representing different experimental settings, interventions to these experiments, sub-populations, etc. For example, each sample might be a covariate vector and binary response, i.e., Ω = R^p × {0, 1}. A well-studied setting is where the distributions P^e all agree on the conditional mean of the response given the covariates (i.e., the regression function), but disagree on the distribution of the covariates.
Prediction is formalized by a loss function. In particular, a loss function for a set Φ of predictors is a map ℓ : Φ × Ω → R. The error or risk (of a predictor φ ∈ Φ in an environment e ∈ F) is then the expected loss, E_{ω∼P^e}[ℓ(φ, ω)]. If we focus on one environment e ∈ F, it is natural to seek a predictor φ ∈ Φ with small risk for that individual environment. However, if we care about an entire family F of environments, we may seek a predictor that works well simultaneously across F. In the setting of distributional robustness, the performance of a predictor relative to a family F of environments is measured by the robust error (or risk), sup_{e∈F} E_{ω∼P^e}[ℓ(φ, ω)]. (2) The goal of robust prediction is to identify a predictor with small robust risk.
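A minimal sketch of the robust risk, as a maximum over environments of the per-environment average loss, is given below (toy environments and a toy threshold predictor; all names and values are illustrative only):

```python
import numpy as np

def risk(predictor, samples, loss):
    """Average loss of a predictor over samples drawn from one environment."""
    return np.mean([loss(predictor, w) for w in samples])

def robust_risk(predictor, environments, loss):
    """Worst-case (sup over environments) risk: the quantity minimized in robust prediction."""
    return max(risk(predictor, samples, loss) for samples in environments.values())

# Toy example: samples are (x, y) pairs, the predictor is a threshold on x,
# and environments differ in the distribution of x.
loss01 = lambda t, w: float((w[0] > t) != w[1])  # zero-one loss
environments = {
    "e1": [(0.2, 0), (0.9, 1), (0.4, 0), (0.8, 1)],
    "e2": [(0.45, 1), (0.7, 1), (0.3, 0), (0.55, 1)],
}
print(robust_risk(0.5, environments, loss01))  # -> 0.25 (driven by environment e2)
```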
Connection to causality. If taken to an extreme, then robust prediction is closely related to learning causality. Specifically, suppose that (X, Y ) is induced by a common causal model Y := f (X).
If F represents all possible interventions on subsets of X, then the causal predictor f (X) also minimizes the robust risk. See [4,24] for more details.
Studying Generalization via Distributional Robustness
We are interested in understanding the effects of changes to a complex machine learning experiment, with a focus on effects on generalization. In this section, we cast this problem into the framework of distributional robustness. In order to study generalization, we view theories of generalization as yielding predictors for generalization under a range of experimental settings. We use the term generalization measure to refer to such predictors.
Experimental Records and Settings
In the notation of Section 3, points ω ∈ Ω represent possible samples. In our setting, each sample represents a complete record of a machine learning experiment. An environment e specifies a distribution P e on the space Ω of complete records.
In the setting of supervised deep learning, a complete record of an experiment would specify hyperparameters, random seeds, optimizers, training (and held out) data, etc. Ignoring concerns of practicality, we assume the complete record also registers every conceivable derived quantity, not only including the learned weights, but also the weights along the entire trajectory, training errors, gradients, etc. Formally, we represent these quantities as random variables defined on the probability spaces (Ω, A, P^e), e ∈ F. Among these random variables, there is the empirical risk R̂ and risk R of the learned classifier, and their difference, G, the generalization error/gap.
Each distribution P^e encodes the relationships between the random variables. Some of these relationships are common to all the environments. E.g., the generalization error G always satisfies G = R − R̂, and the empirical risk R̂ is always the fraction of incorrectly labeled examples in the training data. Some relationships may change across environments. E.g., in a family F designed to study SGD, changes to, e.g., the learning rate, affect the distribution of the trajectory of the weights.
In machine learning, environments arise naturally from learning algorithms applied to benchmarks under standard hyperparameter settings. In order to evaluate theories that explain the effect of, e.g., hyperparameter changes, we also consider environments arising from perturbations/interventions to standard settings. E.g., we may modify the hyperparameters or data, or intervene on the trajectory of weights in some way. Every perturbation e is captured by a different distribution P e .
With respect to a family of environments F, a generalization measure is preferred to another if it has smaller robust error (2). In Sections 5 and 6, we restrict our attention to F induced by varying hyperparameters, data distributions, training datasets, and dataset sizes. In this work, we do not intervene on the dynamics of SGD. However, intervening on the trajectory induced by SGD might be an interesting future direction that could allow one to tease apart the role of implicit regularization.
Prediction tasks
The predictions associated with a theory of generalization are formalized in terms of a map C : Ω → R, which we call a generalization measure. We will study ad hoc generalization measures as well as ones derived from frequentist bounds. In both cases, we are interested in the ability of these measures to predict changes in the generalization.
One important aspect of a generalization measure is the set of (random) variables (i.e., covariates) it depends on. Indeed, there is an important difference between the task of predicting generalization using only the architecture and the number of data points and using also, e.g., the learned weights. Formally, a generalization measure that uses only a subset of the variables is one that is measurable with respect to the σ-algebra generated by those variables. We may prefer one generalization measure to another on the basis of the covariates it uses. As a simple example, if a generalization measure offers comparable precision to another measure, but is measurable with respect to a strict subset of variables, then this increased generality may be preferred.
Goals of the prediction. We are broadly interested in two types of prediction tasks, distinguished by whether we train one or two networks.
In coupled-network experiments, we train two networks, such that they share all hyperparameters except one. We are interested in trying to predict which network has smaller generalization error.
Some of the generalization measures we consider are based on generalization bounds from the literature. Given that generalization bounds are often numerically vacuous, it would not be informative to evaluate their predictions directly at this stage. It is, however, reasonable to evaluate whether they capture the right dependencies. Indeed, one desirable property of evaluating generalization measures by the rankings they induce in coupled-network experiments is that the rankings are invariant to monotonically increasing transformations of the measure.
In single-network experiments, we try to predict the numerical value of the generalization error for that network based on a linear or affine function of a generalization measure. Generalization measures that perform well in such a task would serve as accurate predictors of generalization, and could be used for, e.g., model selection. However, such measures would not necessarily serve to be useful in generalization bounds. We describe the experimental details and results of single-network experiments in Appendix B due to space limitations.
Experimental methodology
In coupled-network experiments, we evaluate the ranking that the generalization measure induces on trained networks. The approach we describe here is a robust analogue of the Kendall-τ-based approach advocated by Jiang et al. [9]. This change is deceptively minor. We highlight the very different conclusions drawn using our methodology in Section 6.
Evaluation criterion. In more detail, recall that a coupled-network environment e determines a distribution P^e on pairs (ω, ω′) of variable assignments, each representing a full record of an experiment. We evaluate a generalization measure, C, and the realized generalization error, G, on both assignments, ω and ω′. We use the ranking of C values to predict the ranking of G values. Then, the sign-error of a generalization measure C for this task is given by SE(P^e, C) = E_{(ω,ω′)∼P^e}[1{sign(C(ω) − C(ω′)) ≠ sign(G(ω) − G(ω′))}]. (3) Given a family F of coupled-network environments, the robust sign-error of a generalization measure C is sup_{e∈F} SE(P^e, C). The Ψ summary proposed by Jiang et al. [9] is analogous to the average sign-error, |F|^{-1} Σ_{e∈F} SE(P^e, C). In our experiments, we use a modification of the loss in Eq. (3) in order to account for Monte Carlo variance in empirical averages. We use a weighted empirical average, where the weight for a sample (ω, ω′) is calculated based on the difference in generalization error |G(ω) − G(ω′)|. We discard samples for which the difference in generalization error is below the Monte Carlo noise level. In effect, we control the precision to which we want our generalization measure to predict changes: when the difference is insignificant, we do not predict the sign. See Appendix A for the details on how we use the Monte Carlo variance to choose which environments are considered. Other details of data collection are described in Appendix C.
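A simplified sketch of the sign-error computation over coupled pairs is shown below; the noise threshold here is a stand-in for the Monte Carlo filtering described above, and the values are invented:

```python
import numpy as np

def sign_error(C_vals, G_vals, pairs, noise=0.0):
    """Fraction of coupled pairs where the measure C ranks the two runs
    differently than the realized generalization gap G.

    Pairs whose gap difference is below `noise` are discarded; this is a
    simplified stand-in for the Monte Carlo filtering described in the text."""
    errs, kept = 0, 0
    for i, j in pairs:
        dG = G_vals[i] - G_vals[j]
        if abs(dG) <= noise:
            continue
        kept += 1
        errs += np.sign(C_vals[i] - C_vals[j]) != np.sign(dG)
    return errs / kept if kept else float("nan")

# Toy data: 4 runs in one environment (e.g., two depths x two seeds).
C = np.array([1.2, 1.5, 0.9, 1.25])       # generalization-measure values
G = np.array([0.10, 0.14, 0.08, 0.09])    # realized generalization gaps
pairs = [(0, 1), (0, 2), (1, 2), (1, 3), (0, 3), (2, 3)]
print(sign_error(C, G, pairs, noise=0.005))  # -> 1/6, one mis-ranked pair
```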
Environments. In our experiments, variable assignments (ω) are pairs (H, σ) of hyperparameter settings and random seeds, respectively. The hyperparameters are: learning rate, neural network width and depth; dataset (CIFAR-10 or SVHN), and training set size. (See Appendix C for ranges.) Each environment e is a pair (H, H′) of hyperparameter settings that differ in the setting of one hyperparameter (e.g., depth changes from 2 → 3 between H and H′ and the remaining hyperparameters are identical). The distribution P^e for a pair e = (H, H′) is the distribution of (ω, ω′) = ((H, σ), (H′, σ′)), where the random seeds σ, σ′ are chosen uniformly at random. That is, the expectation in Eq. (3) is taken only over a random seed.
Empirical Findings
In Fig. 1, we present a visualization of 1,600,000 ranking evaluations on 24 generalization measures derived from those used in [9]. A full description of these measures can be found in Appendix C.6. Motivated by the discussion in the introduction, we seek strong predictive theories: generalization measures that increase monotonically with generalization error and for which this association holds across a range of environments. Such a measure would achieve zero robust sign-error (Eq. (3)).
As described in Section 5, each environment contains a pair of experiments that share all hyperparameters but one (learning rate, depth, width, train set size, dataset). In each environment, we calculate the weighted empirical average version of the sign-error over 100 samples from P^e (10 network runs with different seeds per ω). Note that we discard environments where too many samples have differences in generalization error below the Monte Carlo noise level (see Appendix A.2 for details). This is in contrast with the protocol proposed by Jiang et al. [9], where such noise is not filtered and can significantly undermine the estimation of sign-error (see Appendix A.3).
In the remainder of this section we interpret the results of Fig. 1, highlight some significant shortcomings of the generalization measures, and point out cases where these shortcomings would have been obscured by non-robust, average-based summary statistics like those used by Jiang et al. [9].
How to read Fig. 1. This figure presents the empirical cumulative distribution function (CDF) of the sign-error across all environments and generalization measures. Every row shows the CDF over a subset of environments (e.g., those where only depth is varied). The 'All' row shows the same but over all environments. The number of environments in each subset is given on the left of each row. Each bar in the figure is the empirical CDF of all sign-errors in the set of environments. A bar's y-axis corresponds to the range of possible sign-errors and the internal coloring depicts the distribution (starting at the median value for improved readability). We annotate the bars with the max (i.e., robust sign-error; green), the 90th percentile (magenta), and the mean (orange). The latter statistics do not measure robustness over all environments. However, a low 90th percentile value means the measure would have had low empirical robust sign-error restricting to some 90% of the environments tested. If the max is at 1.0, then there exists at least one environment where the measure fails to predict the sign of the change in generalization on all random seeds. If the max is below 0.5, then the measure is more likely than not to predict the correct sign on all environments in the set. Identifying subfamilies in which a measure is robust is one of our primary objectives.
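The max, 90th percentile, and mean annotations described above can be computed from the per-environment sign-errors; a small sketch with invented numbers follows:

```python
import numpy as np

# Hypothetical per-environment sign-errors for one generalization measure,
# grouped by which hyperparameter was intervened on.
sign_errors = {
    "depth":      np.array([0.0, 0.1, 0.4, 1.0, 0.2]),
    "width":      np.array([0.9, 1.0, 0.8]),
    "train_size": np.array([0.0, 0.0, 0.1, 0.05]),
}

for group, errs in sign_errors.items():
    print(f"{group:>10}: robust (max) = {errs.max():.2f}, "
          f"90th pct = {np.percentile(errs, 90):.2f}, mean = {errs.mean():.2f}")

# A measure is robust over a subset of environments only if the max is small there;
# a low mean with max = 1.0 means it still fails completely in some environment.
```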
No measure is robust. As illustrated in the 'All' row, for every one of the 24 measures, there is at least one environment in which the measure always incorrectly predicts the direction of change in generalization. Nonetheless, some measures have low robust error over large fractions of environments, as reflected by the 90th percentiles of the sign-error distributions. Notice how the average-based summaries proposed by Jiang et al. [9] do not reflect robustness, which implies their inability to detect the causal associations that they seek. Given these poor results, we must dig deeper to understand the merits and shortcomings of these generalization measures. Therefore, we study their performance in natural subfamilies E ⊆ F of environments. Our analyses of the 'Train Size', 'Depth', and 'Width' rows below are examples of this. While no measure is robust across the CIFAR-10 and SVHN datasets considered here, we find measures that are quite robust over a 90% fraction of environments for SVHN only (see Appendix D.1).
Robustness to train set size changes is not a given. In the 'Train size' row, most measures correctly predict the effect of changing the train set size. (In general, generalization error decreases with train set size.) It may seem a foregone conclusion that a bound of the form Õ(√(c/n)) would behave properly, but, for most of these measures, the complexity term c is a random variable that can grow with more training data. In fact, while many measures do achieve a low robust sign-error, some measures fail to be robust. In particular, some bounds based on Frobenius norms (e.g., prod.of.fro; Appendix C.6.4 and [20]) increased with train set size in some cases. Such corner cases arose mostly for shallow models (e.g., depth 2) with limited width (e.g., width 8) and were automatically identified by our proposed method. Note that the same finding was recently uncovered in a bespoke analysis [17], and we may have missed this looking only at average sign-errors, which are usually low.
Robustness to depth. In the 'Depth' row, we depict robust sign-error for interventions to the depth. Again, robust sign-error is maxed out for every measure. Digging deeper, these failures are not isolated: many measures actually fail in most environments. However, there are exceptions: a few measures based on PAC-Bayes analyses show better performance in some environments. In Fig. 2, we dig into the performance of pacbayes.mag.flatness (Appendix C.6.6) by looking at the subset of environments where it performs well (e.g., varying depth 3 → 4), fails but shows signs of robustness (e.g., 3 → 6), completely fails (e.g., 4 → 5), and those where a conclusion cannot be reached (e.g., 5 → 6). Looking into the data, we found that almost all environments where the measure fails are from the CIFAR-10 dataset, where the smaller networks we test suffer from significant overfitting. This illustrates how our proposed methodology can be used to zero in on the limited scope where a measure is robust.
Robustness to width is surprisingly hard. In the 'Width' row, all measures have robust sign-error close to 1. Looking into the data, we discover that generalization error changes very little in response to interventions on width because the networks are all very overparametrized. In fact, of the 4,000 available width environments, only 328 remain after accounting for Monte Carlo noise. (A red X in the figure indicates that no environments remained after accounting for Monte Carlo noise.)
Comparison to Jiang et al. [9]. Our contribution is primarily a methodological refinement to the proposals in [9]. We describe how to discover failures of generalization measures in specific environments by looking at worst-case rather than average performance. We note that there are several reasons that even our average-case results are not directly comparable with those in [9]. First, their analysis considers the CIFAR-10 and SVHN datasets in isolation, whereas we combine models trained on both datasets. Second, they do not account for Monte Carlo noise, which we found to significantly alter the distribution of sign-errors (see Appendix A.3). This is important since we found that many environments had to be discarded due to high noise (e.g., only 8.2% of the width environments remain after filtering out noise in our analysis). Third, the hyperparameters and ranges that they consider are different from ours and, consequently, both studies look at different populations of models. For example, the majority of models in [9] use dropout, whereas our models do not. Such differences can alter how generalization measures and gaps vary in response to interventions on some hyperparameters and lead to diverging conclusions. For instance, in our results, no measure has an average-case performance much better than a coin-flip in the 'Depth' environments for CIFAR-10, while Jiang et al. [9] find measures that perform well in this context. Nevertheless, there are some general findings that persist across both studies; for instance, we see the good average-case performance of the path-norm (Appendix C.6.5) and PAC-Bayes-flatness-based (Appendix C.6.6) measures in contrast to the poor performance of spectral measures (e.g., prod.of.spec; Appendix C.6.3). We also find more specific similarities, such as the poor average-case performance of most measures in 'Width' environments for CIFAR-10 (Appendix D.2), in contrast to the better performance of path.norm (Appendix C.6.5), path.norm.over.margin (Appendix C.6.5), and pacbayes.mag.flatness (Appendix C.6.6).
Discussion
The quest to understand and explain generalization is one of the key scientific challenges in deep learning. Our work builds on recommendations in [9] to use large-scale empirical studies to evaluate generalization bounds. At the same time, we critique some aspects of these recommendations. We feel that the proposed methodology in [9] based on taking averages of sign-errors (or independence tests, which we have not pursued) can obscure failures. Indeed, for a long time, empirical work on generalization has not been systematic, and as a result, claims of progress outpace actual progress.
Based on an understanding of the desired properties of a theory of generalization, we propose methodology that rests on the foundation of distributional robustness. Families of environments define the range of phenomena that we would like the theory to explain. A theory is then only as strong as its worst performance in this family. In our empirical study, we demonstrated how a family can be broken down into subfamilies to help identify where failures occur. While the present work focused on the analysis of existing measures of generalization, future work could build on the robust regression methodology of Appendix B and attempt to formulate new robust measures via gradient-based optimization.
The development of benchmarks and quantitative metrics has been a boon to machine learning. We believe that methodology based on robustness with carefully crafted interventions will best serve our scientific goals.
Broader Impact
Our work aims to sharpen our understanding of generalization by improving the way that we evaluate theories of generalization empirically. The proposed methodology is expected to aid in the quest to understand generalization in deep neural networks. Ultimately, this could lead to more accurate and reliable models and strengthen the impact of machine learning in critical applications where accuracy must be predictable. We believe that this work has no direct ethical implications. However, as with all advances to machine learning, long-term societal impacts depend heavily on how machine learning is used.

A Importance sampling schemes to account for Monte Carlo noise

Generalization error $G(\omega)$ is estimated from a held-out test set, since we do not have access to the data distribution. The size of the test set determines the precision at which the true generalization error can be approximated. Let $G, G'$ denote the true generalization errors for $\omega, \omega'$, respectively, and similarly $\hat{G}, \hat{G}'$ the estimates of generalization made on a test set of size $m$. Let $\epsilon = |\hat{G} - \hat{G}'|/2$. Then
A.1 Filtering environments
The weighting scheme proposed in Eq. (6) allows us to downweight (and in some cases discard) pairs of experiments for which the generalization errors do not differ significantly. Consequently, some environments may be left with very few samples. Let the $i$th sample have a weight $\kappa_i$. To avoid calculating the expected sign-error on too few data points, we discard environments where the effective sample size, defined as $n_{\mathrm{eff}} = (\sum_i \kappa_i)^2 / \sum_i \kappa_i^2$, is smaller than 12. In our case, the weights are as defined in Eq. (6). The choice of 12 samples means we estimate the expected loss to a precision of around the standard deviation divided by 3 ([14, Ch. 29, p. 380]).
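As a concrete illustration, the filtering rule can be written in a few lines. The sketch below assumes the usual (Kish) form of the effective sample size for importance weights; the weights themselves come from Eq. (6), which is not reproduced here, so they are simply taken as an input.

```python
# Minimal sketch of the environment filter (hypothetical helper, not the
# authors' code). kappa holds the per-pair weights from Eq. (6).
import numpy as np

def effective_sample_size(kappa):
    kappa = np.asarray(kappa, dtype=float)
    return kappa.sum() ** 2 / np.sum(kappa ** 2)

def keep_environment(kappa, cutoff=12):
    """Discard an environment whose effective sample size falls below the cutoff."""
    return effective_sample_size(kappa) >= cutoff

print(keep_environment([1.0] * 20))          # True  (n_eff = 20)
print(keep_environment([1.0, 0.05, 0.05]))   # False (n_eff ~ 1.2)
```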
In Fig. 3, we show the number of environments remaining at various $n_{\mathrm{eff}}$ cutoffs. Notice that very few environments are included for the width hyperparameter, even at $n_{\mathrm{eff}} \geq 12$. This is because the variations in generalization error due to width are often negligible in our data. Therefore, many of the environments where the width is varied are automatically discarded.
A.3 Ablation study: What is the effect of Monte Carlo noise?
In this section, we investigate how filtering Monte Carlo noise using the aforementioned procedure affects the estimated sign-errors. This is done by running an ablation study where the sign-errors are computed with and without Monte Carlo noise filtering and the resulting error distributions are compared. Specifically, for the with-filtering case, pairs $(\omega, \omega')$ in each environment are weighted using $\kappa(\omega, \omega')$ as described in Eq. (6). Conversely, for the without-filtering case, pairs are weighted with $\kappa(\omega, \omega') = 1$, allowing pairs with very small differences in generalization gap to be included in the expectation. Intuitively, Monte Carlo noise should increase the sign-error in environments where it occurs, since it can randomly push $\mathrm{sgn}(\hat{G}(\omega) - \hat{G}(\omega'))$ to be $+1$ or $-1$, making it unpredictable.
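In code, the contrast between the two cases amounts to averaging the sign disagreements with the Eq. (6) weights versus uniform weights; a minimal sketch (with hypothetical names, and with the weights taken as given) follows.

```python
# Sketch of the ablation contrast: expected sign-error with and without the
# noise-filtering weights. sign_disagrees has one boolean per pair (omega, omega').
import numpy as np

def expected_sign_error(sign_disagrees, kappa=None):
    d = np.asarray(sign_disagrees, dtype=float)
    if kappa is None:                        # "without filtering": kappa = 1 for all pairs
        return float(d.mean())
    kappa = np.asarray(kappa, dtype=float)   # "with filtering": weights from Eq. (6)
    return float(np.sum(kappa * d) / np.sum(kappa))
```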
The results reported in Fig. 4 support this hypothesis: including noisy ranking pairs generally leads to larger sign-errors. Indeed, for many generalization measures, the mean sign-error decreases with noise filtering (Fig. 4a). Moreover, extreme values such as the maximum do not seem to be affected by noise filtering (Fig. 4c), which was expected since noise is likely not prevalent in each of the considered environments. In light of these results, we conclude that the procedure that we propose to account for Monte Carlo noise (Appendix A) is beneficial and that it should be implemented in studies, such as ours and Jiang et al. [9], that rely on ranking comparisons of generalization gaps.
B Evaluating robust prediction of the numerical value of generalization error
In single-network experiments, we evaluate the ability of generalization measures to predict the exact numerical value of the generalization error.
Evaluation criterion. We rely on a robust mean squared error (MSE) objective. For a transformation $f_\theta(\cdot)$ of a generalization measure $C$, the robust MSE is the worst-case expected squared error over environments,
$\sup_{e \in F} \, \mathbb{E}_{\omega \sim e}\big[\big(f_\theta(C(\omega)) - G(\omega)\big)^2\big]. \quad (9)$
Choosing $f_a(x) = ax$ in Eq. (9), we recover the robust risk of the linear oracle transformation of a generalization measure $C$. Similarly, choosing $f_{(a,b)}(x) = ax + b$, we get the robust risk of the affine oracle transformation of a generalization measure $C$.
Environments. In this setting, each environment e ∈ F is defined by a single hyperparameter configuration H. The data points in each environment are acquired by training a model with hyperparameters H and varying the random seed. For example, an environment could be composed of multiple experimental records where the learning rate is 0.01, the model depth is 2, the model width is 10, the dataset is CIFAR-10, and the training set size is 50,000. The hyperparameter values considered are those given in Appendix C and we consider ten random seeds, resulting in 1000 environments with ten data points each.
B.1 Experiments and results
In our experiments, we fit an affine oracle for each generalization measure by minimizing the robust mean squared error. This evaluates the ability of each measure to predict generalization error in each environment, up to a common linear rescaling and an additive constant. Note that we constrain the linear coefficients to be non-negative (i.e., a ≥ 0), since we expect these measures to upper bound generalization error. We compare the performance of the affine oracles to that of a baseline that ignores the generalization measures and only fits a bias parameter (i.e., a = 0). The results are reported in Fig. 5. Observe that many generalization measures achieve lower robust mean squared error than the baseline oracle. This suggests that these measures do carry meaningful information about generalization error that transfers across all environments. In addition, notice that many measures that show good robust performance in this setting often perform well in the coupled-network experiments (see Section 6 and Appendix D). For instance, the PAC-Bayes measures based on flatness show signs of robustness (although not perfect) in both types of experiments. Furthermore, measures based on path.norm, which show strong signs of robustness in the SVHN-only coupled-network experiments (see Appendix D.1) are also among the best performers in this setting. Finally, notice how the average mean squared error tends to be very similar across measures, while their robust mean squared errors differ more. This provides additional evidence that averaging can mask failures in robustness and that a worst-case analysis should be preferred.
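A compact way to see what fitting the affine oracle involves is to minimize the worst-case (over environments) mean squared error directly. The sketch below is a simplified stand-in for the robust regression described above, with a hypothetical data layout and a generic optimizer rather than whatever solver was actually used.

```python
# Simplified sketch: fit f_(a,b)(x) = a*x + b by minimizing the robust MSE,
# with a >= 0; compare against a bias-only baseline (a = 0).
import numpy as np
from scipy.optimize import minimize

def robust_mse(params, envs):
    """envs: list of (C_values, G_values) array pairs, one pair per environment."""
    a, b = params
    return max(np.mean((a * C + b - G) ** 2) for C, G in envs)

def fit_affine_oracle(envs):
    # Note: the max over environments is non-smooth; a smoothed objective or a
    # subgradient method may behave better in practice.
    res = minimize(robust_mse, x0=np.array([1.0, 0.0]), args=(envs,),
                   bounds=[(0.0, None), (None, None)], method="L-BFGS-B")
    return res.x, res.fun

def fit_bias_only_baseline(envs):
    res = minimize(lambda b: robust_mse((0.0, b[0]), envs),
                   x0=np.array([0.1]), method="L-BFGS-B")
    return res.x, res.fun
```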
B.2 Exploring weaker families of environments
In this section, we consider families of environments F which are intermediate between the robust regression described above and empirical risk minimization (where the average MSE is minimized).
Minimizing the robust risk should be an easier task in these environments, due to increasing levels of averaging which mask robustness failures.
Varying a single hyperparameter. In this setting, each environment e ∈ F is composed of runs where a single variable H_i ∈ H is allowed to vary. All other variables V \ H_i are fixed, though the random seed varies. For example, an environment could be composed of all runs where the model depth is 2, the model width is 10, the dataset is CIFAR-10, the training set size is 50,000, and the learning rate takes any of the 5 values considered. When considering the hyperparameter values described in Appendix C, we obtain 1350 environments with between 20 and 50 points each (due to 10 repeats per hyperparameter setting). The results for all experiments over this family of environments are reported in Fig. 6.
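To make the construction concrete, the sketch below groups a hypothetical table of runs into such environments: all hyperparameters except the varied one (and the random seed) are held fixed within a group. The column names are illustrative, not the authors' actual schema.

```python
# Sketch: building "vary one hyperparameter" environments from a table of runs.
import pandas as pd

HPS = ["dataset", "train_size", "depth", "width", "lr"]

def single_hp_environments(runs: pd.DataFrame, varied: str):
    fixed = [h for h in HPS if h != varied]
    return {key: group for key, group in runs.groupby(fixed)}

# Toy usage: four runs differing only in learning rate and seed form one environment.
runs = pd.DataFrame({
    "dataset": ["CIFAR-10"] * 4, "train_size": [50000] * 4,
    "depth": [2] * 4, "width": [10] * 4,
    "lr": [0.01, 0.01, 0.032, 0.032], "seed": [0, 1, 0, 1],
    "gen_gap": [0.21, 0.22, 0.25, 0.24],
})
print(len(single_hp_environments(runs, varied="lr")))  # 1
```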
As expected, the range of robust MSE values attained per measure is lower than in Fig. 5, since the added averaging makes the task easier. As in the previous section, we can see that the mean MSE over environments obscures a clear ordering over measures that is present when using the robust MSE. Also, notice that the ordering induced by the robust MSEs is similar to that in Fig. 5, with some PAC-Bayesian measures and path.norm measures performing best.
Varying all but one hyperparameter. In this setting, each environment e ∈ F is composed of runs where a single variable H_i ∈ H is fixed. All other variables V \ H_i and the random seed vary. For example, an environment could be composed of all runs where the model depth is 2 and the other parameters take on every possible value. When considering the hyperparameter values described in Appendix C, we obtain 21 environments with 2000 to 5000 points each. The results for all experiments over this family of environments are reported in Fig. 7.
The gap between the mean and robust RMSEs is much narrower here, caused by averaging the MSE over many more points. Nevertheless, we are still able to see a very similar ordering to that of Fig. 5 and Fig. 6 preserved here.
Availability:
Our code is open-source and available online, along with the data used in the experiments: https://github.com/nitarshan/robust-generalization-measures
C.1 Hyperparameters
Each data point in our analysis is obtained by training a model with a given hyperparameter configuration. Between data points, we vary 5 hyperparameters that alter the model, the learning procedure, and the data distribution. These are the dataset (CIFAR-10 or SVHN), the training set size, the network depth, the network width, and the learning rate.
C.2 Models
We use a fully convolutional "Network-in-Network" architecture similar to that described in [13] and used for the study in [9]. A full specification of our model can be found in our codebase.
While the most successful model architectures of today employ residual connections between blocks of convolutional layers, we are unable to make use of those here due to the unclear applicability of many bounds to models using skip-connections.
C.3 Datasets
We make use of two common vision datasets: CIFAR-10 [10] and SVHN [18]. Both datasets are composed of 32×32 RGB images with 10 classes of natural images, with CIFAR-10's classes corresponding to animals and vehicles, and SVHN's classes corresponding to digits cropped from Street View images. We make use of the full training (50k images) and testing (10k images) splits of CIFAR-10, and randomly sample (without replacement) a subset of the larger training and testing sets of SVHN to match the split sizes of CIFAR-10. We also sample without replacement when generating the smaller training sets of size 25k, 12.5k, and 6.25k, but always make use of the same testing sets of size 10k across all experiments.
We do not make use of data augmentation when passing these images into our models, following the observation in [9] that doing so negatively affects the ability of these models to consistently reach low cross-entropies.
C.4 Training procedure
We use SGD with a momentum parameter of 0.9 for all experiments. We do not use learning rate decay or weight decay. As in [9] we use a cross-entropy stopping criterion, which we set to 0.01 for all experiments, and calculate over the entire training dataset.
C.5 Data collection
We run 10 repeats with different random seeds for each of the 1000 possible hyperparameter combinations, providing a total of 10,000 experimental runs. Of these, 300 runs failed to meet the cross-entropy criterion as well as an additional training accuracy criterion of being greater than 99%. These data points were filtered out before any analysis.
As in [9] we "fuse" the batch-norm layers of our model with their preceding convolutional kernels before calculating the value of the generalization measures.
C.6 Measures
We look at the following 24 generalization measures for convolutional networks, which are modifications of a subset of those studied in [9]. A key difference is that, while those original measures do not account for dataset size, we correct for this through a normalization of the form C/m for all measures C, where m is the size of the training dataset.
While we provide the mathematical expressions for all our measures here, we direct the reader to [9, Appendix D] for more details. It is worth noting, however, that while many of these expressions are derived from generalization bounds, there is no requirement that generalization measures correspond to bounds. Even among the measures that correspond to bounds, direct comparison is not necessarily meaningful. Some are generalization bounds for the networks learned by SGD. Some, in particular those derived from PAC-Bayes methods, are bounds on stochastic classifiers, e.g., obtained by randomizing the weights of a neural network in some way. Some of these quantities control the generalization error between the risk and empirical risk, while others relate to the difference between surrogate risks, e.g., based on margins.
For reasons of numerical stability, we apply log transformations to some of these measures. As this is a monotonic transformation, it does not affect the ranking results covered in the main paper.
C.6.1 VC Measures
For a convolutional network of $d$ layers, with a $k_i \times k_i$ kernel and $c_i$ filters at depth $i$:
C.6.2 Output Measures
Let γ be the 10th-percentile of margin values over the training dataset.
C.6.3 Spectral Measures
Let $W_i$ denote the $i$th convolutional layer's weight tensor, and $W_i^0$ its initial value. Let $\|W_i\|_2$ denote its spectral norm, which we calculate using the exact FFT-based method of [26]. This group includes the measures log.prod.of.spec.over.margin, log.prod.of.spec, and log.sum.of.fro.over.margin; see [9, Appendix D] for the corresponding definitions.
C.6.5 Path Measures
Define the parameter vector as $w = \mathrm{vec}(W_1, \ldots, W_d)$. Below, $f_{w^2}(\mathbf{1})[i]$ denotes the $i$th logit output of a network using squared weights where the input is a vector of ones.
C.6.6 Flatness Measures
Let $\omega$ denote the number of weights. Let $\epsilon = 1 \times 10^{-3}$. We use the search procedure for $\sigma$ described in [9], where $\sigma$ is chosen to be the largest number such that $\mathbb{E}_{u \sim \mathcal{N}(0, \sigma^2 I)}\, \hat{L}(f_{w+u}) \leq 0.1$. Similarly, we choose the magnitude-aware $\sigma$ to be the largest number satisfying the corresponding magnitude-aware condition of [9].

D.1 SVHN-only environments

We repeat the analysis of Fig. 1 of the main text, but leave out all CIFAR-10 environments. The results are reported in Fig. 8.
Notice how all measures still achieve a robust sign-error of 1.0 overall, but that many measures now have a 90th percentile much closer to zero. This indicates that the error distributions of some measures would have been judged to be robust on some large subfamily of environments. Furthermore, observe how some measures now achieve perfect robustness on 'Depth' environments (e.g., path.norm), while none had achieved a sign-error lower than 1.0 in Fig. 1.
Our methodology allows us to dig deeper into these results. For instance, we can try to understand where pacbayes.orig fails to be robust by looking at the CDF of sign-errors for every pair of values of depth (Fig. 9a) and width (Fig. 9b). We observe that most failures in 'Depth' environments occur when varying depth from 2 → 3. The sources of non-robustness in width are slightly harder to interpret, but we still observe that the measure is significantly more robust for some pairs of widths than others. This example clearly illustrates how our proposed methodology allows us to zero in on cases where generalization measures fail to be robust. We expect that studying the shortcomings of measures at such a detailed level will aid in elaborating new, more robust, theories of generalization.

D.2 CIFAR-10-only environments

Figure 10: Cumulative distribution of the sign-error across subsets of environments for each generalization measure (in CIFAR-10 only). The measures are ordered based on the mean across 'All' environments. A completely white bar indicates that the measure is perfectly robust, whereas a dark blue bar indicates that it completely fails to be robust.
D.3 Exploring a weaker family of environments
We further evaluate the performance of generalization measures in a family of environments that is intermediate between the one considered in Section 5 and that of Jiang et al. [9]. As described in Section 5, each environment contains pairs of hyperparameter settings where, within a pair, one hyperparameter is varied between two specific values (e.g., depth 2 → 3). However, here, the values of the other hyperparameters are allowed to change between the pairs. For example, assuming only two hyperparameters (width (w) and depth (d)), the pairs {[(d = 2, w = 10), (d = 3, w = 10)], [(d = 2, w = 5), (d = 3, w = 5)]} could belong to the same environment (depth 2 → 3), whereas this would not be allowed in Section 5. Hence, one environment in this setting corresponds to the union of multiple environments in the setting described in Section 5. Achieving robustness in this family of environments may be significantly easier due to further averaging, which can mask failures of robustness in some hyperparameter configurations.
E Methodological comparison to the conditional independence testing method of Jiang et al. [9]

In addition to their Kendall-τ-based Ψ measure, Jiang et al. [9] propose a measure based on conditional independence testing (Section 2.2.3 in [9]). This measure attempts to identify the existence of a causal relationship (edge in a postulated causal graph) between a generalization measure and the generalization gap. While the concept of robustness (that our work builds on) and conditional independence testing can both be tied to the causal inference literature, our methods are fundamentally different. Notably, we do not claim, nor seek, to identify causal relationships (see the discussion in Section 2). Below, we highlight some key differences between our approaches.
It can be tempting to see a similarity in the fact that both methods look at extreme values (min and max). Our method looks at the maximum sign-error over all environments, which is analogous to taking the max over every possible intervention on each hyperparameter. The method of Jiang et al. [9] considers the minimum normalized conditional mutual information (Î; Eq. (13) of [9]) over all conditioning sets of two hyperparameters (Eq. (14) of [9]). However, as described in their Eq. (11) and (12), the calculation of Î involves averaging over all values of the hyperparameters in the conditioning set (i.e., a sum over $U_S$ weighted by $p(U_S)$). Therefore, while they may appear similar, the two approaches are fundamentally different in that one averages over multiple values of the same hyperparameter, while the other (ours) does not.
Furthermore, in the limit where we observe every possible intervention on the hyperparameters, our method identifies measures that may have a causal relationship to generalization (all causal explanations are necessarily robust in this extreme case). However, this is not necessarily true for the IC-based method of Jiang et al. [9]. The reason is that their conditioning sets are of size 2, which may leave open confounding paths in the graph if more than 2 hyperparameters act as confounders, resulting in non-causal mutual information. This means that the method is not guaranteed to detect a causal edge from the generalization measure to the generalization gap unless they condition on all hyperparameters. However, if they were to condition on all hyperparameters, their conditional mutual information would collapse to zero, suggesting that there is no causal edge. | 2020-10-23T01:00:49.181Z | 2020-10-22T00:00:00.000
"year": 2020,
"sha1": "f2231e6dbccced314f30a2fc926bc8a977904562",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f2231e6dbccced314f30a2fc926bc8a977904562",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
71335516 | pes2o/s2orc | v3-fos-license | Hepatitis B antibody titers in Indonesian adolescents who received the primary hepatitis B vaccine during infancy
Background Hepatitis B (HB) vaccination, the most effective method to prevent HB viral transmission, induces protective antibodies that decline in titer over time. Studies on the duration of protection in adolescents have been limited. Objectives To determine the proportion of adolescents with protective anti-HBs titers and the prevalence of non-responders after a HB vaccine booster dose. Methods This cross-sectional study was performed from February; data collected included HB vaccination history, anthropometric measurements, and anti-HBs titers before and after a booster dose of HB vaccine. Results Among the non-protected subjects, 33 showed anamnestic responses. Taking into consideration the adolescents with protective anti-HBs before and after the booster dose, serologic protection was demonstrated in a substantial proportion of subjects; the proportion of non-responders may indicate bias of parents' recall. Conclusion Protective anti-HBs is detected in less than half of subjects. Following the booster dose, anamnestic responses are noted in one-third of subjects; confirmation with further study is needed. [Paediatr Indones. 2013;53:160-6.]
From the Department of Child Health, 1 University of Indonesia Medical School, Cipto Mangunkusumo Hospital, Jakarta, and University of Indonesia Medical School, 2 Jakarta, Indonesia.
Reprint requests to: Hartono Gunardi, Department of Child Health, yahoo.com. Hepatitis B virus (HBV) infection is a worldwide public health problem. Two billion people worldwide have serologic evidence of HBV infection, and those who are chronically infected are at risk for HBV-related liver disease.
in the general population.5,6 Hepatitis B vaccination is the most effective method to prevent HBV transmission and its consequences.
Three doses of HB vaccine induce protective antibody to the HB surface antigen (anti-HBs) in infants, children, and young adults. However, anti-HBs titers decline over time and the duration of vaccine-related protection is uncertain; studies of individuals who had completed a primary HB vaccine series showed low titers or undetectable anti-HBs.
A proportion of healthy individuals who had received complete HB vaccinations did not achieve protective anti-HBs titers (>10 mIU/mL). Non-immune adolescents are potentially at risk of acquiring HBV infection due to horizontal transmission, in particular those who engage in sex with multiple partners, tattooing, or injection-drug use. Studies on immunity against HBV in adolescence have been reported from a few countries. However, such data on hepatitis B in Indonesian adolescents are limited. Furthermore, data on primary HB vaccinations have not been well documented, as they are often based on the parents' memory recall. We carried out a preliminary study aimed to determine the proportion of adolescents with protective anti-HBs who had received the primary HB vaccine series, based on their vaccination history, and their response to a booster dose of HB vaccine given to individuals who had non-protective anti-HBs levels.
Methods
Students were recruited by consecutive sampling and given brief questionnaires regarding their history of HB vaccination, as well as the presence of immunocompromised conditions (such as immunodeficiency diseases or taking immunosuppressive medicine) and chronic infections. Students who had received the complete primary HB vaccine series during infancy, based on their parents' recall or written documentation of the vaccination, were included. Adolescents who had already received a booster dose of HB vaccine were excluded; included subjects had received doses of HB vaccine with appropriate dosage and timing. Subjects underwent anthropometric measurements and blood tests for anti-HBs, which was measured by a commercial microparticle enzyme immunoassay. A protective anti-HBs response was defined as an anti-HBs titer of at least 10 mIU/mL.3 A booster dose of HB vaccine (HB Vax® II) was given by intramuscular injection in a deltoid muscle to subjects with non-protective anti-HBs titers. An anamnestic response was defined as an increase in the anti-HBs level to >10 mIU/mL after the booster dose. A non-responder was defined as an individual who received the primary HB vaccine during infancy, but did not develop a protective anti-HBs titer after the booster dose. The study protocol was approved by the Committee for Medical Research Ethics, University of Indonesia, and informed consent was obtained for all subjects.
Paired tests were used to analyze the difference in anti-HBs titers before and after a booster dose of HB vaccine for data showing a normal distribution; otherwise the data were analyzed with non-parametric tests. Differences among gender, age, and nutritional status groups were analyzed with parametric tests for data showing normal distributions; otherwise the data were analyzed using Mann-Whitney and Kruskal-Wallis tests. Results were considered statistically significant for P values below 0.05.
Results
Adolescents who had received the complete primary HB vaccine series during infancy were included in our study. Subjects were students from three public senior high schools. Nutritional status was classified by growth-chart percentiles (e.g., below the 5th percentile). None of the subjects had previously received booster doses of HB vaccine. None had evidence of chronic infection, immunodeficiency disease, or were taking any immunosuppressive medicine at the time of study recruitment.
Subjects' serological status against HBV prior to the study was unknown. Based on blood tests for pre-booster anti-HBs titers, subjects were classified into four titer categories (Table 1). Less than half of the subjects showed protective anti-HBs, and the remainder had low or undetectable anti-HBs.
One subject preferred to receive the HB vaccine from her own doctor and could not be analyzed further. No serious adverse events to the booster dose were observed. There was significant seroconversion among vaccinated subjects (Figure 1): a number of subjects achieved seroconversion to protective anti-HBs titers, whereas the rest did not reach protective anti-HBs titers and were hence categorized as non-responders. Subjects were classified into one of four categories of post-booster anti-HBs titers, as they were for pre-booster anti-HBs titers (Table 1). Subjects were divided into groups according to age, gender, and nutritional status categories to determine anti-HBs responses after the HB vaccine booster doses among groups. There were no significant differences in post-booster anti-HBs titers among the different categories of age, gender, or nutritional status. Titers for all subjects subdivided by age, gender, and nutritional status categories are shown in Table 2.
Therefore, the combined total of subjects with protective anti-HBs included both pre-booster and post-booster responders. However, after administration of the HB vaccine booster, not all subjects reached protective anti-HBs titers. Overall, the prevalence of non-responders was substantial.
Discussion
Hepatitis B vaccination programs are highly effective and have led to marked declines in chronic carrier rates and the incidence of hepatocellular carcinoma in moderate-to-highly HBV-endemic countries.
Without reliable long-term immunity in the general population, HBV infection may occur in adolescents at risk, such as by household contact with HBV carriers or due to risk-taking behaviors, especially tattooing. A current priority is to ensure long-term protection of vaccinated adolescents who are at risk of HBV. To this end, we studied adolescents who had completed the primary HB vaccine series during infancy.
Protection against HBV is related to the adequate presence of anti-HBs titer. This protection theoretically vanishes when anti-HBs concentrations fall below the protective level. The duration of vaccine-induced protection in adolescents with complete primary HB vaccinations during infancy has an important implication on indications for booster doses. Following primary HB vaccination in early infancy, some investigators reported that protective anti-HBs did not persist long-term, whereas others reported long-term persistence of humoral immune parameters on follow-up. These various results indicate that policies for booster vaccinations should be based on epidemiological studies.
The possibility of waning immunity or eventual loss of vaccine protectiveness should be considered. When protective anti-HBs titers are not detected a few years after the third dose of the vaccine, those with non-protective anti-HBs titers should be given a single HB vaccine booster dose. In our study, many subjects responded to a single booster dose. This result indicated that non-protective anti-HBs titers in most subjects were due to waning immunity.
A rapid increase in anti-HBs represented an anamnestic response and was considered to indicate the presence of HBsAg-specific immune memory. Even when anti-HBs titers may no longer be detectable, exposure to HBV can lead to a vigorous anamnestic response, which prevents acute infection, acute disease, prolonged viremia, and chronic infection. The presence of HBsAg-specific memory after HB vaccination was suggested in a number of studies by epidemiological data showing the absence of disease in a vaccinated population and proven by demonstration of an anamnestic anti-HBs response after revaccination.
Considering both the adolescents with protective anti-HBs titers and those who responded to the booster dose, protective anti-HBs responses were demonstrated in the majority of subjects. Others reported that persistence of vaccine-induced immune memory among adolescents who had received primary HB vaccination reduces the need for HB vaccine booster doses. The mechanism for continued vaccine-induced protection is thought to be the preservation of immune memory through selective specific B and T lymphocytes. Bauer et al. suggested that individuals who had lost their protective anti-HBs still showed immunologic T cell memory and that these T cells were able to trigger anti-HBs production by B cells activated by revaccination. These data indicated that a high proportion of vaccine recipients retained immune memory and would develop anti-HBs despite undetectable anti-HBs in adolescents who had been vaccinated years prior. Forty-three percent of subjects with non-protective anti-HBs did not respond to a booster dose of HB vaccine. This proportion of non-responders is higher than that of vaccine recipients worldwide who failed to produce protective anti-HBs titers after receiving a primary dose of HB vaccine.
The higher proportion of non-responders in this study may have been caused by uncertainty about the primary vaccine administration in the subjects, since no written documentation of vaccinations was obtained. The HB vaccinations may have been confused with other vaccination series. This recall bias may have led to overestimation of the prevalence of primary non-responders. Another cause that may play a role is HBV infection; HB surface antigen was not assessed to exclude HB virus infection, and HB infection could not completely be ruled out by history taking only.
Revaccination was recommended for those individuals whose post-booster HB vaccination response remained non-protective; such individuals should receive additional HB vaccine doses at monthly intervals and should be re-tested to identify persistent non-responders. Non-responsiveness status is related to genetic factors, such as human leukocyte antigen (HLA) type, which may affect the antibody response to full-dose HB vaccination. This suboptimal antibody response was not caused by a critical event in T cell responsiveness to HBsAg.
To our knowledge, this is the first such study in Indonesian adolescents. Written vaccination data were rarely kept by parents through the adolescent period, so parents' memory recall was used as a source of vaccination data. Analyses of vaccination coverage with recall and written vaccination data have shown that recall may be used to estimate vaccination coverage in a population.
Recall and written vaccination data were correlated, although several factors affected the identification of the vaccination status of an individual child. Mothers tended to underestimate the number of doses actually received in older children, including hepatitis B vaccinations in the recall-only group. The validity of a parent's recall depended upon the vaccine, and it decreased with increasing age of the child at vaccination and with an increasing number of vaccines that the parent had to remember. Thus, to obtain more accurate results, written documentation of vaccination is needed to evaluate long-term HB vaccine-induced protective immunity.
In summary, this study provides a preliminary assessment of HB immunity in adolescents who had received complete primary HB vaccines during infancy based on parents' recall. We detected protective anti-HBs titers in less than half of our subjects, an anamnestic response in one-third of subjects, and a considerable proportion of non-responders after a booster dose of HB vaccine. Further study is needed to determine accurate HB vaccine-induced protection and non-responder prevalence with a larger sample size and written documentation of infant vaccinations.
Figure 1. Anti-HBs titers in pre-booster and post-booster groups.
| 2019-03-08T14:23:40.904Z | 2013-06-30T00:00:00.000
"year": 2013,
"sha1": "70a6b789727a1b42a0a1103bf144f1d2cac391b4",
"oa_license": "CCBYNCSA",
"oa_url": "https://paediatricaindonesiana.org/index.php/paediatrica-indonesiana/article/download/274/172",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "70a6b789727a1b42a0a1103bf144f1d2cac391b4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
204967267 | pes2o/s2orc | v3-fos-license | An Unsettled Promise: The Newborn Piglet Model of Neonatal Acute Respiratory Distress Syndrome (NARDS). Physiologic Data and Systematic Review
Despite great advances in mechanical ventilation and surfactant administration for the newborn infant with life-threatening respiratory failure no specific therapies are currently established to tackle major pro-inflammatory pathways. The susceptibility of the newborn infant with neonatal acute respiratory distress syndrome (NARDS) to exogenous surfactant is linked with a suppression of most of the immunologic responses by the innate immune system, however, additional corticosteroids applied in any severe pediatric lung disease with inflammatory background do not reduce morbidity or mortality and may even cause harm. Thus, the neonatal piglet model of acute lung injury serves as an excellent model to study respiratory failure and is the preferred animal model for reasons of availability, body size, similarities of porcine and human lung, robustness, and costs. In addition, similarities to the human toll-like receptor 4, the existence of intraalveolar macrophages, the sensitivity to lipopolysaccharide, and the production of nitric oxide make the piglet indispensable in anti-inflammatory research. Here we present the physiologic and immunologic data of newborn piglets from three trials involving acute lung injury secondary to repeated airway lavage (and others), mechanical ventilation, and a specific anti-inflammatory intervention via the intratracheal route using surfactant as a carrier substance. The physiologic data from many organ systems of the newborn piglet—but with preference on the lung—are presented here differentiating between baseline data from the uninjured piglet, the impact of acute lung injury on various parameters (24 h), and the follow up data after 72 h of mechanical ventilation. Data from the control group and the intervention groups are listed separately or combined. A systematic review of the newborn piglet meconium aspiration model and the repeated airway lavage model is finally presented. While many studies assessed lung injury scores, leukocyte infiltration, and protein/cytokine concentrations in bronchoalveolar fluid, a systematic approach to tackle major upstream pro-inflammatory pathways of the innate immune system is still in the fledgling stages. For the sake of newborn infants with life-threatening NARDS the newborn piglet model still is an unsettled promise offering many options to conquer neonatal physiology/immunology and to establish potent treatment modalities.
INTRODUCTION
Respiratory failure is the leading cause of morbidity and mortality in newborn infants regardless of gestational age. Great advances in the construction of neonatal ventilators (continuous-flow) and in the development of assisted ventilation devices (e.g., invasive pressure-limited or volume-constant ventilation, continuous positive airway pressure breathing, nasal high-flow therapy) have permitted clinicians to push back the threat of futile respiratory failure (Owen et al., 2017). Many years ago respiratory distress syndrome of the premature infant (IRDS) was attributed to a lack of surfactant production in the early stage of alveolar development of the immature lung (Farrell and Avery, 1975).
However, respiratory failure of the term infant secondary to obvious damage of the lungs in the perinatal period, such as meconium, bile, and blood aspiration, lung hemorrhage, pneumonia, or severe chorioamnionitis and sepsis, leading to secondary impairment of surfactant function and surfactant amount, had not been officially defined before 2017, when the Montreux definition of neonatal ARDS (NARDS) was published (De Luca et al., 2017).
The Montreux definition of NARDS requires the following clinical conditions: respiratory failure of acute onset; exclusion of IRDS, transient tachypnea of the newborn (TTN), and congenital malformations of the lung; diffuse, bilateral, and irregular opacities or infiltrates by chest X-ray; lung edema of non-cardiac origin; and an oxygenation deficit expressed by the oxygenation index (OI = MAP * %O2/PaO2, with MAP = mean airway pressure) being mild (OI 4-8), moderate (OI 8-16), or severe (OI > 16).
Severe inflammation of the lung tissue in adult patients with acute respiratory distress syndrome (ARDS) prompted researchers to investigate the effect of corticosteroids (Bernard et al., 1987; Steinberg et al., 2006; Needham et al., 2014) without being able to prove reduced mortality (except for the study by Meduri et al., 2007). Indeed, a pediatric study involving ARDS patients (PARDS) being subject to corticosteroid treatment showed increased mortality and fewer ventilator-free days (Yehya et al., 2015), whereas others (Drago et al., 2015; Kimura et al., 2016) could neither show clinical improvements by methylprednisolone infusions nor meaningful changes in plasma biomarker levels comparing methylprednisolone and placebo (e.g., MMP-8, Ang-2, sICAM-1, sRAGE).
In contrast to ARDS (Anzueto et al., 1996; Spragg et al., 2004; Kesecioglu et al., 2009; Willson et al., 2015), NARDS (Lotze et al., 1998) and PARDS (Herting et al., 2002; Möller et al., 2003; Willson et al., 2005) patients benefit from their susceptibility to surfactant treatment. As surfactant is able to mitigate many components of lung inflammation (Kunzmann et al., 2013), its use may be universally indicated together with adjuncts specifically tackling pro-inflammatory pathways that are central to lung inflammation. Thus, the pharmacologic armamentarium in the treatment of NARDS appears to be more variable and may be applied more individually than the classical immune-suppressive means in respiratory disease of children (i.e., corticosteroids) (de Benedictis and Bush, 2012).
The identification of major pro-inflammatory pathways [by the analysis of serum or broncho-alveolar lavage fluid (BALF)] causing respiratory failure in NARDS/PARDS has so far yielded only preliminary results: De Luca et al. identified secretory phospholipase A2 secreted by alveolar macrophages as the main reason for surfactant degradation (De Luca et al., 2008), whereas in PARDS the analysis of serum Ang-2 and vWF yielded equivocal results (Zinter et al., 2016), and the analysis of interleukins, IFN, MCP-1, G-CSF, and MMP-8 did not reveal any pathway-typical patterns (Schwingshackl et al., 2016). As an example of this ambiguity, the study by Dahmer et al. (2018) assessing the role of the naturally occurring IL-1 (interleukin-1) receptor antagonist in the augmentation of PARDS is listed here, which underlines the high complexity of natural inflammation and anti-inflammation for the disease process. As to surfactant composition, a decrease in saturated phosphatidylcholine (PC) and an increase in unsaturated PC, combined with almost stable concentrations of the four surfactant proteins (SP) but an increase in SP-B as a parameter of capillary leakage, was found in children with a maximum OI of 12 (Todd et al., 2010).
In an attempt to better characterize and tackle major pro-inflammatory pathways in NARDS the neonatal piglet is the animal model of choice for reasons of availability, size, similarities of porcine and human lung, robustness, and costs. In addition, the pig's hypervariable region (HVR) of the toll-like receptor 4 shows high identity (and many nucleotide polymorphisms) with the human TLR4 HVR (Palermo et al., 2009), they are equipped with pulmonary intravascular macrophages, and show LPS sensitivity and NO production comparable to humans (Matute-Bello et al., 2008). To prove the advantages of this translational neonatal piglet model of NARDS, the physiologic data (with emphasis on lung function) from three experiments of our group are summarized here. In addition, the systematic review addresses different models of acute lung injury with respiratory failure in neonatal piglets, describes major pro-inflammatory pathways by the analysis of serum, BALF, and lung tissue, and highlights effective experimental interventions by anti-inflammatory substances.
Piglet Studies and Systematic Review: Data Sources and Searching
A compilation of data from three NARDS studies (von Bismarck et al., 2008;Preuß et al., 2012b;Spengler et al., 2018) was used to describe basic physiologic parameters and major inflammatory pathways of the neonatal porcine lung. The studies were approved by the local Ethics Committee for Animal Research at the Ministry of Energy, Agriculture, the Environment, Nature and Digitalization of the federal state of Schleswig-Holstein in accord with the current European directive on the protection of animals used for scientific purposes. Corresponding physiologic parameters from human neonates are provided as a comparison if available and deemed necessary.
In addition a systematic review on major inflammatory pathways following single-hit or multiple-hit acute lung injury in newborn piglets was conducted using PubMed and Google Scholar databases in search of the terms "newborn piglet" combined with "(acute) lung injury, " "mechanical ventilation, " "respiratory failure, " "lung inflammation, " "meconium aspiration, " "airway lavage, " and "lipopolysaccharide/endotoxin." Reference lists and relevant reviews were also checked manually to recruit potentially eligible studies. Pulmonary physiology data and all data assessing inflammatory reactions secondary to acute lung injury protocols or specific interventions were extracted and reported.
Neonatal Piglets, Mechanical Ventilation, Lung Injury Protocols, Interventions, and Statistics

The study population was newborn piglets between day 2 and 6 of life and of either sex that were taken from their mother sows without any period of fasting. Genetic variability was assured by the use of mixed country breed (descendants of Danish Landrace) piglets. Their average weight of 2.5 kg allowed us to apply the standard equipment of an average neonatal intensive care unit for instrumentation, maintenance, and interventions. The number of piglets included in the data analyses was 22 in study 1 (von Bismarck et al., 2008), 29 in study 2 (Preuß et al., 2012b), and 59 in study 3 (Spengler et al., 2018).
Adequate analgesia/sedation was provided by continuous infusions of ketamine (5 mg/kg/h), midazolam (0.5 mg/kg/h), and vecuronium bromide (0.8 mg/kg/h) throughout the whole study period of 24 h (study 1) or 72 h (studies 2 and 3). Nutritional support was provided via a nasogastric tube with 6 * 25 ml/kg/d of specialized milk designed for piglets (Babygold, Hamburger Leistungsfutter). Body temperature of 38-39°C was maintained by positioning the piglets on a homeothermic blanket (Harvard Apparatus) and applying a rectal probe with the servo-control mode.
All piglets received mechanical ventilation via an orally inserted 3.5 mm endotracheal double-lumen tube. Continuous-flow pressure-limited neonatal ventilators (Babylog 1, Dräger) were used with the following initial settings: PEEP = 6 mbar, inspiratory time = 0.5 s, f = 25/min, FiO2 = 0.5, and PIP adjusted to maintain a tidal volume of 7 ml/kg as measured by NVM-1 (Bear) throughout the study. To avoid hypo-/hyperventilation and hypoxemia/hyperoxemia, f and FiO2 were regularly adjusted according to the results of arterial blood gas analyses. An oxygenation index (OI = MAP * %O2/PaO2, with MAP = mean airway pressure) and a ventilation efficiency index (VEI = 3800/[(PIP - PEEP) * f * PaCO2]) were calculated from the parameters of the ventilator and the results of the arterial blood gas analysis. Functional residual capacity (FRC, ml/kg), the alveolar portion of the tidal volume (VA, ml), tidal volume (VT, ml), (specific) compliance of the respiratory system (sCrs, ml/mbar/kg), and resistance of the respiratory system (Rrs, mbar/l*s) were assessed by the nitrogen washout method for lung volumes and the single-breath least-squares method for lung mechanics.
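For reference, the two indices can be computed directly from the ventilator settings and blood gas values defined above; the small helper below follows those formulas. The grouping of the VEI denominator as (PIP - PEEP) * f * PaCO2 is assumed, and the example numbers are arbitrary.

```python
# Helper functions for the oxygenation index (OI) and ventilation efficiency
# index (VEI) as defined in the text. Units: pressures in mbar, oxygen as a
# percentage, PaO2/PaCO2 in mmHg, f in breaths/min.
def oxygenation_index(map_mbar, o2_percent, pao2_mmhg):
    return map_mbar * o2_percent / pao2_mmhg

def ventilation_efficiency_index(pip_mbar, peep_mbar, f_per_min, paco2_mmhg):
    return 3800.0 / ((pip_mbar - peep_mbar) * f_per_min * paco2_mmhg)

# Example: MAP 12 mbar, FiO2 0.5 (= 50%), PaO2 80 mmHg -> OI = 7.5
print(oxygenation_index(12, 50, 80))
# Example: PIP 20, PEEP 6, f 25/min, PaCO2 45 mmHg -> VEI ~ 0.24
print(ventilation_efficiency_index(20, 6, 25, 45))
```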
Urine output was monitored continuously by the insertion of a suprapubic bladder catheter.
Two different lung injury protocols were used: in study 1, lung injury was provided by repeated airway lavage with warmed normal saline (30 ml/kg) until the PaO 2 was ∼100 mmHg and stayed at that level for at least 20 min (single-hit lung injury). In studies 2 and 3, three consecutive lung injury protocols were carried out of which the first one was repeated airway lavage as described above, followed by a 2 h period of injurious ventilation (by the use of a V T = 15 ml/kg and PEEP = 0 mbar) 24 h later, and by the endotracheal instillation of 2.5 mg LPS (E. coli serotype O127:B8; Sigma-Aldrich) 48 h later (triple-hit lung injury).
In addition to the control groups (C), which were subject to an air bolus only, the piglets received surfactant (poractant alfa, Curosurf, Chiesi) at a dosage of 1 * 100 mg/kg (study 1) or 3 * 50 (200) mg/kg given 24 h apart (studies 2 and 3) as an intervention. In several intervention groups the surfactant was "fortified" by additional immune-suppressive agents: imipramine 5 mg admixed to surfactant (study 1), D-myo-inositol-1,2,6-trisphosphate 2/2.5 mg (Cayman) (studies 2 and 3), myo-inositol 40 mg (Sigma-Aldrich) (study 2), phosphatidylinositol-3,5-bisphosphate 2.5 mg (Cayman) (study 3), palmitoyl-oleoyl-phosphatidylglycerol 7.5 mg (Avanti) (study 3), and dioleoyl-phosphatidylglycerol 7.5 mg (Avanti) (study 3). In this analysis the data of all intervention groups from one study are combined as the treatment group (T); the combination of C and T is reported as the total group in the tables to point out deviations from means and to prove the stability of the model. Study 3 also analyzed a group of piglets not being subject to sedation and mechanical ventilation that is reported as healthy controls (HC).
In addition to the assessment of physiologic parameters, a variety of specific pulmonary parameters of the immune response to single-hit/triple-hit lung injury were assessed by the use of lung sections (e.g., histology), lung homogenates (e.g., acid sphingomyelinase activity), and broncho-alveolar lavage fluid (BALF, e.g., cell differentials). For further details we refer to the detailed description of all applied methods in the methods sections of the referenced publications (von Bismarck et al., 2008; Preuß et al., 2012b; Spengler et al., 2018).
For repeated-measures data the two-way mixed ANOVA was used to determine whether there were differences in the dependent variable between groups [between-subject factor: control (C), treatment (T), overall (O)] over time (within-subject factor: baseline, 24, 48, 72 h). A normal distribution of the dependent variable was assessed by Shapiro-Wilk's test (p > 0.05). Equality of error variances using Levene's test and equality of covariance matrices by Box's M test were checked for every parameter; in case of heteroscedasticity, data were transformed by the Box-Cox transformation before analysis. Mauchly's test of sphericity was performed on every parameter (violation indicated by p < 0.05). The within-subject factor and the interaction (within-subject factor * between-subject factor) were calculated with the Greenhouse-Geisser correction in case the estimated epsilon was <0.75. The main effect of the between-subject factor (group) on the dependent variable was considered statistically significant in case of p < 0.05. Single data sets were checked for deviations from normality using the Shapiro-Wilk's test (p > 0.05). Normally distributed data were analyzed by unpaired t-tests, and non-parametric data by Mann-Whitney U tests. All data are presented as means ± SD. The analyses were performed with SPSS version 24 (IBM, Ehningen, Germany).
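The analyses were run in SPSS; the sketch below is not that code, but illustrates an analogous sequence of distributional checks and two-group tests with SciPy (the mixed ANOVA itself would additionally require a dedicated routine, e.g., from the pingouin package). Function names and the alpha level are assumptions for illustration.

```python
# Illustrative Python analogue of parts of the statistical workflow (not the
# authors' SPSS analysis): normality check, choice of parametric vs.
# non-parametric two-group test, and Box-Cox transform on heteroscedastic data.
import numpy as np
from scipy import stats

def compare_two_groups(x, y, alpha=0.05):
    """Unpaired t-test if both groups look normal (Shapiro-Wilk), else Mann-Whitney U."""
    normal = stats.shapiro(x).pvalue > alpha and stats.shapiro(y).pvalue > alpha
    if normal:
        return "t-test", stats.ttest_ind(x, y).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(x, y).pvalue

def boxcox_if_heteroscedastic(groups, alpha=0.05):
    """Box-Cox transform pooled (strictly positive) data when Levene's test fails."""
    if stats.levene(*groups).pvalue < alpha:
        pooled, _ = stats.boxcox(np.concatenate(groups))
        return np.split(pooled, np.cumsum([len(g) for g in groups])[:-1])
    return list(groups)
```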
Systematic Review: Study Selection, Data Extraction, and Assessment of Risk of Bias

Two authors (DS and NR) independently screened the titles provided by the combination of different search terms indicated above. The inclusion criteria were: studies published in English within the last 30 years following peer review, studies reporting information on NARDS in neonatal piglets following distinct experimental lung injury protocols, and studies reporting on major inflammatory pathways and their mediators. Publications were excluded if they did not report on a setting of invasive mechanical ventilation with at least one acute lung injury protocol, and if no adequate control group was presented. The quality of studies was independently evaluated by the two authors using the Quality Assessment Tool for Case-Control Studies by the National Heart, Lung, and Blood Institute (NHLBI) 1.
Circulation
The cardiovascular stability was challenged in the context of direct and indirect manipulations of the heart and of the systemic and pulmonary circulation. In addition, the possible pharmacologic effects of sedatives/analgesics must be taken into account. For sufficient stability of the circulation some drug classes, such as barbiturates and opioids, seem less suited because of their negative inotropic action on the myocardium. In models covering more than 12 h of mechanical ventilation a cumulative effect and a progressive decline in HI and SVRI can be observed. As sufficient analgesia is paramount in any model, opioids should be used for instrumentation and for all kinds of painful procedures; however, for long-term sedation and analgesia ketamine (in combination with a low-dose benzodiazepine) seems more apt because of its positive inotropic effect even in the presence of muscular blockade.
The combined effects of the triple-hit lung injury protocol (repeated airway lavage, injurious ventilation, and endotracheal endotoxin installation) on cardiovascular parameters are shown in Table 1 covering a time window of 72 h (Spengler et al., 2018). The cardiovascular function is characterized by high stability in heart rate (HR), systolic and diastolic blood pressure (S/DBP), heart index (HI), systemic vascular resistances index (SVRI), intrathoracic blood volume index (ITBI), stroke volume index (SVI), and stroke volume variation (SVV) over 72 h of invasive monitoring despite statistically significant changes in DBP, SVRI, SVI, and SVV (time) and HR (time * group) ( Table 1).
However, no single parameter shows a continuously increasing or decreasing trend. We observed progressing blood pressure instability combined with increasing SVRI and decreasing HI in only 5/67 (7.5%) piglets, a reason for drop-out in this model. Data of circulatory parameters in the non-anesthetized piglet have been published by Eisenhauer et al. (1994) studying chronically instrumented neonatal piglets being individually raised and fed. The heart rate of 187 ± 28 bpm and the mean blood pressure of 66 ± 4 mm Hg are very close to the values obtained in our piglets at baseline being subject to anesthesia and mechanical ventilation, suggesting only minor influences of ketamine/midazolam/vecuronium bromide given as continuous drips on hemodynamic function. This is supported by the data from 5 to 7 days old piglets being subject to anesthesia with halothane and invasive blood pressure monitoring yielding values for SBP of 89 mmHg (CI 84-99) and DBP of 54 mmHg (51-60) (Voss et al., 2004). Using the thermodilution technique HI was 4.04-4.38 ± 1.23-1.42 l/min/m 2 , and the SVI 20.4/20.4 ± 5.7-9.5 ml/m 2 in 13 days old piglets (Gibson et al., 1994), and ITBI 230 ± 76 ml/m 2 in 1-3 days old piglets (Silvera et al., 2011).
Electrolytes and Renal Function
We observed significant (although clinically irrelevant) time-dependent changes in electrolytes, creatinine, and GOT (Table 2). Plasma Na (143 ± 5 mmol/l, 138 ± 3) and K (4.4 ± 0.8 mmol/l, 4.2 ± 0.4) concentrations in 2- to 5-day-old piglets were comparable to our results (Parker and Aherne, 1980; Eisenhauer et al., 1994). The rather low K concentrations in our study (3.2 ± 0.7 mmol/l) suggest that the phase of increased newborn hemolysis, which yields higher serum K concentrations, is almost completed at the time of the baseline measurements. GOT and creatinine in 18 three-day-old piglets were 36 ± 6 U/l and 0.47 ± 0.03 mg/dl at baseline in a cecal ligation model (Goto et al., 2012). To the best of our knowledge, data on urine production in relation to body weight have not yet been published. Urine production depends on fluid intake and postnatal age and averages between 2 and 5 ml/kg/h in the human infant. The fluid intake in our protocol followed accepted guidelines (Petersen et al., 2003) and amounted to ∼200 ml/kg/d, consisting of ¾ enteral nutrition fluids and ¼ intravenous fluids.
Blood Cell Differentials
We observed time-dependent changes in all blood cell lines (monocytes excepted) and a significant interaction for thrombocytes (time * group). Most of the cell lines did not show a clear trend, even after the administration of LPS at 48 h (Table 3). The hematocrit of 2- to 5-day-old piglets was 27 ± 2% (equivalent to a hemoglobin concentration of 9.0 ± 0.6 g/dl) (Eisenhauer et al., 1994), and the hemoglobin concentration was 8.5 ± 3.2 g/dl in piglets on days 1 and 2 (Park and Chang, 2000). Clearly, the hemoglobin concentration of term human newborns at 48 h of age is higher [17.7 ± 1.8 to 19.5 ± 2.1 g/dl depending on the mode of cord clamping (Mercer et al., 2017)], thus roughly doubling the oxygen transport capacity and making the human newborn less vulnerable to impaired gas exchange in the transitional period.
Lung Function
The determination of EVLWI by the thermodilution method has been performed in newborn piglets, yielding a value of 20 ± 1 ml/kg (Silvera et al., 2011), and in human neonates after arterial switch operation for transposition of the great arteries, yielding 20 ± 7 ml/kg after extubation (Székely et al., 2011); however, data in well babies do not exist because of the invasiveness of the technique. In (adult) humans a value of 3-7 ml/kg is considered normal, but neonates tend to have higher values because of incomplete resorption of lung fluid in the postnatal transitional process and of shunting via a patent ductus arteriosus and foramen ovale. Our baseline data of the "total" group (13.2 ± 5.5 ml/kg, Table 4) are close to the values of newborn lambs assessed by multiple indicator dilution methods, showing an EVLWI of 10.7 ± 1.4 ml/kg (Sundell et al., 1987).
Data on V A have not been published by other investigators but were measured at 4.8 ± 0.3 ml/kg in a previous study of our group (Krause et al., 2001). Impairment of oxygenation is a prerequisite of P/NARDS and is usually defined by the OI, an equation composed of the degree of respiratory support (mean airway pressure, MAP), the oxygen concentration in the respiratory gas mixture, and the partial pressure of O 2 in blood as a measure of gas exchange (OI = MAP * %O 2 /PaO 2 ). By the Montreux definition of NARDS (De Luca et al., 2017), the control group experienced severe NARDS, expressed by an OI of 16.1 ± 6.1 at 72 h of mechanical ventilation (Table 4). Baseline values in our study ("total": 2.3 ± 0.7) are close to those from other investigators: 1.5 ± 0.5 ml/mbar/kg (Khan et al., 1999), 1.4 ± 0.3 (Tølløfsrud et al., 2002), and 1.3 ± 0.3 (Renesme et al., 2013). The VEI in "total" (0.38 ± 0.19) is close to the value of 6-day-old piglets at baseline (0.30 ± 0.02) in the lavage study by Sood et al. (1996a) and to the value of 5-day-old piglets (0.33 ± 0.08) in the meconium aspiration study by Khan et al. (1999).
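As a point of reference, the OI can be written out explicitly. The numbers in the worked example below are purely hypothetical and were chosen only to illustrate how an OI in the range of the 72 h control-group value (about 16) arises; they are not measurements from this study.
\[
\mathrm{OI} = \frac{\mathrm{MAP}\,[\mathrm{cmH_2O}] \times \mathrm{FiO_2} \times 100}{\mathrm{PaO_2}\,[\mathrm{mmHg}]},
\qquad \text{e.g.,} \quad \frac{14 \times 0.8 \times 100}{70} = 16 .
\]
By the Montreux definition, an OI of 16 or more corresponds to severe NARDS, consistent with the control-group value of 16.1 ± 6.1 reported above.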
Bacteria in Airways
A plenitude of different bacteria was cultured from the airways with the initial lavage, mainly belonging to the three groups of (lacto)bacillales, enterobacteriaceae, and soil-based bacteria (Table 5). Given the relative dominance of soil-based bacteria in the airways of our piglets (Bacillus cereus, Rothia, aerobic spore builders, Corynebacterium sp.), inhalation of these microorganisms due to the use of the piglets' nose for foraging, with consecutive colonization of the upper and lower airways, must be considered. The high frequency of Bacillus cereus colonization (in 8/52 cultures from the final lavages) demonstrates its natural resistance to beta-lactams, e.g., ampicillin ± sulbactam as given in our study (Glasset et al., 2018). The increasing prevalence of colonization by multidrug-resistant Gram-negative bacteria, such as E. coli and Klebsiella sp., in neonatal intensive care units (NICU) is correlated with the length of NICU stay and, indeed, with exposure to ampicillin/sulbactam (Giuffrè et al., 2016).
Lung and Body Weights
We determined a lung/body weight ratio of 1.6 ± 0.2% (Table 6), which is in line with the findings of 1.5 ± 0.2 in 6 three-day-old piglets (Standaert et al., 1991), 1.0 ± 0.1 in 8 fourteen-day-old piglets (Dargaville et al., 2003), 3.0 ± 0.3 in 13 three-day-old piglets (van Kaam et al., 2004b), and 1.7 ± 0.1 in 27 one-day-old piglets (Miles et al., 2012). The applicability of the neonatal piglet lung model for studying severe lung diseases is also expressed by the similarity to term human lung/body ratios of 1.7 ± 0.4% (De Paepe et al., 2005) and 1.9 ± 0.3 (De Paepe et al., 2014).
Cells in BALF and Apoptosis
There is currently no reliable indicator to assess the amount of epithelial lining fluid recovered by broncho-alveolar lavage (de Blic et al., 2000). Most commonly, urea and albumin have been used as reference substances; however, the lower serum concentrations of both substances in the smallest children bedevil the interpretation of cellular and non-cellular concentrations in BALF, as do the size of the lungs, the region of interest within the lung (in the context of bronchoscopic BALF recovery), the amount of lavage fluid used, the aspiration technique, and the processing of cellular and non-cellular components. The lavage protocol used in our studies consisted of the instillation and aspiration of 30 ml/kg of warmed normal saline by a syringe hooked up to the adaptor of the endotracheal tube.
An increased BALF total cell count >150 cells/µl is a common characteristic of many lung diseases in infants and children (Riedler et al., 1995). Thus, the total cell count of 633 ± 336/µl (Table 7) in our study at baseline suggests an important impact of bacterial colonization in the majority of the piglets (43/51 = 84%). The dominance of alveolar macrophages in newborns/young infants, with ∼98% in cell differentials, changes over time and reaches ∼90% at an age of 7 years (Grigg and Riedler, 2000), linked with a corresponding increase in lymphocyte counts. Not surprisingly, the PMNL count of 32 ± 14% in our study is much higher than in human newborns. Following meconium instillation into one lung lobe and mechanical ventilation for 12 h, the total cell count was 1,400 ± 1,100/µl in 17 piglets at day 0-2 of life (Korhonen et al., 2004). Likewise, PMNL was the dominating cell line (1,000 ± 900/ml), as also seen in our model (80 ± 4%) (Figure 1). PMNL, monocytes, and lung macrophages express CD14, which is implicated in the cellular response to LPS (given intratracheally as part of the triple-hit lung injury protocol applied here) together with a plasma LPS-binding protein. MD-2 and the intracellular part of TLR4 are necessary for the transduction of the signal activating cytokine and chemokine genes. The β2-integrin CD18 is also expressed by both PMNL and monocytes/macrophages and plays an important role in the migration of cells to areas of the lung containing high concentrations of chemokines, such as C5a. Monocytes recruited into the alveolar space keep phenotypic features of blood monocytes but upregulate CD14, resulting in enhanced responsiveness to LPS with increased cytokine expression (Maus et al., 2001). 28 ± 15% of the cells harvested by BALF (Table 7) are CD14+/18+ and belong to either population; their response to LPS and the concomitant (overwhelming) production of TNF-α, IL-1α and IL-1β, IL-6, IL-8, C3a, and C5a (Billman Thorgersen et al., 2009) represents a major proinflammatory pathway in the ARDS lung (Dentener et al., 1993). An important difference in the physiologic response of the porcine lung to a variety of agents, such as particulates, bacteria, fibrin, cellular debris, and immune cells, is the presence of constitutive pulmonary intravascular macrophages (PIM), which express a β3 integrin subunit (CD61) for the clearance of all kinds of proteins from the circulation (Schneberger et al., 2012). The heavy accumulation of PIM in lung tissue is linked with an increase in vascular permeability, edema, hemorrhage, and alveolar septal thickening (Figure 1), as shown in a piglet model of classical swine fever (Núnez et al., 2018). More than one bacterium was grown in some of the BALFs; all piglets received ampicillin/sulbactam at a dosage of 100 mg/kg twice a day, and overgrowth of E. coli, Klebsiella, Bacillus cereus, and aerobic spore builders reflects natural or acquired resistance. Alveolar epithelial apoptosis is a typical feature of the ARDS lung (15.0 ± 5.8%, Table 7) and is linked with impairment of oxygenation and ventilation and abrogated barrier functions (Matute-Bello and Martin, 2003). In pediatric patients dying from PARDS, the extent of cleaved caspase-3 in alveolar epithelial cells as a surrogate parameter of apoptosis has been quantified by Bern et al. (2010), yielding a percentage of 6.4 ± 1.2 (range 1.0-18.1)%.
Apoptosis in severe lung disease must be differentiated from apoptosis during the process of alveolarization and differentiation, which continues after birth until the second year of life; thus, a background apoptosis of 1-2% of AEC must be considered in neonatal organisms when evaluating lung disease (del Riccio et al., 2004). In ARDS, the percentage of apoptotic PMNL obtained by lavage was 3 (0-7.3)% in patients who died (Matute-Bello et al., 1997), and 10-20% in a murine ARDS model of intraperitoneal LPS (data on human or porcine neonates are unknown) (Wang et al., 2014). Data on macrophage apoptosis are scarce; in murine cell cultures, it increased from 10.1 ± 1.1% to 20.2 ± 1.7% following LPS challenge (Li et al., 2018).
Surfactant Surface Tension and Alveolo-Capillary Leakage
Regardless of the kind of acute lung injury, the surfactant surface tension (Table 8) will increase considerably owing to either a loss of the surfactant pool (repeated airway lavage) or disturbances in surfactant function (meconium instillation, LPS instillation, hyperoxia). In a meconium aspiration model, the minimum surfactant surface tension increased from 17.8 ± 4.8 mN/m to 23.3 ± 4.8 (Wiswell et al., 1994), and in a repeated airway lavage model from 11.1 ± 5.2 to 21.8 ± 2.1 (von Bismarck et al., 2007). Albumin has been identified as a major factor in surfactant inhibition (Seeger et al., 1993) and simultaneously reflects the degree of capillary-alveolar leakage as part of the inflammation of lung tissue and pulmonary capillaries. Albumin concentrations in BALF have been assessed in a hyperoxia model, with baseline values of 56 ± 19 µg/ml and a 3-fold increase following lung injury (Davis et al., 1989). SP-D serum concentrations in ARDS increased 3- to 4-fold [from 1.9 µg/ml (0.6-4.4) to 5.9 (2.5-22.7); and from 83 ± 33 ng/ml to 476 ± 391 (Endo et al., 2002)]. sPLA 2 has been implicated as playing a major role in surfactant degradation in NARDS lungs (
NF-κB, Inflammasome, and Ceramide Pathway
In an experimental pneumonia model with E. coli instilled into the airways of 3- to 4-week-old piglets, the NF-κB concentration in lung tissue homogenates increased from 0.25 to 0.4 arbitrary units and could be reduced by the application of inhaled nitric oxide or the instillation of surfactant (Zhu et al., 2005). The selective topical inhibition of NF-κB by IKK-NBD peptide, instilled into the airways using surfactant as a carrier substance, improved FRC, V A , C rs , R rs , and EVLWI in a newborn piglet lavage model (Ankermann et al., 2005a; von Bismarck et al., 2007). Of note, the reduction of NF-κB activity in the nucleus of pulmonary cells from 100 ± 2% to 32 ± 2% by IKK-NBD peptide was more pronounced than the effect of dexamethasone, which reached an activity of only 55 ± 4% (von Bismarck et al., 2009). In porcine alveolar macrophages, swine influenza virus induces massive IL-1β production secondary to an increased expression of inflammasome components (NLRP3, ASC, procaspase-1) (Park et al., 2018). In C57BL/6 mice, the application of a two-hit lung injury by mechanical ventilation and LPS induces IL-1β and KC (a murine functional analog of IL-8) as well as cell migration into the alveolar space, all of which may be considerably reduced by the administration of the IL-1 receptor antagonist anakinra (Jones et al., 2014). NLRP3 −/− mice exposed to hyperoxia showed significantly lower IL-1β, TNF-α, and MIP-2 concentrations in BALF (Fukumoto et al., 2013). The mutual dependency of the ceramide pathway and the inflammasome NLRP3 has been shown by Kolliputi and our group in alveolar epithelial cells (Kolliputi et al., 2012) and in porcine lung homogenates (Spengler et al., 2018) (Table 9). In tracheal aspirates from preterm infants prone to bronchopulmonary dysplasia (BPD), high IL-1β and IL-1ra concentrations were linked with more severe grades of BPD or death (Liao et al., 2018). In adult patients subject to overventilation (V T = 12 ml/kg), ASC upregulation in alveolar epithelial cells was ∼10-fold compared to normoventilation, and the expression of NLRP3 and ASC in alveolar macrophages doubled (Kuipers et al., 2012). The NLRP-dependent cytokine IL-1β was targeted with either aerosolized or intravenous anakinra in a lavage model of surfactant depletion, yielding moderately improved oxygenation, ventilation, C rs , and neutrophil migration into lung tissue (Chada et al., 2008). The IL-1β/β-actin ratio in lung tissue was reduced from 4.9 ± 2.4 (control) to 0.9 ± 0.3 (aerosolized) and 0.8 ± 0.1 (intravenous), respectively. A comparable reduction in the IL-8/β-actin ratio could be demonstrated. In an E. coli LPS model of ARDS treating 4- to 6-week-old piglets by the intravenous route, IL-1β concentrations rose from 29 ± 2 pg/ml to 89 ± 18, IL-6 from 18 ± 8 pg/ml to 22 ± 7, and IL-8 from 80 ± 7 pg/ml to 118 ± 10 (Wang et al., 2016).
"Ceramide lances the lungs, " as pointed out by P. Barnes (Barnes, 2004) describes the impact of the activated ceramide pathway on impairment of alveolo-capillary barrier functions in lung inflammation (Göggel et al., 2004). More than 30 years ago high concentrations of galactosylceramide (20-to 40-fold normal) were found in the lavage fluid of mechanically ventilated ARDS patients (Rauvala and Hallman, 1984). In (adult) patients suffering from cystic fibrosis the application of amitriptyline normalizes pulmonary ceramide and improves lung function including susceptibility to infection (Teichgräber et al., 2008). As it is well-known for many years that the porcine organism displays all kinds of glycolipids, such as galactosylceramide, glucosylceramide, ganglioside, and globoside (Kyogashima et al., 1989) there is unfortunately no data for comparing our results with regard to the impact of the acid sphingomyelinase/ceramide pathway on lung function. For the rat it has been shown that sphingomyelin content, sphingosine concentrations, and ceramide concentrations are highest in neonatal compared to fetal or adult lungs (Longo et al., 1997) underlining the important role of the ceramide pathway in neonatal lung physiology. In the newborn rat (Husari et al., 2006) and newborn mice (Tibboel et al., 2013) hyperoxia models, ceramide and sphingomyelin concentrations are increased 2-to 4-fold. In addition stretch applied to alveolar epithelial cells from newborn rat lungs by mechanical ventilation induces autophagy, acid sphingomyelinase activity, and ceramide generation (Yeganeh et al., 2018).
Pro-fibrotic and Pro-inflammatory Parameters
TNF-α in BALF (Table 10) increased from 21 ± 4 pg/ml to 42 ± 22 following meconium instillation into the lungs of 1- to 2-day-old piglets (Korhonen et al., 2004), from 0.03 ± 0.02 U/ml to 0.34 ± 0.58 in a newborn piglet lavage model (Krause et al., 2005), and from 80 ± 84 pg/ml to 1,357 ± 676 in a meconium model with 1- to 3-day-old piglets (Angert et al., 2007). Depending on the kind of acute lung injury, the pre-/post-injury increase varies widely between 1:2 and 1:60 (Table 10). IL-8 concentrations rose from 51 ± 34 pg/ml to 429 ± 259 in a lavage model (Ankermann et al., 2005b) and from 406 ± 364 pg/ml to 4,837 ± 1,951 in a meconium aspiration model (Angert et al., 2007), whereas IL-6 in BALF rose from 0.4 ± 1.0 U/ml to 29 ± 28 following repeated airway lavage (Krause et al., 2005). LTB 4 , an important chemokine in the inflamed lung, increased from 2.6 ± 1.9 pg/ml to 9.3 ± 7.8 in a newborn lavage model (Ankermann et al., 2005a). Data from other authors on fibrosis in (newborn and adult) piglets subject to induced acute lung injury are missing, probably because of the observation interval of at least 24-72 h needed before changes in pro-fibrotic parameters can be quantified, as demonstrated in ARDS patients (Fahy et al., 2003; Fligiel et al., 2006). A 72 h model of clinical observation as presented here (Preuß et al., 2012b; Spengler et al., 2018) is expensive and requires detailed knowledge of neonatal physiology and intervention skills. However, as an exception, von der Hardt et al. presented TGF-β mRNA expression data in a piglet lavage model (Fligiel et al., 2006).
Systematic Review
The systematic review (flowsheets in Figure 2) highlights two major acute direct lung injury models requiring mechanical ventilation in term newborn piglets <14 days of age. Thus, gradually developing lung injury models, such as hyperoxia application, and lung injury models without mechanical ventilation are not covered here. For a better understanding of NARDS, immunologic outcome parameters and the effects of specific interventions are displayed.
Meconium Aspiration Model
The meconium aspiration model is a frequently used model of direct lung injury induced by the instillation of diluted (human) meconium into the airways. Within 2 h following meconium instillation, an increase in OI and R rs and a decrease in sC rs by ∼50% can be observed (Kuo and Chen, 1999; Tølløfsrud et al., 2002). BP, CI, and SVRI do not change significantly compared to control groups, whereas the pulmonary arterial pressure (PAP) and the pulmonary vascular resistance index (PVRI) differ beyond a 2 h margin (Trindade et al., 1985; Kuo and Chen, 1999; Ryhammer et al., 2007). Of note, the deteriorations in lung mechanics and gas exchange are not sustained in studies with longer observation periods (i.e., 12-48 h), when inflammatory parameters start to gradually decline again (Davey et al., 1993; Korhonen et al., 2004).
Meconium is composed of a myriad of substances, essentially containing gastrointestinal secretions, bile, bile acids, pancreatic juice, mucus, swallowed vernix caseosa, lanugo hair, cellular debris, and blood (van Ierland and de Beaufort, 2009). As meconium is located "extracorporeally" (i.e., hidden in the intestinal tract), its content is normally not recognized by the fetal immune system (Lindenskov et al., 2015). However, once meconium enters the airways, the innate immunity senses a "damaged self" and reacts with a "chemical pneumonitis" that includes increased airway responsiveness, pulmonary hypertension, cellular infiltration, impairment of gas exchange, PMNL infiltration of airways and lung tissue, alveolar epithelial cell apoptosis, and a cytokine storm.
Therefore, aspects most often studied in the newborn piglet meconium aspiration model are cytokines/chemokines, PMNL infiltration (by the quantification of myeloperoxidase (MPO) in BALF and in lung tissue by immunohistochemistry), reactive oxygen species (ROS), pulmonary hypertension, arachidonic acid metabolites (notably sPLA 2 ), and changes of the complement system (membrane attack protein sC5b-9).
Many studies quantified cytokines/chemokines, such as IL-1β, IL-6, IL-8, and TNF-α (Table 11), all of which largely depend on pattern recognition by the Toll-like receptor family (especially TLR4/MD-2, CD14, and C5a) (Salvesen et al., 2010). Assessments of specific therapies that influence, e.g., CD14 (Thomas et al., 2018) and downstream NF-κB by broad-acting glucocorticoids (Holopainen et al., 2001; Lin et al., 2016, 2017) or by more specific inhibitors of NF-κB are scarce and deserve further evaluation. ROS are inflammatory mediators protecting the host from external damage; however, they simultaneously harbor a strong potential to harm the host in the case of overwhelming activation. The complement system is linked with the C5a-mediated leukocyte oxidative burst and plays an important role through the supply of C5b-9, which also has the potential to directly attack alveolar epithelial cells. While the combined application of C5a- and CD14-inhibitors resulted in a pronounced attenuation of inflammatory parameters (particularly IL-1β and MPO), the clinical course of the intervention group was not different from that of the control group (Thomas et al., 2018). Meconium has high concentrations of phospholipase A 2 (sPLA 2 ), a family of ubiquitous enzymes that release arachidonic acid by the cleavage of membrane phospholipids or surfactant (Holopainen et al., 1999a; De Luca et al., 2009). The administration of dexamethasone, which reduces stimulated sPLA 2 synthesis (Hoeck et al., 1993), neither contains sPLA 2 activity nor reduces inflammation in the newborn piglet model (Holopainen et al., 2001).
By far the majority of the studies (Table 11) focus on the effect of surfactant substitution on improvements in lung mechanics and gas exchange. While surfactant is known to protect the lungs from inflammation by modulating peroxidation, the formation of nitric oxide, sPLA 2 , eicosanoids, and cytokines (Wright, 2003), some surfactant fractions, such as palmitoyl-oleoyl-phosphatidylglycerol (POPG) (Numata et al., 2010; Spengler et al., 2018) and dioleoyl-phosphatidylglycerol (DOPG) (Preuß et al., 2014), exert potent anti-inflammatory action and deserve further research (Salvesen et al., 2014). The administration of a POPG-based synthetic surfactant (CHF5633), however, did not improve the clinical outcome in the newborn piglet model, despite marked attenuation of some inflammatory mediators, such as reductions in IL-1β and lipid peroxidation (Salvesen et al., 2014).
Lavage Model
The lavage model (Table 12) excels through the fine-tuning of the impairment of gas exchange (oxygenation index, ventilation efficiency index), lung mechanics (compliance and resistance of the respiratory system), and lung volumes (alveolar volume, functional residual capacity). Once an appropriate lung injury has been established (mostly monitored by reductions in oxygenation and compliance), the piglet remains stable with regard to circulation and other organ system functions. With continuous sedation/analgesia, mechanical ventilation can be perpetuated for several days, allowing further injury to the lungs (double-/triple-hit injury models) or specific interventions. In that way, the requirements for NARDS by the Montreux definition (acute onset; diffuse, bilateral, irregular opacities; edema; oxygenation deficit) can be completely satisfied (De Luca et al., 2017). In addition, the changes expected of an animal model of acute lung injury can be observed: physiologic changes (decreased compliance, reduced functional residual capacity, V/Q abnormalities, impaired alveolar fluid clearance), biological changes (increased endothelial and epithelial permeability, increased cytokine concentrations in BALF or lung tissue, protease activation, coagulation abnormalities), and pathological changes (infiltration by PMNL, fibrin deposition and augmented intra-alveolar coagulation, denudation of the basement membrane) (Matute-Bello et al., 2008).
The washout of endogenous surfactant with warmed normal saline at a volume exceeding FRC (i.e., >25 ml/kg) ensures, in contrast to other models, an acute onset of NARDS, which is sustained by secondary damage to the remaining surfactant within the lung and by the triggering of a marked inflammatory response that primarily activates all mechanisms of the innate immunity. Thus, the saline for lavage may be considered a pathogen-associated molecular pattern (PAMP) that is recognized by the collectin, ficolin, and pentraxin families, which can act as opsonins either directly or by activating the complement system (as also shown in the meconium aspiration model) (Male, 2006). The innate immune response brings leukocytes and plasma proteins to the site of (lung) tissue damage. The arrival of leukocytes in the lung depends on chemokines and adhesion molecules expressed by the pulmonary vascular endothelium and the alveolar epithelium, and on the activity of local macrophages. Notably, TNF-α, which is primarily produced by macrophages, induces the expression of adhesion molecules and chemokines and may elicit the activation of NF-κB and apoptosis. Next to TNF-α, IL-1β plays an important part in the induction of adhesion molecules on the endothelium.
In this context, it is surprising that many aspects of dampening the innate immune system in the overwhelming response of the neonatal lung secondary to repeated airway lavage have not been studied yet (Table 12). While the early studies considering immunologic aspects measured the protein content in BALF, histopathology scores, and the concentrations of TNF-α, LTB 4 , IL-1β, and MPO as a surrogate parameter for neutrophil infiltration (Sood et al., 1996a; Merz et al., 2002; van Kaam et al., 2005), more recent studies shed light on the selective inhibition of NF-κB (von Bismarck et al., 2007), IL-1β metabolism (Chada et al., 2008), general immune suppression by dexamethasone or budesonide (von Bismarck et al., 2009; Yang et al., 2010), eicosanoid suppression (Ankermann et al., 2006), and blockade of IL-8 (Ankermann et al., 2005b). The impact of ceramide metabolism in NARDS has been investigated (Barnes, 2004; von Bismarck et al., 2008; Spengler et al., 2018), and the important role of NLRP3 (nucleotide-binding domain, leucine-rich repeat-containing protein-3) in a triple-hit lung injury model has been studied (Dos Santos et al., 2012; Spengler et al., 2018).
Miscellaneous Models
NARDS-like severe pneumonia has been established by a few authors; however, the maintenance of HR and BP as a prerequisite for a stable model of primary lung injury (in contrast to, e.g., the rabbit model) can also be demonstrated with direct GBS instillation into the airways (van Kaam et al., 2004b). The selective inoculation of GBS into the lower lobes of newborn piglets resulted in widespread alveolar atelectasis, loss of hyaluronan, and an increased systemic uptake of the microorganisms into the circulation (Juul et al., 1996). In isolated, selectively perfused piglet lungs, an increase in total pulmonary resistance is observed following GBS instillation into the pulmonary circulation (Aziz et al., 1993). Intravenous E. coli endotoxin has been used to induce a moderate impairment in oxygenation and lung mechanics, without any positive changes following surfactant application (Sood et al., 1996b).
An exceptional model of wood smoke inhalation treated with surfactant and partial liquid ventilation has been set up by Jeng et al. (2003).
Perspective: Newborn Animal Models of NARDS Involving Mechanical Ventilation
Many more newborn animal models from different species, subject to a variety of acute lung injury protocols, have been set up (Table 13). While the clinical relevance varies considerably among species with regard to body size (piglets and lambs 70-80% of human size, rodents 0.1-1.6%), availability, and similarity to the human innate immunity, rodent models have been used abundantly because of low costs and genetic similarity among animals, despite their limited comparability with human newborns. Ethical considerations and very high costs limit the availability of newborn baboons, which have been studied almost exclusively in preterm models of infant respiratory distress syndrome (IRDS). Thus, piglet and lamb models clearly head the list of established newborn animal models of NARDS, also considering the wide variety of direct lung injury protocols and the single direct-to-indirect lung injury protocol (i.e., intra-amniotic LPS administration, Table 13). With a body size of 70-80% of human newborns, a gravidity length of 40%, a thoracic-to-abdominal relationship of 1:1 as in humans (compared to a relationship of 1:2-3 in rodents), and a lung:body weight ratio of 1.6 (porcine) vs. 1.8 (human), the piglet model excels, also taking into account the similarities of the innate immunity, such as an 80% accordance of the hypervariable region of Toll-like receptor 4, the LPS response, and the production of nitric oxide.
As we know that the pathophysiologic peculiarities of ARDS in different age groups show considerable overlap in animal models (Schouten et al., 2015) as well as in humans (Schouten et al., 2019), it is of utmost importance to describe and to tackle the numerous facets of innate immunity in the various animal models. With age-dependent changes in lung morphology and cell integrity and, above all, wide variations in the severity of acute lung injury, most of the direct (e.g., lavage, MAS) or direct-to-indirect acute lung injury protocols are able to elicit at least some of these responses, which may be even more pronounced and more diverse in multiple-hit models.
Direct lung injury in newborn animals aims at alveolar epithelial damage, alveolar edema, the generation of hyaline membranes, disruption of the alveolar basal membrane, epithelial-to-mesenchymal transition, PMNL migration, induced macrophage activity, a cytokine storm, protease and phospholipase activation, coagulation abnormalities, oxidative stress, and increased production of antioxidants. As prognostic biomarkers indicating increased mortality in human ARDS patients do not differentiate direct from indirect forms of ARDS [SP-D in serum excepted (Calfee et al., 2015)], it is important to acknowledge that existing neonatal lung injury models almost exclusively represent direct lung injury for reasons of achieving acuity, stability of the model, and the indispensable characteristics of acute lung injury involving clinical features, physiological changes, biological changes, and pathological changes (Matute-Bello et al., 2008). From a practical point of view, newborn models of NARDS may therefore profit more from surfactant therapy than the lungs in indirect models, as demonstrated in clinical studies with children (Khemani et al., 2019) and adults (Taut et al., 2008). Finally, newborn animal models of NARDS profit from small body sizes/weights when evaluating specific therapy modalities, such as surfactant administration in escalating doses, antibody therapies, or modulation of major pro-inflammatory pathways, all of which are subject to research and development and may be extremely expensive. In this regard, it is important to be aware of the similarities in NARDS, PARDS, and ARDS (De Luca, 2019) and of the impact of the innate immunity.
Table legend: *PIP/PEEP, peak inspiratory pressure/positive end-expiratory pressure; # V T , tidal volume; $ study l., study length; § Crs/Rrs, compliance/resistance of the respiratory system; ¶ PAP/PVR(I), pulmonary arterial pressure/pulmonary vascular resistance (index); † a.o., and others; BQ-123, endothelin antagonist; ‡ aU, arbitrary units; π CV/HFO, conventional ventilation/high-frequency oscillation; ivIg, intravenous immunoglobulin; ¢ rhCC10 it, recombinant human Clara Cell protein 10, intratracheally administered; ¬ CHF5633, synthetic surfactant containing SP-B/C, DPPC, POPG; ∼ TAT, thrombin-antithrombin complex; ¤ PAI-1, plasminogen activator inhibitor-1. For PIP/PEEP, Crs/Rrs, PAP/PVR(I), and the immunologic response, an arrow (→) delineates changes induced by the lung injury protocol (i.e., meconium instillation into the airways); "vs." delineates differences secondary to a specific intervention (control group data first, intervention group data second). Immunologic response parameters are from broncho-alveolar lavage fluid (BALF) unless specified otherwise.
Future Direction: Customizing Innate Immunity
In the majority of cases, NARDS is elicited by the invasion of PAMPs (pathogen-associated molecular patterns) into the lungs, which may be bacteria, meconium, droplets of bile, amniotic fluid, or swallowed blood, among others. These pathogens may be bound by macrophages with the help of surface lectins or Toll-like receptors, which also induce macrophage activation. The surfactant proteins A and D are collectins and provide a first-line defense next to molecules of the ficolin and pentraxin families, which act as opsonins together with the complement system. Of paramount significance is the early invasion of PMNL to the site of inflammation (alveolar epithelium, capillary endothelium of the lung), mediated by CAMs (cellular adhesion molecules, interacting with PMNL integrins) and selectins (interacting with carbohydrate ligands). In addition, cytokines, such as TNF-α and IL-1β, and chemokines move PMNL and plasma molecules to the site of inflammation or tissue damage. The clearance of pathogens and cell debris is followed by remodeling and regeneration of pulmonary tissue, including epithelial-to-mesenchymal transition (EMT) and the proliferation and mobilization of fibroblasts (which are responsible for the rapidly declining C rs , the compliance of the respiratory system, after some days of mechanical ventilation).
Limiting damage and repair of lung tissue by the newborn organism's innate immunity, without completely uncoupling the means of defense, especially in the case of infectious pathogens, seems to be the distinguished task for future research using the piglet lavage/meconium aspiration model. Considering the complexity of the innate immunity as briefly outlined above, many approaches are possible but should probably tackle major anti-inflammatory pathways instead of single, even important, molecules to overcome the phenomenon of redundant activation of, e.g., many cytokines [for example, blocking IL-8 with specific antibodies may upregulate IL-8 production in experimental NARDS (Ankermann et al., 2005b)].
Conclusions
The newborn piglet serves as an excellent, robust animal model for studying severe neonatal lung diseases with high mortality. Three decades of research have described a myriad of physiological and immunological parameters of the newborn piglet, making it one of the best-studied animal models ever. Most of the clinical, physiological, biological, and pathological changes in NARDS can also be found in the two well-established models presented here: the meconium aspiration model and the lavage model. While most of the research was conducted in the last decade and has slowed down lately, many new insights into the innate immune system should give rise to new treatments that specifically tackle important pro-inflammatory upstream pathways. For the benefit of the many newborns with life-threatening NARDS, future research on the newborn piglet models may greatly help to establish new specific treatment modalities.
AUTHOR CONTRIBUTIONS
DS and NR collected the physiologic data from three experiments and screened eligible publications for the systematic review. MK wrote the first draft of the manuscript. All authors contributed to a manuscript revision and approved the final version of the manuscript. | 2019-10-31T09:13:26.185Z | 2019-10-30T00:00:00.000 | {
"year": 2019,
"sha1": "8c2bc80bc87487998b8af440dbefef248f65c576",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2019.01345/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "58e67aedcadda2826de4404c2b6d42a124d5c91d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254726783 | pes2o/s2orc | v3-fos-license | Chebyshev Transform-Based Robust Trajectory Prediction Using Recurrent Neural Network
Trajectory prediction is gaining attention as a form of situational awareness because it is an essential component of the support system of autonomous driving, particularly in urban areas. A promising application is cooperative driving automation, where the traffic scene is monitored by roadside sensors with undisrupted views. A critical problem is that these sensors are adversely affected by inclement weather, including drenching rain or large amounts of snow, in which case the reliability of the prediction results can be significantly compromised. To address these problems, this study proposes a framework for robust vehicle-trajectory predictions based on the Chebyshev transform. In the proposed framework, the original trajectory snippets (partial trajectories) are Chebyshev-transformed, and the resulting coefficients form new snippets. The LSTM (long short-term memory) encoder-decoder structure was trained and tested using these new coefficient snippets, which were extracted from a public vehicle trajectory dataset. The performance and robustness of the proposed framework were verified by emulating sensor data that were incomplete as a result of environmental factors. The proposed framework provides stable and accurate long-term trajectory prediction because the Chebyshev transform is robust to incomplete sensor data by virtue of its uniform nature.
I. INTRODUCTION
Autonomous driving is being raised to the next level by technical progress in the relevant software and hardware. However, the barriers are also becoming higher as the operational domain extends to urban areas. Removing these barriers is the key to advancing autonomous driving. One of these barriers is knowledge of the future trajectories of surrounding vehicles. In particular, it is necessary to precisely predict the future trajectories of non-autonomous vehicles because mixed traffic, consisting of both autonomous and non-autonomous vehicles, can be expected to persist for the foreseeable future until society becomes fully autonomous.
Trajectory prediction can be performed using information received from either the on-board sensors installed in autonomous vehicles or from roadside sensors for cooperative driving automation. Among them, trajectory prediction with the aid of roadside sensors is currently attracting considerable attention because roadside sensors usually have omniscient views; thus, the traffic scene can be monitored with less obstruction. Previous studies [1], [2], [3], [4] investigated this possibility in the next stage of autonomous driving.
The problem with traffic scene monitoring based on roadside sensors is that these sensors are continually exposed to the environment. Thus, their reliable operation is negatively affected by adverse weather, such as heavy snowfall, torrential rain, and strong wind, with camera sensors being particularly vulnerable to these conditions. Objects that should have been detected may be obstructed or may even remain undetected owing to raindrops or frost covering the sensor lenses, as discussed in [5]. Accordingly, trajectory prediction would be negatively influenced by these conditions, degrading the reliability of autonomous driving.
Research on vehicle trajectory prediction has been gaining momentum, as this technology is required for a higher level of autonomous driving, and the widespread use of neural network-based approaches is now accelerating this trend. Altché and De la Fortelle [6] applied a long short-term memory (LSTM) network to vehicle trajectory prediction for a highway situation and validated the performance of the proposed model with the public NGSIM [7] dataset. Messaoud et al. [8] and Luo et al. [9] proposed an attention mechanism that combined different networks for trajectory prediction. Zyner et al. [10] proposed a mixture density network (MDN)-based framework for multimodal prediction in roundabouts. Ding and Shen [11] supplemented context information, such as construction sites and speed regulations, to refine the prediction accuracy. Their suggested model consists of two levels of networks: the upper level classifies the driving policy, such as proceeding straight ahead, yielding, or turning, and the lower level generates the trajectory prediction based on an optimization method. The method proposed by Raipuria et al. [2] was also based on an LSTM encoder-decoder structure [12] and utilized a geometrical curvilinear coordinate system in which the road curvature was featured to refine the prediction accuracy. Similarly, Yu et al. [13] considered road geometries to improve the prediction accuracy in various road environments. Jiang et al. [14] focused on the temporal accuracy of trajectory prediction. Other researchers [15], [16], [17], [18] considered nearby surrounding vehicles for an improved understanding of future vehicle trajectories. Deo and Trivedi [15] suggested a maneuver classification and trajectory prediction model using inter-vehicle interactions with an LSTM network. In addition, SCALE-NET, proposed by Jeon et al. [16], used an edge-enhanced graph convolutional neural network (EGCN) and LSTM, which features efficient computational performance regardless of the number of vehicles in the region of interest. Inter-vehicular interaction was also applied to conflicting vehicles at an intersection [17]; the proposed model generates multiple hypothesis trajectories based on maneuver reasoning results. Deo and Trivedi [18] proposed a framework for vehicle trajectory prediction based on social-LSTM, which has been widely used for pedestrian trajectory prediction [19], [20]. Bock et al. [3] proposed an LSTM-based self-learning trajectory prediction framework that incorporates additional data from new measurements.
These vehicle trajectory prediction methods can be categorized into two groups based on traffic scene monitoring. In [2], [3], and [16], multiple roadside sensors were installed on a roadside unit, and trajectory predictions were provided to support cooperative driving automation. In [9] and [15], on the contrary, trajectory prediction was conducted on the side of the autonomous vehicle with on-board sensors and the predicted information was utilized in the decision process for autonomous driving. In addition to neural-network-based methods, various filter-based or stochastic methods were proposed [4], [21], [22], [23].
However, previous studies on trajectory prediction have typically focused on the improvement of the prediction performance, including accuracy and multimodality, as discussed in the literature survey, and problems that arise because of sensor degradation have not yet been addressed, although, practically, it is a critical issue. As discussed later, the conventional approach is highly vulnerable to incomplete sensor data. In this regard, a comprehensive discussion on sensor degradation issues is required when attempting to solve the trajectory prediction problem, considering that our study aims to provide robust long-term vehicle trajectory prediction, even with incomplete sensor data. Our findings are expected to make an important contribution to the realization of cooperative driving automation at a higher level. Table 1 presents a comparative table of the related studies and summarizes the model types and objectives of each study.
This study was motivated by the work of Wiest et al. [24] and was based on the Chebyshev transform [25], which transforms time-sequential physical data to coefficients for Chebyshev polynomial fitting. In the proposed framework, the original trajectory snippets, that is, partial trajectories, are Chebyshev-transformed, and the resulting Chebyshev coefficients form the new coefficient snippets, which are referred to as Chebyshev coefficient snippets (CCSs), for training and prediction. A strong advantage of the Chebyshev transform is that the resulting coefficients are uniform for a variable number of sequences in a fixed time interval [26]. Fig. 1 illustrates this aspect and suggests the possibility that the prediction results could be robust to partially incomplete sensor data. Another strong advantage is that the predictions can be in the form of a coefficient set, which is referred to as a snapshot, rather than the individual predicted positions upon which the predictions are made in the form of future positions. This aspect could be beneficial in vehicle-to-everything (V2X) communication applications with a limited bandwidth. For example, the only information transmitted via communication is a few Chebyshev coefficients rather than numerous time-sequential position values. The efficacy of this aspect increases as the number of vehicles for which the trajectory should be predicted increases.
The LSTM encoder-decoder was selected as a baseline structure because the family of recurrent neural networks (RNN) was shown to deliver good performance in the field of trajectory prediction as presented in the literature survey.
The model was trained and tested with Chebyshev coefficient snippets in the proposed framework, and the prediction results were compared with those of the conventional approach, which is trained with time-sequential physical values in a conventional manner. The main contributions of this study are summarized as follows.
• A trajectory prediction framework based on a neural network, which is robust to partially incomplete data, is proposed. The proposed framework integrates the special feature engineering functionality of the Chebyshev transform that encodes the original trajectory snippets to CCSs. The experimental results, produced with data from the public vehicle trajectory dataset inD [27], verified the robust trajectory prediction performance, even with incomplete sensor data. This achievement is crucial for the realization of cooperative driving automation even under harsh conditions such as during adverse weather.
• A novel method based on feature engineering is proposed for trajectory prediction in the form of a snapshot, rather than in the form of a series of time-sequential values. The last output sequence is configured to be the set of Chebyshev coefficients that encapsulates a predicted trajectory over the entire prediction horizon in the proposed method. In other words, the only information transmitted via the communication line is a few coefficient values instead of a large number of time-sequential values.
• The architecture of the model can be simplified because the number of sequences for the RNN can be reduced and additional processes such as sensor data imputation are not required, and these aspects might further reduce the learning time. This study focuses on robustness to the sensor degradation issue and does not consider interaction between traffic occupants or other contextual information. The remainder of this paper is organized as follows. Section II summarizes the basics of the Chebyshev transform, and Section III introduces the proposed trajectory prediction framework, which is based on the Chebyshev transform and LSTM encoder-decoder structure [12]. This section also presents a short description of the LSTM encoder-decoder structure and describes the way in which feature engineering is used to create a snapshot of the predicted trajectory for the entire prediction horizon. In Section IV, the proposed framework is verified with the inD [27] dataset by injecting sensor data faults in various patterns. Section V concludes the paper.
II. CHEBYSHEV TRANSFORM
The proposed trajectory prediction framework is based on the Chebyshev transform, from which the objective function is approximated. This approximation starts from the definition of the Chebyshev polynomial, which is defined by the following trigonometric function [25]:
$$T_v(t) = \cos\left(v \arccos(t)\right), \quad t \in [-1, 1] \tag{1}$$
where v denotes the degree of the Chebyshev polynomial, and the values of this polynomial and the domain variable t are bounded in [−1, 1]. The Chebyshev polynomial of an arbitrary degree is obtained from the recurrence relation, as follows:
$$T_0(t) = 1, \quad T_1(t) = t, \quad T_{v+1}(t) = 2t\,T_v(t) - T_{v-1}(t) \tag{2}$$
From the properties of these orthogonal polynomials, the objective function f(t) of the feature values can be approximated if the degree d is sufficiently large, as follows:
$$f(t) \approx \sum_{v=0}^{d} c_v T_v(t) \tag{3}$$
which is the truncated approximation for d ≤ N_d, where N_d is an arbitrary number of data sample points. Note that the interval for the approximated function was normalized to [−1, 1]. Upon restoration of the original physical values using the inverse transform, the extent of the interval after the inverse transform can be arbitrarily extended to the designed horizon. The Chebyshev coefficient c_v in (3) can be obtained by the following Chebyshev transform:
$$c_v = \frac{2}{N_d} \sum_{k=1}^{N_d} f(t_k)\, T_v(t_k) \tag{4}$$
where
$$t_k = \cos\!\left(\frac{\pi \left(k - \tfrac{1}{2}\right)}{N_d}\right), \quad k = 1, \dots, N_d \tag{5}$$
The transform in (4) is denoted for the set of coefficients:
$$\mathbf{c} = T\!\left(x_{1:N_d}\right) = \left[c_0^{1}, \dots, c_d^{1}, \dots, c_0^{N_f}, \dots, c_d^{N_f}\right] \tag{6}$$
where x_{1:N_d} denotes N_d data samples, and N_f denotes the number of features in x. Namely, the objective function for each feature is transformed, and the results are aggregated into one, as in (6). In this study, the degree of approximation is set to d = 4, which is known as the cubic Chebyshev transform. As the transformed coefficients in (6) form the feature data in the proposed framework, the degree of approximation determines the number of features; thus, the degree of approximation is determined by balancing the model complexity and the approximation quality. For the cubic Chebyshev transform, the approximated function for a single feature is represented as follows:
$$g(t) = c_0 + c_1 t + c_2\left(2t^2 - 1\right) + c_3\left(4t^3 - 3t\right) \tag{7}$$
where g is the approximated function of f.
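To make the transform concrete, the following Python sketch is our own illustration and not the authors' implementation: the use of NumPy's polynomial.chebyshev utilities, the least-squares fit (in place of the node-based transform in (4)), and all variable names are assumptions. It fits a four-coefficient Chebyshev approximation to a single-feature snippet and shows that the coefficients are essentially unchanged when a few samples are missing.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_coeffs(t, y, n_coeffs=4):
    """Fit a truncated Chebyshev series to one feature of a trajectory snippet.

    t : sample times of the snippet (need not be uniformly spaced)
    y : feature values (e.g., x- or y-position) at those times
    Returns the n_coeffs coefficients c_0 .. c_{n_coeffs-1}.
    """
    t = np.asarray(t, dtype=float)
    # Map the snippet's time interval onto the canonical domain [-1, 1].
    tau = 2.0 * (t - t[0]) / (t[-1] - t[0]) - 1.0
    # Least-squares fit of a degree-(n_coeffs - 1) Chebyshev series.
    return C.chebfit(tau, y, deg=n_coeffs - 1)

def chebyshev_eval(coeffs, n_points=50):
    """Evaluate the fitted series on a dense grid over [-1, 1] (inverse step)."""
    tau = np.linspace(-1.0, 1.0, n_points)
    return C.chebval(tau, coeffs)

# Example: a 0.5 s snippet sampled at 25 Hz with two samples dropped.
t = np.arange(0.0, 0.52, 0.04)            # 13 samples
y = 1.5 * t**2 + 0.3 * t                  # synthetic x-position [m]
keep = np.ones_like(t, dtype=bool)
keep[[3, 7]] = False                      # emulate dropped sensor frames
c_full = chebyshev_coeffs(t, y)
c_miss = chebyshev_coeffs(t[keep], y[keep])
print(c_full, c_miss)                     # coefficients remain nearly identical
```

Because the coefficients summarize the whole snippet rather than individual samples, dropping a few samples barely changes them, which is the uniformity property exploited by the proposed framework.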
III. TRAJECTORY PREDICTION FRAMEWORK
A. PROPOSED FRAMEWORK
The proposed framework is based on an LSTM encoder-decoder structure. Fig. 2 presents the architecture of the proposed trajectory prediction framework. The LSTM encoder-decoder, also known as seq2seq, is popular in the field of time-series prediction because it supports a flexible model structure with an arbitrary number of sequences on the side of both the input and the output. The conventional prediction model based on physical values is represented as:
$$x_{1:l}^{o} = F\!\left(x_{1:z}^{u}\right) \tag{8}$$
where x_{1:z}^{u} and x_{1:l}^{o} denote the input and predicted sequences, respectively, and x_{1:z}^{u} is from observations. In the proposed framework, the original trajectory snippet for input sequences x_{t_h:t_o}^{u} is divided into m sub-trajectory snippets, where t_h and t_o denote the history horizon and current time, respectively, and the original physical data x are composed of two features consisting of the x- and y-axis coordinates as
$$x = \left[x,\; y\right]^{\top} \tag{9}$$
That is, x_{t_h:t_o}^{u} represents the past positions of the vehicle in the interval from t_h to t_o. The sub-trajectory snippet for each feature in (9) is cubic-Chebyshev-transformed, and the resulting coefficients are aggregated into one from (6) as:
$$c = \left[c_0^{x}, c_1^{x}, c_2^{x}, c_3^{x}, c_0^{y}, c_1^{y}, c_2^{y}, c_3^{y}\right] \tag{10}$$
where c represents the new feature data and is termed the Chebyshev coefficient snippet (CCS). In (10), the first four and next four elements are from the sub-trajectory snippets for the x- and y-positions, respectively. It should be noted that the number of sequences is reduced to one for each sub-trajectory snippet. Consequently, there exists a total of m input sequences of CCS, which are denoted as c_{1:m}^{u}, and they are fed to the encoder part. In terms of the output sequences, a total of n sequences of CCS are predicted by the proposed framework and denoted as c_{1:n}^{o}. Finally, the prediction model is transformed into the proposed framework as follows:
$$c_{1:n}^{o} = F\!\left(c_{1:m}^{u}\right) \tag{11}$$
$$c_{i}^{u} = T\!\left(x_{i}^{u}\right), \quad i = 1, \dots, m \tag{12}$$
where T(·) denotes the transform for each sub-trajectory snippet x_{i}^{u}, c_{1:m}^{u} denotes the set of input CCS for equally divided time slots, and k and T denote the number of samples in the sub-time span and the sample time, respectively. The history horizon t_h can be represented as:
$$t_h = m\, k\, T \tag{13}$$
For the output sequences, the time span for a sequence is increased in multiples of the time span for a unit CCS up to the prediction horizon t_p, as depicted on the right in Fig. 3, where t_p can be represented as:
$$t_p = n\, k\, T \tag{14}$$
In Fig. 3, c_{i}^{o} denotes the predicted output sequence of the i-th index, and the output has a total of n sequences. Note that the last output sequence c_{n}^{o} covers the entire time span up to the prediction horizon; thus, it becomes the final prediction result in this configuration. This means that the prediction result exists in the form of a coefficient set, which is referred to as a snapshot, rather than a series of numerous time-sequential values. This is the optimal configuration that ensures the best prediction results and was empirically determined by conducting a substantial amount of validation using various types of configurations.
The predicted trajectory can be restored from the last sequence c_{n}^{o} in the form of predicted positions using an inverse transform, such as
$$\hat{x}_{t_o:t_p}^{o} = T^{-1}\!\left(c_{n}^{o}\right) \tag{15}$$
A great advantage of this restoration process is that the sample time for the restoration can be set regardless of the original sample time in the data. This feature can be useful for path planning on the side of the receiving vehicle. Moreover, because noise is removed by filtration during the transform, higher-order physical values, such as the velocity and acceleration, can be restored without the effect of noise.
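The snippet handling can be sketched as follows. This is our own illustration rather than the authors' code; the function names, the 0.5 s slot length, and the 25 Hz sample rate used in the comments are assumptions chosen for illustration. The sketch slices an observed history into m sub-snippets, transforms each into an 8-element CCS as in (10), and restores a predicted trajectory from a single output CCS at an arbitrary sample time, as allowed by (15).

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def snippet_to_ccs(t, x, y, n_coeffs=4):
    """Transform one sub-trajectory snippet into an 8-element CCS
    (four coefficients for the x-position, four for the y-position)."""
    tau = 2.0 * (t - t[0]) / (t[-1] - t[0]) - 1.0
    cx = C.chebfit(tau, x, deg=n_coeffs - 1)
    cy = C.chebfit(tau, y, deg=n_coeffs - 1)
    return np.concatenate([cx, cy])

def history_to_ccs_sequences(t, x, y, slot_len=0.5):
    """Split the observed history into equal time slots and return m CCSs."""
    ccs = []
    t0, t_end = t[0], t[-1]
    n_slots = int(round((t_end - t0) / slot_len))
    for i in range(n_slots):
        sel = (t >= t0 + i * slot_len) & (t <= t0 + (i + 1) * slot_len)
        ccs.append(snippet_to_ccs(t[sel], x[sel], y[sel]))
    return np.stack(ccs)            # shape (m, 8), fed to the LSTM encoder

def ccs_to_positions(ccs, horizon=5.0, dt=0.1):
    """Restore predicted positions from an output CCS; the restoration
    sample time dt can be chosen freely, independent of the sensor rate."""
    tau = np.linspace(-1.0, 1.0, int(round(horizon / dt)) + 1)
    x_hat = C.chebval(tau, ccs[:4])
    y_hat = C.chebval(tau, ccs[4:])
    return x_hat, y_hat
```

The restoration helper illustrates why the receiving side can resample the predicted trajectory at whatever rate suits its planner: the prediction is carried entirely by the eight coefficients.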
IV. EXPERIMENT
A. DATASET
The proposed framework was verified using the inD [27] dataset, which is a large-scale dataset of naturalistic vehicle trajectories at urban intersections. The data were collected at 25 Hz using camera-equipped drones and a typical position error of less than 10 cm was guaranteed. The experiment was conducted with data collected in Neuköllner Strasse, Aachen, Germany. In particular, 143 vehicle trajectories for left turns (depicted in Fig. 4) were utilized in our experiment because they feature highly dynamic motions, which enabled the verification to be conducted under severe conditions. As shown in Fig. 4, the vehicle trajectories for the experiment are widely distributed, which increases the uncertainty in the prediction problems. In this configuration, a total of 23,419 trajectory snippet pairs for the input and target sequences were generated, and they were transformed into CCSs according to the discussion in Section III. The data were randomly split into learning and test sets in an 80:20 ratio.
B. EXPERIMENTAL SET-UP
The experiment was set up on the PyTorch platform using the hyperparameters specified in Table 2 for our model. An NVIDIA GeForce RTX 2060, 16 GB RAM, and an Intel Core i7-8750 CPU @2.21 GHz were used in the experiments. The mean squared error (MSE) was used as the loss function; this metric evaluates the average squared Euclidean distance between the target coefficient vector, as in (10), and the predicted coefficient vector as
$$L = \frac{1}{N_b} \sum_{j=1}^{N_b} \sum_{i=1}^{n} w_i \sum_{k} \left[\left(e_{ijk}^{x}\right)^2 + \left(e_{ijk}^{y}\right)^2\right] \tag{16}$$
where e_{ijk}^{x} = c_{ijk}^{x} − ĉ_{ijk}^{x} and e_{ijk}^{y} = c_{ijk}^{y} − ĉ_{ijk}^{y} denote the prediction errors for each element in (9). N_b in (16) denotes the number of data points in a batch. As denoted in (16), the loss function is customized by the weights w_i to focus learning on specific output sequences. This is because the final prediction results are obtained from the final output sequence. In this regard, the weight w_i was assigned a value of 1 for sequences that correspond to the time horizon from 3 to 5 s and set to 0 for short-term sequences. The Adam optimizer was used with a weight decay of 0.00001.
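A minimal PyTorch sketch of a loss of the form (16) is given below. The tensor shapes, the number of output sequences, and the exact weight assignment are our assumptions for illustration, not the authors' published code.

```python
import torch

def weighted_ccs_mse(pred, target, weights):
    """Sequence-weighted MSE over predicted CCS sequences.

    pred, target : tensors of shape (batch, n_seq, 8)  # 8 = 4 x-coeffs + 4 y-coeffs
    weights      : tensor of shape (n_seq,); 0 for short-term sequences,
                   1 for sequences covering the 3-5 s horizon
    """
    err = (pred - target) ** 2            # elementwise squared coefficient error
    per_seq = err.sum(dim=-1)             # squared Euclidean distance per sequence
    weighted = per_seq * weights          # mask out short-term sequences
    return weighted.sum(dim=-1).mean()    # average over the batch

# Example weights, assuming n = 10 output sequences with a 0.5 s unit CCS,
# so that sequences 6-10 end at 3.0-5.0 s of the prediction horizon.
n_seq = 10
weights = torch.zeros(n_seq)
weights[5:] = 1.0
loss_fn = lambda p, t: weighted_ccs_mse(p, t, weights)
```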
The objective of this study is to construct a trajectory prediction framework that is robust to sensor-degradation problems. One representative and critical type of incomplete sensor data is missing data, for which various factors, including adverse weather and hardware failures, can be responsible. Thus, incomplete sensor data were emulated by periodically injecting missing data: the original physical input data were zero-padded at a frequency of 1/2.5/5 Hz, and the number of consecutive missing samples was set to 1/2/3, resulting in a total of 9 patterns of missing data (a minimal sketch of this fault-injection scheme is given at the end of this subsection). For a comparative analysis between the proposed and conventional approaches and between various CCS configurations in the proposed approach, the following alternatives were examined:
• Baseline: The LSTM encoder-decoder model without any feature engineering. The input and output feature data consist of time-sequential position values in a conventional approach.
• M2M1: The LSTM encoder-decoder model with multiple input and output CCS sequences, as shown in Fig. 3. The time span for input unit CCS l s in Fig. 3 was set to 0.5 s. The time span for the output sequences is increased in multiples of l s to the prediction horizon t p in Table 2, as described in the previous section.
• M2M2: Same as the M2M1 model, except that the time span for unit CCS l s was set to 1.0 s, which was longer than that of M2M1.
• M2O: The LSTM encoder-decoder model with multiple input CCS sequences and only one output CCS sequence. The l s for the input sequence is set to 0.5 s, and only the output sequence covers the entire time span, up to the prediction horizon t p in Table 2. Fig. 5(a) shows the M2O configuration.
• O2O: The LSTM encoder-decoder model with one input and output CCS sequence. For this model, the input sequence from the observations covers the entire time span down to the historical horizon t h in Table 2, and the output sequence covers the entire time span up to the prediction horizon t p in Table 2. Fig. 5(b) shows the O2O configuration.
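As referenced above, the following numpy sketch shows one way the nine periodic missing-data patterns could be emulated by zero-padding the physical input; the function name and array layout are assumptions.

```python
import numpy as np

FS = 25  # original sampling rate (Hz)

def inject_missing(signal, gap_hz, n_consecutive):
    """Zero-pad the physical input periodically to emulate dropped sensor data.

    signal        : (N, F) array of time-sequential physical values.
    gap_hz        : how often a gap starts (1, 2.5 or 5 Hz in the experiment).
    n_consecutive : number of consecutive samples dropped per gap (1, 2 or 3).
    """
    out = signal.copy()
    period = int(round(FS / gap_hz))
    for start in range(0, len(out), period):
        out[start:start + n_consecutive] = 0.0
    return out

# The 3 x 3 = 9 missing-data patterns evaluated in the experiment.
patterns = [(hz, k) for hz in (1.0, 2.5, 5.0) for k in (1, 2, 3)]
```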
A comparative study shows that the proposed framework with Chebyshev transform-based feature engineering renders the trajectory predictions more robust than the conventional approach does.
Moreover, a comparative study between the various CCS configurations identifies the most suitable configuration for trajectory prediction with the proposed framework. For each CCS configuration of the models presented in Fig. 3 and Fig. 5, the input CCS sequences are fed to the proposed framework, as shown in Fig. 2, which outputs the predicted CCS sequences according to the configurations described above.
C. PREDICTION RESULTS-CHEBYSHEV COEFFICIENTS
The original trajectory snippets were transformed into CCSs, which were subsequently used to train the model. Thus, the prediction performance was first verified in the form of the Chebyshev coefficients. The proposed framework was trained to optimality with respect to the losses on the learning and test data, but the prediction accuracy was calculated from a specific instance of the trajectory during the left turn. This instance was set to t_l + 2 s, where t_l indicates the time at which the vehicles crossed the imaginary start line L_s. This configuration generates verification data for the most dynamic instances during the turns. Recall that the final prediction results are obtained from the last output CCS sequence. The prediction accuracy was measured as the average of the Euclidean distances between the target coefficient vector, as in (10), and the predicted coefficient vector from the last output CCS sequence, as in (17), where N_v denotes the number of verification data points and the asterisk indicates that the value is from the last output sequence. Table 3 presents the experimental results of the trajectory prediction for the M2M1 and M2M2 models in the form of Chebyshev coefficients; the values were calculated from the metric in (17). As shown in Table 3, the lowest error was recorded for the M2M1 model with the shorter CCS time span, which suggests that a higher number of input sequences yields more accurate results. However, the M2M2 model, with the longer CCS time span, produced superior prediction results when the input data were periodically missing. Moreover, the results for the M2M1 model deteriorated when data were missing more frequently, whereas the results for M2M2 were robust to the frequency of missing data. This could be attributed to the longer CCS time span, because the results become insensitive to missing data owing to the uniform nature of the Chebyshev transform.
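A small numpy sketch of the coefficient-space metric described around (17) is given below, assuming the predicted and target coefficient vectors are stacked row-wise; the names and shapes are illustrative.

```python
import numpy as np

def coeff_error(pred_last, target):
    """Average Euclidean distance between predicted and target coefficient
    vectors over the verification set, in the spirit of (17).

    pred_last, target : (N_v, n_coeff) arrays, where pred_last holds the
                        coefficients from the last output CCS sequence.
    """
    return np.linalg.norm(pred_last - target, axis=1).mean()
```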
D. PREDICTION RESULTS-POSITIONS
The last output CCS sequence was inverse-transformed by (15) for all verification data, and the accuracy was calculated using the mean absolute error (MAE) as in (18), where x*_ij and y*_ij denote the predicted x- and y-positions inverse-transformed from the last output CCS sequence, respectively, and N_p denotes the number of predicted position sequences, where N_p = t_p/T. The error for each individually predicted point was calculated as the Euclidean distance between the two points, using (18). Table 4 presents the comparative experimental results of the trajectory prediction in the form of the inverse-transformed positions. To verify the efficacy of the proposed framework, the results of the baseline model in the conventional approach are also presented in Table 4. The input data of the conventional approach feature the x- and y-position, velocity, and heading, whereas the target data feature only the x- and y-position. The conventional approach produced a total of 76 and 125 sequences for the time horizons in Table 2, and the other hyperparameters were assigned the same values as in the proposed method. Note that the number of sequences was larger than that of the proposed framework; thus, the training time was much longer in the case of the conventional approach, as will be discussed below.
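Analogously, the position-space MAE described around (18) could be computed as in the following sketch, assuming the predicted and reference positions have already been inverse-transformed; the names and shapes are illustrative.

```python
import numpy as np

def position_mae(pred_xy, target_xy):
    """MAE over inverse-transformed positions, in the spirit of (18).

    pred_xy, target_xy : (N_v, N_p, 2) arrays of predicted / reference positions,
                         with N_p = t_p / T restored points per sample.
    """
    point_err = np.linalg.norm(pred_xy - target_xy, axis=-1)  # Euclidean distance per point
    return point_err.mean()
```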
In the case of the proposed framework, the results were generally consistent with those in Table 3. The prediction accuracy was higher in the case of the shorter CCS time span (M2M1), but the results for the longer CCS time span (M2M2) were more robust to the various patterns of missing data. Fig. 6 presents the trajectory prediction results from three samples with the M2M1 model. As shown in the figure, the prediction results became more stable as the sequence progressed. This aspect is one of the advantages of the proposed framework because the last predicted sequence in Fig. 6(c), (f), and (i) are the final prediction results for the prediction horizon. This appears to be a remedy for the chronic problem of LSTM, namely the inconsistency in the early sequences, as reported in [28].
The differences in prediction accuracy between the models were not significant when no values were missing. However, in the case of the conventional approach, the prediction accuracy dropped dramatically even for the pattern in which data went missing least frequently. The experimental results in Table 4 verify that the proposed framework is markedly more robust to incomplete sensor data.
Moreover, the number of variables in the final prediction results for the proposed framework was only 2d = 8, whereas that for the conventional approach was 2 × 125 = 250. As mentioned previously, this aspect is likely to be beneficial for V2X-based applications in cooperative driving automation.
Furthermore, Table 5 presents the trajectory prediction results for the other CCS configurations. All configurations listed in the table were robust to the various missing-data patterns, but the best results were obtained with the M2M model depicted in Fig. 3. These results indicate that the RNN structure is highly appropriate for trajectory prediction problems. Although not listed in Table 5, numerous configurations were tested to determine the optimal configuration for the proposed framework.
E. TRAINING COST
Under the experimental set-up of this study, the training costs of the M2M1 and M2M2 models were 1 h 18 m 40 s (574 epochs) and 1 h 4 m 6 s (626 epochs), respectively. These results compare favourably with those of the baseline, which required 3 h 5 m 10 s for 705 epochs, and the saving is attributed to the significant reduction in the number of sequences owing to the proposed Chebyshev transform-based feature engineering. For example, the sequence lengths were merely 6 and 10 for the input and output, respectively, in the M2M1 configuration, whereas they were 76 and 125 for the baseline.
V. CONCLUSION
Cooperative driving automation for a higher level of autonomy requires situation awareness to be robust to harsh environments, including adverse weather, because roadside sensors are exposed to the external environment. This study proposed a robust trajectory prediction framework based on a recurrent neural network. The robustness of the framework was proved by analyzing experimental data from a public vehicle trajectory dataset. Moreover, the proposed framework establishes the possibility of efficient communication between the connected vehicles and roadside units. This efficiency was attributed to the fact that the proposed framework requires only a few coefficients, rather than numerous time-sequential physical values, to be transmitted. This aspect is expected to play an important role in the realization of cooperative driving automation.
The present study has a limitation in that the feature data only include the trajectories of the ego vehicle for simplicity. Moreover, the implementation of fault injection was limited to the representative form of the missing data. Because the future trajectory can also be affected by the behavior of surrounding vehicles, in the future, we will include a discussion on the effect of various forms of incompleteness in the information about ego and surrounding vehicles in trajectory prediction, as well as on the relevant countermeasures. Notably, the sophisticated feature engineering proposed in this study can be integrated with other models. | 2022-12-16T16:17:30.245Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "5dbc43af9c61ce5b5db2336ef55f4afd43aa5be9",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09984668.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "2779de9e10cb9c2b6d8a7406c63e831b0bad5f59",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
44075497 | pes2o/s2orc | v3-fos-license | Sequential Attacks on Agents for Long-Term Adversarial Goals
Reinforcement learning (RL) has advanced greatly in the past few years with the employment of effective deep neural networks (DNNs) as policy networks. With this great effectiveness, however, came serious vulnerability issues: small adversarial perturbations on the input can change the output of the network. Several works have pointed out that learned agents with a DNN policy network can be manipulated away from achieving the original task through a sequence of small perturbations on the input states. In this paper, we demonstrate furthermore that it is also possible to impose an arbitrary adversarial reward on the victim policy network through a sequence of attacks. Our method involves the latest adversarial attack technique, the Adversarial Transformer Network (ATN), which learns to generate the attack and is easy to integrate into the policy network. As a result of our attack, the victim agent is misguided to optimise for the adversarial reward over time. Our results expose serious security threats for RL applications in safety-critical systems including drones, medical analysis, and self-driving cars.
INTRODUCTION
Recent years have seen great advances in reinforcement learning (RL). Owing to successful applications of deep neural networks as policy networks [23], RL has surpassed human-level performance in Atari games [23] and in the game of Go [33]. It is also finding its way into security-critical applications like self-driving cars [31].
While being effective, deep neural networks are known to be vulnerable to adversarial examples, small perturbations on the input that make the network confidently predict wrong outputs [34]. Huang et al. [10] have shown that deep policy networks are no exceptions. A sequence of small perturbations on the environment (Atari game pixels) result in the agent performing significantly worse on the given task. Many follow-up works have investigated further vulnerabilities of deep policy networks [15,19] and proposed strategies to make them more robust [2,20,22,28,30].
In this work, we show that a sequence of small attacks on the deep policy network can not only make the agent underperform on the original task, but also manoeuvre it to pursue an adversarial goal. For example, given a self-driving vehicle trained to transport goods from a seaport to a sorting centre, an adversary applies a sequence of perturbations on the vehicle's sensor to deliver the goods to the adversary's property, without altering the vehicle's policy network. Such an attack would be more appealing to the adversary than simply making the agent fail. The attack we consider is therefore realistic and relevant.
We build our threat model as a perturbation network which, together with the victim's policy network, becomes a policy network that pursues the adversarial goal. Specifically, assume that agent a follows the policy network f to maximise the original reward r_O in the long term. Our threat model is represented as a feed-forward adversarial transformer network (ATN) [1] g : X → X. g produces small perturbations over input sequences such that the agent's policy network over the perturbed inputs f(x + g(x)) pursues the adversarial reward r_A ≠ r_O. To keep the perturbations small, we project the adversarial perturbation g(x) onto the ℓ2 ball of radius p.
We train and evaluate our threat model over agents trained for the Pong Atari game. Our adversaries successfully make the agents pursue the new, adversarial goal (hitting the centre 1/5 region in the enemy's score line) through a sequence of quasi-perceptible perturbations over the input pixels. This paper contributes the following: (1) a threat model that generates a sequence of perturbations that manoeuvre a policy network to pursue an adversarial reward at test time and (2) empirical evidence that the suggested threat model successfully achieves the adversarial goal. Our work exposes crucial yet previously unseen security risks of real-life deployment of RL based agents.
RELATED WORK
We describe prior work in three relevant areas: (1) reinforcement learning, (2) machine learning vulnerability, and (3) vulnerability of deep policy networks. Our work will be discussed in the context of those prior literature.
Reinforcement Learning
Reinforcement learning (RL) enables agents to learn by interacting with the environment to achieve long-term rewards. The advantage of not requiring per-action supervision has attracted much research in application areas that involve long-term and complex action-reward structures: board games [35], the inverted pendulum [4], and robotics [29]. The development of highly performant deep neural networks (DNNs) [16] has trickled down to effective deep policy networks (Deep Q-Networks, DQNs) for RL. Mnih et al. [23] were the first to apply DQNs to learn to play 43 diverse Atari games, and demonstrated super-human performance. The seminal work by Silver et al. [33] showcased the ability of a DQN-based agent for playing the game of Go to overwhelm human experts.
Deep reinforcement learning is an active area of research. Many improvements to the algorithms have been proposed in the last years. Double Q-learning [36] and dueling networks [37] were significant steps forward for the usability of DQNs. Among the most widely used improvements is also prioritized replay [32], which we employ in our method. As a public contribution, OpenAI has open-sourced the Atari game environments (OpenAI Gym, [5]). Our experiments on the game of Pong are built on the baseline implementation of DQN by OpenAI [6]. We also use the pre-trained Pong agents from [6].
Attacking Machine Learning Models
While the fragility of learned models has been studied for a long time [13,18,21], it has received more attention in recent years after deep neural networks were found to be vulnerable to human-imperceptible adversarial perturbations [34].
Victim Models.
Most frequently used victim models for adversarial attack research are classification models: given an input, predict the corresponding class [9,24,25,27]. Other works have verified the vulnerability of models for generative [14], and detection and segmentation [38] tasks.
Huang et al. [10] first showed that deep neural networks are vulnerable for reinforcement learning tasks. Our work also studies the model vulnerability in the RL setup. We will compare our work against [10] and follow-up works in §2.3.
Targeted Versus Non-Targeted Attacks.
Researchers have considered two types of adversarial attacks against models: ones inducing any change in prediction (non-targeted) and ones inducing a specific prediction (targeted) [7,9]. This work considers an analogue of targeted attack in the reinforcement learning setup. Our attacks can not only make an agent fail, but also make it actively pursue an adversarial goal.
Attack Algorithms.
Since the first discovery of imperceptible adversarial examples [34], researchers have developed more effective, more efficient, and more resilient adversarial perturbation algorithms [9,17,24,25,27]. In particular, Baluja et al. [1] have proposed the Adversarial Transformation Networks (ATN). Unlike prior works that generate perturbations by computing gradients ∇_x f(x) from the target network f, an ATN is a learned function g that transforms the input x into an adversarial perturbation g(x) such that the victim network f is fooled when x + g(x) is given.
Attacking Agents
Huang et al. [10] have shown for the first time that deep policy networks are also susceptible to adversarial perturbations; small perturbations that would not interfere with human performance have significantly reduced the test time reward for the agents in various Atari game environments. They have further verified that the perturbations transfer across agents pursuing the same task. Independently, Kos et al. [15] have also proposed adversarial attacks that reduce the test time rewards. Unlike [10] that attacks the agent on every frame, they have considered timing attacks where attacks are performed intermittently. Many follow-up works have expanded the research frontier in different directions.
Researchers have considered injecting adversarial perturbations on the environments during training to learn policy networks that are more robust at test time. Pinto et al. [30] and Pattanaik et al. [28] have suggested a minimax training objective for the agent, where an add-on adversary continually injects reward-minimising changes on the environment. As a result of this training, they have reported better generalisation and robustness against adversarial attacks at test time.
In contrast to this line of work where the adversary only strives to make the agent fail on the original task, a few studies have considered driving the victim towards a certain state or goal. Lin et al. [19] have proposed the enchanting attack in which the adversary sequentially perturbs the input states (frames) s t for time steps t = 0, · · · , H −1 to guide the agent towards a predefined adversarial state s A at time t = H . While sharing similarities, our adversary imposes an adversarial reward r A on the victim, instead of an adversarial state s A ; the former is more flexible and can encode the latter.
Behzadan et al. [2] have proposed the Policy Induction Attacks. In this attack, the adversary first trains a policy network (DQN) with an adversarial reward r A . Using the trained policy, it crafts a sequence of targeted adversarial perturbations that lead the victim's DQN to a sequence of actions leading to r A . While related, their attacks are applied during training to make the agent learn the adversarial reward. Our attacks, on the other hand, are applied at test time and do not explicitly model a secondary DQN for planning actions; we train a feed-forward state perturbation module that is added on the input stream of the victim DQN. We adapt their method as a baseline to our setting.
BACKGROUND
In this section, we provide background on the reinforcement learning (RL) setup and techniques, and adversarial attacks in general.
Reinforcement Learning
The RL agent is assumed to be interacting with the environment through a Markov Decision Process (MDP) [3], specified by the 5-tuple (S, A, T, R, γ), where S and A are the state and action spaces, T is the transition model, R is the reward for the agent, and 0 < γ < 1 is the discount factor. An MDP is a stochastic process for t ≥ 0 that depends on the agent's action sequence a_0, a_1, · · ·. Specifically, given a state s_t ∈ S and an action a_t ∈ A taken by the agent, the next state is determined stochastically by the transition model T(s_t, a_t), and the reward is given by R(s_t, a_t). The goal of RL is to find the optimal policy π : S → A that maximises the discounted reward Σ_{t≥0} γ^t R(s_t, π(s_t)). (1)
One of the most promising approaches to this problem is the Q-learning paradigm, also referred to as backward induction [3]. We define an auxiliary Q-function Q : S × A → R that returns the discounted future reward attained, given a state-action pair at time t, when the optimal policy is followed afterwards. Once we have access to the Q-function, we can obtain the optimal policy by computing π(s) = argmax_{a∈A} Q(s, a).
In Q-learning, Q is initialised randomly and then approximated by sequential Bellman updates [3]: Q(s_t, a_t) ← R(s_t, a_t) + γ max_{a'} Q(T(s_t, a_t), a'). (2) A Deep Q-Learning Network (DQN) models Q as a deep neural network f that takes the state s_t as input and returns a vector of scores over the actions a_t as output [23]. The training objective is given by L(ϕ) = E[(y_t − Q_ϕ(s_t, a_t))^2], (3) where y_t = R(s_t, a_t) + γ max_{a'} Q_{ϕ'}(T(s_t, a_t), a') is the target Q value computed separately via a target DQN parametrized by ϕ'. Periodically, ϕ' is set to ϕ and then kept fixed again. This improves training stability. During training, exploration of the state space yields observation tuples (s_t, a_t, T(s_t, a_t), R(s_t, a_t)). These are stored in a replay buffer and later used to approximate the training objective. Prioritized replay [32] implements this replay buffer as a priority queue, with priorities set to the temporal difference error. Applying deeply learned policy networks has led to many breakthroughs in performance for RL. In this paper, we consider attacking DQNs by imposing an adversarial policy through sequential, small perturbations on the states s_t at test time.
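A compact PyTorch sketch of the TD-learning step described above, with a target network and a priority signal for prioritized replay, is given below; it illustrates the standard DQN recipe rather than the authors' exact implementation, and the tensor layout of the replay batch is an assumption.

```python
import torch
import torch.nn.functional as F

def dqn_td_step(q_net, target_net, batch, gamma=0.99):
    """One TD-learning step for a DQN with a periodically synced target network.

    batch: tensors (s, a, s_next, r, done); a is int64, done is a 0/1 float mask.
    Returns the loss in the spirit of (3) and |TD error| for use as replay priorities.
    """
    s, a, s_next, r, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)      # Q_phi(s_t, a_t)
    with torch.no_grad():                                      # target DQN phi' is fixed
        q_next = target_net(s_next).max(dim=1).values
        y = r + gamma * (1.0 - done) * q_next                  # Bellman target y_t
    loss = F.mse_loss(q_sa, y)
    priority = (y - q_sa).abs().detach()                       # temporal-difference error
    return loss, priority
```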
Adversarial Attacks
While deep neural networks have enjoyed super-human performances in various tasks, including reinforcement learning, they have been found to be susceptible to small (in the range between imperceptible to semantics-unchanging) adversarial perturbations on the input [34].
Given a learned model f : X → Y (e.g. a classifier) and an input x, we say that an additive input perturbation δ is an adversarial perturbation of x for f if δ is small (e.g. ||δ || 2 < ϵ for some ϵ > 0) and the new output f (x + δ ) is significantly different from the original f (x) (e.g. a different class prediction). Omnipresence of such examples throughout the input space against most existing neural network architectures has spurred discussions over the safety of neural network applications in security-critical tasks, such as self-driving cars.
For generating adversarial perturbations, researchers have mostly considered using diverse variants of the gradient over the input [9,24,25,27]. The simplest of them is the fast gradient sign method (FGSM, [9]), which computes the negative signed input gradient for the prediction of class y, the argmax prediction by f. While being simple and effective, this requires an expensive gradient computation for every input x and is hard to integrate into other learning models. Baluja et al. [1] have proposed the Adversarial Transformer Network (ATN), which, instead of relying on gradient computations, obtains the perturbations through a learned feed-forward network g(x). The network is learned via stochastic gradient descent over multiple training images x ∼ D, with an objective that keeps the perturbation small while steering the victim's output towards the adversarial target.
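One common reading of the FGSM quantity described above is sketched below in PyTorch: the signed gradient of the model's own argmax-class score with respect to the input, negated and scaled by ϵ. The function name and the choice of the class score as the differentiated quantity are assumptions.

```python
import torch

def fgsm_perturbation(model, x, eps):
    """Negative signed input gradient for the model's argmax prediction,
    scaled by eps (one reading of the FGSM quantity described above)."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    y = logits.argmax(dim=1)                          # argmax prediction by f
    score = logits.gather(1, y.unsqueeze(1)).sum()    # score of the predicted class
    score.backward()
    return -eps * x.grad.sign()                       # step that pushes the score of y down
```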
In our work, we use the ATN as the perturbation generator against the DQN: Q(x + g_θ(x)). We explain the method for training g_θ to impose the adversarial reward on Q in the next section.
THREAT MODEL
We consider an adversary whose goal is to make a trained victim agent, which interacts with an environment for the original reward r O, maximise an arbitrary adversarial reward r A through a sequence of state perturbations. An overview of our approach is in Figure 1.
In this section, we describe in detail how the perturbations are computed to guide the agent towards the adversarial reward r A , and then discuss key assumptions for our threat model.
Attack Algorithm
See the right half of Figure 1 for an overview of our attack paradigm. Given a fixed victim policy network Q_ϕ trained for the original reward r_O, we attach the Adversarial Transformer Network (ATN) [1], a feedforward deep neural network g_θ : X → X which computes the perturbation to be added to the input of the victim DQN Q_ϕ. The aim of the adversary is to learn θ such that the perturbed states lead the victim to follow an arbitrary adversarial reward r_A. We approach the training of θ by regarding the combination of DQN and ATN, Q_ϕ(x + g_θ(x)), as another DQN to be trained for the adversarial reward r_A. In this process, we fix the parameters learned for the victim Q_ϕ and only learn θ. Specifically, we solve Equation 3, where Q_ϕ is now the mapping x → Q_ϕ(x + g_θ(x)) and the trained parameters are θ (the victim DQN parameters ϕ are fixed). Using the generalisability of g_θ to unseen states x, the adversary then only needs to feed the input state through g_θ and then through the victim DQN to achieve the desired outcome.
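The composition described above, a frozen victim DQN wrapped around a trainable ATN, could look like the following PyTorch sketch; the module and function names are illustrative, and the training loop itself (replay buffer, target network) is omitted.

```python
import torch

def build_adversarial_policy(victim_dqn, atn):
    """Compose the frozen victim DQN with the trainable ATN so that
    x -> Q_phi(x + g_theta(x)) can be trained like an ordinary DQN for r_A."""
    for p in victim_dqn.parameters():      # phi stays fixed
        p.requires_grad_(False)

    def q_adv(x):
        return victim_dqn(x + atn(x))      # only theta receives gradients
    return q_adv

# Training then reuses the usual DQN machinery, but the optimiser is given
# only the ATN parameters, e.g.:
# optimiser = torch.optim.Adam(atn.parameters(), lr=1e-4)
```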
The detailed architecture is shown in Figure 2. Note that the victim DQN architecture is the same as in Mnih et al. [23]. To enforce the norm constraint in Equation 6, we insert a norm-clipping layer (ClipNorm) which rescales the perturbation onto the ℓ2 ball of radius p whenever its norm exceeds p, i.e., δ ← δ · min(1, p/||δ||_2). We parametrize p using ϵ such that p = 84 · 84 · 4 · ϵ.
We also enforce the valid range of the input by clipping the perturbed state element-wise.
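A possible implementation of the ClipNorm layer described above is sketched below; it simply projects the perturbation onto the L2 ball of radius p, and the class name and broadcasting details are assumptions.

```python
import torch
import torch.nn as nn

class ClipNorm(nn.Module):
    """Project a perturbation onto the L2 ball of radius p (a sketch of the
    norm-clipping layer described above)."""

    def __init__(self, p):
        super().__init__()
        self.p = p

    def forward(self, delta):
        flat = delta.flatten(start_dim=1)
        norm = flat.norm(p=2, dim=1, keepdim=True).clamp_min(1e-12)
        scale = torch.clamp(self.p / norm, max=1.0)   # shrink only if ||delta||_2 > p
        return delta * scale.view(-1, *([1] * (delta.dim() - 1)))
```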
Assumptions
We consider an adversary which can manoeuvre the long-term behaviour and goal for a victim agent only through a sequence of small input perturbations rather than through direct manipulation of the victim's policy network.
We explicitly spell out the assumptions we make for the described algorithm and our experimental evaluations. While some are restrictive, others may be easily relaxed.
1. White-box access at training time. For training the ATN, the adversary requires gradient access to the victim policy network. While this is restrictive, there exists much ongoing work on the transferability of adversarial examples. Measuring the efficacy of those techniques for our adversary will be an interesting future work.
2. Manipulation on the input stream. We assume that the adversary can manipulate the input stream (state observations) for the victim DQN. This can be achieved e.g. by hacking into sensors [11] or by making physical changes to the environment [17].
3. Computational resources to train the ATN. Training the ATN can be computationally prohibitive for many. However, even one entity with the intention and ability to train such an ATN can be a grave threat in security-critical applications.
4. Environment for training the victim. For training the ATN, we have trained the combined ATN+DQN module on the same environment that trained the original DQN. This assumption may be relaxed in the future by experimenting with different victim-attacker environments (albeit with the same task).
5. Fixed victim DQN. If the victim DQN is updated, then the adversary needs to re-train the ATN. However, in practice, such an update does not occur continually.
Sequential Attacks on Agents for Long-Term Adversarial Goals , ,
EXPERIMENTS
We evaluate the threat model discussed in the previous section against victim agents trained to play the game of Pong [5]. We will first describe the game along with the original reward r O and define an adversarial reward r A ( §5.1). We will discuss implementation details and our evaluation metric in §5.2, and present results and analysis in §5.3.
The Game of Pong
In our experiments we focus on the game of Pong [5], a classic environment for deep reinforcement learning. It is a great environment for our purposes, since victim agents can be trained to achieve optimal play, a property not enjoyed in more complicated environments. We verify that our attack works even for high-performance victim agents.
We use the Pong simulation from the OpenAI Gym [5]. In this game, two players are positioned on opposite sides of the screen. They can only move up and down. Similar to tennis, a single ball is passed between the two players. The goal of the game is to play the ball such that the other player is unable to catch it. In single-player mode, the opponent uses simple heuristics to play. The original reward r O is defined as +1 if the ball leaves the frame on the opposing side, −1 if the ball leaves the frame on the agent's side, and 0 otherwise.
A game state is represented as a 210 × 160 colour image. Before passing states to the DQN, we apply the same pre-processing as [23]. (1) Merge four consecutive frames using a pixel-wise max operation on each channel. (2) Convert the resulting image to grey-scale and down-sample it to 84 × 84. (3) To enable the DQN to utilise temporal dependencies, put the four most recent processed images into a queue and use them as the input to the DQN. (4) To get the next input, append a new processed frame to the queue and remove the oldest frame. During training, about thirty processed frames per second are appended to the queue. To make this pre-processing consistent with the environment, the same action is repeated four times.
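The pre-processing pipeline described above could be sketched as follows; the use of OpenCV and the class structure are assumptions, and the sketch mirrors the four steps only approximately.

```python
import collections
import cv2
import numpy as np

class PongPreprocessor:
    """Pre-processing in the spirit of [23]: pixel-wise max over consecutive
    frames, grey-scale conversion, 84x84 down-sampling, and a queue of the 4
    most recent processed images as the DQN input."""

    def __init__(self):
        self.raw = collections.deque(maxlen=4)       # consecutive raw frames
        self.stack = collections.deque(maxlen=4)     # processed images for the DQN

    def push(self, frame):                           # frame: 210 x 160 x 3 uint8
        self.raw.append(frame)
        merged = np.max(np.stack(self.raw), axis=0)  # pixel-wise max per channel
        grey = cv2.cvtColor(merged, cv2.COLOR_RGB2GRAY)
        small = cv2.resize(grey, (84, 84), interpolation=cv2.INTER_AREA)
        self.stack.append(small)
        if len(self.stack) < 4:                      # warm up until 4 images exist
            return None
        return np.stack(self.stack, axis=0)          # 4 x 84 x 84 DQN input
```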
The action space consists of six actions available on the Atari controller (joystick): four directions, one button, and the "no action" case.
Victim Agents.
As victim agents, we use three off-the-shelf trained agents from the OpenAI baselines [6] (OAI1, OAI2, OAI3), as well as five agents trained by us independently (OR1, OR2, OR3, OR4, OR5). They are all trained for the original reward r O of winning the game.
Adversarial Reward.
The adversary intends to impose another reward r A on the trained agents. In our work, we consider the centre reward, r C . For any given time step t, the centre reward is defined as 1 if the ball hits the centre 20% of the enemy line, and 0 otherwise.
See Figure 3 for an illustration of the rewards.
FGSM Baseline.
As a baseline, we adapt the work of Behzadan et al. [2] to our setting. An FGSM adversary is inserted in between the victim agent and its input. Since we assume to have white-box access at training time of the ATN, we grant the FGSM adversary white-box access to the victim agent. Instead of an ATN, the FGSM adversary consists of two components: a policy DQN and a perturbation generation module. The policy DQN determines the desired action of the FGSM adversary. We then generate a perturbation using FGSM on the victim agent, where the desired action of the policy DQN defines the one-hot target distribution. This perturbation is then added to the input, clipped element-wise, and given to the victim agent network.
Implementation Details
We describe the training procedure for the victim DQN agents as well as the adversary's ATN.
Training Victim Agents.
Here, we describe the training details for the victim agents mentioned above (OAI1-OAI3 and OR1-OR5). There are two differences in the training of the OAI agents and the OR agents. The OAI agents and the OR agents were trained on 200 million frames and 80 million frames, respectively. 1 We use a different schedule for the fraction of random actions: during training of the OR agents, we linearly anneal from 1.0 to 0.1 for 8 million frames, and then keep it constant at 0.1 afterwards. For the OAI agents, the rate is linearly annealed from 1.0 to 0.1 for 4 million frames, and then linearly annealed to 0.01 for another 36 million frames, where it is then kept constant.
The DQN replay buffer size has been set to 100k observation tuples. We use prioritized replay [32]. We use Adam with a learning rate of 1 × 10 −4 and γ = 0.99 [12], with the batch size 32. Every agent uses a different random seed to initialize the network parameters and the environment.
Training Adversary.
Given eight victim agents, we train eight corresponding Adversarial Transformer Networks (ATNs) for the centre reward r C . The training procedure for the composite ATN+DQN network is identical to the training of victims, except for different random seeds and a different schedule for the fraction of random actions taken (linearly anneal from 1.0 to 0.1 for 40 million frames, then keep it constant). We control the amount of perturbation via the norm clipping layer (Equation 7). Unless otherwise denoted, we use ϵ = 10 −4 .
To measure the oracle performance of ATNs, we have trained five additional vanilla DQNs from scratch for the centre reward: CR1-CR5.
Evaluation Metric.
We evaluate the victim agents' performance on the original and adversarial rewards with or without the adversarial manipulation on the input stream. The victim agents play the game of Pong from five different seeds for 40k frames each, without taking any random actions. We then plot the average accumulative rewards over the five random seeds.
FGSM Baseline.
We consider FGSM adversaries with CR1-CR5 as policy networks. We use all of the agents trained for r O , OAI1-OAI3 and OR1-OR5, as victim agents. In a grid search, we empirically determined the FGSM perturbation norm that yields the highest success rate in imposing the policy DQN's desired action on the victim agents. This norm is ϵ = 1 × 10 −5 . The average success rate varies from 49% to 61% for the OAI victim agents, and from 86% to 95% for the OR victim agents.
Results
We present experimental results here. See Figures 4, 6, and 7.
Main Results.
We first examine the performance of the eight victim agents (OAI1-OAI3 and OR1-OR5) for the original reward r O of winning the games. In Figure 4(a), we observe that these victim agents accrue r O steadily over time, all reaching over 100 rewarded points at the final frame (40k). We confirm that the victim agents are fully performant at playing the game.
In terms of the centre reward r C , the eight victim agents achieve around 100 rewarded points at the final frame (Figure 4(b)). Although they are not explicitly trained for r C , they sometimes send the ball to the middle while trying to win the games.
We then study the performance of the vanilla DQN networks trained from scratch for the centre reward r C , to determine if the task is learnable at all. In Figure 4(b), we confirm that the DQN trained for r C attains a far better performance at accruing r C than do the eight victim agents. We note that agents trained for r C are not doing well for the original task of winning the games (r O ). We confirm that it is possible to build a policy towards r C .
Finally, we examine the ability of our adversary to generate perturbations that misguide agents to aim for the alternative reward r C . In Figure 4(b), we observe that the victim DQNs fooled into pursuing r C do effectively accrue r C over time, matching the performance of DQNs trained from scratch to pursue r C . Our adversary can successfully impose an adversarial policy on a victim agent through a sequence of perturbations.
L 2 Norm Restriction.
In the previous set of experiments, we have used the L 2 norm constraint ϵ = 10 −4 . Here, we study the effect of the norm constraint on the effectiveness of attacks. It is expected that relaxing the norm constraints gives more freedom for the adversary to choose the adversarial patterns that effectively lead the victim to the adversarial goal.
See Figure 6 for the results. We show the final frame centre rewards versus ϵ for eight vanilla agents each fooled by our sequential adversarial attacks. We indeed observe that, from ϵ = 0 (no attack) to ϵ = 10 −5 , the final frame r C increases, confirming that the adversary is better-off with relaxed norm constraints. However, for ϵ > 10 −4 , greater variances in the final rewards are observed; we conjecture that for such great amount of perturbations training is unstable and does not converge.
We visualise the amount of perturbations at each ϵ level in Figure 7. Note that ϵ = 10 −4 makes the perturbations visible, but this would not interfere with a human player.
FGSM Baseline.
We now compare our method to the FGSM adversary. In Figure 5(b), we see that, averaged over the five policy networks CR1-CR5, the FGSM adversary achieves a significantly lower accumulated centre reward at the final frame for the OAI agents than for the OR agents. The latter performance is about on par with the ATN adversary and the agents trained for the centre reward, CR1-CR5. When considering only the OAI agents, Figure 5(a) shows that the ATN adversary outperforms the FGSM adversary. We hypothesize that the longer training and the different schedule for the random exploration cause the OAI agents to become more robust to the FGSM adversary. This is supported by the significantly higher success rate of the FGSM adversary in imposing its desired action on the OR agents compared to the OAI agents. Possibly due to its more intimate joint training with the victim agent, the ATN adversary can still successfully attack the OAI agents.
CONCLUSION
We have exposed a new security threat for deeply learned policies. Much prior work has argued that a small perturbation on the states can make a deeply learned policy fail to achieve the originally set task. In our work, we have shown experimentally that it is moreover possible to impose an arbitrary adversarial reward and corresponding policy on a policy network (Deep Q-Network) through a sequence of perturbations on the input stream (state observations). The possibility of such an adversary questions the safety of deploying learned agents in everyday applications, not to mention security-critical ones. | 2018-06-05T07:18:24.916Z | 2018-05-01T00:00:00.000 | {
"year": 2018,
"sha1": "3697d586016b28d0d8314f0ee7a96abf087cdb78",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "45a63eb1bba2f47ed76eaf1c0da0e5b7a36dbea9",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
221459883 | pes2o/s2orc | v3-fos-license | COVID‐19 and care homes in England: What happened and why?
Abstract In the context of very high mortality and infection rates, this article examines the policy response to COVID‐19 in care homes for older people in the UK, with particular focus on England in the first 10 weeks of the pandemic. The timing and content of the policy response as well as different possible explanations for what happened are considered. Undertaking a forensic analysis of policy in regard to the overall plan, monitoring and protection as well as funding and resources, the first part lays bare the slow, late and inadequate response to the risk and reality of COVID‐19 in care homes as against that in the National Health Service (NHS). A two‐pronged, multidimensional explanation is offered: structural, sectoral specificities; political and socio‐cultural factors. Amongst the relevant structural factors are the institutionalised separation from the health system, the complex system of provision and policy for adult social care, widespread market dependence. There is also the fact that logistical difficulties were exacerbated by years of austerity and resource cutting and a weak regulatory tradition of the care home sector. The effects of a series of political and cultural factors are also highlighted. As well as little mobilisation of the sector and low public commitment to and knowledge of social care, there is a pattern of Conservative government trying to divest the state of responsibilities in social care. This would support an interpretation in terms of policy avoidance as well as a possible political calculation by government that its policies towards the care sector and care homes would be less important and politically damaging than those for the NHS.
This article critically examines the policy response to COVID-19 in care homes in the UK, with particular focus on England. Focusing on the first months of the pandemic (March, April, May and June), the research questions investigate the timing and content of the policy response and the factors that best explain what happened. A compound explanation of a mix of structural factors and more political and socio-cultural elements is developed. A core insight is that, whilst health and social care may be spoken of together in the UK, they are two different 'entities'. This is true whether one takes a systemic or structural perspective or conceives of them as socially and culturally embedded. The care system functions in the shadow of the National Health Service (NHS), which enjoys far greater resource allocation and higher cultural and political capital.
Care homes are a critical case from which to view the COVID-19 policy response because of the major role they play in provision for one of the most vulnerable population sectors and also because they were recognised in advance as a high-risk setting. 1 The relevant pandemic statistics to date are staggering. Between 2nd March and 12th June 2020, there were 66,112 deaths of care home residents in England and Wales, of which 19,394 (or 29%) are officially attributed to COVID-19 (Office for National Statistics, 2020). Once more accurate statistics become available and we have a better idea of the pandemic's mortality trajectory, deaths of care home residents may account for between 30% and 40% of all COVID-19 related deaths in the country. In terms of excess mortality, which some epidemiologists believe to be the best measure of pandemic-related mortality (especially in countries like the UK where testing was scarce), the mortality rate of care home residents in England and Wales from 28th December 2019 to 12th June 2020 was 45.9% up on the same period the previous year (Office for National Statistics, 2020). Not just residents but workers too are vulnerable. Care workers (which include care assistants, home managers, cooks, cleaners, inter alia) proved to have a higher risk of death from COVID-19 as compared with the general population-being twice as likely to die from the pandemic. 2 The emerging commentary is of a parallel pandemic in care homes to that playing out in hospitals. Government's claim to have tried to put "a protective ring around our care homes" needs to be carefully scrutinised. This is a policy analysis, focused on government policy. For official purposes in the UK, care homes fall within the field of 'adult social care'. Known generally as 'social care', the policy field is more recognisable as 'long-term care' in other countries. In the UK, adult social care is a broad and generic categorisation, 'adult' being essentially a crude age stratification to distinguish it from provision for children; it is defined to refer to care and support directed at all persons aged 18 years and over. Along with services for older people, adult social care encompasses services for disability, mental health, homelessness, domestic violence, inter alia. For the purposes of precision, England (and sometimes England and Wales) will be the article's reference point. This is so for several reasons. First, key aspects of care-related and public health policy are devolved in the UK, making for some regionally-specific policies across the four jurisdictions. Second, statistical reporting conventions vary across the regions. For the analysis, care homes are defined following Eurofound (2017, p. 3) as: "institutions and living arrangements where care and accommodation are provided jointly to a group of people residing in the same premises, or sharing common living areas, even if they have separate rooms." They may also be known as nursing homes or residential homes. Significant numbers of people are involved: for care homes for older people in the UK we are talking about over 400,000 residents (for England it is around 320,000) and more than 500,000 staff (for England some 400,000) (Competition and Markets Authority, 2017; Office for National Statistics, 2020).
The article is organised into two main parts. The first part sets out the core elements of the policy response.
Here there is a dual focus-content and timing-with NHS policy as a general comparator, especially for timing.
Focusing on the relative gap vis-à-vis the NHS allows us to control for common explanations-like general weaknesses and preferences in government action-and extract the specific from the general. The article's second part assembles an explanation. This draws in sector-specific factors but also-and more broadly-the political complexities and socio-cultural properties and location of social care in the English setting. The argument advanced is that English (and UK) government policy encountered a crisis in its handling of COVID-19 in care homes not just because of a failure of political leadership but also because of governance and other systemic weaknesses and relative lack of significant voice from or cultural valuing of the care sector.
| THE POLICY RESPONSE
In a context where policy has to operate in a new and rapidly changing situation and where no finishing line is in sight, both the makers of policy and those of us who research it must carefully assemble a set of parameters for analysis purposes. The international review undertaken by Comas-Herrera, Ashcroft, and Lorenz-Dant (2020) helps in assembling such parameters. For the purposes of assessment, the timeline is vital given the immense potency of COVID-19 which propelled it around the globe in a matter of weeks. At the time of writing, the first two cases in the UK were confirmed on 31st January 2020, the first recorded death occurred on 5th March and the first death in a care home attributed to COVID-19 was reported on 20th March. Deaths in care homes (generally understood to refer to deaths of care home residents whether in hospital or in the care home) appear to have peaked in the week ending 24th April-those for the population at large peaked a week earlier. 3 Policy started to become active from early March, with a highly intense period from then to mid-May (see timeline in Figure 1). The article's clock is set by the response targeted on the NHS-which started on 3rd March-and the period considered is from then to mid-June, with the week beginning 2nd March as week 1. This covers the height of the pandemic and the most intense period of policy activity. I stress that the focus is on policy statements rather than practical roll-out of policy and delivery of promised resources. This is important to note given significant disjuncture between government claims for its policy and provision and the reality on the ground. I also stress that care home agency, either in response to government policy or autonomously, is not the focus of study here.
| Guidance and action plan
The first concerted policy effort was the action plan for the pandemic which was announced on 3rd March. 4 This was a mix of information about the virus and the four-phased response strategy adopted: contain, delay, research, mitigate. Little or no attention was given in the action plan to the adult social care system (just one mention). The sector was included alongside first responders, employers, the justice system and educational settings. Moreover, 'health and care' were elided in a classic double formulation in the UK (one that is misleading because they are two different systems as we shall see). Guidance for reducing the risk of transmission in residential settings (including care homes) was published on 13th March 5 ; prior to that care homes were represented as low-risk settings for COVID-19. The 3rd March plan focused on how to maintain delivery of care in the event of an outbreak or widespread transmission of the virus and what to do if care workers or individuals being cared for show symptoms.
Rather than instructing care homes to shut down, this advised them merely to deny entry to unwell visitors and those with suspected COVID-19. On 23rd March what is now known as 'lockdown' was announced, instituting a policy of advising people to only go outside to buy food, exercise once a day or go to work if working from home was not possible. Police were granted additional powers to use 'reasonable force' if necessary to implement the lockdown measures (under the Coronavirus Act 2020 enacted on 25th March). In addition, the same Act amended existing legislation, allowing local authorities significant easements of their social care duties, effectively cutting back their obligations to meet care-related need to cases where not doing so would breach someone's human rights (in the case of England) (Foster, 2020). The 'duty to' meet care and support needs was substituted by a 'power to meet needs'. This is a hugely significant change, for it empowers the local authorities in responding to COVID-19 to reduce their core offer and set of responsibilities in social care. Notably, the legislation or guidance did nothing to enable or encourage care homes to reduce capacity. This would have been a wise move given what we now know about how the virus is spread and the fact that care homes following 'business as usual' incentives to remain at full capacity probably exacerbated infection and mortality rates (an example of 'pandemic perverse incentives').
FIGURE 1 Timeline of key policy developments regarding care homes in England
Not until 15th April was a specific action plan for adult social care issued for England. 7 Prior to that the sector-and care homes in particular-came to official attention mainly as places to which recovering COVID-19 patients could and would be discharged. A core element of early government response was to free up NHS capacity through rapid discharge into the community. 8 Care homes were one of the locations mentioned in this context. They were thus positioned and instrumentalised to solve problems for the NHS, a downstream, supposedly low-risk, receiving location in the discharge channel. It is estimated that some 25,000 patients were discharged from hospitals to care homes in England in this period (National Audit Office, 2020). It would take nearly a month for government to act on the significance of this type of downgrading approach for care homes-on 15th April it announced a correction to the prevailing policy whereby all residents of the care homes into which COVID-19 patients were being discharged would be tested (as part of the targeted action plan on adult social care) as would those being discharged from hospital. But prior to that, and in particular in the guidance issued on 2nd April on admission to care homes, it was stated explicitly that negative tests would not be required before COVID-19 recovering patients would be transferred from hospital to care homes. 9 According to the National Audit Office (2020), it is unknown how many patients discharged to care homes in this period had COVID-19.
The action plan for social care of 15th April was directed at all settings in which adults receive social care (therefore drawing in a wide array of settings). It announced a four-pillar approach: controlling the spread of infection; supporting the workforce; supporting independence, supporting people at the end of their lives, and responding to individual needs; supporting local authorities and the providers of care. Reading the action plan suggests that at that stage the UK policy was in control mode rather than prevention mode. Indeed, one could question whether there ever was a prevention phase for care homes in the UK for on 12th March the government announced that it was moving to the second-delay-phase. This meant abandoning the WHO's standard containment approach of: find, test, treat and isolate (Scaly, Jacobson, & Abbasi, 2020). In addition, whilst a wide range of actions are mentioned in the action plan, there was limited targeting of care homes as particularly vulnerable. It would be a further month before a specific care home support package was put in place (on 15th May). 10 This was in week 11 (starting the calendar from the first policy action in the week beginning 2nd March). In the intervening time much had been done for the NHS including: an appeal for retired staff to return to the service (on 20th March, week 3); a deal with private health care providers for extra beds, ventilators and staff (21st March, week 3); writing off of £13.4 billion of NHS providers' debt (1st April, week 5); first NHS Nightingale Hospital (of 7) opened in London (3rd April, week 5). 11
| Monitoring
Monitoring in the form of testing but also counting mortality was a particularly weak and contentious point in the UK's response, with much disputation of government claims about the extent of testing and achieving testing-related targets. In effect, government policy made testing for the virus a very scarce resource for all, from 12th March effectively confining testing to those on the cusp of or already accepted for hospital admission (for which a high symptomatic threshold was set). Throughout March there was no specific procedure for monitoring the extent of the virus in care homes-only on 2nd April (week 5) was guidance issued on procedures for admissions and care of residents in care homes. This introduced what might be called a 'light touch monitoring regime' of in-house measures.
Care homes were advised to assess each resident twice a day by checking for the usual COVID-19 symptoms. Only if they had two or more symptomatic residents were they obliged to report it to the Health Protection Team. Even this was no guarantee that a test would be forthcoming, however.
On 15th April government (as part of the adult social care action plan) announced that testing would be offered to everyone in social care settings eligible for it. This was the first specific targeting of testing on social care settings, even though an ambitious testing regime for NHS staff had been launched on 17th March and reinvigorated on 2nd April. Limits on testing capacity meant tests started to be rolled out to symptomatic NHS staff from 27th March only. This was extended to care workers on 15th April and to the rest of their households 2 days later (National Audit Office, 2020). From 28th April, all care home staff were made eligible for tests but the Department of Health and Social Care capped the daily amount of care home tests at 30,000, to be shared between staff and residents. A new digital portal for care home specific testing was announced on 11th May, with priority for those catering for people aged 65 and over. 12 This was one of the first measures targeting the over 65s and can be taken as recognition-in week 11-that the testing policy was failing.
In a report issued by the National Audit Office on 12th June it is stated that the government does not know how many NHS or care workers were tested in total during the pandemic to that date (National Audit Office, 2020).
What is clear is that the four main opportunities for prevention of transmission to care homes-through early lockdown of care homes, the non-transferral of COVID-19 and other patients from hospitals, measures to monitor and test, measures to prevent staff from spreading the virus-either came too late or were missed altogether.
Apart from tracking and diagnosing, counting and reporting mortality rates are crucial to monitoring. The UK has manifested special difficulties in this regard, the lack of widespread testing increasing the potential to under-record the virus-related death rate. This is why excess mortality is so important as a measure in the UK context. But there are other problems as well, especially to do with procedures and mis-alignments between how COVID-19 related deaths in hospitals and in community settings are reported. To address this and render the statistics more accurate, on 15th April a new data series on deaths in care homes was set up. This included evidence which the Care Quality Commission (CQC) was receiving directly from care homes in compliance with statutory notification procedures (Office for National Statistics, 2020). It was only for the week beginning 20th April (week 8) that mortality statistics included deaths in care homes (prior to that deaths in hospitals were taken as the public record). Their inclusion came as media (broadly defined to include also scientists/academics' use of the media to raise issues) began highlighting a deteriorating situation in care homes. 13 When released retrospectively, the weekly statistics, particularly those from 3rd April to mid-May, showed exponential increases in COVID-19 deaths in care homes as well as in excess mortality, with the latter especially skyrocketing from early April. 14 Significant upward revision of statistics announced earlier-particularly to take account of deaths in community settings-became a regular feature of the reporting scenario in this period.
It took nearly two months, then, for the deaths in care homes to be included as part of the official death toll announced each day by the government; doing so significantly changed understanding of the virus. As things stand, many questions have been raised about data inconsistencies and omissions, mainly around the number of deaths in care homes attributed to COVID-19 and how to assess and understand excess mortality in the community-which is largely excess mortality in care homes-during the pandemic (Comas-Herrera & Fernández, 2020).
| PPE, staffing and working conditions
The availability of Personal Protective Equipment (PPE) became another major flashpoint, with much on-the-ground skepticism by health and care service staff regarding governmental claims about widespread PPE availability. The normal supply chain operated through the NHS Trusts (which mainly channel resources to the health system).
Only on 6th April (week 6) were there moves to direct PPE to care homes. Even this did not prioritise them though because, again, they were included alongside other providers such as hospices, residential rehab and community care organisations. A cross-government plan to ensure that PPE is delivered to so-called 'frontline workers' was published on 10th April. 15 Care homes were mentioned here as amongst the 58,000 relevant providers, included alongside GP surgeries, hospices and other community providers. Only from 15th May were bespoke supply routes and specific guidance for care homes regarding PPE announced. 16 This was part of the first care home-specific measure, a relatively late response to the evidence that PPE supply was minimal in care homes (with supplies in the NHS more widespread but also inadequate). 17 Staffing levels were given attention in the action plan. A capacity tracker introduced as part of the 15th April plan is to be used to monitor workforce absences as well as other resources (bed capacity, PPE levels and overall risks in care homes). The ambition was stated also to attract 20,000 people into social care employment over the (then) upcoming 3 months. This was to be achieved by a new national recruitment campaign, targeting returners to the sector, as well as new starters who may have been made redundant from other sectors, and those able to take up short-term work. As well as the tracker, the plan included another innovation: the development of a new online platform to give people who want to work in social care access to online training and the opportunity to be considered for multiple job opportunities via a matching facility.
| Funding and responsibilities
Funding for social care and care homes is a complex mosaic in England (and the UK as a whole). Looked at from the perspective of the service providers, funding comes via local authorities which commission the services but also, and increasingly, from so-called 'self-funders' (residents who pay privately). Local authorities obtain their funding from three main sources: block grants from government; monies raised through taxes levied locally (the household Council Tax and rates paid by businesses for example); user contributions (through means-testing for services for example).
Local authorities also obtain some funding from the NHS and other joint arrangements for the social care they provide. How they expend these funds to carry out their duties is largely up to them, provided they meet their statutory obligations. But spending is typically the subject of a highly-politicised negotiation at council level (especially in the austerity environment that has prevailed in the UK up to this year).
Extra funding for social care was part of the government's early pandemic response. On 19th March some £2.9 billion additional funding was allocated "to strengthen care for the vulnerable" in England. 18 Of this, £1.6 billion was for local authorities to help them respond to COVID-19 pressures across all their services, including adult social care (the remaining £1.3 billion was for the NHS to enhance discharge from hospitals). A further £1.6 billion to help English local authorities to respond to the pandemic was announced on 18th April. All of the £3.2 billion was to be shared widely amongst the services offered by the local authorities (including those for homeless people and a range of other services targeted at those considered vulnerable). 19 Finally, targeted funding for care homes of £600 million was announced on 13th May as part of a new infection control fund and a 'care home support package' (announced on 15th May). 20 The funds are intended especially to allow care homes to employ additional staff and pay for restrictions/constraints on staff movement and deployment (in order to reduce the risk of transmission).
Taking an overview, it is worth pausing for a moment to be precise about what is to be explained here, and which factors might be considered as causal. In terms of the former, there are two outstanding features of the policy response for care homes: its relative slowness, lateness and reactiveness in relation to the NHS; the inadequacy of the focus on care homes. To turn to possible explanations, the situation as described is generally true of all of the UK (apart from perhaps lockdown policy), so devolution is not a major part of the explanation (although there were some variations between the devolved regions). The political ideology of government is another potential causal factor.
The neo-liberal orientation of the Johnson government certainly made lockdown slow to happen and its shifting 'reading' of the pandemic and the resources needed to address it led to inconsistencies, gaps and errors that cost lives. But government (in)competence and political handling of the virus are insufficient in themselves as explanations for the slowness and nature of the response to care homes. For a comprehensive explanation, I suggest examining two main sets of factors: structural/logistical and political/socio-cultural (see Figure 2). I do not ignore the role of government reaction but rather integrate it into the broader explanatory landscape in terms of the existing system and politics which enabled and, to some extent foretold, the response.
| Two separate (and complex) systems
Arguably, the COVID-19 virus required an integrated response, one crossing health and social care. But this was highly unlikely in a policy setting characterised by a long-standing, systemic divide between the two. Despite a common heritage in the Poor Law-local provision for poor or destitute people-they have been growing apart since the middle of the 19th century. Publicly funded health services came to be provided free of charge whereas local authorities retained the right to charge for social care services (Thane, 2009). The 1946 Act establishing the NHS and the National Assistance Act 1948 were especially significant in institutionalising the divide, in several respects. First, the NHS was established as a national service whilst social care remained under the auspices of the local authorities. Second, whereas providers of health care were public entities, social care provision was a mixed system, either directly delivered by the local authorities or through independent providers contracted by them. Third, the NHS was and is a centralised, tax-funded service free for all whereas social care is local, dependent on a test of means (as well as [dis]ability) and is in significant respects privately funded. Amongst other things, this means two separate funding, governance, legislative and service regimes. 21 Having two systems is not fatal (and indeed is relatively common in Europe [Incisive Health, 2018]), provided there is co-ordination between them. However, the border between health and social care in the UK can be accurately described as a 'hard boundary' (Lightfoot, Heaven, & Henson Grič, 2019, p. 41). Its deep institutionalisation can be appreciated from the fact that, in the past 20 years, integration has been the subject of some 12 white papers, green papers and consultations, and five independent reviews and consultations (House of Commons Committee of Public Accounts, 2018). Whilst the NHS/social care interface or indeed integration are talked about and planned, there are grounds to be sceptical about the degree of commitment to integration. First, the resources given to their achievement are relatively small and there is a high usage of pilot and localised initiatives. Second, they tend to be technical and bureaucratic initiatives, removed from democratic accountability. "Different organisations with different budgets working under different policy guidance have found it difficult to work effectively in a joined-up manner" (Lightfoot et al., 2019, p. 23).
The significance of a systemic divide in COVID-19 times is that it hampers joined up functioning and resource flows. There are many examples of relative failures of the channels or supply routes for testing and PPE for both hospitals and care homes. But the failure was greater in the latter and this was caused, partially anyway, by the fact that care homes are not part of the routine supply channel for the NHS. Two relatively 'siloed' resourcing and provision channels were operating, with that of the NHS by far the better resourced.
There is another aspect to logistical or efficiency issues as well. The social care system and care homes in particular are embedded in a long and complex policy/governance chain. Although statutory responsibilities mainly lie with 152 local councils/authorities, the Department of Health and Social Care retains significant authority. One has to factor in also the existence of a separate route for quality monitoring and resident safeguarding: this is the function of the CQC which is constituted as an executive, non-departmental body of the Department of Health and Social Care. To be properly understood, this complexity should be placed in a recent history of shifting government positions on centralised versus localised responsibilities in regard to social care and also public health (see Figure 2, key lines of explanation). Blurred lines and channels of authority made for some key logistical and governance vacuums in the pandemic and-whilst evidence about what happened on the ground is still not available in sufficient detail-it does appear that government was inconsistent and confused about the degree of reliance it placed on local (authority) versus national channels and supply routes and oscillated between a heavily centralised response and expecting a more bottom-up one.
There is also, though, a tradition of weak governance in the field. To function well any governance system needs resources such as detailed strategic knowledge of its sector. Robust strategic knowledge is not available for the care home sector in England (or the UK). Why is this? The sector is private and largely market based. Some 97% of beds in all care homes in England are provided by the independent sector-commercial for-profit (84%) and charity-run (13%)-with only 3% operated by local government or the NHS (Blakely & Quilter-Pinner, 2019). 22 Trends over time describe a rapid financialisation of the sector, characterised as an increasing encroachment of financial motives, financial markets and financial institutions, with larger for-profit companies (which tend to have very complex corporate structures) especially gaining a greater foothold. And all the time the central state passes on the responsibility, devolving it to the local authorities which in turn outsource social care provision (Blakely & Quilter-Pinner, 2019).
Whilst local authorities have a statutory duty to undertake market oversight (e.g., monitoring the performance and finances of the providers), research suggests that as a result of both the complexity of larger, private, equity-owned care providers and the capacity and capabilities of local authorities, this is an unrealistic expectation (Blakely & Quilter-Pinner, 2019). There is no other market regulation and most of the information available comes from private market analysts (such as Laing-Buisson). This feeds into and off a 'habit' of weak regulation, especially of the market in care homes. This may also help explain the rather 'hands-off' approach taken to care homes during the pandemic.
Compare this to the NHS-where: (a) far greater information exists; (b) this information is readily available to government planners; and (c) there is a strong tradition of regulation.
Apart from structural complexity and weak regulation, there is also the very large matter of resources.
| Under-funding, under-resourcing and austerity
The sector-like the UK generally-has been exposed to austerity policy since 2009/2010 and this has significantly weakened capacity for decision-making and resourcing as well as service provision.
In the UK the system of public funding of care is supply rather than demand led. Although local taxes are the main source of revenue for local authorities, there is a fundamental dependence on central government funding.
Almost 10 years of austerity policy oversaw an estimated funding reduction of 49.1% in real terms to local government, with the cumulative reduction forecast at 56.3% by 2019/2020 (National Audit Office, 2018). This did not translate directly into falls of the same magnitude in adult social care services, for two main reasons. First, local authorities sought to protect adult social care services in their spending cuts and they were also enabled by government to raise additional revenue through local taxation from 2015. 23 Second, funding pressures were somewhat eased because recent governments (all Conservative led) were forced to commit additional resources to stem potentially catastrophic shortages. As a result of both protecting the sector and additional monies, local authority spending on adult social care services in England reduced by only 2% in real terms between 2009/2010 and 2018/2019 (Atkins et al., 2019). However, this still represents a severe shortfall in income when the growing demand from an ageing population and rising costs are factored in. The hard reality is that adult social care services in England are estimated to face a £1.5 billion funding gap in 2020/2021, and £6 billion by 2030/2031 (at 2018 prices) (Bottery, Varrow, Thorlby, & Wellings, 2018). This was pre-COVID-19; a recent report calculated that the sector will face an extra £6.6 billion in costs due to the pandemic. There are no stated plans to meet the pre-existing shortfall and in the meantime dependence on time-limited, emergency funding grows. The comparison with the NHS, again, offers a strong contrast. Although it too has experienced major funding shortages, a multiannual funding plan was put in place, prior to the pandemic. The settlement announced in June 2018 would increase the NHS England budget by 3.4% a year on average in real terms between 2019/2020 and 2023/2024 (Atkins et al., 2019, p. 41).
Notwithstanding some financial easement, the austerity-induced cuts did significantly undermine the social care sector and, directly and indirectly, impacted the capacity to respond to the crisis. This is because the local authorities, whilst protecting the adult care sector, were underfunding both other services and their own resources (using up their financial reserves, cutting staffing and infrastructural resources). This affected their governance and resourcing capacity, including that in public health (Scaly et al., 2020). In sum, prior to the pandemic, the local authorities and the social care sector had experienced nearly a decade of austerity and were continually propped up through special interventions or increasing marketisation. The sector was in no fit economic or structural state to meet a major challenge.
| Political and socio-cultural explanations
Part of what has to be explained about the policy response to care homes centres around the government's downgrading of the significance and vulnerability of care homes in the pandemic. Whilst we do not yet know how the official advice given by the expert SAGE committee 25 affected policy, an interpretation of the government response as politicised is credible. Both sectoral and socio-cultural politics are involved.
| Social care as a relatively depoliticised field
If we accept the government response as political, one set of factors that helps explain the priority given to the NHS over social care and care homes relates to differences in the degree of political mobilisation of the two sectors during the 'shock' weeks of the pandemic. During this period, the social care sector was relatively poorly mobilised and generally 'quiet'. This is at least partly explained by the fact that there is no overall convening advocate or voice for care homes. And in the 'system' as a whole, the platforms that exist are sub-sectoral. Two such platforms dominate the rather sparse field: those of the local authorities and the commercial providers. 26 Both were vocal during the pandemic period covered by this article, especially the latter, but they 'discovered' their voice quite late in the day. In another difference to the health sector which has very strong professional organisations and trade unions, social care workers are poorly organised. There is a national association of care and support workers but this is small and, whilst some of the workers are unionised, most are not. There is a carers' association, representing mainly informal carers and cohort or age group organisations (such as Age UK) also exist. But, again, unlike health which has many patients'/clients' organisations and platforms for patient feedback, there is no national representation of the voices of those receiving care in care homes.
To all intents and purposes then, the sector is unorganised representationally and so there was little push from this source against government policy.
What about broader political resonance?
Despite its name, social care is for many in the UK a private good and this, amongst other things, complicates the politicisation around it. Looked at from a social politics perspective, a core feature of the system is limited risk pooling.
Local authorities typically only fund packages of care for adults assessed as having high needs and limited means. They operate two thresholds to entry: level of disablement and income and means (with a cut-off of income/assets in excess of £23,250 annually). Around 41% of residents in care homes (in the UK as a whole) fund their care privately (self-funders) and a further 12% pay top-ups. So care home residency or future residency does not mobilise a particular constituency and solidarity has no convening political power. The contrast with the NHS is striking: 'risk pooling' is a core organisational and political principle-everyone contributes to total costs through taxation and people receive the amount of care they are deemed to need, however expensive that is.
With scarce interest group or wider political organisation, there was little 'political clamour' until the infection and death rates in care started to be reported in early-to mid-April. The media played a major role in shaming the government into action but it was the sharply rising death toll that was the main spur to action.
There was also some likely policy avoidance involved. Social care had been effectively 'parked' by the previous Theresa May administration, which had been damaged in the 2017 election by its proposal to increase the amount that people would have to pay for social care. Prior to that reform had been frozen by the various Conservative-led governments which have failed over their 10 years in office to come up with a credible plan to resolve the many problems in the sector. Whilst the Labour Party's long-term position is for a nationally-funded service, the Conservative Party is riven by division over the threshold for the means-test and whether there should be a lifetime cap on how much an individual will pay for care (which is currently unlimited). Social care, therefore, is a policy field in which government was reluctant to get involved.
| Socio-cultural factors
Cultural politics also have a critical part to play in explaining the relative neglect of care homes and adult social care in the pandemic response. And here we see politicisation on the part of the government clearly at play. It was government strategy to place the NHS front and centre in its battle cry (it favoured bellicose idioms); in the discourse representing it as vulnerable. Government's main slogan-repeated ubiquitously on all public information from early March to 10th May-'Stay Home-Protect the NHS-Save Lives'-illustrates this kind of thinking and messaging. Whilst reference to the NHS did not survive the change of messaging in England from 11th May (Stay Alert-Control the Virus-Save Lives), neither care nor care homes ever received any public billing or specific recognition. In this and almost every other communication, the main terms and measures pertinent to both health and social care were given a primary reference to health. In the process, the term 'carer'-not a label normally adopted by health service personnel who tend to prefer professional titles-was appropriated for care in a health setting (although its meaning did open out over time towards more inclusive terms such as 'frontline workers' or 'key workers').
All of this draws from deep cultural politics which locate and construct health and social care in very different narratives. There are at least two dialogues in relation to the NHS, and both are relatively positive. One, predating but carried through in the COVID-19 representation, is about the NHS as being at risk, especially from underfunding, austerity and the creep of privatisation. The Brexit referendum was a focusing event with a prominent promise by the Vote Leave campaign that the monies saved by not having to make the EU funding contribution could be spent on the NHS. But public concern about the NHS dates much further back and has to do not just with public awakening to the fact that the system might be at risk (mainly from Conservative policy of austerity and privatisation) but also people's sense of their own entitlements in this context. The meaning of health as a right has depth in the UK, unlike, say, welfare benefits where public opinion has shifted and the rights perception of social security has diminished. 27 Social care is at best a weak social right (Daly & Lewis, 2000). A second and related dialogue construes the NHS as public property, associated with the British nation. Despite considerable criticism, the NHS has assumed the status of a valued cultural entity in the public imagination, an icon of Britishness.
By comparison, social care is a poor shadow. It lacks a clear public identity-on 2018 evidence nearly half (48%) of adults in England have little or no understanding of what the term 'social care' means. 28 And social care does not have a positive image as a public good. Unlike the NHS where 83% of the population said they would support additional spending on health (in 2016), 29 public opinion surveys suggest a bifurcation in support for social care with most people (55%) favouring options where responsibility for care is shared publicly and privately, although a sizeable 41% favour government funding (paid for by taxes) (Bottery et al., 2018). It is also noteworthy that a half of English adults say that they have never thought about how they will pay for care when they get older and only 15% say they have made any plans in this regard (Bottery et al., 2018).
Against such a backdrop one could see government making a calculated decision that there was a real possibility that its policy on care homes would not generate much (negative) political reaction.
| CONCLUSION
The treatment of care homes in the COVID-19 pandemic in the UK is fast generating the sense of a scandal. The analysis undertaken here, which focused on explaining the government's policy response rather than the outcome of the pandemic per se or what care homes did, showed that it is the slow, late and inadequate response to the risk and reality of COVID-19 in care homes as against that in the NHS that has to be explained. Policy inadequacy and relative downgrading are found right across the policy elements considered here: the specific targeting of care homes, monitoring and testing, staffing and working conditions or funding. The NHS was front and centre of the national response, whilst care homes were poorly targeted and in many senses neglected until late in the pandemic when a response was unavoidable.
Undoubtedly, part of the explanation for this lies in government ineptitude and erroneous policy choices. Boris Johnson and his government made many mistakes which included a generally delayed response and jettisoning of a preventive strategy very early on. Causation is more complex though, especially if our focus is on the relative policy inaction around care homes until infection and death rates began to pile up and the subsequent inadequacy of the response. In the preceding pages, I have suggested a compound explanation for this situation, pointing the explanatory dial towards a mix of both structural and politico/socio-cultural factors. Amongst the relevant structural factors are the complex systems of provision and policy prevailing in adult social care and the deep separation from the health system.
These make governance complex, having to operate different supply routes and governance channels. In the event, government policy was confused about the appropriate response, sometimes using centralised and at other times localised decision-making and resource channels. One must also factor in how logistical difficulties were exacerbated by years of austerity and resource cutting which, amongst other things, have depleted local authorities' resources and capacities.
But the weak regulatory tradition of the sector also contributes to explaining the inadequacy of the response. Care homes and social care in general are far down the chain of public policy and the majority are market providers which see relatively little regulation (other than some monitoring of quality and safeguarding). Explanation also extends wider, to take account of a series of political and cultural particularities which see the NHS and health care elevated to a high plane in public opinion and national politics, whereas social care is in comparison the 'Cinderella service' . In social care in particular, there is a trend of government, especially Conservative government, trying to divest itself of responsibilities and, moreover, the social care policy story in the last decade is one of relative reform failure on the part of successive Conservative governments. All of this points to a role for avoidance of the social care field as part of the explanation for the policy response to COVID-19 suggesting, in addition, a possible calculation by government that its policies towards the care sector and care homes were far less important than those for the NHS and policy errors would not 'hurt them' as much as would NHS mistakes or mis-steps.
Many of the deficits identified in this article have been addressed in the interim period but all by short-term, stop gap measures. Nothing has happened to suggest that what I have recounted is past history or that the structural, political and cultural barriers have been overcome. The UK needs a new model of care for older adults. The large and diverse network of independent providers does not look like a resilient form of provision and is likely to have become even less resilient following the pandemic. Ultimately, the country has to answer the question of what is an acceptable way of caring for its older people and view the pandemic outcome as associated not just with short-term failures of policy and political leadership but a much deeper undervaluing of the care home sector, the activity of caring and those who require care. Long-term care policy has to become a meaningful part of the British welfare state in which the rights and entitlements of those involved are given a central place, unseating the far more dominant risk, exigency and need perspectives. | 2020-09-01T13:10:15.854Z | 2020-08-28T00:00:00.000 | {
"year": 2020,
"sha1": "641a665dc7e5a75fce59c50db5b4c17c0390ef9a",
"oa_license": "CCBYNCND",
"oa_url": "https://ora.ox.ac.uk/objects/uuid:0e2ab83a-674c-4d24-8700-d5bd2ccf888c/download_file?file_format=pdf&safe_filename=DalyVoR2020.pdf&type_of_work=Journal+article",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "ea871ab8587f75f306a8b8962bc44dbdf3f63024",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Political Science",
"Medicine"
]
} |
250688376 | pes2o/s2orc | v3-fos-license | Liquid Gallium based temperature sensitive functional fluid dispersing chemically synthesized FeMB nanoparticles
In this work, FeMB (M = Nb, V) nanoparticles are reported for the first time to be synthesized by a chemical method, through reduction of FeCl2, NbF5 (and NH4VO3) using NaBH4 as a reducing agent in aqueous solution. A new temperature sensitive functional fluid was then prepared by dispersing silica coated FeNbVB nanoparticles in liquid gallium. The results show that the FeNbVB nanoparticles exhibit an oxidation resistance better than that of FeNbB nanoparticles. The FeNbVB nanoparticles were in the size range of 30 – 50 nm, and a silica layer with a thickness of about 10 nm was observed by means of transmission electron microscopy. The magnetization of the synthesized particles and fluid shows a temperature dependency within the testing temperature range of 293 – 353 K, which indicates their application potential in magneto-caloric energy conversion devices.
Introduction
A functional fluid is a good choice for application in magneto-caloric energy conversion and heat exchange devices, where a high thermomagnetic coefficient is required. However, the most widely used functional fluids, prepared by dispersing magnetite nanoparticles in water, cannot satisfy these requirements [1]. Therefore, the development of temperature sensitive magnetic nanoparticles for functional fluids is of great interest. In particular, FeMB (M = Zr, Hf, V, Nb, Ti, Ta, Mo and W) nanocrystalline alloys are an excellent type of soft magnetic material characterized by high saturation magnetization and permeability [2,3] that, as recently reported, also show a magneto-caloric effect [4][5][6][7]. Nowadays, iron based alloys are mainly fabricated by metallurgical methods, such as arc-melting [8,9] and mechanical alloying [10,11], with which nano-sized and homogeneously distributed particles are difficult to obtain. On the other hand, chemically synthesized magnetic materials may have unique magnetic properties, derived from their small particle sizes and uniform size distribution [12,13]. Thus, developing a chemical synthesis method to prepare temperature sensitive FeMB nanoparticles is of great interest. When selecting a carrier liquid, it is also important to consider the melting point, boiling temperature and evaporation at the working temperature. A good carrier for temperature sensitive functional fluids can be a liquid metal, which exhibits high thermal and electrical conductivity as well as fluidity. When ultrafine particles are dispersed in a liquid metal, functional fluids such as magnetic fluids and magneto-rheological fluids can be prepared. For many years, the synthesis of mercury based magnetic fluids has been investigated; however, liquid mercury is toxic and difficult to handle. Thus, an environment-friendly alternative, liquid gallium, becomes a good choice for the carrier liquid in a functional fluid. In particular, gallium has the advantage of remaining in the liquid state throughout a wide temperature range (303 to 2477 K) and has a very low vapour pressure at atmospheric pressure (9.31×10⁻²¹ Pa at 302.9 K). It has a thermal conductivity much higher than that of traditional organic or aqueous carrier liquids [14]. Thus, liquid gallium-based functional fluids can be used in magneto-caloric energy conversion devices or heat exchange devices, where continuous heat diffusion and cooling can be achieved and the elastic properties can be kept at all times [15]. The aim of this work is to prepare a liquid gallium based temperature sensitive functional fluid. Here, we report for the first time the synthesis of FeMB nanoparticles by a chemical method. In order to increase the affinity of the nanoparticles to liquid gallium, silica coating of the FeNbVB particle surface was then carried out by the sol-gel method, based on the hydrolysis and condensation of TEOS (tetraethyl-orthosilicate). Our attention was focused on the preparation and the temperature dependency of magnetization of the synthesized magnetic nanoparticles and functional fluid.
Preparation of FeMB nanoparticles and silica coating procedure
FeNbVB nanoparticles were prepared by a chemical synthesis based on sodium borohydride reduction. A mixed aqueous solution of niobium fluoride 97 % (NbF 5 , Aldrich), ammonium vanadate 99 % (NH 4 VO 3 , Wako) and iron (II) chloride tetrahydrate 99 % (FeCl 2 ·4H 2 O, Nacalai) was precipitated by using 336 mmol of sodium borohydride 98 % (NaBH 4 , Nacalai) solution. The reagents NbF 5 , FeCl 2 ·4H 2 O and NaBH 4 were dissolved separately in distilled water. NH 4 VO 3 was dissolved in water by adding 0.5 mol/l sulfuric acid solution. The NaBH 4 solution was simultaneously added into the mixture containing the Fe, Nb and V ions. The synthesized particles were then washed several times with ethanol. FeNbB nanoparticles were also prepared in the same way, only without the addition of ammonium vanadate. Finally, the synthesized particles were collected by means of filtration and dried in a desiccator at room temperature. Details of the preparation of the silica coated magnetic nanoparticles were given in a previous publication [16]. These core-shell nanoparticles were then dispersed in liquid gallium at a solid weight fraction of 1.67 mass%.
Composition analysis and characterization
The composition of the samples was determined with inductively coupled plasma optical emission spectrometry (Optima 5300 DV, PerkinElmer) and a field emission scanning electron microscope (JSM-7000F, JEOL) equipped with EDS (energy dispersive spectroscopy). The morphology of the core-shell type nanoparticles and the thickness of the silica layer were observed by using a transmission electron microscope (JEM-2000FX, JEOL) equipped with EDS. A vibrating sample magnetometer (VSM) was also employed to investigate the temperature dependency of the magnetization of the samples. The applied magnetic field was varied between 0 and 0.9 T.
Morphology and composition of FeMB nanoparticles
The content of Fe, Nb, V and B in the samples was quantitatively determined by inductively coupled plasma optical emission spectrometry (ICP-OES), and the results are shown in Table 1. It can be seen that the ratio of iron, niobium and boron is about 78, 7 and 15 mol% for the FeNbB nanoparticles. A drawback is that Nb has a lower resistance to oxidation at elevated temperature. In order to overcome this oxidation problem, 4 mol% of vanadium was incorporated in this work. The synthesized FeNbVB nanoparticles have a ratio of iron, niobium, vanadium and boron of around 80, 3, 4 and 13 mol%. As shown in Figure 1, the as-synthesized FeNbVB nanoparticles have a smaller particle size in general, and are not aggregated as much as the FeNbB particles. The EDS data also showed that the oxygen content of the FeNbVB samples is largely reduced, from 46.67 % to 15.25 %.
Morphology and composition of FeNbVB-SiO 2 core-shell nanoparticles
Generally speaking, naked metal powders are not easily dispersed in liquid gallium directly; on the other hand, refractory metals and ceramics, such as tantalum, tungsten, graphite, stabilized ZrO 2 and quartz, are most stable in gallium [17]. Therefore, in this work, FeNbVB nanoparticles were coated with silica in order to improve the dispersibility of these particles in liquid gallium. Another reason for selecting a silica coating is that it makes it possible to prepare core-shell composite particles, consisting of a FeNbVB core and a silica coating layer, with a density similar to that of liquid gallium [14]. Figure 2 depicts the TEM images and EDS data of the FeNbVB and FeNbVB-SiO 2 core-shell nanoparticles. The FeNbVB particles are mostly in the size range of 30 - 50 nm. By placing the electron beam exactly on these spherical particles, peaks for iron, niobium and vanadium have been observed (Figure 2(a)), and a shell of SiO 2 with a thickness of about 10 nm was also observed on the FeNbVB particles (Figure 2(b)). The formation of silica can be observed from the composition analysis of the coated particles. Moreover, the density of the core-shell particles can be adjusted by controlling the thickness of the silica coating layer. The macroscopic observation of the different particles mixed with liquid gallium is shown in Figure 3. The result clearly shows that the FeNbVB particles without coating can hardly be dispersed into liquid gallium, so that Figure 3(a) shows two separate phases, i.e. residual FeNbVB particles and liquid gallium. Only a small amount of metal powder could be directly dispersed, and a small amount of powder tended to remain on the surface of the liquid gallium. Figure 3(b) shows that the silica coated FeNbVB particles can be well dispersed into liquid gallium, which keeps its brilliant silvery colour. Moreover, the contact angles at the iron substrate / liquid gallium and SiO 2 substrate / liquid gallium interfaces were 158° and 145°, respectively. Contact angles were measured directly from the image of the drop section by using a digital microscope (Keyence VXH-100) equipped with an 18 million-pixel CCD camera. The measurement of the initial contact angle between pure liquid gallium (99.9999 %, 0.1 mL) and the substrate samples was conducted at 313 K under atmospheric pressure. This result also shows that silica coated FeNbVB particles are more easily wetted by liquid gallium compared with FeNbVB particles without silica coating. Therefore, it is reasonable to conclude that the existence of the silica coating improves the dispersion of FeNbVB particles into liquid gallium. Figure 4 shows the magnetization curves for the samples measured between 293 K and 353 K in 5 K steps, and the temperature dependencies of the magnetization for the FeNbVB nanoparticles (Figure 4(a)) and the silica coated FeNbVB nanoparticles (Figure 4(b)) measured in an applied field of 0.9 T. The result shows that the temperature dependency of the magnetization of the silica-coated FeNbVB nanoparticles is almost the same as that of the FeNbVB nanoparticles, though the magnetization values decrease due to the existence of the non-magnetic silica layer (Figure 2(b)). The result also shows that the synthesized FeNbVB particles have a slightly lower saturation magnetization and temperature dependency than FeMB soft magnetic alloys prepared by metallurgical methods [5][6][7]. However, they have a relatively high saturation magnetization and temperature dependency when compared with various types of temperature sensitive ferrites such as Ni-Zn and Mn-Zn ferrites
[18][19][20], which are normally applied to magneto-caloric energy conversion or heat exchange devices. Moreover, the chemically synthesized FeNbVB magnetic particles have smaller particle sizes and a more uniform size distribution than FeMB magnetic alloys obtained from metallurgical methods. Therefore, chemically synthesized FeNbVB magnetic particles are more suitable for preparing the liquid gallium based functional fluid. Figure 4(c) shows the temperature dependency of the liquid gallium based functional fluid dispersing silica coated FeNbVB nanoparticles (1.67 mass% solid fraction). The magnetization value of the synthesized functional fluid decreases, which is caused by the weak magnetic response of liquid gallium and the low solid fraction of magnetic particles. However, the magnetization of the synthesized functional fluid still shows a temperature dependence, i.e. it decreases with increasing temperature.
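The remark above that the density of the core-shell particles can be tuned via the thickness of the silica coating layer can be illustrated with a simple volume-weighted estimate. The short Python sketch below is only illustrative: the bulk densities assumed for the Fe-based core, the silica shell and liquid gallium, as well as the core radius, are nominal literature-style values rather than measured ones for the particles synthesized here.

```python
def core_shell_density(r_core_nm, t_shell_nm, rho_core=7.5, rho_shell=2.2):
    """Volume-weighted density (g/cm^3) of a spherical core-shell particle.
    rho_core ~7.5 (Fe-based alloy) and rho_shell ~2.2 (amorphous silica)
    are assumed bulk values, not measured ones."""
    r = r_core_nm + t_shell_nm
    f_core = (r_core_nm / r) ** 3          # volume fraction of the core
    return rho_core * f_core + rho_shell * (1.0 - f_core)

rho_gallium = 6.1  # assumed density of liquid gallium near room temperature, g/cm^3
for t in (2, 5, 10):
    rho = core_shell_density(r_core_nm=20, t_shell_nm=t)
    print(f"shell {t:2d} nm -> {rho:4.2f} g/cm^3 (liquid gallium ~{rho_gallium} g/cm^3)")
```

In this simple picture a thinner shell keeps the composite density close to that of liquid gallium while a thicker shell lowers it; the actual densities of the synthesized particles would of course have to be measured.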
Conclusion
In this study, a liquid gallium based temperature sensitive functional fluid dispersing silica coated FeMB nanoparticles was synthesized. The results show that the FeNbVB nanoparticles exhibited a higher oxidation resistance than the FeNbB nanoparticles. The FeNbVB nanoparticles were in the size range of 30 - 50 nm and the thickness of the silica layer was about 10 nm. Both the synthesized nanoparticles and the functional fluid showed a temperature sensitive magnetization within the testing temperature range of 293 - 353 K. Therefore, the liquid gallium based temperature sensitive functional fluid dispersing silica coated FeNbVB nanoparticles is considered an attractive working liquid for magneto-caloric energy conversion and heat exchange applications due to its high saturation magnetization and high temperature dependence even at a low solid fraction. | 2022-06-28T00:52:42.295Z | 2009-01-01T00:00:00.000 | {
"year": 2009,
"sha1": "2a2faa462db14922755af59aa2b6779f217fa683",
"oa_license": null,
"oa_url": "http://iopscience.iop.org/article/10.1088/1742-6596/149/1/012108/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "2a2faa462db14922755af59aa2b6779f217fa683",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Physics"
]
} |
218816837 | pes2o/s2orc | v3-fos-license | Fundamental Research on Electronic Image Recognition of Cylindrical Zno Nanorods Based on Deep Learning
ZnO is recognized as one of the most important photonic materials in the blue-violet region due to its direct wide band gap and large exciton binding energy. Since ZnO nanorod arrays exhibit superior optical and field emission properties, many efforts have been made in the fabrication of vertically ordered ZnO nanorod arrays. The shape and size of ZnO nanorods have a significant effect on the PEC (photoelectrochemical) properties. In order to efficiently recognize and measure the shape and size of ZnO nanorods, a new method based on the deep learning model Mask R-CNN is proposed to detect cylindrical ZnO nanorods. SEM images of ZnO nanorods were used as the training data set. The sizes of the bounding boxes generated by the model are adjusted to make them more suitable for the data set, and the NMS (non-maximum suppression) algorithm is improved to reduce the missed detection rate, achieving a good detection result on the SEM images of ZnO nanorods.
Introduction
ZnO is a semiconductor material with piezoelectric and photoelectric properties. The band gap is 3.37 eV at room temperature and the exciton binding energy is 60 meV [1]. Compared with other common semiconductors used for photoelectrochemical water decomposition, such as TiO 2 , WO 3 and Fe 2 O 3 , ZnO is one of the most important photoelectrochemical water splitting materials due to its high electron mobility, low manufacturing cost and good light trapping performance. One-dimensional ZnO nanostructures are characterized by a large specific surface area, a fast electron transport rate and a quantum confinement effect. In recent years, various devices based on ZnO nanorod arrays have shown broad application in many fields such as quantum dot-sensitized solar cells, photoelectrochemical water splitting and gas sensing devices [2][3][4].
The shape and size of the ZnO nanorods affect the PEC properties. ZnO nanorods are usually observed by using a scanning electron microscope (SEM); however, manual inspection consumes a lot of time, manpower and material resources. Using image recognition algorithms [5] to identify ZnO nanorods is an efficient new measurement method. Since convolutional neural networks achieved great success in image classification [6] on the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) [7,8], object detection algorithms based on deep learning have been widely used in various object detection tasks. In this paper, the SEM images of ZnO nanorods are used as a data set, and the convolutional neural network model Mask R-CNN is trained to detect cylindrical ZnO nanorods.
Mask R-CNN model
Mask R-CNN [9] is an instance segmentation algorithm based on Faster R-CNN [10], proposed by Kaiming He et al. Mask R-CNN is a convolutional neural network model based on ROIs (regions of interest). The model structure is shown below. The steps of Mask R-CNN are: (1). The input images, after being cropped, are fed into the backbone consisting of a ResNet101 [11] and an FPN [12] network for feature extraction to obtain the feature maps.
(2). Each pixel of the feature maps generates a predetermined number of bounding boxes as candidate ROIs.
(3). These candidate ROIs are sent to the RPN (region proposal network) for binary classification (foreground or background) and bounding box regression, and some candidate ROIs are filtered out by the NMS [13] algorithm. (4). An ROIAlign operation is applied to the remaining ROIs to associate the original images with the feature maps. (5). The remaining ROIs are classified, their bounding boxes are refined by regression, and the masks of the objects are generated by the FCN (fully convolutional network) [14].
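The pipeline above can be made concrete with a minimal inference sketch. The paper's implementation uses TensorFlow (see the Experiment section); purely as an illustration of the model's inputs and outputs, the snippet below uses the equivalent Mask R-CNN (ResNet-50 + FPN) model shipped with torchvision, and the two-class setting (background plus cylindrical nanorod) is an assumption matching the single labelled class described later.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Mask R-CNN with a ResNet-50 + FPN backbone; num_classes=2 assumes
# background + one object class (cylindrical ZnO nanorod).
model = maskrcnn_resnet50_fpn(num_classes=2)
model.eval()

# Dummy 3-channel tensor standing in for a cropped SEM frame (H=384, W=512).
image = torch.rand(3, 384, 512)
with torch.no_grad():
    prediction = model([image])[0]

# Per detected instance: bounding box (x1, y1, x2, y2), class label,
# confidence score and a soft mask produced by the FCN mask head.
print(prediction["boxes"].shape, prediction["labels"].shape,
      prediction["scores"].shape, prediction["masks"].shape)
```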
Evaluation of the model
The prediction results of the model can be divided into four categories: TP (true positive samples), FP (false positive samples), TN (true negative samples), and FN (false negative samples). The accuracy (precision) and recall rate of the model are defined as

accuracy = TP / (TP + FP),    recall = TP / (TP + FN).

The performance of the model is evaluated by the average precision. Plotting the curve of accuracy against recall, the average precision is the area under this curve,

AP = ∫₀¹ p(r) dr,

where p(r) denotes the accuracy at recall r.
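A minimal numerical sketch of these metrics is given below; the operating points are made-up illustrative numbers, and the all-point trapezoidal integration is only one simple way to approximate the area under the precision-recall curve (evaluation suites such as COCO use interpolated variants).

```python
import numpy as np

def precision_recall(tp, fp, fn):
    """accuracy (precision) = TP/(TP+FP), recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

def average_precision(precisions, recalls):
    """Approximate area under the precision-recall curve: sort the operating
    points by recall and integrate with the trapezoidal rule."""
    order = np.argsort(recalls)
    p = np.asarray(precisions, dtype=float)[order]
    r = np.asarray(recalls, dtype=float)[order]
    return float(np.trapz(p, r))

# Illustrative operating points obtained at decreasing confidence thresholds.
p_points, r_points = [], []
for tp, fp, fn in [(20, 0, 80), (45, 5, 55), (65, 16, 35), (80, 34, 20)]:
    p, r = precision_recall(tp, fp, fn)
    p_points.append(p)
    r_points.append(r)

print(average_precision(p_points, r_points))
```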
Optimization of the model
Feature maps with a size of 16×16 are obtained after the convolutional layers, and each pixel of the feature map generates bounding boxes of a predetermined number and size. In the original model, each pixel generates nine bounding boxes with sizes of (128, 256, 512) and aspect ratios of (0.5, 1, 2).
Considering that many small-sized nanostructures exist in the images, bounding boxes with sizes of 32 and 64 are added. Considering that most of the ZnO nanorods are relatively slender, the aspect ratios of the bounding boxes are adjusted, as illustrated in the sketch below.
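The following sketch shows how such a grid of candidate bounding boxes (anchors) can be generated for a 16×16 feature map; the stride value and the width/height parameterization are assumptions for illustration, while the scales (32, 64, 128, 256, 512) and aspect ratios (0.3, 1, 3) follow the adjusted settings described in the text.

```python
import numpy as np

def generate_anchors(feature_size=16, stride=32,
                     scales=(32, 64, 128, 256, 512),
                     aspect_ratios=(0.3, 1.0, 3.0)):
    """Candidate boxes (x1, y1, x2, y2) centred on every feature-map pixel.
    'stride' maps feature-map coordinates back to image pixels (assumed value)."""
    anchors = []
    for i in range(feature_size):
        for j in range(feature_size):
            cx, cy = (j + 0.5) * stride, (i + 0.5) * stride
            for s in scales:
                for ar in aspect_ratios:
                    # keep the box area close to s*s while varying the width/height ratio
                    w, h = s * np.sqrt(ar), s / np.sqrt(ar)
                    anchors.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.array(anchors)

anchors = generate_anchors()
print(anchors.shape)  # (16 * 16 * 5 * 3, 4) = (3840, 4) candidate boxes
```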
After the bounding boxes are generated, each object corresponds to multiple bounding boxes, but each object in the final detection only corresponds to one bounding box, so the NMS algorithm is used to filter out some bounding boxes. After the bounding boxes are sent to the RPN, each box obtains a confidence score. The NMS algorithm filters out some bounding boxes according to the confidence scores and the IOU between different bounding boxes. The IOU between two candidate boxes b_i and b_j is calculated as

IOU(b_i, b_j) = area(b_i ∩ b_j) / area(b_i ∪ b_j),

i.e. the area of the intersection of the two boxes divided by the area of their union. The steps of NMS are: (1). Select the box with the highest confidence score from the bounding box set, put it in the final ROIs set, and remove it from the original set.
(2). Calculate the IOU between the box with the highest score and each of the remaining boxes in turn. If the IOU is greater than or equal to a threshold, the box is considered to correspond to the same object as the box with the highest score, so the box is removed.
(3). Repeat the above steps until there are no remaining boxes in the original bounding boxes set. The NMS algorithm has disadvantages. The removal of the bounding boxes depends on the threshold, but it is difficult to set a suitable threshold. There are many overlapping nanorods in the images. If the threshold is set inappropriately, it is easy to filter out some bounding boxes that should not be filtered out.
Considering this situation, the SOFT-NMS algorithm [12] is used instead of the NMS algorithm. Different from the NMS algorithm, when the IOU of a box b_i and the box b_m with the highest score is greater than the threshold T, the SOFT-NMS algorithm does not directly filter out the box b_i, but applies a Gaussian penalty to reduce its confidence score s_i:

s_i = s_i · exp( −IOU(b_m, b_i)² / σ ),

where σ is usually set to 0.5. After multiple iterations, the bounding boxes with low scores will be filtered out.
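A minimal NumPy sketch of this Gaussian re-scoring is shown below; the box layout (x1, y1, x2, y2), the small score threshold used to discard fully decayed boxes, and the toy input values are illustrative assumptions, while σ = 0.5 follows the text (classic hard NMS corresponds to removing overlapping boxes outright instead of decaying their scores).

```python
import numpy as np

def iou(box, boxes):
    """IOU between one box and an array of boxes, all in (x1, y1, x2, y2) format."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=1e-3):
    """Gaussian Soft-NMS: decay the scores of boxes overlapping the current
    highest-scoring box instead of deleting them; return kept indices."""
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float).copy()
    idx = np.arange(len(scores))
    keep = []
    while idx.size > 0:
        m = idx[np.argmax(scores[idx])]        # box with the highest remaining score
        keep.append(int(m))
        idx = idx[idx != m]
        if idx.size == 0:
            break
        overlaps = iou(boxes[m], boxes[idx])
        scores[idx] *= np.exp(-(overlaps ** 2) / sigma)  # Gaussian penalty
        idx = idx[scores[idx] > score_thresh]  # drop boxes whose score has decayed away
    return keep

# Toy example: two boxes around the same slender nanorod plus one separate box.
boxes = [[10, 10, 60, 200], [12, 8, 58, 195], [300, 40, 350, 230]]
scores = [0.95, 0.90, 0.80]
print(soft_nms(boxes, scores))  # kept indices, highest-scoring boxes first
```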
Experiment
This paper selects TensorFlow as the framework for implementing Mask R-CNN and uses transfer learning, pre-training the model on the COCO dataset. Although there are no ZnO nanorods in the COCO dataset, the model can learn a lot of other feature information during pre-training, which helps the model converge faster. Finally, the model is trained with the ZnO nanorod data set.
Dataset
The SEM images of ZnO nanorods were used as the data set, which included a training set of 177 images and a validation set of 45 images. The size of the images was 512 × 384 after cropping. The ZnO nanorods contained in the data set are mainly cylindrical and prismatic. VIA was used as the labelling tool to annotate the cylindrical ZnO nanorods in the images and generate the .json dataset file.
Size of bounding boxes
Different aspect ratios were tested to compare the performance of the model: when the aspect ratios are (0.3, 1, 3), the model performs best, so aspect ratios of (0.3, 1, 3) are used in training.
Detection result
The outputs of Mask R-CNN are the class, bounding box and mask of each detected object; the position and size of the object are described by the bounding box, and the outline of the object is described by the mask.
Comparison of model performance before and after optimization
The data set was used to train the model before and after optimization, and the performance was compared. It can be seen that the improved model achieves better performance, with a higher AP and better robustness. Comparing the detection results for the same image before and after optimization, the improved model achieves a better detection effect and effectively reduces the missed detection rate.
Conclusion
The deep learning method can effectively identify ZnO nanorods and achieves a good detection effect in the SEM images. By adjusting the sizes of the bounding boxes and optimizing the NMS algorithm, better performance is achieved on the same data set, and the model is more suitable for the recognition task of ZnO nanorods. Deep learning based identification of nanostructures can also be extended to many other applications. Knowing the SEM magnification, the size of a nanostructure can be calculated from the size of its bounding box and the pixel scale of the image. In future research work, more applications of deep learning in nanostructure identification and measurement will be explored. | 2020-04-16T09:11:51.133Z | 2020-04-15T00:00:00.000 | {
"year": 2020,
"sha1": "fa32ec502272836f9bd4a2a0841883f5a2fb5b78",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/782/2/022034",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "c62ee9674fc1d72fa618fb283620a50a72ee4699",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
211541632 | pes2o/s2orc | v3-fos-license | Towards space-borne monitoring of localized CO2 emissions: an instrument concept and first performance assessment
The UNFCCC (United Nations Framework Convention on Climate Change) requires the nations of the world to report their carbon dioxide (CO2) emissions. Independent verification of these reported emissions is a cornerstone for advancing towards emission accounting and reduction measures agreed upon in the Paris agreement. In this paper, we present the concept and first performance assessment of a compact space-borne imaging spectrometer that could support the task of “monitoring, verification, reporting” (MVR) of CO2 emissions worldwide. With a single spectral window in the short-wave infrared spectral region and a spatial resolution of 50 × 50 m, the goal is to reliably estimate the CO2 emissions from localized sources down to a source strength of approx. 1 MtCO2 yr−1, hence complementing other planned CO2 monitoring missions, like the European Carbon Constellation (CO2M). Resolving CO2 plumes also from medium-sized power plants (1–10 MtCO2 yr−1) is of key importance for independent quantification of CO2 emissions from the coal-fired power plant sector. Through radiative transfer simulations, including a realistic instrument noise model and a global trial ensemble covering various geophysical scenarios, it is shown that an instrument noise error of 1.1 ppm (1σ) can be achieved for the retrieval of the column-averaged dry-air mole fraction of CO2 (XCO2). Despite the limited amount of information from a single spectral window and a relatively coarse spectral resolution, scattering by atmospheric aerosol and cirrus can be partly accounted for in the XCO2 retrieval, with deviations of at most 4.0 ppm from the true abundance for 68 % of the scenes in the global trial ensemble. We further simulate the ability of the proposed instrument concept to observe CO2 plumes from single power plants in an urban area using high-resolution CO2 emission and surface albedo data for the city of Indianapolis. Given the preliminary instrument design and the corresponding instrument noise error, emission plumes from point sources with an emission rate down to the order of 0.3 MtCO2 yr−1 can be resolved, i.e. well below the target source strength of 1 MtCO2 yr−1. Hence, the proposed instrument concept could be able to resolve and quantify the CO2 plumes from localized point sources responsible for approx. 90 % of the power plant CO2 emission budget, assuming global coverage through a fleet of sensors and favourable conditions with respect to illumination and particle scattering.
Introduction
Despite the broad consensus on the negative long-term effects of carbon dioxide (CO 2 ) emissions and the proclaimed efforts reducing these emissions, the atmospheric CO 2 concentrations continue to rise. During the course of 2018, the average CO 2 concentration increased from 407 to 410 ppm at the Mauna Loa observatory, representing the fourth-highest annual growth ever. A large share of these emissions originates from power plants, whose locations and emission rates are catalogued in the CARMA database (Wheeler and Ummel, 2008; Ummel, 2012). To advance towards emission accounting and reduction measures, agreed upon in the Paris agreement in force since 2016, independent verification of reported emissions is of significant importance. To this end space-borne instruments provide a suitable platform where continuous long-term measurements can potentially be combined with a near-global coverage with no geopolitical boundaries.
Most of the currently operating, planned and proposed instruments for passive CO 2 observations from space measure the reflected short-wave infrared (SWIR) solar radiation in several spectral windows covering the oxygen-A (O 2 A) band near 750 nm as well as the weak and strong CO 2 absorption bands near 1600 and 2000 nm, respectively, e.g. GOSAT (Greenhouse Gases Observing Satellite; Kuze et al., 2009, 2016), OCO-2 (Orbiting Carbon Observatory-2; Crisp et al., 2004), TanSat (Liu et al., 2018), GOSAT-2 (Nakajima et al., 2012), OCO-3 (Eldering et al., 2019), MicroCarb (Buil et al., 2011), GeoCarb (Moore III et al., 2018), CarbonSat (Bovensmann et al., 2010; Buchwitz et al., 2013) and G3E (Geostationary Emission Explorer for Europe; Butz et al., 2015). These instruments and instrument concepts further rely on a comparatively high spectral resolution on the order of approx. 0.05 − 0.3 nm, representing resolving powers (ratio of wavelength over the full-width half-maximum of the instrument spectral response function) ranging from approx. 3600 for the strong CO 2 absorption bands near 2000 nm for CarbonSat (Buchwitz et al., 2013) up to > 20 000 for the OCO and GOSAT instruments. Such advanced instruments, like for example GOSAT and OCO-2 that have been operating since 2009 and 2014, respectively, generally target an accuracy and coverage sufficient to study the natural CO 2 cycle on a regional to continental scale (e.g. Guerlet et al., 2013; Maksyutov et al., 2013; Parazoo et al., 2013; Eldering et al., 2017; Chatterjee et al., 2017; Liu et al., 2017), but have also been used to observe and quantify CO 2 gradients on the regional scale caused by anthropogenic CO 2 emissions in urban areas (Kort et al., 2012; Hakkarainen et al., 2016; Schwandner et al., 2017; Reuter et al., 2019). OCO-2 data have further been used to observe strong CO 2 plumes from localized natural and anthropogenic CO 2 sources like volcanoes and coal-fired power plants (Nassar et al., 2017; Schwandner et al., 2017; Reuter et al., 2019), demonstrating the capabilities of imaging spectrometers to monitor CO 2 from space. The spatial resolution of OCO-2 and similar instruments like e.g. OCO-3, TanSat and the planned European CO 2 satellite constellation for CO 2 monitoring, CO2M (on the order of approx. 2-4 km 2 ), does, however, pose a difficulty for the routine monitoring of localized power plant CO 2 emissions, since the plume is usually only sampled by a handful of pixels, where CO 2 plume enhancements cannot be fully separated from the background, making quantitative CO 2 emission rate estimates difficult and vulnerable to cloud contamination and instrument noise propagating into CO 2 retrieval errors. For this reason CO2M will only address isolated large power plants (≳ 10 MtCO 2 yr −1 ) and large urban agglomerations (≳ Berlin) (Kuhlmann et al., 2019) and thus, a large fraction of the emission total will be missed.
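As a quick numerical illustration of the resolving power defined above (wavelength divided by the FWHM of the instrument spectral response function), the following sketch evaluates it for a few of the configurations quoted in this section; the wavelength/FWHM pairings are approximate and chosen for illustration only.

```python
def resolving_power(wavelength_nm, fwhm_nm):
    """Resolving power = wavelength / FWHM of the instrument spectral response function."""
    return wavelength_nm / fwhm_nm

# Approximate, illustrative pairings based on the values quoted above.
print(round(resolving_power(2000.0, 1.35)))  # ~1480: coarse single-window set-up (Wilzewski et al., 2019)
print(round(resolving_power(2300.0, 10.0)))  # ~230: Hyperion-like SWIR channel
print(round(resolving_power(1600.0, 0.08)))  # ~20000: OCO/GOSAT-class spectrometer
```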
To contribute to closing this gap and expanding on the future CO 2 monitoring from space, we here present the concept and a first performance assessment of a space-borne imaging spectrometer that could be deployed for the dedicated monitoring of localized CO 2 emissions. By targeting power plants with an annual emission rate down to approx. 1 MtCO 2 yr −1 , a substantial fraction (on the order of 90 %) of the CO 2 emissions from power plants, and hence a significant part of the global man-made CO 2 emission budget in total, could be observed (given a global coverage through a fleet of instruments). As shown
in Fig. 1, it is of key importance to also cover the medium-sized power plants (1-10 MtCO 2 yr −1 ), as they alone contributed approx. 64 % of the CO 2 emissions from power plants in 2009, according to the CARMA v3.0 data (Wheeler and Ummel, 2008; Ummel, 2012). To achieve this, the proposed instrument has an envisaged spatial resolution of 50×50 m 2 . With such a dense spatial sampling, averaging of plume enhancements and background concentration fields is avoided. This leads to an enhanced contrast compared to a coarser spatial resolution. To increase the number of collected photons, and hence the signal-to-noise ratio (SNR) and relative precision of the CO 2 concentration retrievals, such a high spatial resolution has to be compensated for with a rather coarse spectral resolution.
To further compensate for the limited spatial coverage of a single instrument, a comparatively compact and low-cost instrument design is an important aspect, as it would allow for a fleet of instruments to be deployed, increasing the spatial coverage. Wilzewski et al. (2019) recently demonstrated that atmospheric CO 2 concentrations can be retrieved with an accuracy < 1 % using such a comparatively simple spectral set-up with one single spectral window and a relatively coarse spectral resolution of approx. 1.3-1.4 nm (resolving power of 1400-1600). Thompson et al. (2016) demonstrated the ability to resolve and quantify methane (CH 4 ) plumes, posing a similar remote sensing challenge as CO 2 , using data from the space-borne Hyperion imaging spectrometer, with a spectral and spatial resolution of 10 nm (resolving power around 230) and 30 m, respectively. Observation of emission plumes, from plume detection to enhancement quantification and flux estimation, using imaging spectroscopy with a single narrow spectral window and a spectral resolution as coarse as 5 to 10 nm (resolving power around 200-500) has further been repeatedly demonstrated using airborne imaging spectroscopy data for both CO 2 (Dennison et al., 2013; Thorpe et al., 2017) and CH 4 (Thorpe et al., 2014; Thompson et al., 2015; Thorpe et al., 2016, 2017; Jongaramrungruang et al., 2019), including airborne instruments primarily dedicated to the quantitative imaging of CH 4 , but also CO 2 , plumes (Thorpe et al., 2016). The GHGSat-D satellite of the Canadian company GHGSat Inc. was launched in 2016 as a demonstrator for a satellite constellation concept targeting the detection of CH 4 plumes from individual point sources within selected approx. 10 × 10 km 2 target regions at a spectral and spatial resolution of 0.1 nm (resolving power around 16 000) and 50 m, respectively (Varon et al., 2018). Varon et al. (2019) recently showed how anomalously large CH 4 point sources can be discovered with GHGSat-D observations.
Given the results from previous studies and the technology at hand, we are confident that the proposed instrument concept presented here could be realised and that it would be an important complement to the fleet of current and planned space-borne CO 2 instruments, allowing for the routine quantitative monitoring of CO 2 emissions from large and medium-sized power plants and the estimation of corresponding CO 2 emission rates. The proposed instrument concept would also serve as a good complement and companion to CO2M, by targeting also medium-sized power plants and providing high-resolution images with finer CO 2 plume structures. The added value of such an instrument would be of interest, both in terms of advancing science as well as in providing independent emission estimates that could be used to verify reported CO 2 emission rates at facility level and inform policy makers on the progress of reducing man-made CO 2 emissions. The proposed instrument concept is described in Sect. 2, followed by a description of the instrument noise model in Sect. 3. A global performance assessment addressing instrument noise and the errors introduced by atmospheric aerosol is presented in Sect. 4. The ability to monitor
single CO 2 emission plumes at urban scale is further simulated in Sect. 5. A short summary and our concluding remarks are finally presented in Sect. 6.
Mission and instrument concept
The instrument concept presented in this paper is based on a space-borne push-broom imaging grating spectrometer, measuring spectra of reflected solar radiation in one single SWIR spectral window, from which the column-averaged dry-air mole fraction of CO 2 (XCO 2 ) can be retrieved. With an expected instrument mass of approx. 90 kg, it is suitable for deployment on small satellite buses. Since the proposed instrument is targeting the quantification of localized CO 2 emissions from e.g. coal-fired power plants, a high spatial resolution of 50 × 50 m 2 is envisaged. The instrument is designed to fly in a sun-synchronous orbit at an altitude of 600 km with a local equatorial crossing time of 13:00.
The preliminary optical design assumes a 15 cm aperture and is based on a Three-Mirror-Anastigmat (TMA) telescope, combined with an Offner-type spectrometer, as shown in Fig. 2. The optic system relies on metal-based mirrors and is designed as an athermal configuration for a wide temperature range onboard the satellite. The three mirrors of the TMA are standard aspheres aligned on a single optical axis. The efficiency of the optical bench (throughput), including e.g. transmittance and grating efficiency, is estimated to be 0.48, and the f-number (f num ), equal to the ratio of focal length to aperture diameter, amounts to 2.4. The dispersed electromagnetic radiation is then focused onto a two-dimensional array detector that captures the spatial and spectral information. In order to reach a sufficient signal-to-noise ratio (SNR), the proposed spatial resolution only allows for a relatively coarse spectral resolution. Wilzewski et al. (2019) used spectrally degraded GOSAT soundings to demonstrate the capability of retrieving XCO 2 from a single spectral window at such a coarse spectral resolution using a spectral set-up (in terms of spectral range, resolution and oversampling ratio) compact enough to fit onto 256 detector pixels. They evaluate two alternative spectral set-ups covering the spectral ranges 1559-1672 nm (hereafter also referred to as SWIR-1) and 1982-2082 nm (hereafter also referred to as SWIR-2), each with a spectral resolution (full-width half-maximum (FWHM) of the instrument spectral response function) of 1.37 nm and 1.29 nm, respectively, and an oversampling ratio of three. The resolving power of the SWIR-1 and SWIR-2 set-ups amounts to approx. 1200 and 1600, respectively. For optics design reasons, we use a spectral oversampling ratio of 2.5 in this study, resulting in a spectral sampling distance of approx. 0.55 and 0.52 nm for SWIR-1 and SWIR-2, respectively. Simulated synthetic measurements of spectral radiances for the two prospective spectral set-ups are shown in Fig. 3, assuming a Gaussian instrument response function with FWHM of 1.37 nm and 1.29 nm, respectively, as proposed by Wilzewski et al. (2019). The SWIR-1 window (Fig. 3a) exhibits two weak CO 2 absorption bands around 1568-1585 nm and 1598-1615 nm and has the advantage of a stronger top-of-atmosphere (TOA) signal due to higher solar irradiance and surface albedo at these wavelengths. It also allows for the simultaneous retrieval of CH 4 using the CH 4 absorption band near 1666 nm. The SWIR-2 window, on the other hand, exhibits two stronger CO 2 absorption bands around 1995-2035 nm and 2045-2080 nm and has higher sensitivity to atmospheric aerosol that can potentially be exploited during the XCO 2 retrieval. Wilzewski et al. (2019) showed similar performance for SWIR-1 and SWIR-2, respectively, but suspect SWIR-2 to be the favourable spectral set-up given the stronger CO 2 absorption bands, the ability to account for particle scattering and the lower radiance SNR required to reach sufficiently small XCO 2 noise errors. In this paper, we further investigate the performance of the two spectral set-ups in order to finally conclude on the more suitable one given the preliminary instrument design and realistic instrument SNR assumed here.
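As a quick cross-check of these numbers, the short Python sketch below (not part of the paper) reproduces the resolving powers and spectral sampling distances from the quoted window edges, FWHM values and the oversampling ratio of 2.5; using the window centre as the reference wavelength is our own simplification.

```python
# Not from the paper: cross-check of the quoted spectral set-up numbers.
# resolving power = (centre wavelength) / FWHM; sampling = FWHM / oversampling.
def spectral_setup(lambda_min_nm, lambda_max_nm, fwhm_nm, oversampling):
    center = 0.5 * (lambda_min_nm + lambda_max_nm)   # representative wavelength [nm]
    resolving_power = center / fwhm_nm
    sampling_nm = fwhm_nm / oversampling             # spectral sampling distance [nm]
    n_pixels = (lambda_max_nm - lambda_min_nm) / sampling_nm
    return resolving_power, sampling_nm, n_pixels

for name, lo, hi, fwhm in [("SWIR-1", 1559.0, 1672.0, 1.37),
                           ("SWIR-2", 1982.0, 2082.0, 1.29)]:
    rp, ds, npix = spectral_setup(lo, hi, fwhm, oversampling=2.5)
    print(f"{name}: resolving power ~{rp:.0f}, sampling ~{ds:.2f} nm, ~{npix:.0f} spectral pixels")
```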
The instrument is designed to have a radiance SNR of 100 at the continuum for a reference scene with a Lambertian surface albedo of 0.1 and solar zenith angle (SZA) of 70°. Given the altitude of 600 km and the corresponding orbital velocity of 7562 m s −1 , the instrument traverses along one 50 m ground pixel in approx. 7.2 ms. The amount of photons collected over the course of 7.2 ms is, however, not enough to reach a SNR of 100. To increase the SNR, we suggest increasing the integration time to 70 ms. This would normally lead to elongated ground pixels (approx. 50 × 500 m 2 ), but by using forward motion compensation (FMC), the instrument pointing can be periodically altered in the along-track direction, such that each ground pixel is sampled for a time period longer than the actual satellite overpass time (see e.g. Sandau, 2010; Abdollahi et al., 2014). FMC has the evident drawback that the coverage along the satellite track is discontinuous, since no data are sampled when the instrument returns to the starting forward position. A second disadvantage is the geometrical distortion of the ground pixels, which increases with the maximum off-nadir angle. The baseline design assumes 1000 measurements to be made in the along-track dimension
for each FMC repetition, leading to off-nadir angles up to approx. 20°. Further assuming 1000 detector pixels in the spatial dimension would consequently result in observed tiles on the order of 50 × 50 km 2 . Table 1 summarizes the preliminary mission concept and instrument design parameters assumed for this study. It should be clear that this is a preliminary baseline design used to demonstrate the CO 2 monitoring abilities and added value of the proposed instrument concept. Alternative instrument designs will be further investigated and the exact instrument design will most likely be subject to change before the instrument would be realized. The continuum SNR for our reference scene should, nevertheless, remain at roughly 100, ensuring a similar performance as presented in this paper.
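The following sketch illustrates the FMC bookkeeping described above. The conversion from orbital to ground-track velocity (scaling by R_E/(R_E + h)) is our assumption; it reproduces the ~7.2 ms ground-pixel dwell time quoted in the text and shows how strongly the 70 ms integration time stretches the sampling.

```python
# Assumed geometry: ground-track velocity = orbital velocity * R_E / (R_E + h).
R_E = 6371e3        # Earth radius [m]
h = 600e3           # orbit altitude [m]
v_orbit = 7562.0    # orbital velocity [m/s]
gsd = 50.0          # ground sampling distance [m]
t_int = 70e-3       # integration time per ground pixel [s]
n_along = 1000      # measurements per FMC repetition
n_across = 1000     # detector pixels in the spatial dimension

v_ground = v_orbit * R_E / (R_E + h)   # ground-track velocity [m/s]
t_dwell = gsd / v_ground               # natural dwell time over one 50 m pixel [s]
stretch = t_int / t_dwell              # slow-down factor that FMC must provide

print(f"ground-track velocity ~{v_ground:.0f} m/s, dwell time ~{t_dwell * 1e3:.1f} ms")
print(f"FMC must slow the effective scan by a factor of ~{stretch:.1f}")
print(f"tile size ~{n_along * gsd / 1e3:.0f} x {n_across * gsd / 1e3:.0f} km^2")
```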
Instrument noise model
To assess the performance of the proposed instrument concept w.r.t. retrieving XCO 2 and monitoring localized CO 2 emissions, the expected instrument noise levels that accompany the measurements have to be quantified. To this end a numerical instrument noise model that calculates the instrument's SNR is developed, following a similar approach as e.g. Bovensmann et al. (2010) and Butz et al. (2015). The SNR is given by SNR = S / σ tot , where S is the signal, i.e. the number of photons emerging from a 50×50 m 2 ground pixel that generate a charge in the detector, and σ tot is the corresponding instrument noise. The signal S is calculated as S = L λ · (π·A det )/(4·f num 2 ) · η · Q e · ∆λ · t int , where L λ is the simulated reflected solar spectral radiance at the telescope, A det the detector pixel area, f num the instrument's f-number, η the efficiency of the optical bench, Q e the detector's quantum efficiency, ∆λ the wavelength range covered by a single detector pixel and t int the integration time between the detector pixel read-outs. Following the thin lens equation (for large distances between lens and object) and the magnification formula, the term (π·A det )/(4·f num 2 ) can also be expressed as A ap · Ω, where A ap is the area of the aperture and Ω the instrument's solid angle, i.e. the squared ratio of the ground sampling distance (50 m) over the orbit altitude (600 km). Apart from L λ , which is calculated for each scene using a forward radiative transfer model, all quantities in Eq. 2 and their corresponding values were introduced in Sect. 2.
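To make this signal budget concrete, the sketch below evaluates it numerically. The continuum radiance and quantum efficiency are assumed placeholder values (they are not given explicitly in the text); aperture, throughput, spectral sampling and integration time follow Sect. 2, and the conversion via the photon energy assumes L_λ is given in energy rather than photon units.

```python
import math

h_planck = 6.626e-34   # Planck constant [J s]
c_light = 2.998e8      # speed of light [m/s]

def signal_electrons(L_lambda, wavelength_m, aperture_d_m, gsd_m, altitude_m,
                     eta, q_e, d_lambda_nm, t_int_s):
    """Photo-electrons per detector pixel: S = L * A_ap * Omega * eta * Q_e * dlambda * t_int / E_photon."""
    A_ap = math.pi * (aperture_d_m / 2.0) ** 2       # aperture area [m^2]
    Omega = (gsd_m / altitude_m) ** 2                # solid angle of one ground pixel [sr]
    E_photon = h_planck * c_light / wavelength_m     # photon energy [J]
    energy = L_lambda * A_ap * Omega * eta * d_lambda_nm * t_int_s  # collected energy [J]
    return energy / E_photon * q_e

# Assumed continuum radiance (W m^-2 sr^-1 nm^-1), roughly representative of the
# albedo-0.1 / SZA-70 deg reference scene; quantum efficiency 0.85 is assumed.
S = signal_electrons(L_lambda=1e-3, wavelength_m=2.0e-6, aperture_d_m=0.15,
                     gsd_m=50.0, altitude_m=600e3, eta=0.48, q_e=0.85,
                     d_lambda_nm=0.52, t_int_s=70e-3)
print(f"continuum signal ~{S:.2e} electrons per detector pixel")
```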
The total noise σ tot in Eq. 1 accounts for the noise contribution from five separate instrument noise sources, σ tot = √(σ ss 2 + σ bg 2 + σ dc 2 + σ ro 2 + σ qz 2 ), where σ ss = √S is the signal shot noise, σ bg is the noise due to thermal background radiation incident on the detector, σ dc is the noise due to dark current in the detector, σ ro is the noise upon detector read-out and σ qz the quantization noise that arises when the analog signal is digitized. The thermal background signal S bg per detector pixel is approximated from E BB , the thermal black-body irradiance incident on the detector. E BB is determined by integrating the black-body spectral radiance L λ,BB (T bg ) emitted by the background over the detector's cut-off wavelengths λ 1 and λ 2 and the hemispheric opening angle. For this study, detector cut-off wavelengths of 900 and 2500 nm are assumed and the background temperature T bg is estimated to be 200 K. The thermal background noise is then calculated as σ bg = √S bg . Similarly, the dark current noise is given by σ dc = √S dc , where S dc = I dc · t int · Q is the per-pixel detector signal due to dark current. While Q = 6.242 · 10 18 electrons Coulomb −1 is constant, the dark current I dc strongly depends on the detector's operating temperature and is estimated to be 1.6 fA pix −1 s −1 (assuming 150 K detector temperature), yielding a dark current signal of approx. 10 000 electrons (e − ) per detector pixel and second. Finally, the read-out noise (σ ro ) and quantization noise (σ qz ) are estimated to be 100 and 40 e − , respectively. These noise levels are preliminary estimates used to test and evaluate the instrument concept, but are comparable to those of state-of-the-art detectors for space applications. Figure 4a shows the continuum SNR (calculated with Eqs. (1)-(5)) as a function of the scene brightness for the two prospective spectral set-ups SWIR-1 and SWIR-2. The scene brightness describes the conversion from incident solar irradiance to reflected solar radiance and is calculated as the product of the surface albedo and the cosine of the SZA, divided by π, hence assuming a Lambertian surface. Furthermore, Fig. 4b visualizes the individual contributions from the different noise sources for the SWIR-2 set-up. Since the instrument design is assumed to be similar, independent of whether the SWIR-1 or SWIR-2 set-up is finally used, the SNR is consistently higher for SWIR-1 compared to SWIR-2, as a result of the higher surface albedo at these wavelengths. For the reference scene (albedo = 0.1, SZA = 70°), the continuum SNR is approx. 180 and 100 for SWIR-1 and SWIR-2, respectively. When looking at the contributions from the different instrument noise sources, it is clear that the read-out noise and signal shot noise are the major contributors, whereas the noise arising from quantization errors, dark current and thermal background radiation has a small or even negligible contribution in comparison. The signal shot noise is, however, smaller than the dark current, read-out noise and quantization noise inside the CO 2 absorption bands, where the signal, and hence the signal shot noise, decreases. Note that all noise terms, except for the signal shot noise σ ss , are constant.
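A minimal numerical illustration of this noise budget, combining the five terms in quadrature for an assumed reference-scene signal (the dark current, read-out and quantization values are those quoted above, the background signal is an assumption), is given below; it recovers a continuum SNR of roughly 100.

```python
import math

S = 1.8e4                            # assumed continuum signal [e-] (see previous sketch)
t_int = 70e-3                        # integration time [s]

sigma_ss = math.sqrt(S)              # signal shot noise
sigma_bg = math.sqrt(50.0)           # thermal background noise (assumed small S_bg)
sigma_dc = math.sqrt(1.0e4 * t_int)  # dark current: ~10 000 e-/s per pixel
sigma_ro = 100.0                     # read-out noise [e-]
sigma_qz = 40.0                      # quantization noise [e-]

sigma_tot = math.sqrt(sigma_ss**2 + sigma_bg**2 + sigma_dc**2
                      + sigma_ro**2 + sigma_qz**2)
print(f"continuum SNR ~ {S / sigma_tot:.0f}")   # roughly 100 for the reference scene
```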
4 Generic performance evaluation
In this section we conduct a first performance evaluation of the proposed instrument concept by assessing the XCO 2 retrieval errors expected on a global scale. Such errors arise due to instrument noise and because of inadequate knowledge about the light path through the atmosphere due to scattering aerosol and cirrus particles. For this purpose we use a global trial ensemble with a large collection of geophysical scenarios with varying atmospheric gas concentrations, meteorological conditions, surface albedo, SZA as well as aerosol and cirrus compositions, that can be expected to be observed by a polar orbiting instrument. The same methodology and dataset have been used in several previous studies to assess the greenhouse gas retrieval performance of different satellite instruments (Butz et al., 2009, 2012, 2015).
The global trial ensemble contains geophysical data representative for the months of January, April, July and October. Surface albedo data are taken from the MODIS (Moderate Resolution Imaging Spectroradiometer) MCD43A4 product (Schaaf et al., 2002). Aerosol optical properties are calculated (assuming Mie scattering) for an aerosol size distribution, superimposed from seven log-normal size distributions and five chemical types at 19 vertical layers, as provided by the ECHAM5-HAM model (Stier et al., 2005). Cirrus optical properties are calculated for randomly orientated hexagonal columns and plates following the ray tracing model of Hess and Wiegner (1994) and Hess et al. (1998). In total the global trial ensemble consists of approx. 10 000 scenes with XCO 2 ranging from 340 to 400 ppm with an average of 382 ppm, albedo ranging from 0 to 0.7 with an average of 0.13 (SWIR-2 window), aerosol optical thickness (AOT) ranging from 0 to 1.1 with an average of 0.18 (SWIR-2 window) and cirrus optical thickness (COT) ranging from 0 to 0.8 with an average of 0.13 (SWIR-2 window). Thus, the global trial ensemble contains challenging scenes with scattering loads that would be filtered out by current satellite retrievals, such as those applied to OCO-2 and GOSAT data, which typically screen scenes with scattering optical thickness greater than 0.3 (at the O 2 A band around 760 nm). All data in the global trial ensemble are re-gridded to a spatial resolution of approx. 2.8° × 2.8°. This is, of course, much coarser than the envisaged 50 × 50 m 2 , but for investigating the propagation of instrument noise into the target quantity XCO 2 on a global scale, this dataset serves its purpose. See previous studies (e.g. Butz et al., 2009, 2010) for further details on the content of the global trial ensemble.
The geophysical data for each scene are fed to our radiative transfer software RemoTeC (Butz et al., 2011; Schepers et al., 2014) in order to simulate corresponding synthetic measurements. The measurement noise is calculated by propagating the simulated radiances through the instrument noise model described in Sect. 3. Simulations are conducted globally for the 16th day of each of the four months January, April, July and October, hence covering SZA conditions ranging from 0 to 86 degrees.
By retrieving XCO 2 from the simulated synthetic spectra, the range of XCO 2 retrieval errors that can be expected with the proposed instrument concept can be estimated, as can the ability to account for atmospheric aerosol. The RemoTeC retrieval algorithm (e.g. Butz et al., 2011) is based on a Phillips-Tikhonov regularization scheme (Phillips, 1962; Tikhonov, 1963) that uses the first-order difference operator as a side-constraint to retrieve the CO 2 partial column profiles, from which XCO 2 can be determined. Additional retrieval parameters are the total column concentrations of H 2 O and CH 4 (only for SWIR-1), surface albedo (as second-order polynomial), spectral shift, solar shift and, possibly, information on scattering aerosol. Here we assume knowledge about the airmass (needed to calculate XCO 2 ); in reality, meteorological and topography data would be required to estimate the airmass.
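For illustration, the sketch below shows a single linear Phillips-Tikhonov step of the kind underlying such a retrieval: a least-squares fit with the first-order difference operator as side constraint on a layered CO 2 profile. The toy Jacobian, dimensions and regularization strength are placeholders; the operational RemoTeC retrieval is nonlinear and iterative.

```python
import numpy as np

def first_difference(n):
    """First-order difference operator (shape (n-1, n))."""
    L = np.zeros((n - 1, n))
    for i in range(n - 1):
        L[i, i], L[i, i + 1] = -1.0, 1.0
    return L

def phillips_tikhonov_step(K, y, gamma):
    """Solve (K^T K + gamma * L1^T L1) x = K^T y for the state vector x."""
    L1 = first_difference(K.shape[1])
    lhs = K.T @ K + gamma * (L1.T @ L1)
    return np.linalg.solve(lhs, K.T @ y)

rng = np.random.default_rng(0)
n_layers, n_spectral = 12, 200
K = rng.normal(size=(n_spectral, n_layers))            # toy Jacobian (placeholder)
x_true = np.linspace(1.00, 1.10, n_layers)             # toy CO2 partial-column profile
y = K @ x_true + rng.normal(scale=0.05, size=n_spectral)
x_hat = phillips_tikhonov_step(K, y, gamma=1.0)
print("column-mean of retrieved profile:", x_hat.mean())
```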
Instrument noise induced XCO 2 errors
In a first step, we assess XCO 2 retrieval errors that are induced by instrument noise. To this end, for now, we neglect scattering by aerosol and cirrus. These so-called non-scattering simulations assume no scattering particles to be present in the atmosphere and simply compute the transmittance along the geometric light path (Rayleigh scattering is included). Figure 5a shows the cumulative distribution of the random XCO 2 noise error, i.e. the instrument noise propagated into the retrieved XCO 2 , while Fig. 5b shows the noise error of each scene as a function of the corresponding scene brightness. The noise errors are significantly smaller for the SWIR-2 set-up (blue) when using the proposed integration time t int of 70 ms. The dashed lines in Fig. 5b show that on average 68 % and 95 % (1σ and 2σ respectively) of the retrievals have noise errors of less than 1.1 and 2.0 ppm, respectively. For the SWIR-1 set-up (green), the corresponding numbers are 2.9 and 5.0 ppm. For the SWIR-2 set-up it is only retrievals over scenes that are darker than our reference scene (albedo = 0.1, SZA = 70°) that are expected to have instrument noise induced errors larger than approx. 2 ppm. For comparison, and as a reference, we also investigate how much the integration time has to be increased for the SWIR-1 set-up, in order to reach a SNR sufficient to yield XCO 2 noise errors comparable to those obtained with the SWIR-2 set-up. We find that with the preliminary instrument design assumed here, the integration time has to be increased to at least 350 ms (i.e. by a factor of five) for SWIR-1 (grey) in order to reach a similar performance.
Despite the advantage of being able to retrieve XCH 4 alongside XCO 2 using the SWIR-1 set-up, the much longer integration time required to reach sufficiently low CO 2 noise errors is not feasible for the purpose of the proposed instrument concept.
Hence, we conclude that the SWIR-2 set-up is superior for the passive satellite based CO 2 monitoring instrument proposed in this paper. Consequently, the remainder of this paper is limited to the SWIR-2 set-up, covering the spectral range 1982-2092 nm with a spectral resolution (FWHM) of 1.29 nm, a resolving power around 1600 and a spectral sampling distance of 0.52 nm.
Atmospheric aerosol and cirrus particles modify the light path of the reflected solar radiation to a certain degree, depending on the particle abundance, optical properties, height and surface albedo. Consequently, this can cause large errors in the retrieved XCO 2 if the effect of CO 2 absorption and particle scattering on the measured reflected solar radiation cannot be adequately separated during the retrieval process. In this section the ability to account for atmospheric aerosol and cirrus during the retrieval is investigated by including scattering by atmospheric particles in the simulation of the synthetic measurements as well as in the corresponding XCO 2 retrievals. This is done by using a more complex forward model and representation of the aerosol and cirrus particles when simulating the spectra, and a comparatively simple representation and forward model for the corresponding retrievals. More precisely, the full physical representation of vertical profiles of hexagonal cirrus particles and spherical aerosol particles of the five chemical types characterized by the seven log-normal size distributions with known micro-physical properties for each aerosol and cirrus particle type is used when simulating the synthetic measurement for each scene in the global trial ensemble. On the contrary, only three aerosol parameters are fitted during the corresponding retrieval: the total column number density, the size parameter of a single mode power-law size distribution and the center height of a Gaussian height distribution. Such differences in the aerosol/cirrus representation lead to forward model errors that, alongside the instrument noise induced errors, propagate into the retrieved quantity XCO 2 . Previous studies have shown that this approach gives a good approximation of how well a satellite sensor can account for scattering by atmospheric aerosol while retrieving target gas concentrations (e.g. Butz et al., 2009, 2010). Figure 6a shows the difference between the XCO 2 retrieved ("retr") from the synthetic measurements and the corresponding "true" XCO 2 used as input to simulate these synthetic measurements. This deviation from the truth contains information on both the random instrument noise error (Sect. 4.1) and systematic errors arising from insufficient modelling of the aerosol and cirrus properties. For comparison, Fig. 6b shows the corresponding results achieved when using a non-scattering retrieval, i.e. where the scattering by atmospheric aerosol and cirrus, now present in the atmosphere and the simulated synthetic spectra, is neglected (similar to Sect. 4.1). The retrieval errors are strongly reduced when the RemoTeC retrieval algorithm accounts for the scattering by atmospheric aerosol. When scattering is considered, 50 %, 68 % and 95 % of the XCO 2 retrievals deviate from the true abundance by less than 2.5, 4.0 and 11 ppm, respectively, with a mean bias of -2.0 ppm and no clear error-correlation with the optical thickness of the scattering particles. For the non-scattering retrieval, the corresponding numbers are 16, 29 and 78 ppm, with a mean bias of -25 ppm that increases with optical thickness, underlining the necessity of accounting for atmospheric aerosol and cirrus when retrieving the XCO 2 .
Scattering particles can modify the light path, and hence the XCO 2 retrieval, in primarily two ways. Firstly, an elevated layer of aerosol or cirrus will scatter parts of the incoming solar radiation towards the observing sensor at a higher altitude compared to the Earth's surface, leading to a reduced light path. Secondly, aerosol and cirrus will extend the light path to some degree as a result of multiple scattering between scattering particles and the surface. Such modifications of the light path will be understood as either too low (overall reduced light path) or too high (overall extended light path) CO 2 concentrations in the atmosphere if scattering cannot be accounted for in the retrieval. Which effect dominates is primarily driven by the surface albedo. This is visualized in Fig. 6d, which shows the difference between retrieved and true XCO 2 as a function of the surface albedo when scattering by aerosol and cirrus is neglected in the retrieval. Over darker surfaces, where the effect of multiple scattering between aerosol and surface is limited, aerosol and cirrus particles scattering the incoming solar radiation towards the sensor higher up in the atmosphere becomes the dominating effect, leading to a reduced light path and underestimation of the XCO 2 .
Over brighter surfaces, where the effect of multiple scattering becomes dominant, the non-scattering retrieval is more likely to overestimate the CO 2 abundance, because the loss of radiation due to an extended light path, resulting from the multiple scattering, is assumed to be caused by more absorbing CO 2 molecules in the atmosphere. Fig. 6c shows the difference between retrieved and true XCO 2 as a function of the surface albedo when scattering by aerosol and cirrus is accounted for when retrieving XCO 2 from the synthetic measurements of the proposed satellite concept. It is clear that when aerosol properties are retrieved alongside the CO 2 abundance, the curve-shaped relationship between the XCO 2 error and surface albedo vanishes, with no clear error-correlation other than that XCO 2 errors increase with decreasing albedo (and thus SNR).
Performance evaluation for an urban case study
While the previous section assessed XCO 2 errors for the range of geophysical conditions to be encountered over the globe, this section evaluates the CO 2 monitoring capabilities at urban scale using high-resolution CO 2 concentration and surface albedo data. Similar to Sect. 4, the high-resolution data are used to simulate synthetic measurements, from which synthetic XCO 2 abundances can be retrieved in order to make a first assessment of the CO 2 monitoring ability of the proposed instrument concept.
Datasets
The Hestia CO 2 emission data are used as input to a Gaussian dispersion model in order to compute a three-dimensional CO 2 concentration field. For a given CO 2 emission rate Q (in g s −1 ), the CO 2 concentration C (in g m −3 ) at a given position (x, y, z) downwind of the emitter is calculated as C(x, y, z) = Q / (2π·u·σ y ·σ z ) · exp(−y 2 /(2σ y 2 )) · [exp(−(z−h) 2 /(2σ z 2 )) + exp(−(z+h) 2 /(2σ z 2 ))], where u is the horizontal wind speed in the x-direction (along-wind), h is the height of the emitting source (in m above ground level) and σ y and σ z are the standard deviations of the concentration distribution (in m) in the horizontal across-wind and vertical dimension, respectively. σ y and σ z , and hence the spread of the emission plume, depend on the atmospheric instability, i.e. the degree of atmospheric turbulence, as well as the downwind distance x from the emitting source. Here, we calculate σ y and σ z assuming the Pasquill-Gifford stability class C (slightly unstable atmosphere). Furthermore, a constant wind speed u = 3 m s −1 and an emitting source height h = 75 m (for all sources) are assumed. This model set-up is comparable to similar studies (e.g. Bovensmann et al., 2010; Dennison et al., 2013).
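A hedged sketch of this dispersion calculation for a single point source is given below. The Briggs-type power-law parameterisation used for σ y and σ z is our stand-in for the Pasquill-Gifford class C curves; source strength, wind speed and stack height follow the values quoted above.

```python
import numpy as np

def plume_concentration(Q, x, y, z, u=3.0, h=75.0):
    """CO2 concentration [g m^-3] at (x, y, z) downwind of a point source of strength Q [g/s]."""
    x = np.maximum(x, 1.0)  # avoid the singularity at the source itself
    # Briggs-type power laws as a stand-in for Pasquill-Gifford class C (slightly unstable)
    sigma_y = 0.11 * x * (1.0 + 0.0001 * x) ** -0.5
    sigma_z = 0.08 * x * (1.0 + 0.0002 * x) ** -0.5
    norm = Q / (2.0 * np.pi * u * sigma_y * sigma_z)
    cross = np.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vert = (np.exp(-(z - h) ** 2 / (2.0 * sigma_z ** 2))
            + np.exp(-(z + h) ** 2 / (2.0 * sigma_z ** 2)))  # ground reflection term
    return norm * cross * vert

Q1 = 3.24e12 / (365.25 * 24 * 3600)   # 3.24 MtCO2/yr expressed in g/s
c = plume_concentration(Q1, x=1000.0, y=0.0, z=75.0)
print(f"plume-centre concentration 1 km downwind: {c:.2f} g m^-3")
```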
Downwind CO 2 concentrations from each emitting source (pixel) in the Hestia dataset are calculated across an equidistant grid at 50 m resolution in all dimensions and the contributions from all individual emitting sources (pixels) are subsequently combined to form a three-dimensional CO 2 concentration field over Indianapolis. Figure 7b shows the resulting (vertically integrated) two-dimensional field of (noise-less) XCO 2 enhancements at 50 × 50 m 2 spatial resolution over a constant background with a surface pressure of 1013 hPa. While weaker diffuse sources like streets cannot be identified, the plumes from stronger point sources are clearly pronounced given the high spatial resolution that allows for a detailed mapping of the plumes.
For comparison, Fig. 7c shows the corresponding XCO 2 enhancements assuming a coarser spatial resolution of 2 × 2 km 2 . Although the stronger plumes can still be identified at the coarser resolution, the XCO 2 enhancements are significantly lower and each plume is only sampled by a few pixels. Figure 7d further shows these XCO 2 enhancements in more detail for three along-plume cross sections, with enhancements reaching approx. 18, 6 and 3 ppm, respectively. This clearly demonstrates the benefit of an instrument with a high spatial resolution when resolving CO 2 emission plumes from space.
Surface albedo data from Sentinel-2
To accurately simulate the instrument SNR and hence the measurement noise, it is important to know how large a fraction of the solar radiation incident on the Earth's surface is reflected back towards space. To get realistic estimates of the surface albedo within the Hestia Indianapolis domain, data from the European Sentinel-2 satellite are used. The multi-spectral instrument aboard Sentinel-2 measures the TOA radiance in 13 spectral bands with a spatial resolution ranging from 10 × 10 m 2 to 60 × 60 m 2 . For this study, we use the Sentinel-2 L1C radiances measured in spectral band 12 (centred at approx. 2200 nm) at a spatial resolution of 20 × 20 m 2 . The software Sen2Cor (ESA, 2018) is employed to compute corresponding L2 surface reflectances from the L1C TOA radiances, through a so-called atmospheric correction.
Surface reflectance data for the month of July 2018 are computed and re-gridded (using nearest neighbour) to the envisaged spatial resolution of 50×50 m 2 . The surface reflectances for Sentinel-2 pixels classified as vegetation are scaled by a factor of 0.82 in order to account for the generally lower reflectance of vegetation in the SWIR-2 window compared to Sentinel-2's band 12. The scaling factor has been derived using spectral reflectance data from the ECOSTRESS spectral library (Baldridge et al., 2009; Meerdink et al.). Figure 8b shows the gridded surface reflectance data for Indianapolis together with a corresponding RGB composite (Fig. 8a), using the Sentinel-2 data from the bands centred at red, green and blue wavelengths, as reference.
The scaled and gridded Sentinel-2 surface reflectance data are taken as representative for the Lambertian surface albedo within the SWIR-2 window.
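The following sketch mimics this preprocessing chain on synthetic arrays: nearest-neighbour re-gridding of a 20 m band-12 reflectance field to the 50 m instrument grid and scaling of vegetation pixels by the factor 0.82. The reflectance values and vegetation mask are synthetic placeholders; real input would come from the Sen2Cor L2 output.

```python
import numpy as np

def regrid_nearest(field, src_res=20.0, dst_res=50.0):
    """Nearest-neighbour re-gridding from src_res to dst_res (metres per pixel)."""
    ny, nx = field.shape
    out_ny = int(ny * src_res / dst_res)
    out_nx = int(nx * src_res / dst_res)
    yy = (np.arange(out_ny) * dst_res / src_res).astype(int)
    xx = (np.arange(out_nx) * dst_res / src_res).astype(int)
    return field[np.ix_(yy, xx)]

rng = np.random.default_rng(1)
reflectance_20m = rng.uniform(0.05, 0.40, size=(500, 500))        # synthetic band-12 reflectance
albedo_50m = regrid_nearest(reflectance_20m)                       # re-grid to the 50 m instrument grid
vegetation = albedo_50m < 0.15                                     # synthetic vegetation mask
albedo_50m = np.where(vegetation, albedo_50m * 0.82, albedo_50m)   # SWIR-2 vegetation scaling
print(albedo_50m.shape)   # -> (200, 200), i.e. a 10 x 10 km tile at 50 m resolution
```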
Background data from CarbonTracker
Background data, including vertical profiles of CO 2 , H 2 O, temperature and pressure, are taken for the 15th of July 2016 from the CarbonTracker CT2017 dataset (Peters et al., 2007, with updates documented at http://carbontracker.noaa.gov). The CarbonTracker CT2017 data over Indianapolis are provided at a spatial resolution of 1° × 1°, meaning that the entire Hestia Indianapolis domain is covered by one single CarbonTracker pixel, leading to a constant background data field.
Simulated CO 2 plume observations
As in Sect. 4, the above sets of input data are used to simulate synthetic measurements (spectral radiances) and corresponding instrument noise of the proposed instrument concept using the forward model and the instrument noise model (Sect. 3). The SZA is calculated for the given coordinates in the Hestia domain assuming the sun-synchronous orbit described in Sect. 2 and an observation date of July 15, 2018, which translates to a SZA of about 18°. Corresponding XCO 2 abundances are then retrieved from the simulated spectral radiances, such that the ability to observe the CO 2 emission plumes from the Hestia Indianapolis data can be evaluated. In this first assessment we focus solely on the instrument performance in terms of its CO 2 plume quantification capabilities and hence we perform the high-resolution simulations with the expected instrument noise induced errors only, i.e. by assuming a non-scattering atmosphere. Figure 9a shows the retrieved field of XCO 2 enhancements w.r.t. the retrieved background XCO 2 over the Hestia domain; dark scenes with albedo < 0.05 have been filtered out due to unreliable XCO 2 retrievals. The CO 2 plume from the strongest point source, E 1 , with an annual CO 2 emission rate of Q 1 = 3.24 MtCO 2 yr −1 , is clearly resolved, with local XCO 2 enhancements well above 100 ppm close to the emitting source. Although they emit considerably less CO 2 , the plumes from the second and third strongest point sources, E 2 and E 3 , with annual CO 2 emission rates of Q 2 = 0.55 MtCO 2 yr −1 and Q 3 = 0.48 MtCO 2 yr −1 , respectively, can be clearly separated from the background as well. The plume from the fourth strongest point source, E 4 , with an annual CO 2 emission rate of Q 4 = 0.32 MtCO 2 yr −1 , can also be observed, but is partly obscured by filtered-out dark surface areas, where retrieval errors are too high. Plumes from weaker point sources (below approx. 0.1 MtCO 2 yr −1 ) and other sources like e.g. streets and highways cannot be identified given the spatial resolution and instrument noise errors of the proposed instrument.
One concern with high-resolution CO 2 remote sensing is the impact of the albedo heterogeneity at urban scale at such a high spatial resolution. For the non-scattering scenario simulated here, the second-order polynomial albedo fitted by the retrieval algorithm matches the reference input albedo with an average (absolute) deviation of 0.14 %, and there is consequently no spatial variability in the accuracy of the albedo retrieval that in turn would affect the XCO 2 retrieval accuracy. There is, however, the evident effect that a higher albedo leads to a higher SNR and hence a generally lower noise error. This is evident from Fig. 9b, showing the difference between the retrieved and true XCO 2 , thus illustrating an instantaneous noise error field that would be expected for a single satellite overpass. Generally, the deviations from the true XCO 2 are smaller over areas of brighter surfaces like concrete, whereas the deviations are larger over dark surfaces like forests (see also Fig. 8). The effect of albedo heterogeneity in combination with scattering particles is not addressed in this paper and will have to be analysed in future studies.
Across the entire Hestia domain (but excluding dark scenes with albedo < 0.05) 68 % and 95 % of the XCO 2 retrievals deviate from the true XCO 2 by less than 1.1 and 2.3 ppm, respectively. This is slightly lower than the noise error obtained for the global trial ensemble in Sect. 4.1. The plume of the strongest emitter E 1 in Indianapolis with an annual emission rate of Q 1 = 3.24 MtCO 2 yr −1 (Fig. 10, left panels) is clearly resolved. Within the area 200-2200 m downwind of the emitting source (blue square), maximum enhancements exceed 25 ppm, and in total, approx. 200 (60) pixels have enhancements above 4 (8) ppm, representing enhancements of approx. 1 (2) % w.r.t. the background. The average along-track XCO 2 enhancement 200-2200 m downwind of the emitting source (blue/white line) reaches 12 ppm. The plumes from the second and third strongest emitters E 2 and E 3 , approx. six times weaker than E 1 , with annual emission rates of Q 2 = 0.55 and Q 3 = 0.48 MtCO 2 yr −1 , respectively (Fig. 10, center panels), have considerably lower XCO 2 enhancements, but can nevertheless be clearly separated from the background with distinct increments in both per-pixel and average XCO 2 enhancements within the area 200-2200 m downwind of the emitters (blue squares). While the background field varies from approx. -1 to 1 ppm due to instrument noise, the per-pixel plume enhancements vary from approx. 0.5 to 3 ppm, with single enhancements exceeding 4 ppm close to the emitting source. The average along-track XCO 2 enhancements 200-2200 m downwind of the emitting sources (blue/white lines) reach 1.9 and 1.5 ppm for E 2 and E 3 , respectively. Despite being partly obscured by filtered-out dark surfaces (water), the plume from the fourth strongest emitter E 4 , with an annual emission rate of Q 4 = 0.32 MtCO 2 yr −1 (Fig. 10, right panels), can also be separated from the background, both when looking at the two-dimensional field and the per-pixel enhancements within the area 200-2200 m downwind of the emitter. With average along-track XCO 2 enhancements of at most approx. 1.3 ppm, the proposed instrument concept is, however, approaching the limit of what it could achieve in terms of CO 2 plume observation under favourable conditions, i.e. where the effect of aerosol-induced errors is neglected and the SZA is relatively low. A second peak in the average along-track XCO 2 enhancements is observed approx. 850 m above (north of) the fourth strongest emitter E 4 . This enhancement stems from the CO 2 plume from the seventh strongest emitter in Indianapolis (labelled as E 7 in the top-right panel of Fig. 10) with an annual emission rate of Q 7 = 0.1 MtCO 2 yr −1 . Quantifying the CO 2 emission rate from such a weak source is, however, not realistic given the low sampling density (especially further downwind) in combination with the weak per-pixel enhancements.
Conclusions
To follow the progress on reducing anthropogenic CO 2 emissions worldwide, independent monitoring systems are of key importance. In this paper, we show how a proposed concept of an imaging spectrometer, to be employed on a space-borne platform, can be used to map CO 2 emission plumes from localized point sources at a spatial resolution of 50 × 50 m 2 for target tiles on the order of 50 × 50 km 2 and, hence, contribute to the independent large-scale verification of reported CO 2 emission rates at facility level.
Through radiative transfer simulations using a global trial ensemble, a preliminary, yet realistic, instrument design and an instrument noise model, we show that the expected instrument noise induced XCO 2 errors are smaller than 1.1 and 2.0 ppm for 68 % and 95 % of the retrievals, respectively, using the SWIR-2 spectral set-up covering the CO 2 absorption bands near 2000 nm. For the SWIR-1 spectral set-up, covering the weaker CO 2 absorption bands near 1600 nm, the instrument noise induced XCO 2 errors are significantly higher, making it inadequate for the proposed instrument concept. Although the main focus in this paper is on the performance of the proposed CO 2 monitoring instrument concept, we could also show that despite the usage of a single spectral window and a relatively coarse spectral resolution of 1.29 nm, scattering by highly complex atmospheric aerosol compositions can be partly accounted for during the XCO 2 retrieval on the global scale, limiting the deviation from the true XCO 2 to at most 4.0 ppm for 68 % of the retrievals. This gives us confidence that accurate two-dimensional fields of XCO 2 enhancements could be retrieved from real spectra measured by the proposed instrument concept.
A reasonable a-priori state vector w.r.t. the aerosol properties (e.g. provided through models or a companion aerosol instrument (Hasekamp et al., 2019)) would, however, still be important.
Using high-resolution CO 2 emission data for the city of Indianapolis together with a Gaussian dispersion model, corresponding high-resolution albedo data and additional radiative transfer simulations, we have clearly demonstrated that the instrument is well suited for the task of space-borne CO 2 monitoring of large and medium-sized power plants and can (only limited by its own instrument noise) resolve and quantify emission plumes from point sources with an emission source strength down to the order of 0.3 MtCO 2 yr −1 . This is well below the target emission source strength of 1 MtCO 2 yr −1 , hence leaving some margin for additional error sources (not yet addressed here) and lower instrument SNR.
The high spatial resolution implies limitations in terms of spatial coverage, arising from the narrow swath (50 km assuming 1000 detector pixels in the spatial dimension) and the forward motion compensation. Hence, a single instrument of the proposed concept could not map CO 2 concentrations at local to regional scale with dense global coverage, but would have to be restricted to some pre-defined targets, where independent estimates of CO 2 emissions are of highest interest. The relatively compact design with a single spectral window, however, allows for the deployment of a fleet of instruments and hence independent monitoring of localized CO 2 emissions on a larger scale. For real measurements, the proposed instrument concept would rely on meteorological and topography data to compute the airmass and thus XCO 2 , indicating the demand for a high instrument pointing accuracy in order to avoid erroneous XCO 2 estimates (Kiel et al., 2019). The proposed instrument could also prove useful in synergy with a space-borne CO 2 lidar (e.g. Kiemle et al., 2017), where the passive spectrometer would benefit from the lidar's accuracy and knowledge on the light path and the lidar would benefit from the spectrometer's imaging capability.
For this first performance assessment of the proposed instrument concept, the analysis on the local scale (Indianapolis) was constrained to one day in July using a rather simplistic Gaussian dispersion model that assumes constant atmospheric stability and (unidirectional) horizontal wind speed. As a next step, the CO 2 monitoring capabilities of the instrument concept on the local scale will be evaluated further for different seasons (with varying surface albedo and solar zenith angles), atmospheric states and emission source strengths using large eddy, rather than Gaussian, modelling of the CO 2 plumes, also including the inverse CO 2 flux estimates from the two-dimensional fields of synthetically retrieved XCO 2 . Although the effect of aerosols has partly been assessed on the global scale in this study, it is of key importance to also include information on the properties and distribution of aerosols in the local scale simulations in order to better understand the instrument's ability to resolve and quantify localized CO 2 emissions under realistic conditions. Such an in-depth aerosol analysis is, however, the task of further future studies looking into realistic emission scenarios w.r.t. aerosol abundances in and around the CO 2 plumes, also in combination with high-resolution surface albedo. | 2020-02-28T01:38:36.961Z | 2020-01-16T00:00:00.000 | {
"year": 2020,
"sha1": "79bf5a7de7c2d24c29ee0c4d109f4c9f5b49da7f",
"oa_license": "CCBY",
"oa_url": "https://amt.copernicus.org/articles/13/2887/2020/amt-13-2887-2020.pdf",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "79bf5a7de7c2d24c29ee0c4d109f4c9f5b49da7f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
247229890 | pes2o/s2orc | v3-fos-license | COVID-19 vaccine hesitancy in Iranian patients with multiple sclerosis
Background World Health Organization (WHO) mentioned COVID-19 vaccination as the safest way to eradicate this pandemic. In the meantime, vaccine hesitancy (a delay in accepting or rejecting the vaccine despite the availability of vaccination services) is a barrier. Hence, we studied this obstacle in the Iranian multiple sclerosis (MS) population. Objective MS patients eligible for vaccination were asked to complete a google form survey. Demographic information, MS disease-related factors, flu vaccination history, COVID-19 vaccination history, cause of vaccination refusal, past history of COVID-19 infection, and their compliance with public health guidelines after vaccination were recorded. Results 1479 patients participated in this study. 6.9% of participants have not received the vaccination. Sinopharm was the most commonly used vaccine (92.9%). Vaccine hesitancy was associated with young age, lower education, unemployment, negative flu vaccination history, no previous episode of COVID-19 infection, less concern about COVID-19, and the expectation of not getting infected with the virus after vaccination. Participants mentioned concerns about the side effects of the vaccines as the most prevalent cause of avoiding vaccination (58.0%). Patients’ concern about SARS-CoV-2 significantly decreased after vaccination (p-value < 0.001). Conclusion Our findings in this study elucidate that a minor group of patients with MS has vaccine hesitancy, which may expose them to more severe COVID-19. The treating physicians should ask about the history of vaccination and try to persuade such patients by sharing scientific knowledge. The long-term consequences of not being vaccinated should be clarified to such patients, especially those who are receiving immunosuppressive agents.
Introduction
Severe Acute Respiratory Syndrome Corona Virus 2(SARS-CoV-2) which resulted in Coronavirus disease 2019 is an infectious disease and then has rapidly become a worldwide pandemic (Gralinski and Menachery, 2020).
Certain patients who have underlying diseases may be more susceptible to viral infections. Patients with multiple sclerosis (MS) who need immunomodulatory or immunosuppressive medications comprise one category (Zheng et al., 2020). The National Multiple Sclerosis Society (NMSS) guideline on COVID-19 recommended that these patients should receive the COVID-19 vaccine as soon as it becomes available (2022b). Furthermore, adults with advanced MS, older age, or other comorbidities (such as diabetes, obesity, cardiovascular disease, pregnancy) are at higher risk for severe infection, so they should get vaccinated sooner (Dror et al., 2020).
However, doubts about receiving the vaccine are a barrier to COVID-19 vaccination (Dror et al., 2020).
Vaccine hesitancy means a delay in accepting or rejecting the vaccine despite the available vaccination services. This doubt varies according to place, time and vaccines. It is associated with many factors such as satisfaction, comfort, and trust in vaccines (MacDonald, 2015).
There is inaccurate information about vaccines on social media. Companies are widely criticized for not dealing with this misinformation (Wardle and Singerman, 2021).
Vaccine hesitancy was studied before the pandemic. Diem et al. (2021) investigated pneumococcal vaccination refusal and Ziello et al. (2021) reported on influenza vaccine hesitancy, both in persons with MS. The results could predict the possible challenges for SARS-CoV-2 vaccination. Another study investigated willingness to get COVID-19 vaccination among adults with MS and found its association with higher awareness of one's risk of infection with COVID-19 and a higher level of education. Safety concerns were also found to be the main cause for SARS-CoV-2 vaccine hesitancy in the study by Yap et al. (2021). To our knowledge, no report from Iran has been published so far.
Understanding the frequency and related determinants of vaccine hesitancy will aid policymakers in gaining a better understanding of intervention barriers and developing strategies to overcome them.
The current study aimed to provide evidence on the effect of vaccines on patients' concerns about COVID-19, and the prevalence of compliance with public health guidelines after vaccination among Iranian MS patients.
Methods
In this observational study, a questionnaire was designed and approved by two MS experts.
It was piloted on five patients. The final google form was made available in online groups of MS patients in Iran from September 28 to November 22, 2021.
Participants' demographic information (age, sex, education, employment status), MS disease-related factors (MS type, current disease-modifying treatment (DMT), and disease severity), flu vaccination history in the last five years, their belief about the effects of DMT on vaccine efficacy, COVID-19 vaccination history, their reason for vaccination refusal, type and time of the first COVID-19 vaccine dose, history of COVID-19 infection, their concerns about COVID-19, their expectations of vaccine efficacy and their compliance with public health guidelines after vaccination were recorded.
Disease severity was assessed by expanded disability status scale (EDSS) (who were able to walk without any aid (EDSS < 6), who needed aid for walking (EDSS ≥ 6)).
COVID-19 was diagnosed by an infectious disease specialist or internist based on a positive reverse transcription polymerase chain reaction (RT-PCR), clinical symptoms and signs, or computed tomography (CT) scan results.
The concern about COVID-19 was rated on a scale from 1 to 10 (1 the least, 10 the most concern). Descriptive statistics were reported as mean ± standard deviation (SD) and proportion for quantitative and qualitative variables, respectively. Binary logistic regression methods were used to determine the possible factors associated with vaccine hesitancy. The Wilcoxon rank-sum test was performed to determine the link between patients' pre- and post-vaccination concerns. Statistical significance was defined as a p-value less than 0.05. IBM® SPSS® version 26 was used for analyses. Sensitivity analysis was performed by redefining vaccine hesitancy, excluding those with claims of missed appointment and inability to present in vaccination centers as the reasons for not getting vaccinated.
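A hedged sketch of this analysis using open-source Python tools (rather than SPSS) is given below; all column names are hypothetical placeholders for the survey variables and do not correspond to the authors' actual data file.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import ranksums

df = pd.read_csv("ms_covid_survey.csv")   # hypothetical export of the survey responses

# Binary logistic regression: factors possibly associated with vaccine hesitancy
model = smf.logit(
    "hesitant ~ age + C(sex) + C(education) + C(employed) "
    "+ C(flu_vaccine_history) + C(past_covid_infection)",
    data=df,
).fit()
print(model.summary())

# Wilcoxon rank-sum test: concern scores (1-10) before vs. after vaccination
stat, p = ranksums(df["concern_pre"], df["concern_post"])
print(f"Wilcoxon rank-sum: statistic = {stat:.2f}, p = {p:.4f}")
```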
Results
Among 1517 participants, eight duplicated cases, 25 cases with neuromyelitis optica spectrum disorder (NMOSD), and five cases under 18 years of age were excluded. The basic characteristics of the remaining 1479 participants are presented in Table 1.
Concern about the side effects of vaccines was the most prevalent cause of avoiding vaccination (58.0%), followed by belief in the ineffectiveness of the vaccines (15%), not having options for vaccine type (15%), mistrust in vaccination generally (14%), no vaccination appointments (4%), inability to present in vaccination centers (4%) and others (17.0%). The other reasons include pregnancy, COVID-19 infection, and traveling.
Patients with vaccine hesitancy were younger (p-value = 0.012). Moreover, vaccine hesitancy was significantly associated with lower education, unemployment, no flu vaccination history, no previous episode of COVID-19 infection, less coronavirus concern, and the expectation of not getting infected with the virus after vaccination, which are presented in Table 2. High-efficacy DMT (p-value = 0.066) and female gender (p-value = 0.055) were significant in univariate regression but not significant in the final multivariate model. 137 of 1266 completely vaccinated patients (10.9%) experienced COVID-19 infection after their second dose. Furthermore, 46/1377 (3.3%) who received at least one dose of vaccine were infected by COVID-19 after their first dose. (Table footnote: quantitative variables are presented as mean ± SD, qualitative variables are reported as number (%) and categorical variables as median (IQR).) 193/1377 (14.0%) of vaccinated patients had decreased compliance with health guidelines after vaccination.
The effect of getting vaccinated on decreasing the score of concerns about coronavirus was significant (p-value < 0.001).
Sensitivity analysis
After excluding those who did not get vaccinated in terms of missed appointment and inability to present in vaccination centers from the vaccine-hesitant group, age (p-value = 0.016), female gender (p-value = 0.037), unemployment (p-value = 0.012), past COVID-19 infection (p-value < 0.001), less concern about coronavirus (p-value = 0.004), no flu vaccination history (p-value = 0.046), expectation of not getting infected with the virus after vaccination (p-value < 0.001), and high-efficacy DMT (p-value = 0.031) remained significantly associated with the outcome measure.
Discussion
This is the first report of vaccine hesitancy relevant factors besides investigating the effect of vaccination on patients' concerns, and their compliance with public health guidelines.
Our study indicated that 6.9% of the patients with MS had not received vaccination, which was less than vaccine hesitancy among adults with multiple sclerosis in the United States (20.3%). Vaccine-hesitant cases frequently expected not to get infected with the virus after vaccination. Besides, they were less concerned about the virus and had not been infected with SARS-CoV-2 before. This may be related to the assumption that vaccines should prevent infection completely. It also appears this group thinks that if they have not been infected in the past, they will not experience it in the future. Moreover, they have fewer concerns about this pandemic. As a point of reference, a study found that healthcare workers who are not involved with or caring for COVID-19 positive patients seem to be more vaccine-hesitant and have fewer worries, which is consistent with our results (Dror et al., 2020).
Additionally, we discovered that vaccine hesitancy was associated with educational level in MS patients. This is in agreement with the results of a previous US study that indicated more willingness to receive the vaccine in patients with higher-level education and higher concern about COVID-19 infection. However, it was the only factor that was no longer associated after sensitivity analysis, which was predictable, because a lower academic degree may lead to less familiarity with social media and web-based appointments for vaccination.
Besides, Xiang et al. (2021) demonstrated an association of vaccine willingness with higher educational level and prior influenza vaccine acceptance. Unsurprisingly, there is a negative association between past flu vaccination history and COVID-19 vaccine hesitancy in our study. Past history of vaccination in general impacts future decision-making concerning vaccination (Diem et al., 2021; Uhr and Mateen, 2021; Wu et al., 2021).
A meta-analysis demonstrated that males had a considerably greater risk of COVID-19-related complications and mortality (Galbadage et al., 2020), which may result in increased uptake of COVID-19 vaccinations. In our study, females were more vaccine-hesitant (OR: 2.053, p-value = 0.055), which became significant in the sensitivity analysis (p-value = 0.037). Younger adults believe they are less likely to be hospitalized because of COVID-19 or to die from it than older cases (Diem et al., 2021). This may explain our finding that vaccine hesitancy was more frequent at a younger age.
Patients who were currently taking a high-efficacy DMT were more averse to vaccination. According to one study, some high-efficacy DMTs, such as rituximab, an immunosuppressive medication, increased the risk of severe COVID-19 outcomes (Simpson-Yap et al., 2021). As a result, such patients may become more concerned about the vaccine's adverse effects on their body.
In general, the greatest vaccine-hesitant respondents' concern was about the possible vaccine side effects which are confirmed by other studies Xiang et al., 2021;Uhr and Mateen, 2021). Therefore, informing the patients about vaccine safety sounds necessary. The treating physicians should give information about the long-term consequences of COVID-19 to the concerned patients.
The research showed that vaccinated patients were substantially less worried about COVID-19, and 14.0% of vaccinated patients (193/1377) lowered their compliance with public health norms after immunization. This reduction may explain why 137 of 1266 fully vaccinated patients (10.9%) experienced COVID-19 infection after their second dose. The WHO has noted that COVID-19 vaccination does not provide 100% protection against infection, but it is highly effective against serious illness and death (2022c). Especially with the emergence of Omicron, a new variant of SARS-CoV-2 that spreads much faster (2022d), the CDC has stressed the importance of wearing a mask in public, vaccination, and boosters (2022e).
The most important limitation of the current study is that vaccine-hesitant patients may have been less inclined to participate in this web-based survey. This may have biased the rate of vaccine hesitancy in this study downwards compared with other countries.
Conclusion
It is critical to understand the causes and key factors that contribute to vaccine refusal. Our findings identified several at-risk groups: younger adults, those with less education, those who are unemployed, those with no history of flu vaccination, those who have never had an episode of COVID-19 infection, those who are less concerned about COVID-19 infection, and those who expect not to become infected following vaccination. These patients' physicians should encourage them to get vaccinated. Neurologists should inform vaccine-hesitant MS patients about the long-term consequences of COVID-19 and highlight the role vaccination has played in controlling previous epidemics. They can also mention studies that found no significant difference in the rates of reported COVID-19 vaccine side effects between people with neuroinflammatory diseases and control groups (Epstein et al., 2021).
Additional appropriate information about vaccine safety, potentially self-limited vaccine side effects, and how the vaccine works to protect against severe illness is also required. Text or voice message reminders may also be influential.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Supplementary materials
Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.msard.2022.103723.
Sustainable Housing in Rural Wayanad: Exploring the Link between Income and Housing among Indigenous Communities
This study examines sustainable housing development to address indigenous communities' socio-economic challenges in Wayanad, Kerala, India. The research aims to propose policy recommendations for promoting sustainable housing while addressing poverty. The methodology integrates field research and literature review findings to develop comprehensive insights. The study's novelty lies in its holistic approach, which integrates income, housing conditions, and community resilience. Findings suggest that community-driven initiatives and environmentally friendly construction techniques like rammed earth and cob construction can empower indigenous communities while fostering environmental stewardship. Policy recommendations emphasise community participation, climate responsiveness, and flexibility in dwelling unit plans to promote sustainable housing development. Overall, the study highlights the potential of sustainable housing to address socio-economic disparities and empower indigenous communities in Wayanad.
Introduction
Wayanad, a picturesque district in the Western Ghats of Kerala, hosts diverse Indigenous communities and biodiversity with a high percentage of endemism (Mathew, 2018). Kerala has forty-three Indigenous groups, according to the Ministry of Tribal Affairs (MoTA, 2018). These indigenous groups have deep-rooted connections to the land and nature, yet they have not been immune to the broader forces of change that have ushered in complex transitions in recent decades. As traditional modes of sustenance evolve, the issue of housing becomes a focal point through which the interplay of culture, income, and quality of life unfolds (Rapoport, 2000; Winger, 1968; Streimikiene, 2015). There is a significant presence of households within the middle-income bracket, indicating the importance of agriculture as a reliable source of income in Wayanad (Prathapachandran & Devadas, 2023). Kerala has pioneered a new type of decentralised planning that emphasises sustainable development through community-based resource management, focusing on agriculture. These state-sponsored and choreographed organisations and initiatives have empowered underprivileged social groups, such as women and Indigenous populations, to improve their participation and livelihoods and reduce poverty (T. R. Suma, 2017). These sustainable interventions were centred on three major themes: forest conservation, community ecosystem-based adaptive capabilities, and the protection and sustainability of Indigenous land and populations. Exploratory research is commonly used to evaluate human-environment interactions to better understand how people perceive and use local resources for their well-being (Cuthil, 2002). Conservation of associated biodiversity is essential for India to meet some of its SDGs (Sustainable Development Goals). A sustainable-affordable habitat is a conceptual framework for achieving and maintaining sustainability through housing construction while conserving and protecting the environment and natural resources. Sustainable development principles and methodologies should be integrated into policy creation and planning procedures at a strategic level (Deepa G. Nair et al., 2005).
Research Objectives
The primary objective of this study is to propose policy recommendations for promoting sustainable housing development and addressing the socio-economic disparities faced by Indigenous populations in Wayanad, Kerala. The study aims to investigate the relationship between household income, housing conditions, and duration of residency within Indigenous settlements, highlighting disparities and socio-economic mobility challenges. The study also discusses sustainable built practices that can help address the indigenous housing situation in Wayanad.
Novelty of the Study
This study combines poverty and housing conditions, presenting a cohesive framework that thoroughly explains how income, housing, and community resilience are interconnected. The study effectively combines theoretical insights with actionable solutions tailored to the cultural context by suggesting policy recommendations rooted in sustainable practices. By focusing on indigenous perspectives and community engagement, the study ensures authenticity and applicability to the local setting.
Literature Review
Sustainable housing initiatives can combat poverty and land alienation by addressing fundamental socio-economic and environmental challenges. By prioritising affordability through cost-effective construction methods and materials, sustainable housing projects ensure that low-income communities can access adequate shelter while freeing up resources for other essential needs. Moreover, these initiatives advocate for and secure land rights for marginalised groups, empowering them to build stable homes and establish tenure security, thus combating land alienation. Indigenous communities in Kerala grapple with land alienation and housing segregation, perpetuating poverty. Despite legislation and housing schemes, access to land remains limited, and housing quality has not substantially improved. Landowning indigenous communities like the Kurumans and Malayarayans show changes in housing and settlement patterns (Rajasenan, 2015). The deprivation index, which measures the quality of life, narrowed among the indigenous population in Kerala during 2001-2011. However, the level of deprivation in terms of housing, basic facilities, and economic status is still very high compared to the general population (Thara & Nair, 2013). M. Bagavandas's (2021) study on the Malayali Indigenous population indicates that infrastructure was critical at the village level but not as much at the household level (Bagavandas, 2021). Migration has influenced housing patterns and agricultural activities, which are essential aspects of the livelihoods of Indigenous people in Wayanad (Mano, Kumar, & Smitha, 2020). Indigenous labour migration is primarily driven by low wages (87.50%) and unemployment (push factors). The main pull factors are better employment opportunities (99.16%) and job security (95.83%). Most migrants (97.50%) seek agricultural labour work, and a few (2.50%) opt for non-agricultural jobs. Most (86.66%) are unregistered migrants, as registration is optional at police stations (Mano, A.
Anil, & K.P, 2021). Housing also goes beyond shelter, symbolising cultural values and socio-economic life, influenced by cultural orientations, and these elements are more evident in rural areas (Ochapa, 2018). For more than two-thirds of the rural population, land is a crucial source of livelihood and the most important source of survival (Sharma, 2007). Land ownership defines a household's social identity and provides economic security. It also allows family members and others to put their labour capacity to good use, reducing involuntary unemployment. Non-agricultural enterprises such as industrial manufacturing, real estate, and infrastructure development have evolved as alternatives to agriculture-based livelihoods since the dawn of the new era of globalisation and industrialisation (John Kujur, 2020). Many ideas on land alienation exist, all dealing with alienation from the means of production. The theory of primitive accumulation proposed by Marx (1867; 1974) describes the process of separating producers from their means of production, transforming social means of subsistence and production into capital, and immediate producers into wage-labourers, through the forcible usurpation of common property, carried out through individual acts of violence and eventually parliamentary robbery (Glassman, 2006). Non-Adivasis appropriated Adivasi land through various tactics during the colonial and post-colonial periods (John Kujur, 2020). Under neoliberalism, accumulating people's land and natural resources by dispossession has become the central mechanism of accumulation or expropriation (Harvey, 2017). Building on Harvey, Michael Levien (2015) describes how people are displaced from their land mainly by the state machinery. He claims that the state primarily displaces people from their land through two methods: coercion or forcible displacement, and persuasion of the public by legitimising land expropriation in the name of "public purpose" or "national interest" (Tura, 2018). Land alienation can exacerbate urban sprawl by facilitating uncontrolled development, fragmenting traditional land use patterns, and promoting inefficient land use practices that prioritise short-term economic gains over long-term sustainability and community well-being. Urban sprawl results from numerous individual actions, and among the possible causes are population increase, the economy, and closeness to resources and essential utilities (Wilson, Hurd, Civco, Prisloe, & Arnold, 2003). Researchers are now looking into ancient civilisations to see if they developed economic systems and practices centred on living in harmony with nature. Traditional knowledge systems were created with the community as the key stakeholder, and their cornerstone is a symbiotic link between man and nature, promoting individual and societal well-being rather than gauging development by growth rates and GDP (Kakoty, 2018). With 8.6% of the population being Indigenous, India has access to a vast pool of Indigenous knowledge that, if properly recognised, adopted, and mainstreamed, has the potential to provide long-term solutions to issues such as declining agricultural productivity and soil quality, biodiversity loss, water scarcity, pollution, and a slew of other social issues. Improvements in these communities' income, women's and children's health, and education as a result of targeted interventions would have a direct positive impact on national SDG indicators (Priya Priyadarshini, 2019).
Methodology
Statistical thinking is essential to data analysis, providing insights into the data under scrutiny (Hill & Berry, 2021). Descriptive statistics can provide initial insights into the distribution and variability of the variables, which can inform the choice of regression models (McCarthy, McCarthy, Ceccucci, & Halawi, 2019). Descriptive statistics and graphs can present data to policymakers and support exploratory data analysis (Titus, 2021). This study adopts a cross-sectional research design to comprehensively analyse the complex interactions between income and various housing dimensions within indigenous communities in Wayanad District, Kerala. The sample size is determined using random sampling techniques, aiming for 400 indigenous households. This sample size provides a confidence level of 95% with a margin of error of 5%. Primary data is collected through structured questionnaires distributed among indigenous households. Descriptive statistics are utilised to gain a contextual understanding of the collected data; these statistics help summarise the distribution and variability of the variables under study. Regression analysis is employed to explore the relationships between income and housing characteristics. This analysis offers insights into how income influences housing preferences, quality, and affordability within indigenous communities.
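The stated sample of 400 households at a 95% confidence level and a 5% margin of error is consistent with Cochran's formula for proportions, n = z²·p(1−p)/e², under the conservative assumption p = 0.5. The short calculation below only checks that arithmetic; it is not part of the study's own analysis.

```python
import math

z = 1.96   # z-score for a 95% confidence level
p = 0.5    # conservative proportion assumption (maximises the required sample)
e = 0.05   # margin of error

n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # Cochran's formula for a large population
print(math.ceil(n0))                      # 385, commonly rounded up to about 400
```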
Analysis
4.1 Built-Up Area
The analysis in Table 1 shows a strong correlation between the built-up area of houses and income distribution among households. The data show that more than half of the households (53%) live in houses with a built-up area ranging from 25 to 34 square meters, indicating that the most common housing size falls within this range. Additionally, about one-fourth (22.75%) of households live in slightly larger houses with a built-up area ranging from 35 to 44 square meters. This information gives an insight into the prevalent housing sizes in the surveyed area. The analysis reveals significant variations in income distribution based on the built-up area of houses. Notably, when the built-up area is less than 25 square meters, more than two-thirds (70.37%) of households earn less than Rs 5000, indicating that smaller living spaces are associated with lower incomes. On the other hand, as the built-up area increases to more than 75 square meters, there is a steady increase in the number of households in the income bracket of Rs 20,000 to 24,999, suggesting a link between larger living spaces and higher incomes. The data suggest minimal variations in income distribution in the middle-income ranges despite differences in built-up areas. This finding indicates that house size may not significantly track income for households falling within these income brackets. This minimal variation could be attributed to the intervention of housing schemes like PMAY (Pradhan Mantri Awas Yojana) and programs like Life Mission, which aim to provide affordable housing to individuals across various income groups. These schemes are designed to provide financial assistance and support for housing construction to individuals from various income backgrounds, mainly targeting the economically weaker sections of society. As a result, households in lower-income brackets receive assistance under these schemes, leading to smaller living spaces, whereas households in higher-income brackets have greater financial capacity to afford larger living spaces without relying on government assistance. Nevertheless, prioritising larger housing units in housing policy can have adverse effects, including a potential neglect of essential services and a delay in housing provision (Lochner, 2007). The regression analysis examines the relationship between family income and house size (in square meters). The regression model explains approximately 20.6% of the variation in house size based on family income; the adjusted R-squared value, which accounts for the number of variables, is 0.204. The regression model is statistically significant (p < 0.001), indicating a meaningful relationship between family income and house size. The intercept term is 736.05, but its statistical significance is not established (p = 0.378). The estimated coefficient implies that each additional square meter of house size is associated with an increase of 226.21 units in family income, and this coefficient is statistically significant (p < 0.001). In conclusion, the analysis reveals a meaningful positive relationship between family income and house size: on average, an increase of one square meter in house size corresponds to an increase of Rs 226.21 in family income. However, it is essential to recognise that while income explains a portion of house-size variation, other factors not considered in the model might also influence house size.
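A minimal sketch of the simple linear regression described above is given below, using statsmodels on synthetic data. The variable names and the generated values are hypothetical stand-ins for the surveyed households, chosen only so that the slope and fit come out broadly similar to the figures reported here; the sketch is not a reproduction of the study's analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data only: 400 hypothetical households, not the surveyed sample.
rng = np.random.default_rng(1)
area = rng.uniform(20, 80, 400)                       # built-up area in square meters
income = 700 + 225 * area + rng.normal(0, 7800, 400)  # monthly income in Rs, with noise

df = pd.DataFrame({"income": income, "area_sqm": area})

# Simple linear regression of family income on house size.
ols = smf.ols("income ~ area_sqm", data=df).fit()
print(ols.params)                                     # intercept and Rs per extra square meter
print(ols.rsquared, ols.rsquared_adj, ols.f_pvalue)   # fit and overall significance
```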
Physical Condition of the House
Income plays a significant role in determining housing affordability, and households may be forced to choose between adequate housing and other necessities (Lefebvre, 2002). The data analysis in Table 2 reveals significant findings regarding the housing conditions within the indigenous community. More than one-fifth (21.50%) of households in the indigenous community live in housing categorised as dilapidated. Dilapidated housing refers to structures in a state of disrepair or deterioration that require significant maintenance and renovation; these houses may have crumbling walls, leaking roofs, damaged foundations, and a lack of proper amenities. The prevalence of dilapidated housing in the community indicates a significant housing challenge: a considerable portion of the housing stock needs urgent attention and improvement. The data highlight that more households in the lower income bracket, particularly those with an income of up to Rs 9,999, fall into the dilapidated housing category. This finding indicates a strong correlation between lower income levels and poor housing conditions. Households with lower incomes face greater challenges in maintaining and improving their housing conditions due to limited financial resources; they might struggle to afford necessary repairs and renovations, which can lead to the deterioration of their homes over time. Routine dilapidation disproportionately impacts low-income homeowners, who are less likely to have the resources to pay for repairs (Bartram, 2023). In addressing the issue of dilapidated housing, targeted interventions and support are crucial, and viable solutions encompass a range of strategies. Firstly, implementing housing improvement programs is a pivotal approach, entailing financial assistance or subsidies for low-income households to facilitate repairs and renovations; such programs can potentially uplift the living conditions and safety of the existing dilapidated housing stock. Additionally, the development of affordable housing initiatives tailored explicitly to the requirements of low-income households offers a promising avenue, which may encompass creating low-cost housing projects that meticulously adhere to safety and quality benchmarks. Moreover, capacity-building initiatives are pivotal, equipping communities with the skills and knowledge to undertake minor repairs and maintenance tasks independently, thereby reducing reliance on external contractors and associated expenses. Infrastructure development investments in indigenous areas are another critical strategy, effectively improving housing conditions and living standards. Lastly, fostering awareness and advocacy efforts is essential; these endeavours involve raising awareness regarding the significance of housing conditions and advocating for policies that effectively address the housing needs of low-income households. In combination, these multi-faceted strategies hold the potential to alleviate the challenges posed by dilapidated housing and contribute to an improved quality of life for vulnerable communities.
Figure 1 - Income vs. Physical Condition of the House
The regression analysis examines the relationship between family income and the house's physical condition (good, livable, dilapidated). The regression model explains approximately 5.84% of the variation in income based on the house's physical condition. The adjusted R-squared value is 0.0560, indicating that the model's fit is reasonable but still accounts for a relatively small portion of the variability. The regression model is statistically significant (p = 1.01E-06), indicating a significant relationship between family income and the house's physical condition. The intercept term is 12,208.81, and its statistical significance is established (p = 8.52E-48). The coefficient for the physical-condition variable is -1,811.22, indicating that a one-step worsening in physical condition is associated with a decrease in income of this amount on average. The p-value of 1.01E-06 highlights the strong statistical significance of the physical-condition variable in predicting income, and both the lower and upper 95% confidence limits support the significance of the coefficient. The negative coefficient suggests that as the physical condition of the house deteriorates (moving from "good" to "dilapidated"), income tends to decrease. The analysis therefore indicates a statistically significant relationship between family income and the house's physical condition. Unfair agricultural prices in poor countries lead to the dilapidation of human and natural resources (Pinheiro, 2009); this holds in the case of Wayanad, as the majority are agricultural labourers. Specifically, income tends to decrease as the house's condition worsens, suggesting that households with lower income levels are more likely to reside in houses with poorer physical conditions.
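The single-coefficient result reported above suggests that the house's physical condition was entered as an ordinal score. The sketch below, on hypothetical data, contrasts that numeric coding with dummy (categorical) coding; the coding scheme, variable names, and numbers are illustrative assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical coding of the house's physical condition as an ordinal score.
rng = np.random.default_rng(2)
condition = rng.integers(0, 3, 400)                         # 0 = good, 1 = livable, 2 = dilapidated
income = 12200 - 1800 * condition + rng.normal(0, 5000, 400)

df = pd.DataFrame({"income": income, "condition": condition})

# Treating the ordinal score as numeric mirrors the reported single-coefficient model;
# C(condition) would instead estimate a separate effect for each category.
fit_numeric = smf.ols("income ~ condition", data=df).fit()
fit_dummies = smf.ols("income ~ C(condition)", data=df).fit()
print(fit_numeric.params["condition"])                      # expected to be negative
print(fit_dummies.params)
```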
Age of House
The information provided in Table 3 indicates a significant shift in the age distribution of houses in the surveyed area, influenced by housing schemes like VAMBAY (Valmiki Ambedkar Awas Yojana), IAY (Indira Awas Yojana), and PMAY (Pradhan Mantri Awas Yojana), and programs like Life Mission. According to the analysis, more than three-fourths (75.25%) of the houses have been built in the last 14 years. The data suggest that much of the housing stock is relatively new and that construction activity has recently been significant. On the other hand, only a small proportion of the houses are older than 25 years, indicating that relatively few have existed for an extended period. The analysis attributes the shift in age distribution to the intervention of housing schemes. Government initiatives aim to provide affordable housing to individuals across different income groups, focusing on assisting economically weaker sections of society. By offering financial aid and support for housing construction, these schemes have encouraged the development of new houses, leading to a higher percentage of houses built in the last 14 years. The data highlight an observable decline in houses aged less than five years as income increases. This finding suggests that higher-income households are less likely to have constructed newer houses; the government housing schemes target lower-income households, leading to more new houses in those income categories. The analysis also notes that many households live in temporary kutcha houses while awaiting the benefits of the government housing schemes. Kutcha houses are typically made of mud, thatch, or bamboo and are considered less durable and stable than permanent housing structures (Oleksandr, Lüdeke, & Reckien, 2012). Such temporary housing indicates a need for better infrastructure and living conditions in the area, especially for economically disadvantaged households waiting for government assistance.
Figure 2 - Income vs. Age of house in years
The regression analysis reveals a significant relationship between the age of the house and income group. The model explains approximately 4.41% of the income variation based on the house's age, with an adjusted R-squared of 0.0417. The ANOVA test confirms the model's significance (p = 2.31E-05). The coefficients show that as the age of the house increases, household income tends to increase as well, with each additional year of house age corresponding to an average income increase of approximately 1,725.942 units. The intercept of 5,020.859 suggests that for new houses, the expected income is 5,020.859 units. However, the model's ability to explain the variability in income is limited, as indicated by the adjusted R-squared value.
Duration of Residence
The data analysis in Table 4 provides insights into households' residency duration within their current settlements. The data indicate that more than one-third (34%) of households have been residing in their current settlements for 21 to 30 years, and around one-third (32%) of households have lived there for 31 to 40 years. The percentages for the other categories indicate varying lengths of residency within the community. The analysis highlights that the percentage of households who have resided in the settlement for 31 to 40 years is higher in the lower-income category. This finding indicates a potential correlation between longer-term residency and lower income levels. The observation that a higher percentage of long-term residents falls within the lower income category raises essential points about the socio-economic dynamics within the community. Lower-income households have less mobility due to financial constraints: moving to a new location or settlement often requires financial resources for relocating, finding new housing, and establishing oneself in a different area, and lower-income households may lack the financial means to undertake such moves, leading to longer-term residency in their current settlements. The higher percentage of long-term residents in the lower-income category may also suggest limited opportunities for upward mobility within the community. Economic opportunities, job prospects, and access to education and training may be restricted, making it challenging for lower-income households to improve their financial circumstances and move to better living conditions. Lower-income households struggle to access the financial resources needed to move to better housing or more prosperous areas, and this lack of resources also limits their ability to invest in housing improvements or seek better opportunities elsewhere. Social and community ties also influence long-term residency: lower-income households may have stronger community connections, and these ties provide support systems that contribute to their decision to remain in their current settlements. The observation that 5 out of 8 high-income individuals have chosen to reside in their current settlements for more than 31 years underscores the complex interplay between income and residential duration. While the statistical analysis did not reveal a significant overall correlation, these cases emphasise the role of personal factors and individual decisions that can override general trends. Higher-income individuals might prioritise factors like property ownership, social connections, or lifestyle preferences, leading to extended stays despite their financial capacity to relocate. The regression analysis examines the relationship between family income and the duration of residence at the current place (categorised into different time intervals). The regression model explains approximately 0.06% of the variation in income based on the duration of residence. The adjusted R-squared value is -0.0019, indicating that the model's fit is very weak and does not effectively explain the variability in income. The regression model is not statistically significant (p = 0.625), indicating no significant relationship between family income and the duration of residence. The intercept term is 9,280.91, and its statistical significance is established (p = 5.93E-20). The coefficient for the duration of residence is -130.92, and its statistical significance is not established (p = 0.625). The p-value, which is greater than
the significance level (0.05), indicates that the duration of residence does not significantly affect income. In this regression analysis, there is no statistically significant relationship between family income and the duration of residence at the current place: the model's fit is weak, and the duration of residence does not appear to be a significant predictor of income.
The study by Michael B. Toney (1976) found no influence of economic opportunities on the duration of residence, indicating that income does not affect how long someone stays in a particular place (Toney, 1976). The observed longer-term residency patterns in both the lower and higher income categories might arise from differing motivations and circumstances. In the lower income group, individuals could be constrained by financial limitations, making them more likely to stay in their current settlements due to limited mobility options. Conversely, in the higher-income group, extended residency might reflect stable living conditions and well-established social networks. The lack of statistical significance in the regression analysis could be attributed to various factors. Other variables not considered in the analysis, such as education, occupation, or family history, might play a more significant role in determining income levels. Additionally, the weak model fit could indicate the presence of confounding variables not included in the analysis, thereby obscuring the genuine relationship between income and residency duration. It is essential to acknowledge the complexity of socio-economic dynamics and consider a broader range of factors when exploring these correlations.
Discussion and Recommendations
Based on the study's findings, several recommendations are proposed to enhance sustainable housing development at the dwelling unit level in Wayanad, Kerala. Firstly, it is suggested that eco-friendly construction technologies such as rammed earth for the basement and foundation, cob construction for the walls, and matured coconut palms for roof rafters be adopted. Additionally, incorporating hardwood for windows, doors, and joinery and plastering houses with locally available mud of different colours can promote sophisticated mud architecture and cultivate a skilled workforce in green architecture. Moreover, environmental friendliness should be prioritised throughout the construction process. This entails utilising local and familiar resources to reduce the carbon footprint associated with construction activities.
Community participation is crucial at every stage, from planning to implementation, monitoring, and evaluation. Involving residents fosters a sense of ownership, ensuring housing projects' sustainability and long-term success. To address challenges such as termite infestation, it is recommended to implement measures that focus on protecting houses from such threats, particularly by addressing dampness issues. Furthermore, labour distribution should prioritise beneficiary participation, empowering residents by assigning tasks other than masonry, carpentry, and finishing works, thereby reducing construction costs and distributing the budget among the community. Climate responsiveness is essential in selecting materials suitable for Kerala's climate, avoiding alternatives that may not withstand local environmental conditions. Finally, advocating for plot sizes of 120-150 square meters with a built-up area of 60-65 square meters, vertical expansion options, and flexibility in dwelling unit plans enables residents to customise their living spaces according to their preferences and needs.
A potential approach to crafting a sustainable housing policy that benefits indigenous communities while tackling poverty and land alienation involves giving precedence to community-led initiatives. This means actively involving indigenous communities in the planning, design, and implementation of housing projects, ensuring that their cultural values, preferences, and needs are respected and incorporated into the policies. Additionally, the policy should promote affordable and environmentally friendly housing solutions, such as eco-friendly construction materials, energy-efficient designs, and access to renewable energy sources. This helps reduce housing costs for indigenous families, promotes sustainability, and mitigates environmental degradation. Furthermore, the policy should include provisions for securing land tenure rights for indigenous communities and protecting their ancestral lands from encroachment and exploitation. This can be achieved by legally recognising indigenous land rights, establishing community land trusts, and implementing land titling programs. By ensuring secure land tenure, indigenous communities can have greater control over their resources, enhance their economic opportunities, and reduce the risk of land alienation. Moreover, the policy should promote capacity-building and skill development programs tailored to the needs of indigenous communities, empowering them to actively participate in housing construction, maintenance, and management processes. This includes providing training in construction techniques, sustainable land management practices, and small business development to create employment opportunities and enhance economic self-reliance within the community. Additionally, the policy should prioritise social inclusion and equity by addressing the systemic barriers and discrimination indigenous communities face in accessing housing, land, and essential services. This involves implementing affirmative action measures, anti-discrimination laws, and social protection programs to ensure equal opportunities and rights for indigenous peoples. While acknowledging the importance of preserving traditional ways of life, the government should also be capable of developing policies that enable indigenous populations to maintain a healthy lifestyle without having to migrate to urban regions. Modern industry does not favour the unskilled indigenous population of these locations. As a result, given their low skill levels and the prevailing environmental regulations, implementing "ecotourism" in forest settlements may be an ideal income-generating activity. The workforce that may be employed in tourism-related activities is among the largest in the service industry, providing a diverse range of job prospects for millions of people with low skill levels (Chakrabarty, 2011). "Indigenous tourism," according to Ryan and Huyton, is a type of tourism activity in which tourists are drawn to indigenous places to observe artistic performances, festivities, beautiful spots, historical heritages, and rituals (Ryan, 2002). Tourists respect the Aboriginal culture, so the indigenous population can maintain their traditional culture (Chang, Lin, & Chuang, 2021). Overall, a comprehensive sustainable housing policy for indigenous communities should be community-driven, environmentally sustainable, socially inclusive, and economically empowering, addressing the root causes of poverty while promoting the well-being and resilience of indigenous peoples.
Conclusion
In conclusion, this study underscores the critical importance of sustainable housing development in addressing the socio-economic challenges faced by indigenous communities in Wayanad, Kerala. By integrating various aspects such as income, housing conditions, and community resilience into a unified framework, the study provides valuable insights for policymakers and practitioners. Emphasising community-driven initiatives and culturally sensitive interventions can pave the way for more inclusive and effective policies, ultimately fostering sustainable development and improving the livelihoods of the indigenous population.
Figure 1 - Income vs. Built-Up Area of the House
Figure 3 - Income vs. Duration of Residence
THE IMPACT OF THE BOARD OF DIRECTORS' CHARACTERISTICS AND OWNERSHIP STRUCTURE ON THE SUSTAINABLE DEVELOPMENT DISCLOSURE IN THE BANKS LISTED ON THE AMMAN STOCK EXCHANGE
Purpose: This study tries to understand how corporate governance (CG) affects disclosures on economic, social, and environmental sustainability. Theoretical framework: Recent literature has reported that CG has a significant impact on disclosures on economic, social, and environmental sustainability. However, there is still much to investigate and learn about CG in the sustainability process. Design/methodology/approach: For the period spanning 2015 to 2021, information about the study variables was gathered from thirteen (13) banks listed on the Amman Stock Exchange (ASE) through annual reports, using a quantitative approach. Findings: Study findings showed that CG components improve sustainability disclosures in general. The results indicated that a large board with a female director and a Corporate Social Responsibility Committee (CSRC) is better able to audit and control management choices related to sustainability issues (whether economic, environmental, or social) and produces better sustainability disclosure. Research, practical and social implications: This study is intended to help bank managers understand the real impact of corporate governance practices on sustainability, especially the economic, environmental, and social indicators of sustainability, and how to improve and develop them. Originality/value: Through quantitative and qualitative analysis, this study contributes methodologically and empirically to the literature on corporate governance and sustainability reporting in emerging and developing economies. Doi: https://doi.org/10.26668/businessreview/2023.v8i4.1032
INTRODUCTION
In light of the new financial and social issues and crises on a worldwide scale, organizations' interest in disclosing sustainable development methods has increased in recent years. The failures of numerous businesses cast doubt on the ability of conventional financial reports to continue to reflect the company's true success and provide a comprehensive picture of all its operations (Pizzi et al., 2022).
Countries around the world have adopted theories and mechanisms of action in order to accomplish social justice, environmental protection, and sustainable growth that will protect future generations' rights. Although overall standards of living have increased, the environment is still in danger and millions of people still live in poverty and hunger; this disparity has also produced a serious impasse in the twenty-first century (Calabrese et al., 2022).
The crisis that affects society, the environment, and the global economy may thus be resolved thanks to breakthroughs in knowledge and technology. The disclosure of sustainable development and the availability of information on the effective practices of firms' activities and their effects on the environment and society as a whole are crucial for achieving the goals of sustainable development (Di Vaio et al., 2021). The presentation of the development lists of economic units in all of their component elements is a crucial element in enhancing understanding of business trends and operations. This reduces the risks involved in investment decisions and aids in their rationalization (Pizzi et al., 2022).
The role of corporate governance emerges here as a topic related to the signing of agreements and the implementation of improvements in three dimensions: environmental, social, and economic. As a result, it is crucial to broaden theoretical development to enable best governance practices and ensure sustainability (López-Santamaría et al., 2021).
The foregoing makes it abundantly clear that the main driving factor is the glaring omission of sustainable development practices from financial reports, which prevents them from satisfying stakeholders. The company's response to stakeholders has a favourable impact on the firm and on its ability to meet their requests, which is necessary for upholding the company's legitimacy, gaining a competitive edge, and achieving sustainable development.
Researchers have found that the extent to which sustainability reports are produced and released varies between businesses and economic units, even though filing these reports is optional. While some businesses present their information on sustainable development in separate reports, others include it with their financial statements (Dumay & Hossain, 2019).
LITERATURE REVIEW AND HYPOTHESIS TESTING:
Board of Directors' Size and Sustainable Development Disclosure
It can be argued that a board of directors' capacity for monitoring and approving sustainability-related issues increases with board size. Therefore, the ability to control, verify, and report management actions connected to the issue of sustainability is significantly affected by the size of the board of directors in the organization (Disli et al., 2022).
When a sizable board of directors, a female director, and a social responsibility committee are present, management decisions on sustainability issues, whether economic or environmental, are handled more skilfully and successfully (Li et al., 2022). Joseph et al. (2021) indicated a favourable correlation between the number of members on a board of directors and the volume and quality of sustainability information released. The size of the board of directors also has a beneficial impact on sustainability disclosure, according to Masud et al. (2020). A board structure that encourages improved board governance is expected to raise owners' awareness of and interest in their companies, which improves the standard of sustainability reporting (Amidjaya & Widagdo, 2019).
Additionally, the study of (Jizi, 2017) shown that increased board independence enhances the communication of a firm's good citizenship image through increasing societal consciousness.The findings also revealed that female board participation had a positive impact on CSR engagement and reporting, as well as the adoption of ethical rules.
The number of Board Members and Sustainable Development Disclosure
According to agency theory, the increased managerial oversight brought about by large boards of directors might affect company disclosure; on the other side, efficiency and structure are linked to a smaller board size (Abbas et al., 2021). Studies that investigate the relationship between board size and disclosure have also produced inconsistent results: one study indicated that board size is the only characteristic with a statistically meaningful association with the amount of environmental disclosure made by Turkish corporations, while Kalash (2021) indicates that businesses with large boards release more environmental information than firms with small boards.
According to Sekarlangit and Wardhani (2022), the presence of CSR committees and the percentage of board directors who attend meetings both have a favourable impact on SDG disclosures. They also noted that board attendance at meetings may encourage more thorough SDG disclosures, and that companies with strong commitments to sustainability, as seen in the creation of CSR committees, tended to disclose more SDGs. Dobija et al. (2022) implied that the IO of the supervisory board, as well as the collection of board traits, aids the transition to sustainable development. Companies, regulators, and legislators should be interested in these findings in order to incorporate sustainable principles into their company plans.
Non-Executive Directors and CEO duality
Agency theory states that when a firm has non-executive directors, there will be less conflict of interest between the manager and the owners. Non-executive board members are board members who do not receive compensation from the business; they are also regarded as a control mechanism because they serve as independent monitors (Mollah et al., 2021). One of the rules governing corporate governance is the division of duties between the CEO and the board chairman. Agency theory claims that the CEO's dual role gives him the freedom to behave opportunistically because of his influence on the board of directors. Therefore, CEO duality is common in businesses and typically results in an inability to adapt to changing circumstances (Ajanthan & Ramesh, 2021). According to Alawaqleh et al. (2021), businesses may choose to hire the Big 4 as auditors as institutional ownership and board independence rise. Therefore, top management at organizations, as well as regulators, should take these criteria seriously in order to improve audit quality and subsequently financial reporting.
Family Ownership and Sustainable Development Disclosure
A recent study (Ananzeh, 2022) supports the contention that there is a connection between a family's management quality and the degree of sustainability disclosure. Concerning the concentration of ownership and the disclosure of sustainable development, ownership clearly affects how much information is disclosed on sustainability, which appears to be a contentious issue in much of the research. According to Liu and Bai (2022), businesses with concentrated ownership disclose sustainability information the least.
This conclusion is supported by Qa'dan and Suwaidan (2018), who revealed an inverse statistical association between the percentage of ownership and the consistency of information. As a result, the following hypothesis was developed: consolidation of ownership has little impact on how much information is disclosed on sustainable development.
Population and Sample of the Study
The ASE consisted of 195 listed companies as at 31 December 2021, distributed across three sectors (financial, industry, and services). This study focuses on banks because of the nature of financial companies, over the study period 2016-2021.
Foreign Ownership
Percentage of foreign-owned shares divided by total shares.
Board Size
Determined by the number of board members.
CEO
Role duality is measured by the value of 1 if the CEO is also the chairman of the board and 0 otherwise.
Independence of the board of directors
The ratio of independent directors on the board to the total number of board directors is used to calculate board independence.
Experience of the board of directors
It is 1 if all audit committee members are competent (have an undergraduate degree in accounting or finance) and at least one of them has an accounting professional qualification (as stipulated in Jordan's CG code), and 0 otherwise.
Number of non-executive members
The non-executive director variable is measured, following previous studies, by giving 1 if non-executive directors are on the board and 0 if they are not.
Descriptive statistics of the dependent variables
The dependent variables represent the disclosure of sustainability in Jordanian commercial banks during the period (2016-2021); they comprise economic disclosure, environmental disclosure, and social disclosure.
Descriptive statistics of the independent variables
The independent variables describe the ownership structure and bank characteristics of Jordanian commercial banks between 2016 and 2021.
Bank's characteristics
The bank's characteristics were represented by the characteristics of the board of directors, which included its size, independence, expertise, and diversity, along with the separation of the CEO and chairman roles and the proportion of non-executive board members.
Foreign ownership and ownership concentration were both part of the ownership structure. According to the ratios, the majority of commercial banks during the period showed a strong commitment to the standards of corporate governance, particularly those pertaining to the separation of the roles of the Chairman of the Board of Directors and the Executive Director.
Ownership structure
The ownership structure included concentration of ownership and foreign ownership. According to Table 4, the mean for the proportion of foreign ownership in Jordanian commercial banks from 2016 to 2021 was 39.933%, while the standard deviation was 29.203%, and the maximum percentage of foreign ownership during that time was 92.433%, while the
Descriptive statistics of the control variable
The control variable is bank size, measured as the natural logarithm of the bank's total assets during the period (2016-2021). Over this period, the mean of the natural logarithm of total assets (LTA) for Jordanian commercial banks was 9.453, with a standard deviation of 0.365. The greatest value was 10.441, and the smallest value was 8.978. The numbers show that there are differences in the sizes of the commercial banks.
Estimate the model
The study used panel data econometric analysis, which combines time series and cross-sectional data. The study relied on the panel data model to estimate the study models: the Lagrange Multiplier test was used to choose the best model from the pooled regression model (PRM) and the random effects model (REM), while the Hausman test was used to choose the best model from the fixed effects model (FEM) and the random effects model (REM). The results in Table 6 indicate that the random effects model was the most accurate in estimating the model for the study hypotheses (H0, H01), while the pooled effects model was found to be the most accurate in estimating the study hypothesis (H02).
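A minimal sketch of this model-selection step is shown below using the Python linearmodels package on a synthetic thirteen-bank, six-year panel. The variable names and data are hypothetical, and the Hausman statistic is computed manually from the FE and RE estimates rather than with a packaged test, so it should be read as an illustration of the procedure rather than a reproduction of the study's estimation.

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PooledOLS, RandomEffects, PanelOLS

# Synthetic balanced panel: 13 banks x 6 years; names and values are illustrative only.
rng = np.random.default_rng(3)
banks, years = [f"bank_{i}" for i in range(13)], list(range(2016, 2022))
idx = pd.MultiIndex.from_product([banks, years], names=["bank", "year"])
df = pd.DataFrame({
    "disclosure": rng.normal(0.5, 0.1, len(idx)),
    "board_size": rng.integers(7, 14, len(idx)).astype(float),
    "foreign_own": rng.uniform(0.0, 0.9, len(idx)),
}, index=idx)

y = df["disclosure"]
X = df[["board_size", "foreign_own"]].assign(const=1.0)

pooled = PooledOLS(y, X).fit()
re = RandomEffects(y, X).fit()
fe = PanelOLS(y, df[["board_size", "foreign_own"]], entity_effects=True).fit()

# Hausman-style comparison of the FE and RE coefficients on the shared regressors.
common = fe.params.index.intersection(re.params.index)
d = (fe.params[common] - re.params[common]).to_numpy()
V = (fe.cov.loc[common, common] - re.cov.loc[common, common]).to_numpy()
hausman_stat = float(d @ np.linalg.pinv(V) @ d)

print(pooled.rsquared, re.rsquared, fe.rsquared)
print("Hausman statistic:", round(hausman_stat, 3))
```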
Multicollinearity Test
Pearson correlation coefficients were computed between the independent variables to evaluate the existence of multicollinearity between the model variables. The correlation matrices and the VIF test explain the findings of assessing multicollinearity between the independent variables as follows: the table above demonstrates that the maximum correlation coefficient (0.783) was found between the variables (FOREIGN) and (CONCEN). This value suggests that multicollinearity is not present, because the values were less than 0.80; values of 0.80 and greater are regarded as a sign of multicollinearity in the statistical literature (Gujarati, 2004).
The variance inflation factor (VIF) was computed to confirm the aforementioned outcome, and the results are presented in the following table: all VIF values were larger than 1 and less than 2, as shown in Table 8. This demonstrates that none of the predictor variables are multicollinear (Gujarati, 2004).
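The correlation-matrix screen and the VIF check described above can be reproduced with standard tools; the sketch below uses statsmodels on illustrative data, with placeholder variable names standing in for the study's predictors.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Illustrative predictor matrix; column names are placeholders, not the study's data.
rng = np.random.default_rng(4)
X = pd.DataFrame({
    "board_size": rng.normal(10, 2, 78),
    "board_independence": rng.uniform(0.2, 0.8, 78),
    "ownership_concentration": rng.uniform(0.1, 0.9, 78),
    "foreign_ownership": rng.uniform(0.0, 0.9, 78),
})

# Pairwise Pearson correlations; values of 0.80 or more are the usual warning sign.
print(X.corr().round(3))

# Variance inflation factors computed on the predictors plus a constant term.
Xc = X.assign(const=1.0)
vif = pd.Series(
    [variance_inflation_factor(Xc.values, i) for i in range(Xc.shape[1])],
    index=Xc.columns,
)
print(vif.drop("const").round(2))   # values well below 5 indicate no serious collinearity
```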
Stationary Test (unit root)
This test is used to see if there is a systematic change in the mean or variance of the data. The Levin-Lin-Chu (LLC) test is commonly used for panel data. The significance level (Prob.) for all variables is less than 0.01; therefore, the results of the stationarity (unit root) test reveal that all variables are stationary at level. These results show that the null hypothesis of a unit root (non-stationarity) is rejected at the 1% level, indicating that all research variables are stationary at level across the study period.
Hypothesis Testing
In this section, multiple regression analysis was used to test the study hypotheses. According to the data, R-squared, the coefficient of determination, is equal to .701, indicating that the model accounts for about 70.1% of the variance in disclosure. The joint effect of the independent variables is significant, since the F statistic (F = 11.541) has a probability value (Prob F = .000) less than .05. Additionally, the regression coefficients indicate that (BSIZE) has a considerable positive impact on disclosure, with a significant coefficient value of .055 (t = 5.362, p-value = .000, less than .05). (BIND) has a substantial detrimental effect, with a significant coefficient value of -.035 (t = -18.037, p-value = .000). (BEXP) has a significant adverse effect, with a coefficient value of -.046 that is significant (t = -8.877, p-value = .000). (DUAL) has a substantial positive influence, with a coefficient value of .079 that is significant (t = 2.692, p-value = .009). Additionally, (NONEXE) has no significant influence, as evidenced by its coefficient value of .024, which is not significant (t = 1.469, p-value = .147). (CONCEN) has a substantial adverse effect, with a coefficient value of -.001 that is significant (t = -4.979, p-value = .000). The effect of (FOREIGN) is not significant, with a coefficient value of -.001 (t = -1.140, p-value = .259).
Last but not least, there is a strong positive influence of (SIZE), with a coefficient value of .066 that is significant (t = 6.263, p-value = .000). A Durbin-Watson value close to 2 suggests that there is no serial correlation between the error terms; the obtained value was D-W = 1.558.
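The Durbin-Watson statistic referred to here is routinely obtained from the residuals of a fitted regression, with a value near 2 indicating no first-order serial correlation. The snippet below shows the computation on an arbitrary illustrative OLS fit, not on the study's data.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Illustrative only: residuals from any fitted OLS model can be checked the same way.
rng = np.random.default_rng(5)
x = rng.normal(size=(91, 3))
y = x @ np.array([0.05, -0.03, 0.07]) + rng.normal(scale=0.1, size=91)

res = sm.OLS(y, sm.add_constant(x)).fit()
dw = durbin_watson(res.resid)   # a value near 2 suggests no first-order serial correlation
print(round(dw, 3))
```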
Based on the foregoing, the primary null hypothesis is rejected and the alternative hypothesis is accepted, which states that there is a statistically significant effect of the bank's characteristics and ownership structure on the disclosure of sustainability in Jordanian commercial banks. The data showed that R-squared is equal to .694, which indicates that the model accounts for roughly 69.4% of the variation in disclosure. Since the F statistic (F = 26.868) has a probability value (Prob F = .000) less than .05, the joint influence of the independent variables is significant. Additionally, the regression coefficients show that (BSIZE) has a substantial positive impact on disclosure, with a significant coefficient value of 0.036 (t = 7.772, p-value = .000, less than .05). The coefficient value of (BIND) is .001, which is not significant (t = .280, p-value = .780). The coefficient value of -.033 for (BEXP) indicates a substantial adverse effect and is significant (t = -8.792, p-value = .000). The coefficient value of (DUAL) is .024, which is not significant (t = 1.393, p-value = .168). A significant positive effect is also produced by (NONEXE), with a coefficient value of .016 that is significant (t = 3.446, p-value = .001). Finally, there is a substantial positive influence of (SIZE), with a coefficient value of .138 that is significant (t = 4.531, p-value = .000).
Additionally, a Durbin-Watson value close to 2 implies that there is no serial correlation between the error terms; the obtained value was D-W = 1.614.
Based on these results, the first sub-hypothesis is rejected and the alternative hypothesis is accepted, which states that there is a statistically significant effect of the bank's characteristics on the disclosure of sustainability in Jordanian commercial banks.
H02: There is no statistically significant effect of the ownership structure on the disclosure of sustainability in Jordanian commercial banks. The data show that R-squared is equal to .217, which indicates that the model accounts for around 21.7% of the variation in disclosure. Since the F statistic (F = 6.828) has a probability value (Prob F = .000) less than .05, the joint influence of the independent variables is significant. Additionally, according to the regression coefficients, (CONCEN) has a significant negative impact on disclosure, with a coefficient value of -.002 that is significant (t = -5.697, p-value = .000, less than .05), and (FOREIGN) has a significant negative impact, with a coefficient value of -.001 that is significant (t = -8.047, p-value = .000). The coefficient value of (SIZE) is .026, which is not significant (t = 1.156, p-value = .252).
Additionally, a Durbin-Watson value close to 2 implies that there is no serial correlation between the error terms; the obtained value was D-W = 1.704.
These results indicate that the second sub-hypothesis is rejected and the alternative hypothesis is accepted, which states: there is a statistically significant effect of the ownership structure on the disclosure of sustainability in Jordanian commercial banks.
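For readers who want to reproduce this type of analysis, the sketch below shows how a disclosure regression with the same diagnostics (R Square, F statistic, coefficient t-tests, and the Durbin-Watson statistic) could be estimated in Python with statsmodels. This is an illustrative sketch only, not the authors' code: the file name and the column names (DISC, BSIZE, BIND, BEXP, DUAL, NONEXE, CONCEN, FOREIGN, SIZE) are assumptions chosen to mirror the variables reported above.

# Minimal sketch (assumed data layout, not the authors' code) of the disclosure regression.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

df = pd.read_csv("jordan_banks_panel.csv")   # hypothetical file of bank-year observations

X = sm.add_constant(df[["BSIZE", "BIND", "BEXP", "DUAL", "NONEXE",
                        "CONCEN", "FOREIGN", "SIZE"]])   # regressors plus intercept
y = df["DISC"]                                           # sustainability disclosure index

model = sm.OLS(y, X).fit()

print(model.summary())                                   # R-squared, F statistic, t and P values
print("Durbin-Watson:", durbin_watson(model.resid))      # values near 2 suggest no serial correlation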
DISCUSSION AND CONCLUSION
Throughout the 2017-2021 period, there were differences among Jordanian commercial banks in their level of disclosure of the sustainability dimensions. The arithmetic average of this level of disclosure, which came in at 52.0%, suggests that commercial banks in general had a relatively low level of disclosure of the sustainability dimensions; possible reasons are discussed below. Additionally, there are differences in ownership concentration between Jordanian commercial banks during the period (2017-2021), which could be a result of changes in the price and quantity of shares held by these institutions over that time as well as their shareholders' characteristics. The percentage of foreign ownership varies among commercial banks over the period (2017-2021), which may be a result of the limits placed by private commercial banks on foreign investment in shares or may be connected to the stability and the political and economic circumstances of the country. Additionally, there is comparative variation in the size (total assets) of Jordanian commercial banks across the period 2017-2021. This may be because Jordanian commercial banks conduct a varied number, variety, and volume of banking operations. The fact that the characteristics of the board of directors and the ownership structure had a significant impact on the disclosure of the sustainability dimensions in Jordanian commercial banks demonstrates the beneficial role each of these characteristics played in encouraging the management of these banks to disclose the sustainability dimensions and pay attention to them, which is reflected in improvements to both the management's and the bank's reputation. In line with the above, this research aimed to examine the impact of the board of directors' characteristics and ownership structure on sustainable development disclosure in the banks listed on the Amman Stock Exchange.
Furthermore, Al-Duais (2021) offered proof that property ownership has little effect on the standard of sustainability disclosure. Because the literature has produced different findings about the effect of family ownership on transparency, more research is required. Baù et al. (2021) suggested that establishing relationships with other stakeholders and attaining a number of shared goals may depend heavily on family ownership. The idea that family ownership has no effect on the amount of disclosure of sustainable development, on the other hand, was born from the possibility that family ownership is opportunistic.
This section of the study contains the findings of statistical analyses, such as descriptive measurements, model fit tests, and hypothesis testing. The statistical processing is based on Jordanian bank financial data from 2016 to 2021.
The results showed that commercial banks differ in their interest in disclosing the sustainability dimensions; the economic dimension garnered the most attention, which is consistent with the nature and significance of the work done by Jordanian commercial banks, while the environmental dimension garnered the least attention, possibly as a result of a lack of interest. The operations and nature of the activity of Jordanian commercial banks have a considerable impact on the environment. According to Table 2, the mean number of directors with accounting and financial expertise on commercial banks' boards of directors over the 2016-2021 period was 8.782, the standard deviation was 2.099, the maximum number of such directors observed throughout the period was 13.0, and the lowest number observed was 5.0. The numbers show that commercial banks differ from one another in terms of the proportion of board members having accounting and financial expertise. Finally, according to Table 2, the mean number of non-executive directors on the boards of directors of Jordanian commercial banks over the 2016-2021 period was (11.641), the standard deviation was (0.966), the maximum observed value was (13.0) members, and the lowest observed value was (9.0) members. The values show that there are variations in the number of non-executive directors on the boards of directors among commercial banks. The separation of the positions of the Chairman and CEO (DUAL) is summarized in Table 3. The lowest percentage of foreign ownership observed was 4.545%; the values show that commercial banks differ in terms of the proportion of foreign ownership.
H01: There is no statistically significant effect of the bank's characteristics on the disclosure of sustainability in Jordanian commercial banks.
The generally low level of disclosure could be a result of certain of the sustainability dimensions outlined by the Global Reporting Initiative (GRI) not being compatible with how Jordanian commercial banks operate their businesses, especially in the environmental sector. The high level of disclosure of the economic dimension in Jordanian commercial banks during the period (2017-2021) demonstrates the interest of Jordanian commercial banks in their operations, economic fields, competitive capabilities and market position, and the economic effects of their financial activities and operations on society at large. This is because the economic dimension is thought to be a key component in achieving economic growth, generating profits, and ensuring the bank's existence and continuity. Additionally, there was a low level of environmental disclosure in Jordanian commercial banks over the course of the study period (2017-2021), which may be explained by the fact that the activities and operations of the banks had little to no environmental impact, because the areas of disclosure were restricted to protecting the environmental resources used, such as water and energy. The existence of items connected to subjects outside the purview of the operations of Jordanian commercial banks, such as women's and human rights, may be the reason why the level of disclosure of the social dimension in Jordanian commercial banks reached only 54% during the period (2017-2021). Additionally, there is a difference between Jordanian commercial banks in the qualities of the board of directors (size, independence, experience, and executive members) over the period (2017-2021), which might be connected to the circumstances and nature of the work performed by Jordanian commercial banks as well as their level of interest in fulfilling the requirements of governmental institutions related to the formation of the board of directors. Despite this, it was found that there was a strong commitment among the Jordanian commercial banks during the period to keep the roles of the chairman of the board of directors and the executive director separate. This shows the tendency of the Jordanian commercial banks to provide the chairman of the board of directors with enough time to devote to carrying out his regulatory and supervisory duties.
Table 1 :
Descriptive statistic of disclosure of sustainability. Table (1) above shows that the mean sustainability disclosure rate in Jordanian commercial banks for the period (2016-2021) was (52.0%), with a standard deviation of (16.4%). Table (1) also shows that the economic disclosure had the greatest mean, at (81.2%), followed by the social disclosure in second place, at (54.9%), and the environmental disclosure in third place, at (24%), which was the lowest mean.
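As a point of reference, dimension-level disclosure means like those in Table 1 are typically produced by scoring a GRI-style item checklist for each bank-year and averaging the shares of disclosed items. The short sketch below illustrates this usual unweighted scoring; it is an assumption about the general procedure rather than the authors' exact index, and the item counts shown are invented for illustration.

# Illustrative GRI-style disclosure index (assumed procedure; item counts are made up).
def disclosure_index(items):
    """Share of checklist items disclosed, where each item is coded 1 (disclosed) or 0."""
    return sum(items) / len(items)

# Hypothetical coding for one bank-year.
economic      = [1, 1, 1, 0, 1, 1, 1, 1]          # e.g. 8 economic items
environmental = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]    # e.g. 10 environmental items
social        = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]    # e.g. 10 social items

print(f"economic={disclosure_index(economic):.1%}, "
      f"environmental={disclosure_index(environmental):.1%}, "
      f"social={disclosure_index(social):.1%}, "
      f"overall={disclosure_index(economic + environmental + social):.1%}")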
Table 2 :
Descriptive statistic of bank characteristics. According to Table 2, the mean size of the board of directors for Jordanian commercial banks over the 2016-2021 period was 12.128 members, the standard deviation was 1.210 members, the highest value observed during the period was 16.0 members, and the lowest observed value was 10.0 members. With regard to board size, where the rule on the board's composition indicates that the number of members should not be less than (11), the values show that there is a difference between the Jordanian commercial banks in terms of adherence to the rules of corporate governance during the period. Table 2 further reveals that the average level of board independence in Jordanian commercial banks from 2016 to 2021 was 90.1%, with a standard deviation of 22.4%; the highest observation during that time was 100.0%, while the lowest observation was 30.8%. According to the values, there are variations among Jordanian commercial banks in terms of the independence of the board of directors. Also, according to Table 2, the average number of directors on Jordanian
Table 3 :
Descriptive statistic of the separation of the positions of the Chairman and CEO; 1.3% of commercial banks recorded cases of non-separation between the two positions.
Table 4 :
Descriptive statistic of ownership structure. The lowest percentage of ownership concentration was 16.0%. The values show that the commercial banks' percentages of ownership concentration differ from one another.
Table 5
dinars. The highest value observed during this time was 27,615.479 million dinars, while the lowest value observed was (949.577 million dinars). Table (5) shows that over the period
Table 6 :
Results of the Lagrange Multiplier and Hausman tests. The Lagrange Multiplier test was used to choose between the pooled regression model (PRM) and the random effects model (REM). The Hausman test was used to choose between the random effects model (REM) and the fixed effects model (FEM), with H0: REM is more consistent than FEM.
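The model-selection logic summarized in Table 6 can be illustrated with the following sketch, which estimates fixed-effects and random-effects models with the Python linearmodels package and computes a Hausman statistic by hand. The data file, column names, and (bank, year) index are assumptions for illustration; this is not the authors' code.

# Sketch of the Table 6 model-selection step (illustrative; assumed data layout).
import numpy as np
import pandas as pd
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

panel = (pd.read_csv("jordan_banks_panel.csv")     # hypothetical file of bank-year rows
           .set_index(["bank", "year"]))

dep  = panel["DISC"]
exog = panel[["BSIZE", "BIND", "BEXP", "DUAL", "NONEXE", "SIZE"]]

fe = PanelOLS(dep, exog, entity_effects=True).fit()   # fixed effects model (FEM)
re = RandomEffects(dep, exog).fit()                   # random effects model (REM)

# Hausman statistic: q' [Var(b_FE) - Var(b_RE)]^(-1) q, with q = b_FE - b_RE.
q = (fe.params - re.params).values
v = (fe.cov - re.cov).values
h_stat  = float(q @ np.linalg.inv(v) @ q)
p_value = stats.chi2.sf(h_stat, df=len(q))

# H0: the random-effects estimator is consistent; a small p-value favours fixed effects.
print(f"Hausman chi2 = {h_stat:.3f}, p = {p_value:.3f}")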
Table 8 :
Results of VIF
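For context, variance inflation factors such as those summarized in Table 8 can be computed as sketched below; the data file and column names are assumed for illustration, and a VIF well below 10 (often below 5) is usually read as evidence that multicollinearity is not a serious problem.

# Illustrative VIF check (assumed data layout, not the authors' code).
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("jordan_banks_panel.csv")   # hypothetical data file
X = sm.add_constant(df[["BSIZE", "BIND", "BEXP", "DUAL", "NONEXE",
                        "CONCEN", "FOREIGN", "SIZE"]])

vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
    name="VIF",
)
print(vif.drop("const"))   # the constant's VIF is not meaningful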
Table 9 :
Results of the stationary test (unit root)
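A unit-root check like the one reported in Table 9 could, for example, be run series by series with an augmented Dickey-Fuller test, as sketched below; dedicated panel unit-root tests (such as Levin-Lin-Chu) would be a stricter alternative. The file and column names are assumptions for illustration.

# Illustrative stationarity check (assumed data layout, not the authors' code).
import pandas as pd
from statsmodels.tsa.stattools import adfuller

df = pd.read_csv("jordan_banks_panel.csv")   # hypothetical data file
for col in ["DISC", "BSIZE", "BIND", "BEXP", "NONEXE", "CONCEN", "FOREIGN", "SIZE"]:
    stat, p_value, *_ = adfuller(df[col].dropna())
    print(f"{col:8s} ADF statistic = {stat:6.3f}, p = {p_value:.3f}")   # p < .05 suggests stationarity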
Table 10 :
Random Effect Model for H0
Table 12 :
Pooled Effect Model for H02 | 2023-05-10T15:06:25.835Z | 2023-03-23T00:00:00.000 | {
"year": 2023,
"sha1": "4840e12fd7c9d41d7c85f9c9c43cf511997f3723",
"oa_license": "CCBYNC",
"oa_url": "https://www.openaccessojs.com/JBReview/article/download/1032/529",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f3190a70b7decbc516d4cf5732cbba480aaedba9",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
12596071 | pes2o/s2orc | v3-fos-license | Infections in Transplant Patients
Recipients of solid organ transplants (SOT) need primary care providers (PCPs) who are familiar with their unique needs and understand the lifelong infectious risks faced by SOT patients because of their need for lifelong immunosuppressive medications. SOT recipients can present with atypical and muted manifestations of infections, for which the knowledgable PCP will initiate a comprehensive evaluation. The goal of this article is to familiarize PCPs with the infectious challenges facing SOT patients. General concepts are reviewed, and a series of patient cases described that illustrate the specific learning points based on common presenting clinical symptoms.
Many conditions commonly seen by generalists now qualify patients for SOT, including symptomatic chronic conditions such as congestive heart failure (CHF), chronic obstructive pulmonary disease (COPD), chronic hepatitis, and chronic kidney disease (CKD). Given the prevalence of these chronic conditions, more than 100,000 people are on waiting lists in the United States for SOTs. While wait-list length is influenced by changes in demand, listing practices, death rates, donation rates, and allocation policies, it is important to be aware that the demand for SOT is increasing (Fig. 1). 3 SOT recipients are at risk for infections in the short and long term, resulting in higher morbidity and mortality. 4 Unlike the vast majority of hematopoietic cell transplant recipients, who eventually have immunosuppressive medications discontinued, SOT patients require lifelong immunosuppression and therefore remain at lifelong increased risk of infection. 5 To avoid transplant rejection, the commonly used immunosuppression regimens are broadly acting, decreasing both cellular and humoral immunity. Reduced cellular immunity leaves hosts susceptible to viral, fungal, and intracellular pathogens, while decreased humoral immunity increases the risk for encapsulated bacteria. In most SOT recipients the degree of immunosuppression, and hence the infectious risk, tends to decrease over time, but never returns to a normal baseline. For those who develop recurrent or chronic organ rejection and require intensification of their immunosuppressive regimen, the risk for infection remains high. Usual immunosuppressive regimens include a corticosteroid, a calcineurin inhibitor, and an antimetabolite. 5 More care is now decentralized from major transplant centers.
Even patients who live in rural and geographically isolated areas are receiving SOTs. Because the risk of many transplant-related complications tends to decrease after 3 to 6 months posttransplant, patients generally are referred back to their PCPs/specialists by around 3 months after the transplant. Hence, subacute and long-term management is now being done by generalists with the transplant center providing a consultative role if needed. Most transplant centers will provide the patient and primary physician a packet with transplant-specific care guidelines, recommended monitoring, and follow-up requirements. PCPs of new SOT patients should request this information from the transplant center (Box 1).
PRACTICAL ISSUES IN THE MANAGEMENT OF SOT INFECTION
Typical signs and symptoms of infection are muted by immunosuppressive agents.
In immunocompetent hosts, the signs and symptoms of inflammation (rubor, dolor, calor) can be important clues to infection. However, the immunosuppressive agents that SOT recipients receive can blunt this inflammatory response (the mechanism by which they prevent rejection), thereby making the signs and symptoms of inflammation much more subtle. 5 Thus, it is important for PCPs and patients to remain vigilant of even subtle manifestations of infections. Common urgent-visit concerns such as low-grade fevers, new cough, or diarrhea can portend serious infections. This article reviews common presenting symptoms that should be approached differently in SOT patients than in immunocompetent persons.
In this time of heightened awareness of medical costs and growing resistance to antimicrobial therapy, the usual approach to average-risk ambulatory patients is to be judicious and pragmatic in our use of diagnostic and therapeutic interventions. 6 Several features of infection in SOT patients mandate a different approach than is used in immunocompetent patients (the clinical presentation can be more subtle,
multiple pathogens may be present concomitantly, and the progression of infection may be more rapid). As a result, in general more comprehensive diagnostic testing and earlier escalation to advanced imaging and invasive diagnostic procedures are warranted for SOT patients. 5 Although empiric therapeutic interventions have their place in SOT patients, thorough efforts to identify and characterize the etiologic agent(s) of infection are imperative.
Some tests used to diagnose infections in immunocompetent patients are much less sensitive in SOT patients. For example, tests that rely on the host's immune system (eg, serology) may be less sensitive than direct detection of the pathogen. Although serologic tests can help identify latent infections, antibody development to acute infections may be delayed or never develop in immunosuppressed patients. Instead, direct detection of pathogens using culture, polymerase chain reaction (PCR), and so forth, from optimal specimens is preferred. 5,7 To add to the complexity, SOT patients are more likely than immunocompetent patients to have rapid progression of infection because of the lack of an appropriate immune response. An example is pneumonia, which can be safely treated in the ambulatory setting in most immunocompetent patients but can often result in hospitalization in SOT patients. 8 SOT patients are more likely to have multiple pathogens and resistance patterns that complicate the choice of antimicrobial therapy. 7 Early involvement of infectious disease specialists may be warranted. PCPs should familiarize themselves with the timeline of susceptibility to infections.
In general, the intensity of immunosuppression decreases with the time from transplant and, as a result, the risk for serious opportunistic infections tends to decrease with time. However, if episodes of rejection occur, immunosuppressive medications are intensified and patients again become more vulnerable to opportunistic infections that might more typically be seen in the earlier period posttransplant. 5,7 The first month after transplant is a vulnerable time for nosocomial infections with multidrug-resistant organisms, including those related to complicated surgery.
Nosocomial infections may be related to the transplant surgery itself or to exposures to the hospital environment. 5,7 Many such infections are the result of venous and urinary catheterization as well as intubation. Most SOT patients are aggressively treated for known infections before transplantation, given the risks of immunosuppression. Protocols exist to screen donors for infections, but the short time frame necessitates limited diagnostic testing. Although serologic testing for viral hepatitis, human immunodeficiency virus (HIV), and herpesviruses, and microbiological testing with blood and urine cultures, can identify active bacterial and fungal infections, donor-related infections are well described. 9 During the first posttransplant month, bacterial and fungal infections are more common than viral infections, and often these nosocomial pathogens are drug-resistant strains. SOT patients in this time frame will typically still be under the direct care of the transplant center. The effects of immunosuppressive medications typically manifest in months 1 to 6, so this is when opportunistic infections tend to be most common.
Donor-acquired conditions, or reactivation of recipient infections such as hepatitis B and C, are apt to infect the patient after the first month. 7,9 By 3 months posttransplant, if all is going well, immunosuppression will typically be tapered and most patients will leave the direct care of the transplant program. Patients will be receiving prophylaxis for Pneumocystis jiroveci pneumonia (PJP) (trimethoprim/sulfamethoxazole or dapsone, or monthly inhaled pentamidine) and cytomegalovirus (CMV) (commonly valganciclovir). 5 Reactivation of latent infections can occur during this time period, including Mycobacterium tuberculosis and Strongyloides stercoralis. SOT patients are susceptible during this period to endemic mycoses including Coccidioides, Histoplasma, Blastomyces, and Cryptococcus. 7 Prophylactic medications are typically discontinued by 6 to 12 months after transplant. The need for prophylaxis diminishes over time, but SOT patients will always be susceptible to typical community-acquired viral and bacterial infections. Further concerns are raised 6 months or more posttransplant.
Most stable SOT patients are on reduced doses of immunosuppression, and therefore have a decreased risk for opportunistic infections. CMV, Epstein-Barr virus, herpes simplex virus, and hepatitis viruses remain a concern, but more commonly SOT patients are infected with seasonal respiratory and gastrointestinal viruses, community-acquired pneumonias, and urinary tract infections during this period. 7,10 In patients who experience allograft rejection, doses of immunosuppressive medication are increased. In those who require higher levels of immunosuppression, the risk for opportunistic infections may be as high as during the 1- to 6-month posttransplant period; this would include an increased risk for PJP, Nocardia, Varicella, and Aspergillus (Fig. 2). 7
CASES OF INFECTIOUS DISEASE IN SOT PATIENTS
The Infectious Diseases Society of America (IDSA) guidelines recommend that clinicians consider influenza for all patients with acute onset of fever and respiratory symptoms during the influenza season. 11 The IDSA does not have a unique diagnostic algorithm for immunosuppressed patients during or outside of the influenza season, but does warn that chronically ill/immunosuppressed patients can present atypically and have more severe consequences of infection with influenza. 11 Otherwise healthy patients can forgo diagnostic testing and be treated empirically if they present for care
Case 1
A 45-year-old man, 2 years post orthotopic liver transplant, presents to your clinic in January with 5 days of rhinorrhea, mild sore throat, dry cough, mild headache, and chills. ROS is negative for pleurisy or dyspnea, but positive for fatigue. Vital signs (VS): temperature (T) 37.7 C, heart rate (HR) 100 beats/min, blood pressure (BP) 110/70 mm Hg, respiratory rate (RR) 22 breaths/min, oxygen saturation (O 2 sat) 98%. His examination shows clear nasal secretions, mild pharyngeal erythema, and no adenopathy, and his lungs are clear to auscultation bilaterally. His transplanted liver is nontender on palpation, and he is hydrating orally and urinating without difficulty. He had an influenza vaccination 2 months prior.
within 48 to 72 hours of the onset of symptoms, but SOT patients should be approached differently.
Although the rate of influenza infection in SOT patients appears to be similar to the general population (2%-4% in SOT vs 3%-5% in the general population), the severity of infections is higher. 12 The type of organ transplant may influence the risk of complications. Lung transplant recipients are most vulnerable, followed by liver, then kidney transplant patients. 13 Whereas healthy patients typically shed the virus 1 day prior and up to 1 week following the onset of symptoms, SOT patients can be infectious for weeks to months because of their inability to clear the virus, 14 and are more likely to present with atypical symptoms including no or only a low-grade fever (50%-80% of SOT patients with influenza present with fever). Hence, symptoms of rhinorrhea, dry cough, sore throat, or gastrointestinal symptoms of stomach upset and diarrhea commonly seen in noninfluenza respiratory or gastrointestinal viral infections may be the only presenting symptoms of influenza. When fever is present in the setting of symptoms of upper respiratory tract infection, it is a very predictive sign for influenza in SOT patients. 12 SOT patients are more prone to develop lower respiratory tract infections, including influenza pneumonia (47% of hospitalized SOT patients), secondary bacterial pneumonia (Streptococcus and Staphylococcus, 17% of hospitalized SOT patients), other bacterial superinfections, and extrarespiratory manifestations such as central nervous system or myocardial involvement. 12 As a result, SOT patients may require hospitalization and aggressive management of influenza infection more commonly than immunocompetent patients.
Whereas empiric therapy or supportive therapy alone is acceptable in many otherwise healthy ambulatory patients, SOT patients benefit from specific identification of the pathogen(s). Immunocompetent patients should be tested as soon as symptoms begin, ideally within less than 5 days. However, regardless of the timing of symptom onset, it is appropriate to test SOT patients for influenza when the suspicion arises. Whereas nasopharyngeal washes and aspirates are superior in immunocompetent patients, upper and lower respiratory tract specimens can be helpful in SOT patients. 11 What is the appropriate workup of transplant patients suspected of influenza?
Rapid antigen tests have limited sensitivity for the diagnosis of influenza, and a negative test does not exclude the diagnosis. More sensitive tests such as respiratory virus PCR panels are becoming the gold standard at many laboratories. These tests are appropriate for use in symptomatic patients but can cause confusion by identifying multiple viruses in asymptomatic patients, making it difficult to determine if the identified viruses are pathogens. If patients have lower respiratory tract symptoms or clinical or radiographic evidence of lower tract infection, they should undergo bronchoscopy with testing. 12 For more information on testing, the reader is advised to visit the seasonal flu Web site of the Centers for Disease Control and Prevention: (http://www.cdc.gov/flu/professionals/diagnosis/labprocedures.htm).
Because of increasing resistance patterns to M2 inhibitors such as amantadine in influenza A and H1N1, neuraminidase inhibitors such as oral oseltamivir and inhaled zanamivir are considered the first-line therapy. The optimal duration of therapy is not well defined. In immunocompetent individuals the typical course is 5 days. However, SOT patients can continue to shed the virus for a longer duration. Active treatment for 10 to 14 days with weekly PCR monitoring should be considered. 12 Given the higher rate of bacterial superinfection or coinfection, antibiotics should be considered, especially in SOT patients with lower respiratory infection symptoms, while awaiting results of diagnostic studies. What is the best way to handle influenza vaccination in transplant patients?
SOT patients should be vaccinated with an inactivated influenza vaccine before and after transplantation. There is some controversy about the optimal timing of vaccination posttransplant, but the prudent approach is to vaccinate SOT patients as soon as the seasonal vaccine is available before influenza season. The efficacy of vaccination may be lower in SOT patients than in immunocompetent patients, but vaccination does appear to be safe. One study found that, of the population of SOT patients diagnosed with influenza, 50% had received the vaccination and none of these patients had protective levels of antibodies against influenza at the time of admission. The same study showed that influenza vaccination decreased the risk of associated pneumonia (relative risk 0.3) in comparison with SOT patients who were not immunized. 15 There is no evidence that additional benefit is gained from high-dose vaccines or intradermal inoculation. The live attenuated nasal vaccine is contraindicated in immunocompromised persons, including those who have received a SOT. Evidence does not show an increased risk for graft rejection or failure with influenza vaccination. 16 Community-acquired pneumonia (CAP) in SOT patients is common (3 times higher incidence than in immunocompetent individuals) and dangerous (11%-43% mortality rate). 17,18 A Canadian case-control study found that immunosuppressive medications increased the risk for CAP, with an odds ratio of 15. 17 Despite the uniquely higher risk facing SOT patients, the IDSA and the American Thoracic Society (ATS) do not specifically consider immunosuppressed patients in their consensus guidelines for CAP. Rather, the guidelines address CAP, and separate guidelines address nosocomial pneumonia (recent hospitalization or institutional settings). 19 For this patient, presentation in the 3- to 6-month posttransplant period increases her risk for opportunistic infections, given the effects of a longer duration and higher dose of immunosuppressive medications. Viral infections such as CMV and respiratory viruses can cause primary lung infections during this time, and can also complicate matters by further decreasing immunity and increasing the risk for opportunistic infections such as Aspergillus fumigatus and PJP. 10
Case 2
A 52-year-old woman 3.5 months status post living-related kidney transplant presents to your clinic to reestablish care after her transplant. She reports 7 days of subjective fevers and a nonproductive cough. She has no rhinorrhea, sore throat, nausea, vomiting, diarrhea, or body ache, but she has had anorexia and some mild night sweats. She has had many well-wishers at her home since returning and suspects she could have had an infectious exposure. VS: T 38.1 C, HR 96, BP 132/90, RR 24, O2 sat 94% on room air. On examination she is speaking in full sentences but looks tired. She has scattered shotty cervical adenopathy, no nuchal rigidity, and some scattered inspiratory crackles bilaterally. There are wheezes, and her lungs are equally resonant on percussion. Her heart is regular without a murmur, she has a reassuring soft abdominal examination, and her transplant kidney is nontender on palpation.

The ATS and IDSA recommend the use of pneumonia severity assessment tools to determine the appropriate care setting: ambulatory, inpatient, or ICU. CURB-65 is a recommended tool that gives 1 point each for Confusion, Uremia, Respiratory rate greater than 30/min, Blood pressure less than 90 mm Hg, and age 65 or older. A score greater than or equal to 2 on CURB-65 should spur providers to consider hospitalizing patients. The pneumonia severity index (PSI) is the other recommended tool, and has 11 initial elements to the assessment. If patients have any of the 11 elements they are risk stratified into 4 higher-risk classes that correlate with 30-day mortality risk, as does CURB-65. 19 PSI has a higher discriminatory power and is more accurate in lower-risk patients. Neither tool specifically considers immunosuppression or SOT as a risk factor for severe disease. Hence, it is important for PCPs to use such tools with caution in SOT patients. Current guidelines recommend these tools be used for supplemental data, and that the physician's determination of the patient's global risk be the primary determinant of the treatment plan. 19 In SOT patients with lower respiratory symptoms of cough, dyspnea, increased respiratory rate, or fever, testing should include a complete blood count, chemistry panel, blood cultures, sputum culture, and a chest radiograph. Although chest radiographs have lower sensitivity in immunosuppressed people, the pattern of disease on radiography can still be helpful. Focal airspace disease is correlated with bacterial (and mycobacterial) pneumonia. Multifocal airspace disease and nodular infiltrates have a much broader differential (as discussed earlier). Diffuse and interstitial patterns are concerning for pneumocystis or viral infections. In immunocompromised patients with pulmonary infiltrates, chest CT scan and bronchoscopy have clearly shown benefit in distinguishing among infectious and noninfectious causes. 20,21 Consensus guidelines recommend empiric treatment with institutionally tailored antibiotic choices that should reflect the resistance patterns in the community. PCPs should not delay empiric therapy while waiting for testing. Usual choices of a respiratory fluoroquinolone, a macrolide, or a broader-spectrum β-lactam plus a macrolide are also reasonable empiric therapy for SOT patients. 19 Depending on the clinical presentation, severity scores, reliability, and level of home support, some patients can be treated as outpatients with very close follow-up, whereas others need to be managed in the inpatient setting.
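To make the CURB-65 scoring described above concrete, the short sketch below encodes the five criteria and applies them to the vital signs given in Case 2. It is illustrative only, not a validated clinical tool; the uremia threshold and the exact blood-pressure and respiratory-rate cutoffs vary slightly between publications and are assumptions here.

# Illustrative CURB-65 calculator (not a validated clinical tool).
# One point each: Confusion, Uremia, Respiratory rate, low Blood pressure, age >= 65.
# Cutoffs below are commonly published values and are assumptions for this sketch.
def curb65(confusion: bool, urea_mmol_per_l: float, resp_rate: int,
           systolic_bp: int, diastolic_bp: int, age: int) -> int:
    score = 0
    score += int(confusion)
    score += int(urea_mmol_per_l > 7.0)                  # uremia (roughly BUN > 19 mg/dL)
    score += int(resp_rate >= 30)
    score += int(systolic_bp < 90 or diastolic_bp <= 60)
    score += int(age >= 65)
    return score

# Case 2 vitals: RR 24, BP 132/90, age 52, no confusion; urea assumed normal (not given).
score = curb65(confusion=False, urea_mmol_per_l=5.0, resp_rate=24,
               systolic_bp=132, diastolic_bp=90, age=52)
print(score, "-> consider hospitalization" if score >= 2 else "-> low score by CURB-65 alone")

For this patient the score comes out low, which underscores the caution above: these tools do not account for immunosuppression or SOT status, so a low score should not override clinical judgment in an SOT patient.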
Hospitalizing immunosuppressed SOT patients and putting them at risk for nosocomial infections is an important consideration. Early consultation with pulmonary and infectious disease specialists, very close follow-up, and rapid escalation of the intensity of both diagnostic and therapeutic efforts depending on response are appropriate.
Regarding follow-up, because this patient is in the highest-risk time frame for opportunistic infections, and has nodular infiltrates on her chest radiograph conferring a broader range and possibly higher-risk situation, an infectious disease consultation should be initiated as well as hospitalization for intensified diagnostic and therapeutic interventions. Typically recommended regimens for CAP requiring hospitalization
(ceftriaxone plus azithromycin, or a respiratory fluoroquinolone) would not be active against the cause of this patient's pneumonia.
Nocardia was diagnosed on Gram stain (beaded, branching, filamentous gram-positive rod) and culture of a bronchoalveolar lavage (BAL) sample. Invasive procedures are often necessary to make the diagnosis of pulmonary nocardiosis (44% in one study). 22 Nocardia is a soil-borne bacterium that more commonly presents as an opportunistic infection in immunosuppressed patients, although it can cause self-limited indolent disease in immunocompetent patients. SOT patients are particularly vulnerable when their T-cell immunity is suppressed, often when corticosteroid doses are higher. 23 Because Nocardia has a propensity to disseminate to other sites (brain, bones, skin), the patient should be carefully clinically assessed for these complications, with further imaging as clinically appropriate. The primary treatment is typically with sulfamethoxazole-based antibiotics. This patient had no evidence of dissemination outside her lungs and was treated initially with imipenem, given her sulfamethoxazole allergy, then converted to oral linezolid for 6 months.
Diarrhea is a common symptom in both the general and SOT populations. Noninfectious and infectious causes are prevalent, but morbidity and mortality in susceptible immunocompromised SOT patients are much higher. 24 SOT patients can present with infectious causes of diarrhea, with both acute-onset and chronic presentations. The differential diagnosis favors infectious causes, then medication side effects; in the setting of prolonged immunosuppression, posttransplant lymphoproliferative disease (PTLD) should also be considered. 24 Unlike in stem cell transplant recipients, graft-versus-host disease is an uncommon reason for diarrhea following solid organ transplantation. CMV, C difficile, and bacterial pathogens are common infectious causes; parasitic infections are less common (Table 1). 25 C difficile-associated diarrhea (CDAD) is the most common nosocomial antibiotic-associated diarrhea in SOT populations. 26 In the general population we worry about risk factors such as hospitalizations, gastrointestinal surgery, advanced age, uremia, and multiple comorbidities. Most cases of CDAD in SOT patients occur during the first 3 months, owing to the associated risk factors of prolonged hospitalization, prolonged
Case 3
A 48-year-old woman, 2 years post liver transplant for autoimmune hepatitis, presents with watery, profuse diarrhea for 3 days, resulting in 3 lb (1.36 kg) of unintentional weight loss. She reports no nausea, vomiting, blood in the stool, or tenesmus, but has had a low-grade fever. She has no recent travel, change in her medications, or unusual or risky food ingestions or sexual behavior. However, on further questioning she is a girl-scout leader for her 10-year-old daughter's group, and they recently had a day outing to a local water park. On examination she is tired, without jaundice. VS: T 38.1 C, HR 95, BP 110/65. Her abdominal examination is minimally tender without guarding or rigidity, and her liver is nontender on examination.
What testing would you pursue?
A. None indicated. Treat her with supportive therapy, as she has no blood in the stool and no severe vital-sign abnormality.
antibiotic exposure, and the intensity of immunosuppression. Late-onset CDAD in SOT can happen months to years after transplantation, and is associated with intensification of immunosuppression to address rejection or with antibiotic exposure. 9,26 The presentation can vary from mild diarrhea to life-threatening sepsis.
A prospective Canadian study of more than 1300 SOT patients found that the incidence of CDAD increased from 4.5% in 1999 to 21% in 2005, and with interventions decreased to 9.5% in 2010. The study showed that CDAD resulting in graft loss, colectomy, or death was more likely in those with a white blood cell count of greater than 25,000 and the finding of pancolitis on CT scan. The presence of both increased the risk for these complicated events by 42%. In such patients, disease progressed despite timely and appropriate antimicrobial therapy. 26 PCPs should be aggressive in their approach to the diagnosis and treatment of SOT patients with suspected CDAD and should have a low threshold for hospitalizing SOT patients for expedited care.
Recommendations for diagnostic testing and the initial treatment of acute diarrhea, including C difficile, have been previously published. IDSA guidelines specifically consider immunosuppression for diarrhea lasting 7 days or more. However, SOT patients are at higher risk than immunocompetent populations, and for acute diarrhea, regardless of duration, providers should have a high suspicion of bacterial, viral, and parasitic causes. 24 The unique risks in SOT (chronic immunosuppression, medications that commonly cause diarrhea, risk of exposure to antibiotics) can make finding a diagnosis a complex process. Endoscopy with biopsy should be considered in those with a negative noninvasive evaluation. Reported rates of abnormality on colonoscopy and histology range from 20% to 40%, but only 10% of colonoscopy findings lead to a change in medical management. 25 Regarding follow-up, this patient was diagnosed with cryptosporidiosis via special staining and PCR of a fresh stool sample. Supportive therapy and antimotility agents were initiated alongside a cautious reduction in immunosuppression.
Cryptosporidium is a fecal-orally transmitted, water-borne protozoan found worldwide, including in the United States. It is a highly resistant parasite whose oocysts can survive for 3 to 10 days in water despite appropriate levels of iodine and chlorine treatment. 27 Water-borne outbreaks of cryptosporidiosis have been traced to water parks and public fountains. Food-borne outbreaks have also been linked to infected food handlers and unpasteurized apple cider, and person-to-person transmission has been described at daycare centers. 27 Whereas it can cause self-limited mild diarrhea in the general population, Cryptosporidium has been shown to be an important cause of more severe infectious diarrhea in children and in immunocompromised people, including the SOT population; this is especially true for children with SOT or recipients of intestinal grafts. A retrospective cohort found an increased risk for cryptosporidiosis in men, and in those with a longer duration of diarrheal symptoms and increased tacrolimus (TAC) levels. Cases of cryptosporidiosis that were associated with higher TAC levels also correlated with a self-limited but important increase in creatinine (Box 2). 28 UTIs are very common in adult renal and kidney-pancreas (K-P) transplant patients (6%-86%), 29,30 being reported in up to 40% of pediatric renal SOT patients. 31 UTIs also affect other SOT recipients but typically occur in the first month posttransplant, owing to the inherent risks of urinary catheterization. 29 In nonrenal SOT patients risk
Case 4
A 37-year-old woman with history of renal transplant 5 months ago and history of recurrent urinary tract infections presents with concern for a urinary tract infection (UTI). She endorses 2 days of frequency, urgency, and dysuria. She is mildly nauseated but has no chills, vomiting, or flank pain. She thinks her urine is mildly malodorous and cloudy. Her medications include tacrolimus, prednisone, mycophenolate mofetil, amlodipine, and trimethoprim/sulfamethoxazole SS as pneumocystis prophylaxis. VS: T 38.2 C, HR 88, BP 115/70. She appears well, has no rash or flank tenderness, she has mild tenderness over her surgical site and transplanted kidney, and in the suprapubic area. Her urine dipstick is positive for leukocyte esterase and nitrites.
In addition to sending urinalysis and urine culture, you should treat her empirically with: Correct answer: C.
Box 2 Diarrhea pearls
Routine diagnostic studies for the workup of diarrhea in the general population should be used for SOT patients but, if the etiology remains obscure and symptoms persist, meticulously assess for viral, bacterial, and parasitic infections including abdominal imaging and colonoscopy.
Consider C difficile in patients with risk factors, even for community-acquired diarrhea.
Consider cryptosporidiosis in SOT patients with a history of travel or exposures to water parks.
Cryptosporidium can shed oocysts intermittently, so serial testing on 3 separate days is appropriate. Providers should ask specifically for Cryptosporidium testing, as it may not be included on standard ova and parasite testing.
Immunosuppressive medications may cause diarrhea. Monitor immunosuppressive levels closely, and proactively adjust doses to avoid related toxicities.
Hand washing with water and soap is an important preventive strategy. Many causes of infectious diarrhea (rotavirus, norovirus, Cryptosporidium) are not effectively killed with alcohol-based hand sanitizers.
factors include female gender, increased age, and diabetes. In renal and K-P transplant patients, the risk of UTI is high for the first year after transplantation. Risk factors include a history of posttransplant dialysis, age, and female gender. 29 Factors that may increase the risk for recurrent UTI include ureteric stricture, vesicoureteral reflux, prolonged urinary catheterization, and overimmunosuppression. 29 UTIs occur at higher frequency with increased morbidity in renal transplant recipients. Studies are mixed, and have not convincingly demonstrated an increased risk for graft rejection or increased mortality due to UTIs. 29,31,32 Whereas some studies have found that specific immunosuppressive regimens and the intensity of immunosuppression appear to increase the risk of UTIs, other studies do not support the notion that UTIs are opportunistic infections in SOT patients. Rather, the increased risk is thought to be due to other factors (Table 2) and posttransplant anatomic changes that effectively define all UTIs in renal transplant patients as "complicated UTIs." 29,30 Asymptomatic bacteriuria (ASB) without signs and symptoms of infection is common in renal transplant patients. ASB has been associated with increased risk for UTI in the first year, but significant controversy exists about whether it should be treated within that time frame. 33 Candiduria is common, affecting up to 10% of renal transplant patients, but it is mostly asymptomatic. 33 The approach to asymptomatic candiduria is also controversial, given the paucity of available data on this topic. No convincing data exist to show that graft survival, morbidity, or mortality improves with treatment of asymptomatic candiduria in kidney transplant patients. 33 The diagnosis of cystitis is the same in SOT patients as it is for the general population. The important caveats are that urine culture should be obtained in all SOT patients given the higher risk of multidrug-resistant bacterial infections, and that upper tract (ie, kidney allograft) infections are very common in kidney transplant recipients. Pyelonephritis should be considered in kidney transplant patients with signs/symptoms of cystitis accompanied by fever, bacteremia, increased creatinine, leukocytosis, chills, or pain/tenderness over the transplanted kidney. 29 Many transplant centers use trimethoprim/sulfamethoxazole as prophylaxis against Pneumocystis in renal transplant patients during the first 6 to 12 months after transplantation, and this appears to also decrease the risk of UTI. However, this has also been associated with breakthrough infections caused by trimethoprim/sulfamethoxazole-resistant organisms. Long-term prophylaxis has been shown to decrease the incidence of UTI, although the growing resistance to trimethoprim/sulfamethoxazole among GNRs, reportedly more than 80% in some settings, limits the utility of long-term antibiotic prophylaxis. 29,33 A multicenter, prospective cohort study followed 4000 SOT patients over 2 years of follow-up, during which 208 episodes of UTI occurred in renal transplant recipients. The vast majority were due to GNRs (>50% Escherichia coli, 10% Pseudomonas aeruginosa and Klebsiella pneumoniae, 6% other gram negatives) and only 7% were due to Enterococcus species. 29 Several studies also identify Staphylococcus species as common pathogens in renal SOT patients. 30,33 An alarming amount of antibiotic resistance was observed, with more than one-quarter of isolates showing extended-spectrum β-lactamase (ESBL) production.
Carbapenem-resistant Klebsiella, Pseudomonas aeruginosa, and vancomycin-resistant Enterococcus are all increasing in frequency. 34 Regarding follow-up, this patient has typical symptoms of cystitis; however, she also has tenderness over her transplanted kidney and fever, suggesting associated allograft pyelonephritis. It is inappropriate to use antibiotics that are effective only for lower UTIs (eg, nitrofurantoin and fosfomycin). The infection has occurred while receiving prophylactic trimethoprim/sulfamethoxazole, making this inappropriate empiric therapy. Despite evidence of growing resistance in some strains, fluoroquinolones remain a reasonable option for empiric therapy. 29 However, at centers with high rates of resistance to fluoroquinolones, treatment with broader-spectrum antibiotics might be appropriate. Other common pathogens causing cystitis/graft pyelonephritis in kidney transplant patients include Enterococcus and Pseudomonas.
Clinically stable SOT patients without signs of sepsis but with evidence of cystitis can be appropriately managed in the ambulatory setting. However, any concern for graft pyelonephritis should trigger hospitalization and further evaluation. The rate of predisposing anatomic abnormalities is relatively high in kidney transplant recipients with graft pyelonephritis, so further diagnostic workup typically includes CT imaging of the kidneys and/or urological evaluation (for structural or functional abnormalities). 30
Case 5
A 48-year-old woman with history of renal transplant 6 years ago requests a routine preventive care examination. In addition to her usual SOT monitoring, she also requests "booster" shots of any vaccinations you recommend.
Past immunization history: influenza last year, Td 3 years ago, standard childhood vaccinations. As more SOT patients are living longer, PCPs should be prepared to address preventive care concerns. This section focuses on vaccine preventive care issues unique to the infectious risks facing SOT patients, and does not cover noninfectious preventive care.
The key pearls for vaccination in SOT patients are that in general, live vaccines are contraindicated posttransplant, and that the immunologic response to routine vaccinations may be diminished, especially in the first 3 to 6 months after transplantation. 35 This situation suggests that PCPs must be proactive; if a patient's chronic disease condition progresses on a path toward possible SOT, then appropriate live vaccinations (this may include measles, mumps, rubella, zoster, and varicella) should be administered pretransplant, before the patient is immunosuppressed. The American Society of Transplantation (AST) 2009 guidelines recommend that any live vaccines be administered at least 4 weeks before transplant. 36 Even inactivated/killed vaccinations should be considered pretransplant when possible, because of the anticipated better response rate. Although end-stage chronic illnesses that necessitate SOT might be associated with reduced immune response to vaccinations, immunologic response to vaccinations is thought to be even further suppressed in the posttransplant period. 35
Influenza
Three of 4 studies on the response to influenza vaccination in SOT patients reported significantly reduced protective titers in comparison with normal controls. Lower responses were correlated with SOT patients on combination immunosuppressive therapy; specifically, mycophenolate mofetil (MMF) and cyclosporine were implicated. 37 Given the substantial morbidity and mortality associated with influenza in immunosuppressed populations, yearly inactivated influenza vaccination is recommended. 35 Live attenuated influenza vaccine is contraindicated in SOT patients.
Pneumococcal
SOT patients are at higher risk than the general population for invasive pneumococcal infections. 8 Although pneumococcal vaccination has been shown to be safe in SOT patients, response rates to vaccination are reduced, ranging from 13% to 50% depending on the serotype measured. 38 Major guidelines (AST, ACIP) recommend pneumococcal vaccination in immunosuppressed patients, including those who have received a solid organ transplant, and updated guidelines that incorporate both the polysaccharide and conjugate vaccines have recently been published. 39 Vaccine-naïve SOT patients are advised to receive PCV13 followed by PPSV23 8 weeks later. In SOT patients who have previously received PPSV23, a dose of PCV13 should be given at least 1 year after the dose of PPSV23. SOT patients younger than 65 years should have a repeat PPSV23 at 5 years, and those vaccinated before age 65 should have a repeat PPSV23 at 65 or 5 years after the first dose.
Tetanus/Diphtheria/Pertussis
Immunologic response to tetanus is close to normal in SOT patients, and tetanus vaccination should be repeated every 10 years as per IDSA/ACIP guidelines. Diphtheria immunity wanes significantly even after the first year, but current guidelines do not recommend checking titers for diphtheria. 36 A single booster dose of pertussis vaccine (Tdap) should be given to adults older than 19 years. 35,40
Human Papilloma Virus
The human papilloma virus (HPV) vaccine has not been well studied in SOT patients. It is not a live vaccine and should therefore theoretically be safe in the posttransplant population. The indications for HPV vaccination are similar to those in non-SOT patients. 36,40 It is safe to give hepatitis A and B, meningococcal, and H influenzae B vaccinations after transplant in patients with indications. The first 6 months after SOT, when immunosuppression is at its peak, is not the ideal time for vaccination administration because the immunologic response is significantly diminished. 35 Because household/other close contacts are presumed to be a primary source for many important infections, it is important for PCPs to counsel SOT patients' families and close contacts on the importance for them to be appropriately immunized. PCPs should add this to their list of annual preventive care reminders for their SOT patients ( Table 3).
The risk for cervical cancer is reportedly increased (up to 11-fold) in immunosuppressed patients in comparison with the general population. 41 As such, the recommendation from the United States Preventive Services Task Force (USPSTF) and the American College of Obstetrics and Gynecology is for yearly Papanicolaou smear and pelvic examination for cervical cancer screening in immunosuppressed patients. 42,43 However, this practice is best supported for people immunosuppressed by HIV infection. The evidence that HPV causes cervical cancer is excellent, and the increased risk of cervical cancer evidenced in HIV immunosuppression has been extrapolated to apply to SOT patients. However, in 2011 Engels and colleagues 44 evaluated data from a United States registry of more than 400,000 SOT patients and found no increased risk of cervical cancer. In addition, a 10-year prospective case-control study of 48 renal and K-P SOT patients showed no increased risk of cervical cancer in the SOT patients. 45 Despite the emerging data that immunosuppression in SOT recipients might not necessarily increase the risk of cervical cancer, at present PCPs should follow updated USPSTF cervical cancer screening guidelines from 2012, which explicitly exclude immunosuppressed patients from lengthening the screening interval beyond 1 year. 43
SUMMARY
SOT recipients need PCPs who are familiar with their unique needs. Understanding the lifelong infectious risks faced by SOT patients because of their need for lifelong immunosuppressive medications is fundamental. SOT recipients can present with atypical and muted manifestations of infections. The savvy, prepared PCP will keep a careful eye on the patient and initiate a comprehensive evaluation for infectious etiology. PCPs should work together with their local (infectious disease and other) specialists to generate care plans if the diagnosis or management is in question. | 2018-04-03T00:31:13.465Z | 2013-06-27T00:00:00.000 | {
"year": 2013,
"sha1": "ac3d77c2bb6e30735d72479b9cf58c4be553b763",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.mcna.2013.03.002",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "7ef9028768ab56509b8456999dda029c06a4cf36",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
8775724 | pes2o/s2orc | v3-fos-license | Traumatic bronchial injury
Highlights • Bronchial injury. • Chest tubes. • Thoracotomy. • Motor vehicle accident. • Pulmonary contusion.
Abstract. INTRODUCTION: Tracheobronchial injury is a recognized, yet uncommon, result of blunt trauma to the thorax. Often the diagnosis and treatment are delayed, resulting in attempted surgical repair months or even years after the injury. PRESENTATION OF THE CASE: We present a case report of a 31-year-old female who suffered a left main bronchus transection after a motor vehicle accident. The diagnostic and management issues and the clinical findings surrounding this injury are reviewed. DISCUSSION: Tracheobronchial disruption is a rare, life-threatening injury. Suspicion should be high when pneumomediastinum and pneumothorax are refractory to adequate pleural drainage. Flexible bronchoscopy with intubation distal to the injury may be necessary to prevent loss of the airway. Advance preparation should include setups for bronchoscopy, thoracotomy, and cardiopulmonary bypass. Patient survival depends on preparation and prompt surgical intervention. CONCLUSION: A high level of suspicion and the liberal use of bronchoscopy are important in the diagnosis of tracheobronchial injury. A tailored surgical approach is often necessary for definitive repair. Tracheobronchial injury is an uncommon injury after blunt trauma. It is often associated with other fatal injuries. Here we describe a case of early identification of a bronchial injury after a motor vehicle accident.
A 31-year-old female was brought in as a critical trauma after being struck by a car on an urban street. She arrived with an intact airway and breath sounds heard bilaterally; ATLS was initiated and led by the trauma team. Just after stabilization she became more distressed, and the decision was made to intubate her. Standard intubation was performed by the PGY-2 ER resident, with the anesthesia team on standby, using a 7 French endotracheal tube. The initial chest x-ray (CXR) demonstrated extensive left soft tissue emphysema extending to the neck. Computed tomography (CT) showed a left-sided pneumothorax and pneumomediastinum (Fig. 1). There was near obliteration of the left mid to distal main-stem bronchus proximal to the origin of the left upper lobe bronchus, concerning for a bronchial injury (Fig. 2). Multiple fractures were noted throughout her chest wall, spine, and pelvis. No intra-cranial, intra-abdominal, or vascular injuries were seen (Fig. 3).
A left-sided thoracostomy tube was placed. She had a large air leak that persisted on inspiration and expiration. She was hypoventilating with slight respiratory acidosis. A blood gas was performed, which revealed a PCO2 of 55 and a pH of 7.27. She was admitted to the ICU for bronchoscopy. Flexible bronchoscopy showed a 2 cm total disruption of the left mainstem bronchus. She was taken emergently to the operating room. A left posterolateral thoracotomy was performed. There was a small opening in the aortopulmonary window with air leakage. This was enlarged. The transected left bronchus was found and repaired primarily in a two-layered fashion with 4-0 Polydioxanone sutures (PDS) circumferentially around the cartilaginous area, then around the membranous portion. The anastomosis was patent on bronchoscopy intraoperatively and on post-operative day (POD) 4. She was extubated on POD 5 and discharged on POD 10. She was seen in clinic two months later with no respiratory complaints.
Tracheobronchial injury is a rare but morbid injury. In a large trauma autopsy series, 2% were found to have a tracheobronchial injury. Of those, 81% died at the scene, mostly from associated injuries [1]. Motor vehicle accidents were the most frequent mechanism.
Most injuries occurred within 2 cm of the carina. Injury to the right main bronchus is more common and diagnosed earlier. This is thought to be due to the fact that the left main bronchus is protected by the aorta [2]. The median time until diagnosis for left-sided injury was 30 days. The presentation for late diagnosis is often persistent pneumothorax. Historically, the outcome for left-sided injury is more favorable than for right-sided injury, with a mortality rate of 8% compared with 16% [3].
The initial management should follow the Advanced Trauma Life Support protocol. Findings on CXR can include pneumothorax, pneumomediastinum, subcutaneous emphysema, and air surrounding deep cervical tissue. The "fallen lung sign" [4], where the collapsed lung falls away from the mediastinum, is not often seen but is specific to bronchial injury. Findings on CT are similar to those of CXR [5]. The definitive diagnosis is made with bronchoscopy [6].
Surgical repair should be performed as soon as possible. If an injury is identified early, primary repair should be attempted. The mortality for those who underwent primary repair was lower than those who underwent resection of the injured bronchus and distal lung parenchyma (3% vs 13%) [3]. The outcome for non-operative management is generally worse than operative management. The operative approach differs depending on the location of the injury. Cervical trachea injury is repaired via collar incision. Distal trachea, carina, and right main stem bronchus are approached through right posterolateral thoracotomy. The left main bronchus is exposed via a left posterolateral thoracotomy [7]. Debridement and end to end anastomosis should be attempted for significant tracheal and bronchial injury. Pneumonectomy should be avoided if possible. Muscle flap is sometimes used to help with the reconstruction. Lobectomy is performed if the injury is associated with lobar destruction.
Post-repair complications include pneumonia, suture granuloma, wound infection, fistula, and stricture. Broncho- or tracheo-esophageal fistula may require resection and reconstruction. Viable tissue coverage is necessary. A stricture may be temporarily stented or undergo laser fulguration.
The overall outcome depends on the number of associated injuries. With advancements in trauma resuscitation and surgical techniques, the mortality rate has declined from 36% before 1950 to 9% since 1970 [3]. The long-term outcome for those who undergo immediate repair is good, with few respiratory symptoms [8].
Tracheobronchial injury is a rare but morbid injury. It is often associated with other organ injuries. A high index of suspicion will lead to a prompt diagnosis and ultimately an improved outcome.
Conflicts of interest
None.
Funding
None.
Ethical approval
Approval has been given by the UCLA IRB and Trauma Committee (IRB #16-000451).
Consent
Consent was obtained from the patient to publish the case.
Author contribution
Ali Cheaito is responsible for the design, data collection, and writing of the paper.
Financial support
None. | 2018-04-03T00:06:11.398Z | 2016-09-04T00:00:00.000 | {
"year": 2016,
"sha1": "03f58ef59d58236235c72a7599c57855430d244a",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ijscr.2016.08.014",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "03f58ef59d58236235c72a7599c57855430d244a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258748299 | pes2o/s2orc | v3-fos-license | Synthesis and Characterization of Epoxidized Beechwood Pyrolysis Bio-Oil as a Curing Agent of Bio-Based Novolac Resin
Abstract: A bio-oil-based epoxy (BOE) resin was synthesized using phenolic compounds from beechwood pyrolysis oil. These compounds were separated from crude pyrolysis oil by coupling two methods: fractional condensation and water extraction. The chemical structure of the BOE resin was characterized by NMR and FTIR analyses. BOE resin was used as a curing agent of bio-oil glyoxal novolac (BOG) resin to gradually replace bisphenol A diglycidyl ether (DGEBA). The thermal properties of cured resins and kinetic parameters of the curing reaction using differential scanning calorimetry (DSC) were discussed. Incorporating the BOE resin resulted in a lower curing temperature and activation energy compared to using DGEBA. These results indicate that the water-insoluble fraction of pyrolysis oil condensate can potentially be used to synthesize high-thermal performance and sustainable epoxidized pyrolysis bio-oil resins and also demonstrate its application as a curing agent of bio-oil glyoxal novolac (BOG) resin.
Introduction
Epoxy resin is one of the most significant thermosetting polymers having excellent mechanical, thermal, and electrical properties, high adhesion strength, corrosive resistance, favorable processing ability, and low curing shrinkage. It is widely used in adhesives, coatings, encapsulation, composite materials, and packaging materials for electronic devices [1][2][3][4][5].
Depending on the different epoxy equivalent weights, molecular weights, and viscosities, various types of epoxy resins can be found on the market. Bisphenol A diglycidyl ether (DGEBA) is the most widely used epoxy resin, accounting for more than 90% of the global epoxy resin market [6]. However, DGEBA is a toxic and petroleum-derived organic compound, as it is derived from bisphenol A (BPA). The replacement of BPA with alternatives from sustainable and renewable resources, such as bio-based polyphenols, is highly desired [7][8][9][10][11].
Researchers have recently been focusing on developing bio-based epoxy monomers or oligomers to replace the traditional DGEBA by investigating high-performance materials [12,13]. These bio-based epoxy resins have been prepared from a variety of renewable resources, such as vegetable oils (soybean oil, linseed oil, palm oil, and castor oil) [14][15][16]. Several promising results on the application of woody biomass to the production of epoxy resins have also been reported [17]. Zhang et al. [18] prepared an epoxy grouting resin using lignin, which is an underutilized by-product of the forestry industry that has outstanding viscosity with a low mechanical property. Another potential eco-friendly substitute for BPA is bio-oil from biomass pyrolysis [19]. Bio-oil produced by hydrothermal liquefaction of loblolly pine has been used for the synthesis of a bio-oil-based epoxy resin [7].
Bio-oil is a complex liquid mixture with over 255 different chemical compounds that can be classified into different "families" after identification; these include phenols, carboxylic acids, ketones, esters, alcohols, sugars, aldehydes, and furans, among others [20]. However, the application of phenol obtained from pyrolysis oil is limited due to fewer reactive sites and steric hindrance. For this reason, it is necessary to upgrade bio-oil [19,[21][22][23][24][25]. In our previous study [26], the combination of the two environmentally friendly methods (fractional condensation and water extraction) was successful in separating phenolic compounds from beechwood pyrolytic oil. The extraction of bio-oil by water was able to separate phenol into water-insoluble fractions based on the broad range of dew points and the different polarities and affinities with water [27,28]. As bio-oil comprises a large yield of phenolics, it will surely become a very effective alternative to commercial BPA in the field of epoxy resin production [29]. Now, the polymerization of phenol and formaldehyde under acidic conditions leads to the production of a novolac-type resin, which needs to be cross-linked by a curing agent [30]. Considering that hexamethylenetetramine, the most common curing agent, decomposes into ammonia and formaldehyde during heating, there is an urgent need to develop an environmentally friendly curing agent [31].
In our previous research, the water-insoluble fraction of bio-oil fractional condensation products was successfully used for the synthesis of novolac-type resins (BOG) with glyoxal, and they were successfully cured with DGEBA [32]. In this study, the potential of a synthesized epoxidized bio-oil resin as a curing agent of the bio-based glyoxal novolac resin is evaluated and discussed. Firstly, the water-insoluble fraction of the bio-oil fractional condensation products were polymerized with epichlorohydrin to prepare BOE resin. The chemical structure of BOE resin was confirmed by FT-IR, GPC, and 1 H-NMR spectroscopy. In addition, the BOE resin gradually replaced DGEBA as a bio-based formaldehyde-free cross-linker for bio-based glyoxal novolac resin to build a "greener" bio-based material. In order to determine the kinetic parameters of the curing reaction, model-free methods were applied to the data obtained from differential scanning calorimetry (DSC). A comparison between the thermal properties of BOE/DGEBA cured BOG resins was carried out. To the best of our knowledge, a synthesized bio-based epoxy resin based on the upgraded beechwood pyrolysis oil has been used for the first time in this study. The aim was to gradually replace the DGEBA equivalent as a formaldehyde-free cross-linker for bio-based novolac resin to produce a 100% bio-oil-based green material.
Figure 1. Overview of bio-oil production process and resin synthesis-curing system.
The recovered bio-oil fractions were analyzed by using a GC-MS (Varian 3900-Saturn 2100T), while a GC-FID (Scion 456-GC Bruker instrument, Billerica, MA, USA) was applied to quantify the identified components; the calculation method to determine molar fraction of phenolic compounds was detailed in our previous work [26]. During these analyses, the main phenols were identified and quantified, and the hydroxyl number was calculated (Table 1).
Synthesis of Bio-Oil Based Glyoxal Novolac Resin
The polymerization of bio-oil (OIL1WI) glyoxal (BOG) resin was conducted using glyoxal as an aldehyde precursor to replace formaldehyde and the OIL1WI fraction as the phenol precursor, as shown in Figure 1. The total concentration of phenolic compounds present in OIL1WI was calculated to be 5.80 mmol·g −1 .
All experiments were carried out in a 100 mL 3-neck reactor equipped with a condenser, a nitrogen inlet, and a pressure-equalizing dropping funnel placed at the middle neck and 2 side necks. An oil bath was used to preheat the melted phenol precursors (70 °C), and then, the temperature was increased to 125 °C under atmospheric pressure, and agitation was ensured through a magnetic stirrer at a speed of 300 rpm. Oxalic acid (10 mol.%) was added to obtain acidic conditions. The molar ratio of phenol to glyoxal precursors was set as 2, and the glyoxal was added drop-wise through the pressure-equalizing dropping funnel; the same ratio was fixed for each experiment. The reaction medium was continuously agitated for 7 h. The resin was produced by rinsing 3 times with distilled water, then the unreacted monomer and acid were removed by drying under vacuum at 125 °C for 24 h. In addition, the yield of resin products was calculated according to Equation (1).
Synthesis of Bio-Oil-Based Epoxy Resin
The BOE resins were synthesized by a two-step glycidylation method [34,35]. A schematic of the resin synthesis is shown in Figure 2. Approximately 2 g (1.52 × 10 −2 mol) of OIL1WI, used as an alternative to phenol, was added into a 100 mL glass reactor, followed by epichlorohydrin (4 M eq/hydroxyl). The temperature gradually increased to 100 °C under continuous stirring. Then, a phase transfer catalyst, benzyltriethylammonium chloride (TEBAC, 0.012 M eq/hydroxyl), was introduced, and the reaction was left for 1 h. During the second step, the temperature decreased to 30 °C. The same TEBAC as before and a 20 wt.% aqueous solution of sodium hydroxide (2 M eq/hydroxyl) were mixed and added drop-wise through the pressure-equalizing dropping funnel. The reaction continued for 1-2 h at 30 °C. The organic layer was successively washed three times with water and poured into an extraction funnel to separate the water and remove the salt. A rotary evaporator was used at 90 °C to remove unreacted ECH. The resin products were finally dried and concentrated at 50 °C overnight. Each experiment was repeated three times, and the average values are reported. The yield of the bio-oil-based epoxy resins was determined by using the following equation [10]:
Yield (%) = S/(L + 0.427 × L) × 100 (2)
where S is the weight of the dried BOE, L is the weight of the dried OIL1WI, and 0.427 g is the stoichiometric amount of epichlorohydrin for 1 g of OIL1WI. A specific amount of NaOH is beneficial to the reaction, while an excess amount will increase the probability of self-polymerization of ECH [36]. The molar ratio of NaOH/OHN and TEBAC/OHN and the reaction temperature were based on previous studies [34,35].
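As a purely numerical illustration of the yield calculation, a short Python sketch is given below; it assumes the reconstructed form of Equation (2) shown above, and the weights are invented for illustration rather than taken from the experimental data of this work.

def epoxy_yield_percent(dried_boe_g, dried_oil1wi_g):
    # Yield (%) = S / (L + 0.427 * L) * 100, with 0.427 g of epichlorohydrin
    # assumed to be incorporated per gram of OIL1WI (Equation (2)).
    theoretical_mass_g = dried_oil1wi_g * (1.0 + 0.427)
    return dried_boe_g / theoretical_mass_g * 100.0

# Illustrative weights only: 2.0 g of dried OIL1WI yielding 2.5 g of dried BOE resin.
print(round(epoxy_yield_percent(2.5, 2.0), 1))  # ~87.6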
Curing Process
Non-isothermal DSC measurement has been widely applied to study the curing behavior and kinetics of resins and polymers [37][38][39]. Dynamic experiments were carried out at different heating rates (5, 10, 15, and 20 • C/min) from 25 to 250 • C to study the epoxy-novolac resin system, and the curing reaction of the BOG resins with 40 wt.% BOE/DGEBA (based on the weight of the BOG resin) was determined by DSC. The obtained data were used in the additional kinetic study.
Firstly, the vacuum-dried BOG resin samples were dissolved in acetone and uniformly mixed with 40% of DGEBA as curing agent at the beginning and then gradually replaced by BOE. A total of 2% TPP was used as catalyst. The quantification of all the chemicals was based on the quantity of the BOG resin. Then, the mixture was left at room temperature for 12 h, allowing the evaporation of solvent. A sample of about 5 mg in a sealed aluminum crucible was heated in a N 2 environment to obtain the heating cycles. The onset and peak temperatures of curing and the reaction enthalpy were determined from the exothermic peak of curing and its linear integration, respectively, using the TA Universal analysis software. The amount of curing agent was optimized by testing, according to our previous studies [32]; finally, 40 wt.% of DGEBA was selected to obtain a higher T g (117.39 • C) and a lower curing reaction temperature. The same quantity of DGEBA was then replaced by BOE resin.
T p corresponds to the maximum reaction rate or peak temperature (K), β is the heating rate (K/min), E a is the activation energy (kJ/mol), and R is the gas constant (8.314 J/mol. K). A, C, and n are the pre-exponential factor (min −1 ), a constant, and the order of the curing reaction, respectively.
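Equations (3)-(5) are not reproduced in this text, but the parameters defined above and the plots described in the Results (ln (β/T p 2 ) and log β versus 1/T p ) correspond to the standard Kissinger and Ozawa model-free expressions, e.g., ln (β/T p 2 ) = ln (AR/E a ) − E a /(R × T p ) for the Kissinger method. A minimal Python sketch of such a fit is shown below; the heating rates match those stated above, but the peak temperatures are hypothetical placeholders rather than the values reported in Table 4.

import numpy as np

# Heating rates (K/min) and hypothetical exothermic peak temperatures (K).
beta = np.array([5.0, 10.0, 15.0, 20.0])
Tp = np.array([395.0, 405.0, 412.0, 417.0])
R = 8.314  # J/(mol K)

# Kissinger: ln(beta/Tp^2) = ln(A*R/Ea) - Ea/(R*Tp); the slope of the fit gives -Ea/R.
slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
Ea = -slope * R                    # activation energy, J/mol
A = np.exp(intercept) * Ea / R     # pre-exponential factor, 1/min

# Ozawa: log(beta) = C - 0.4567*Ea/(R*Tp); the slope of the fit gives -0.4567*Ea/R.
slope_o, _ = np.polyfit(1.0 / Tp, np.log10(beta), 1)
Ea_ozawa = -slope_o * R / 0.4567   # activation energy, J/mol

print(f"Ea (Kissinger) ~ {Ea/1000:.1f} kJ/mol, Ea (Ozawa) ~ {Ea_ozawa/1000:.1f} kJ/mol")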
A resin-curing reaction was also conducted by putting the resin and curing agent mixture in the oven to evaluate additional properties of the cured resins. The temperature set program was 120 • C for 30 min, 150 • C for 30 min, and 180 • C for 1 h [39].
Characterization
Attenuated Total Reflection Fourier-Transform Infrared (ATR-FTIR)
ATR-FTIR spectroscopy was applied to verify the formation and change of the functional group in the bio-oil and resin samples (before and after curing). The spectrum was recorded in reflectance mode between 450-4000 cm −1 at a resolution of 4 cm −1 using a SHIMADZU IRTracer−100 spectrometer.
Nuclear Magnetic Resonance ( 1 H-NMR)
The resins were also characterized by 1 H-NMR spectroscopy. The spectra were recorded on a Bruker AVANCE III system (300 MHz). Additionally, 50 mg samples were dissolved in 0.4 mL d 6 -DMSO with slight heating, and the samples were stirred until they were completely dissolved.
Epoxy Equivalent Weight (EEW) Determination
EEW is the weight of the resin (in g) that contains 1 g equivalent of the epoxide group. The epoxy equivalent of the BOE resin was determined by the reaction of an aliquot of standard pyridine hydrochloride in excess pyridine at reflux and subsequent back titration with standard sodium hydroxide-ethanol. Approximately 0.25 g of the BOE resin sample was taken and put in a 100 mL flask. Then, 25 mL of a solution of 0.2 mol/L hydrochlorides in pyridine was prepared and added to the flask. The contents of the flask were stirred and heated under reflux at 115 • C for 20 min to dissolve the epoxy sample. After cooling down, the titration of the excess acid was carried out by a prepared 0.2 mol/L sodium hydroxide-ethanol solution using a pH meter to follow the pH value of the solution until it reached 13. The pH value was recorded each time 1 mL was added, and a graph was plotted to determine the volume of solvent added when the maximum change in the pH value occurred. A blank assay was performed under the same conditions in the absence of the BOE resin. The EEW was calculated by using Equation (6).
EEW (g/eq) = 1000 × P/((V A − V B ) × C NaOH ) (6)
where P = BOE resin weight (g); V A = 0.2 mol/L NaOH volume (mL) for the blank; V B = 0.2 mol/L NaOH volume (mL) for the prepolymer; C NaOH = Molar concentration of 0.2 mol/L NaOH solution.
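The back-titration arithmetic can be checked with a short Python sketch; it follows Equation (6) as written above, and the sample weight and titration volumes used here are invented for illustration rather than measured values.

def epoxy_equivalent_weight(sample_g, v_blank_ml, v_sample_ml, c_naoh_mol_l=0.2):
    # EEW (g/eq) = 1000 * P / ((V_A - V_B) * C_NaOH);
    # (V_A - V_B) * C_NaOH gives the millimoles of epoxide in the sample.
    epoxide_mmol = (v_blank_ml - v_sample_ml) * c_naoh_mol_l
    return 1000.0 * sample_g / epoxide_mmol

# Illustrative only: a 0.25 g sample with a 4 mL blank-sample difference gives ~312 g/eq.
print(round(epoxy_equivalent_weight(0.25, 25.0, 21.0), 1))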
Gel Permeation Chromatography (GPC)
The average molecular weight (M w ) and average molecular weight number (M n ) of the resin samples were measured by Waters Breeze gel permeation chromatography (Waters, Milford, MA, USA, 1525 binary HPLC pump, RI detector at 270 nm, Waters Styragel HR1 column at 40 • C). DCM was employed as the eluent at a flow rate of 1 mL/min, and polystyrene was chosen as the calibration standard [19,[40][41][42]. Resin samples were prepared by dissolving 3 mg/mL in DCM and then filtering with a 0.45 µm nylon membrane. Later, the samples were injected into the GPC instrument, and the results were collected.
Thermal Analysis of the Cured Resins
The thermal properties were characterized by DSC (TA Q1000 instrument) by placing approximately 5 mg of the sample in a sealed hermetic crucible with purging with nitrogen gas at 50 mL/min. Samples were cycled between 25 and 200 • C at a heating/cooling rate of 5 • C/min. After that, the glass transition temperature (T g ) was measured.
The thermogravimetric analysis (TGA) was secondly used to monitor the thermal degradation of a sample of the cured resin. By using a TA Q600 TGA instrument, 5-10 mg of the sample was heated to 105 • C, equilibrated for 5 min and then heated up to 900 • C with a heating rate of 10 • C/min under a high purity nitrogen flow of 50 mL/min.
Synthesis and Characterization of the Bio-Oil Based Epoxy (BOE) Resin
BOE resin was synthesized by reacting bio-oil water-insoluble fractions with epichlorohydrin (ECH), as shown in Figure 2. After purification, the obtained BOE resin was a black semi-solid.
According to the literature [43], the epoxidation reaction can be divided into two steps (as shown in Figure 2). In the first step, a phase transfer catalyst (TEBAC) was used to form ion pairs, which further underwent an addition reaction with ECH to open the epoxy ring and then reacted with OH groups in the bio-oil [7]. In the second step of the reaction, the same quantity of TEBAC was added. NaOH was used as a catalyst, and its main function was to dehydrochlorinate the intermediates (ring-closing) and neutralize HCl.
The FTIR spectra (Figure 3) show the differences in the functional groups of the BOE resin, ECH, and their phenolic precursors (bio-oil). The signals at 911 and 855 cm −1 belong to the oxirane group [7,44], and they can be clearly found in ECH. However, there was no signal detected in the spectrum of the bio-oil. After resinification, there were some new peaks that appeared in the spectrum of the resulting BOE resin. These peaks confirmed the successful epoxidation reaction between phenols in the bio-oil and ECH to form the new epoxy ring in the resin structure. Furthermore, concerning the C-O-C group, deformation peaks were observed in the range of 1225-1250 cm −1 and asymmetric and symmetric bending can be found in the range of 1026-1043 cm −1 .
There is a broad peak around 3300 cm −1 , belonging to the O-H stretching vibrations, which was diminished for the BOE resin compared to the bio-oil due to the conversion of the phenolic hydroxyl function into the glycidyl one. According to our previous analysis of bio-oil, the band correlating to the aldehyde group in the bio-oil appeared at 1702 cm −1 [26]. This also decreased after the reaction, indicating that a condensation reaction occurred.
The 1 H-NMR spectrum of the BOE resin also confirmed the successful epoxidation reaction of the bio-oil, as shown in Figure 4. The 1 H-NMR spectrum of OIL1WI and BOG resin has also been shown in Figure A1, in the Appendix A, as a reference. The signals at 2.50 ppm belong to the DMSO-D 6 solvent. The protons from the benzene ring skeleton and the phenol hydroxyl, which come from the phenolic compounds in bio-oil, were found in the BOE resin, and their signals are between 7.5 and 6 ppm (d, e, f) in the spectrum [45]. In addition, the signals at 3.37 ppm (h) indicate the presence of aromatic methoxy protons (-OCH 3 ) related to the presence of guaiacols in the bio-oil [46]. As expected, the strong signals between 1.5 and 0.5 ppm represent protons ascribed to the aliphatic chain linked to the small molecules in the bio-oil. The peaks at 1.57 and 2.08 ppm correspond to the alkyl protons in the backbone of the phenolics [47]. Furthermore, the signals around 3.30 and 2.82 ppm are attributed to the protons of oxirane moieties. The protons at a and b are on the epoxide CH 2 (O)-CH-structure, and the peaks of the protons (c) on the -CH 2 -next to the epoxy group at around 4 ppm were observed as well [48][49][50][51], which demonstrated that the target compounds were synthesized successfully.
The yield and EEW are two important parameters for epoxy resin. The yield reflects the degree of the polymerization reaction, while the EEW indicates the reactivity of the epoxy resin. A lower EEW is desirable because it means that a higher concentration of the epoxide group is attached to the resin. It should also be noted that the reaction time of the second step was reported to be an influencing factor on the yield and EEW [36,52]. To maximize the yield and minimize the EEW value of the polymerization under the selected reaction condition, various reaction times (1.5, 2, and 2.5 h) were chosen and examined (Table 2). It was evident that a longer reaction time resulted in a higher yield and EEW value of the epoxy resins. These results are in agreement with the literature [7,36]. To optimize the EEW value and yield at the same time, the optimal time for the ring close reaction was determined to be 2 h.
As the OIL1WI fraction was a mixture of monomeric, oligomeric, and macromolecular phenolic components, it had a higher molecular weight and a lower phenolic hydroxyl reactivity than commercial phenols. In addition, the EEW of DGEBA was also determined by the titration method, and BOE showed a higher EEW than DGEBA, which is a commercially manufactured epoxy resin with a desired EEW (181).
The molecular weight distribution (as shown in Table 2) of the BOE resins was obtained by the GPC chromatograms; the GPC profile of the BOE2 resin is shown separately in Figure A2. The difference between the M w and M n values of the BOE2 resin (M w = 1019 ± 31 g/mol, M n = 368 ± 18 g/mol) resulted in a large polydispersity (2.77). Various substituents of phenolics in the bio-oil limited their activity, and it was difficult for some of them (e.g., 4-ethylguaiacol) to participate in the further cross-linking reaction due to the addition of the methoxy group that increased their steric hindrance. Moreover, the strong presence of small molecules in the bio-oil reduced the value of M n . Meanwhile, the glycidylation reaction occurred, leading to the formation of oligomers. These newly formed glycidyl groups can react with the phenolic group of another molecule to create higher molecular weight oligomers [9,18]. This can result in an epoxy resin with a large polydispersity. Moreover, the change in the molecular weight was related to the EEW value and the yield of the epoxy resins (as can be seen in Table 2).
Characterization of BOG Resin
Similar to the BOE resin, OIL1WI was also used as a phenol precursor to replace pure phenol to synthesize the BOG resin. After drying, the obtained BOG resin was a black-colored solid.
The FTIR spectra illustrate the functional groups in the BOG resin ( Figure 3) and the assignment of the corresponding bands. Based on the structure of the resin (Figure 2), the methylene group (-CH 2 -), aldehyde group (C=O), carbon-carbon double bond (C=C), aromatic C-C, and ether group (C-O-C) were identified according to the stretching modes at 2962, 1702, 1600, 1512, 1440, 1187, and 1018 cm −1 , respectively, which indicate the condensation reaction between the aldehyde in glyoxal and phenols to form -CH 2 -C=O and -C=C-O-C=O. These two different linkages were formed during polymerization [53]. Additionally, these peaks appeared in the spectrum of the BOG resin, indicating that the synthesis was successful. A comparison shows that there were some differences between the bio-oil and BOG resin, including the obvious increase in the peaks at 2962, 1702, 1187, and 1018 cm −1 . The absorptions at 1300-1000 cm −1 were due to C-O-C stretching, which could be found in the bio-oil and BOG resin, related to the methoxy group of guaiacols. The signals at 871, 825, and 760 cm −1 belonged to out-of-plane bending of the aromatic C-H bonds [24]. The O-H stretching at around 3300 cm −1 and the C-O stretching of aromatics at 1210 cm −1 , which were already mentioned above, were attributed to the stretching vibration of active phenolic hydroxy in the resin structure.
Kinetic Study of BOE/DGEBA-BOG Curing System
The curing reaction of the BOG resins with 40 wt.% BOE/DGEBA (based on the weight of the BOG resin) was performed. Figure 5 displays the DSC curves of the BOG + DGEBA/BOE curing system with a heating rate of 10 K/min. It shows a single exothermic peak for all the reactions, which corresponds to the completion of the thermal curing reaction [50]. When BOG was cured with only DGEBA, it exhibited a sharp exothermic peak. With the increase of the BOE ratio, the peak becomes smaller and flat. In addition, more information on the curing characteristics of the reaction was obtained by analyzing the DSC data and was listed in Table 3. According to the DSC curves, the curing temperature and reaction enthalpy of the BOG/DGEBA/BOE system decreased with an increasing BOE ratio. The result indicates that the curing reaction of BOG with BOE is easier than with DGEBA.
In a previous work, the novolac-epoxy resin curing system was built using the OH-epoxy reaction [54]. Furthermore, the three main reaction steps of the chain-wise polymerization curing mechanisms of the novolac-epoxy system catalyzed by TPP are initiation, propagation, and branching [38]. The exothermic peak temperature at various heating rates of the curing reactions was between 84 and 176 • C (Table 4). The curing onset and peak temperatures of BOG resin cured with BOE resin changed significantly compared with curing only with DGEBA.
As Figure 6 illustrates, the slope of the line fitted on the curves gives the E a , which can be calculated by using 2 models (Equations (3) and (4)) by plotting ln (β/T p 2 ) versus 1/T p and log β against 1/T p , respectively [39]. Additionally, the reaction order (n) was determined by plotting ln β versus 1/T p (Equation (5)), and its values are listed in Table 4. The value of the correlation coefficient (R 2 ), mostly greater than 0.99, allows for verifying the accuracy of the linear fitting. The overall curing reaction for all of the resins was approximately first order (n = 0.92-0.95), which agreed with previous studies [22,55].
Figure 5. DSC curves of the BOG + DGEBA/BOE curing system with a heating rate of 10 K/min.
Consistent with the result of the curing temperature, the E a decreased with the increase in the ratio of BOE. This result indicates that the chemical reactivity of BOE with BOG was higher than with DGEBA. Increasing the percentage of BOE created a mixture with DGEBA, and the apparent activation energy of the mixture fell between the two initial apparent activation energies. In correlation with the GPC result of BOE resin, it had a very high value of polydispersity. Although it possessed a larger average molecular weight, it still contained a large number of unreacted small compounds, which included many reactive functional groups, such as hydroxyl groups and aldehyde groups (Figure 3). These can bring more unoccupied reactive sites, promoting cross-linking with BOG resin compared with only using DGEBA. T g values of the epoxy-cured BOG resins were higher than that of the uncured BOG resin (75.8 ± 1.1 • C), with a range of 96.1-121.1 • C (Table 5). However, higher proportions of BOE resulted in a lower T g . When only BOE was used as a curing agent, the T g reached a minimum value of 96.1 • C, probably because of the higher EEW of BOE (317.1) than DGEBA (181), and the unreacted small molecules from the bio-oil also lowered the T g value. This result is in agreement with other research [48,50]. Moreover, unlike DGEBA, which is a rigid molecule with only aromatic rings, BOE has both rigid and flexible connections thanks to the presence of oligomers. This flexible structure and the methoxy groups in bio-oil may decrease the T g of cured resins [47]. Similarly, the presence of non-phenolic compounds in the BOE may also influence the polymer T g [34].
Thermal Characterization of the BOE/DGEBA Cured BOG Resins
TGA profiles under N2 are shown in Figure 8 in order to evaluate the thermal stability of the cured resins. Results included the initial degradation temperature for 5% weight loss (Td5), the temperature of maximum decomposition rate (Tmax), and the residue percentage at 800 °C (R 800 ), all presented in Table 5. Compared to uncured BOG resins, the thermal stability of cured resins was significantly improved, as observed by the higher values of Td5, Tmax, and R800. Td5 and Tmax of the cured resins decreased slightly with the decrease in DGEBA until they reached their lowest values (271.7 and 381.5, respectively) when 40% of BOE was used for curing. When a larger amount of BOE is used, a DTG peak around 271 °C is present. The decrease in the thermal stability may be attributed to the presence of methoxy groups on the aromatic ring resulting from bio-oil-derived guaiacols. The latter donates electrons to the aromatic ring, reducing thermal stability [47]. However, it can be seen in Figure 8 that the increase in the BOE proportion led to a slower thermal decomposition rate.
The residual materials of the cured resins at 800 °C are also listed in Table 5. The presence of the BOE resin led to a significant increase in the R800 value of the cured resins (43.26%), which was higher than that obtained when pure DGEBA was used (39.66%); this indicates the positive effect of BOE on reducing resin decomposition, and it is attributed to the rich aromaticity in the main chain between rigid rod-like aromatics linked by carbon-carbon double bonds inherent into BOE resin from bio-oil. These polymer fragments would be transformed into char at high temperatures [50]. It also explains why BOE and BOG have more cross-linking reactions under high-temperature conditions than DGEBA.
Conclusions
A synthesized bio-based epoxy resin based on the upgraded beechwood pyrolysis oil was used for the first time to gradually replace the DGEBA equivalent (0-100%) as a formaldehyde-free cross-linker for bio-based novolac resin to produce a bio-based green material through this study. Incorporating the bio-based epoxy led to a lower curing reaction temperature and activation energy compared to when using a commercial curing agent (DGEBA). When the bio-based epoxy resin content increased, there were no visible effects on the degradation of cured resin. The good thermal stability of the cured resin illustrated that the synthesized epoxy resin from a biomass source is a promising green substitute for commercial DGEBA. Additional mechanical analyses would be interesting to carry out to link the thermal stability of the green material synthesized with its chemical structure.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations: A, Pre-exponential factor (min −1 ); BOE, Bio-oil based epoxy resin; BOG, Bio-oil based glyoxal novolac resin; BPA, Bisphenol A.
| 2023-05-18T15:08:21.193Z | 2023-05-15T00:00:00.000 | {
"year": 2023,
"sha1": "f30a4872276cdf96c2a87921f29c5e13f5d2c687",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/fuels4020012",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "787a3a3de8accba1ab9df6e707e257b273781866",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
14603812 | pes2o/s2orc | v3-fos-license | Glycosuria and Renal Outcomes in Patients with Nondiabetic Advanced Chronic Kidney Disease
Sodium glucose cotransporter 2 inhibitors have shown a potential for renoprotection beyond blood glucose lowering. Glycosuria in nondiabetic patients with chronic kidney disease (CKD) is sometimes noted. Whether glycosuria in CKD implies a channelopathy or proximal tubulopathy is not known. The consequence of glycosuria in CKD is also not studied. We performed a cross-sectional study for the association between glycosuria and urine electrolyte excretion in 208 nondiabetic patients. Fractional excretion (FE) of glucose >4% was 3.4%, 6.3% and 62.5% in CKD stage 3, 4 and 5, respectively. These patients with glycosuria had higher FE sodium, FE potassium, FE uric acid, UPCR, and urine NGAL-creatinine ratio. We conducted a longitudinal study for the consequence of glycosuria, defined by dipstick, in 769 nondiabetic patients with stage 4–5 CKD. Glycosuria was associated with a decreased risk for end-stage renal disease (adjusted hazard ratio: 0.77; CI = 0.62–0.97; p = 0.024) and for rapid renal function decline (adjusted odds ratio: 0.63; CI = 0.43–0.95; p = 0.032); but glycosuria was not associated with all-cause mortality or cardiovascular events. The results were consistent in the propensity-score matched cohort. Glycosuria is associated with increased fractional excretion of electrolytes and is related to favorable renal outcomes in nondiabetic patients with stage 5 CKD.
We hypothesized that, first, glycosuria is a sign of proximal tubulopathy and, second, glycosuria could be associated with more favorable renal outcomes in patients with nondiabetic CKD and proteinuria. We tested these 2 hypotheses in an observational cohort study.
Renal Handling of Electrolytes and Glucose in Nondiabetic Patients Stratified by CKD Stages and Factors Associated with Fractional Excretion of Glucose in the Cross-Sectional Study.
In the cross-sectional study of 208 patients, 145 (69.7%) had nondiabetic CKD (Table 1). Fractional excretion (FE) of glucose, FE sodium, FE potassium, and FE uric acid were higher in the patients with stage 4 or 5 CKD than in those without CKD or with stage 1-3 CKD. The percentage of glycosuria measured using a dipstick was 56.3% and that measured using FE glucose of > 4% was 62.5% in patients with stage 5 CKD. (Table 1) Glycosuria measured using a dipstick and equivalent FE glucose were shown in Supplementary Table 1. Other characteristics of these patients were shown in Supplementary Table 2.
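Fractional excretion is not defined explicitly in the text; the values above follow the standard spot-sample formula, FE X (%) = (urine X × plasma creatinine)/(plasma X × urine creatinine) × 100. A minimal Python sketch using this conventional definition, with hypothetical inputs rather than patient data from this cohort, is given below.

def fractional_excretion(urine_x, plasma_x, urine_creatinine, plasma_creatinine):
    # FE of a solute X (%) from paired spot urine and plasma samples in matching units.
    return (urine_x * plasma_creatinine) / (plasma_x * urine_creatinine) * 100.0

# Hypothetical values: urine glucose 150 mg/dL, plasma glucose 95 mg/dL,
# urine creatinine 60 mg/dL, plasma creatinine 4.0 mg/dL.
fe_glucose = fractional_excretion(150.0, 95.0, 60.0, 4.0)
print(f"FE glucose ~ {fe_glucose:.1f}%")  # ~10.5%, i.e. above the 4% cut-off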
The patients with FE glucose ≥ 4% had higher FE sodium, FE potassium, FE uric acid, urine protein-creatinine ratio (UPCR), and urine neutrophil gelatinase-associated lipocalin (NGAL)-creatinine ratio than those with FE glucose < 4%. In the longitudinal cohort, the glycosuria and nonglycosuria groups were also compared with respect to the proportions of patients who took RAS blockers, other antihypertensives, statins, or aspirin. The glycosuria group had lower eGFR, serum hemoglobin, body mass index (BMI), total cholesterol, triglyceride, potassium, calcium, bicarbonate, and uric acid level than did the nonglycosuria group. Moreover, the glycosuria group had higher UPCR and serum phosphorus. The glycosuria and nonglycosuria groups did not differ in age, percentage of baseline hypertension, mean BP, serum albumin, and sodium level. After propensity score matching, the glycosuria group had lower serum uric acid, triglyceride, and calcium and a higher glucose level. All other characteristics were not different between the glycosuria and nonglycosuria groups (Supplementary Table 3).
Glycosuria, End-Stage Renal Disease (ESRD), and Rapid Renal Function Decline. Over a median follow-up period of 3.1 years, 385 patients progressed to long-term hemodialysis, peritoneal dialysis, or renal transplantation (Table 4). The crude event rate of ESRD per 100 patient-years was 18.13 and 11.00 in the glycosuria and nonglycosuria groups, respectively. In Cox regression analysis, the glycosuria group had a higher unadjusted risk of ESRD; the hazard ratio (HR) was 1.78 (95% confidence interval [CI]: 1.45-2.18). However, after adjustment for age, sex, eGFR, UPCR and causes of CKD in Model 1, the glycosuria group had a lower risk of ESRD than did the nonglycosuria group. Furthermore, in the fully adjusted model, the glycosuria group had a lower risk of ESRD; the HR was 0.77 (95% CI: 0.62-0.97). The multivariate logistic regression analysis of rapid renal function decline by glycosuria is provided in Table 4. Rapid renal function decline was defined as an eGFR slope of less than − 5 mL/min/1.73 m 2 /y according to the KDIGO guidelines 13 . In the fully adjusted model, the odds ratio of rapid renal function decline in the glycosuria group was 0.63 (95% CI: 0.43-0.95) compared with that in the nonglycosuria group. In the propensity score matched cohort, the fully adjusted HR of ESRD in the glycosuria group was 0.78 (95% CI: 0.62-0.98) compared with that in the nonglycosuria group. The OR of rapid renal function decline in the glycosuria group was 0.64 (95% CI: 0.41-0.99) in the fully adjusted model (Supplementary Table 4).
The crude event rate per 100 patient-years was 3.70 and 4.73 in the glycosuria and nonglycosuria groups, respectively. In fully adjusted Cox regression, the HR of all-cause mortality in the glycosuria group was 0.92 (95% CI: 0.62-1.37) compared with that in the nonglycosuria group. Furthermore, 107 patients experienced cardiovascular (CV) events. The crude event rate per 100 patient-years was 2.89 in the glycosuria group and 3.65 in the nonglycosuria group. In fully adjusted Cox regression, the HR of CV events in the glycosuria group was 0.88 (95% CI: 0.56-1.37) compared with that in the nonglycosuria group. In the propensity score matched cohort, the fully adjusted HR of all-cause mortality in the glycosuria group was 0.89 (95% CI: 0.58-1.36) compared with that in the nonglycosuria group. The fully adjusted HR of CV events in the glycosuria group was 0.88 (95% CI: 0.55-1.41) compared with that in the nonglycosuria group (Supplementary Table 4).
Discussion
In this study, we observed that glycosuria is common in patients with nondiabetic stage 4-5 CKD. We also revealed that these patients with glycosuria had higher FE sodium, FE potassium, FE uric acid, UPCR, and urine NGAL-creatinine ratio. We demonstrated, for the first time, that glycosuria was associated with a lower risk of ESRD (0.77-fold) and rapid renal function decline (0.63-fold) in patients with nondiabetic stage 5 CKD. The results were consistent in the propensity-score matched cohort. Glycosuria commonly occurs in patients with diabetes when the amount of the filtered glucose exceeds the capacity of renal tubular reabsorption. The role of glycosuria as a screening tool is very limited because of its low sensitivity 14 and the high individual variability of the renal threshold of glucose excretion 15 . The effects of glycosuria on patients with poorly controlled diabetes have been previously proposed 16,17 . Glycosuria induces osmotic diuresis and is thus concerning. Recently, the association of glycosuria and clinical outcomes has again generated interest because of the use of SGLT2 inhibitors. Short-term trials of SGLT2 inhibitors have reported that, in addition to reducing blood glucose, SGLT2 inhibitors have other potential benefits, including blood pressure reduction, diuretic effects, body weight reduction, and uric acid reduction 1,2,7 . One hypothesis has been proposed that SGLT2 inhibition could restore the tubuloglomerular feedback and reduce hyperfiltration in diabetic nephropathy 2 . Glycosuria is rarely observed in the general population and may be found in patients with nondiabetic CKD. Our study was the first to investigate glycosuria in this population, and it showed that glycosuria becomes relatively frequent with renal function decline. However, glycosuria in patients with nondiabetic CKD in our study might be associated with proximal tubulopathy rather than merely SGLT channelopathy. First, the median FE glucose was 10% in our patients with glycosuria, which was lower than that in the patients to whom full-dose SGLT2 inhibitors were administered (approximately 30% in clinical trials) 18,19 . Second, simultaneously, the median FE Na was 3.4% in our patients with glycosuria, which was higher than that in the patients to whom full-dose SGLT2 inhibitors were administered (< 1% in clinical trials) 19,20 . The median FE UA was also higher in our patients with glycosuria than in the patients to whom full-dose SGLT2 inhibitors were administered 21 . These data suggested that the urine excretion of electrolytes in our study exceeds the effect of SGLT2 inhibition. The mechanism underlying the proximal tubulopathy in those with advanced CKD remains unclear. We hypothesized from the remnant nephron theory that in patients with proximal tubulopathy, the reabsorption in proximal tubules could not match the hyperfiltration from the glomeruli. The effects of glycosuria on patients with nondiabetic CKD are largely unknown. Our study was the first to examine the factors associated with glycosuria in nondiabetic patients. We also observed a lower blood pressure, serum uric acid, and BMI in patients with glycosuria, similar to the findings in SGLT2 inhibitor trials. Our study was also the first to examine the clinical outcomes of patients with glycosuria. Our data suggested that, in the patients with nondiabetic stage 5 CKD, glycosuria is associated with a lower risk of ESRD and a lower risk of rapid renal function decline. Current trials of SGLT2 inhibitors exclude patients with advanced CKD.
Large-scale studies are warranted to study the cause and effect of glycosuria in patients with nondiabetic CKD.
However, because only low-grade glycosuria occurs in patients with nondiabetic CKD in our study, and the glycosuria is related to increased fractional excretion of electrolytes in these patients, we propose another hypothesis. That is, proximal tubulopathy might play a role on the favorable renal outcomes in these patients. In patients with CKD and proteinuria, filtered proteins are mainly reabsorbed and accumulated within PTECs, causing tubulointerstitial inflammation and fibrogenesis by activating immunoregulatory cytokines and vasoactive genes, such as endothelin-1 and regulated upon activation normal T-cell expressed and secreted 9,10 . The protein traffic induces PTECs to acquire an inflammatory phenotype both in vitro 22,23 and in vivo 24,25 . Another study reported that albumin-induced reactive oxygen species generation, nuclear factor kappa beta activation, and interleukin-8 secretion are endocytosis dependent 12 . In animal studies, limiting the protein traffic has been shown to prevent renal disease progression 11,26,27 . Thus, proximal tubulopathy might associate with favorable renal outcomes in patients with CKD.
Regarding the association between glycosuria and CV events, our results revealed no difference between the glycosuria and nonglycosuria groups ( Table 4). The EMPA-REG OUTCOME study reported that empagliflozin intervention was associated with decreased CV events in patients with type 2 diabetes mellitus having a high CV risk 22 . However, the patients in our study were nondiabetic and had advanced CKD with low-grade glycosuria; therefore, the effect might have differed. Regarding the association between glycosuria and all-cause mortality, our study revealed no difference between the glycosuria and nonglycosuria groups (Table 4). No previous study has investigated this association.
This study had several limitations. First, this was an observational study, and causal relationships thus could not be delineated. Second, dipstick urinalysis cannot always reveal an accurate concentration of glucose in the urine because of the influences of certain substances, such as ascorbic acid and strong oxidizing agents 23 . However, the sensitivity, specificity, positive predictive value, and negative predictive value of dipstick testing for glycosuria were 100%, 98.5%, 87%, and 100%, respectively, in a previous study 24 . Third, we could not determine whether the favorable renal outcomes were specifically associated with glycosuria, proximal tubulopathy, or both. Fourth, glycosuria did not exhibit a dose-dependent effect because of the limited number of events. Fifth, CV events might be underestimated. Sixth, although we did the analysis in a propensity-score matched cohort, the confounding could not be completely eliminated.
In conclusion, we found the association between glycosuria and increased fractional excretion of electrolytes in nondiabetic patients with advanced CKD. We also demonstrated that glycosuria is associated with favorable renal outcomes. However, the cause and mechanism of glycosuria remain unclear. Large scale studies are necessary to clarify this phenomenon.
Methods
Patients and Measurements. From November 11, 2002 to May 31, 2009, 3749 patients with stage 1-5 CKD were included from the Integrated CKD Care Program in Kaohsiung for Delaying Dialysis. This observational study was conducted at 2 affiliated hospitals of Kaohsiung Medical University in Southern Taiwan. All the patients were followed until May 31, 2010 or death, as previously reported 28 . To investigate the renal handling of electrolyte and glucose, we conducted a cross-sectional study which was designed to include 60 nondiabetic patients without CKD and 30 patients in each CKD groups (CKD stage 1-2, CKD 3a, CKD 3b, CKD 4 and CKD 5) from the nephrology outpatient department. Finally, we included 63 nondiabetic patients without CKD and 145 nondiabetic patients with CKD. Furthermore, to investigate the association between glycosuria and clinical outcomes, we selected 769 patients with nondiabetic stage 4-5 CKD and a UPCR of ≥ 500 mg/g in the longitudinal study. DM was diagnosed on the basis of the treatment administered or a glycated hemoglobin level of ≥ 6.5% at the time of enrollment. The study protocol was approved by the Institutional Review Board of Kaohsiung Medical University Hospital (KMUH-IRB-990198), and all the patients provided written informed consent for study participation. The methods were performed in accordance with relevant guidelines and regulations.
The baseline characteristics of all the patients included demographic data, comorbidities, medication history, lifestyle factors, physical examination findings, and laboratory data. Glycosuria was defined as a urine glucose level of ≥ 1+ on 2 or more of 3 consecutive urinalyses performed at the outpatient department at intervals of at least one week within 3 months. Samples exhibiting > 5 white blood cells per high power field (hpf) or > 5 red blood cells per hpf on urinalysis were excluded. Furthermore, urine glucose and electrolytes and the corresponding serum data were collected on the same day for measuring FE. Patient demographic data were recorded at the first visit, and their medical histories were recorded according to a chart review. Hypertension was defined on the basis of clinical diagnoses and the prescribed medications. CV diseases were defined according to the clinical diagnoses of heart failure, acute or chronic ischemic heart disease, or cerebrovascular disease. Moreover, laboratory data were obtained at the first visit.
Clinical Outcomes. Four clinical outcomes, namely ESRD, rapid renal function decline, CV events, and all-cause mortality, were assessed. ESRD was defined as the initiation of maintenance hemodialysis, peritoneal dialysis, or renal transplantation. Moreover, ESRD was ascertained according to a chart and catastrophic card review. A rapid renal function decline was defined as an eGFR slope of less than −5 mL/min/1.73 m 2 /y on the basis of Kidney Disease Improving Global Outcomes (KDIGO) guidelines. The eGFR was calculated using the simplified Modification of Diet in Renal Disease Study equation: eGFR (mL/min/1.73 m 2 ) = 186 × (serum creatinine)^−1.154 × (age)^−0.203 × C, where C is 0.742 for women, 1.212 for African American patients, and 1 for other patients. In addition, CV events were ascertained by reviewing charts to identify hospitalization for acute coronary syndrome, acute cerebrovascular disease, congestive heart failure, and peripheral arterial occlusion disease and death resulting from any of the aforementioned causes. The survival status and cause of death were determined by reviewing death certificates, patient charts, and the National Death Index.
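For illustration only (not part of the study), the simplified MDRD equation quoted above can be written out directly in code; the function name and the assumption that serum creatinine is given in mg/dL, as in the standard MDRD formula, are ours.

```python
def egfr_mdrd(creatinine_mg_dl: float, age_years: float,
              female: bool, african_american: bool) -> float:
    """Simplified MDRD study equation as quoted in the Methods:
    eGFR = 186 x (serum creatinine)^-1.154 x (age)^-0.203 x C,
    with C = 0.742 for women and 1.212 for African American patients
    (combined multiplicatively, as in the standard MDRD formula)."""
    c = (0.742 if female else 1.0) * (1.212 if african_american else 1.0)
    return 186.0 * creatinine_mg_dl ** -1.154 * age_years ** -0.203 * c

# Example: a 60-year-old woman with a serum creatinine of 2.0 mg/dL.
print(round(egfr_mdrd(2.0, 60, female=True, african_american=False), 1))
```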
Statistical Analysis. The baseline characteristics of all the patients, overall and stratified by glycosuria status, were summarized as percentages for categorical data, mean ± standard deviation for continuous variables with an approximately normal distribution, and median and interquartile range for continuous variables with a skewed distribution. Moreover, linear regression analysis was performed to study the association between FE glucose and other parameters. Cox proportional hazards analysis was used to assess the association between glycosuria and the clinical outcomes. Multivariate logistic regression analysis was used to evaluate the association between glycosuria and rapid renal function decline. The covariates were selected according to our previous studies 29 . Continuous variables with skewed distributions were log-transformed to reduce the skewness. The fully adjusted model was adjusted for age, sex, causes of CKD, eGFR, log-transformed UPCR, cholesterol, C-reactive protein, baseline hypertension and CV disease, mean blood pressure, hemoglobin, albumin, BMI, and phosphorus. P < 0.05 was considered statistically significant. The models for all-cause mortality were censored only at death or the end of follow-up. Moreover, the models for CV events were censored at the development of these events, death, or the end of follow-up. The models for ESRD were censored at the commencement of renal replacement therapy, death, or the end of follow-up.
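As a rough sketch of the kind of survival analysis described here (this is our illustration, not the authors' code; the column names and the use of the Python lifelines package are assumptions), a fully adjusted Cox model could be set up as follows.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis table: one row per patient, follow-up time, event indicator,
# and baseline covariates (all column names are ours, not the study's).
df = pd.read_csv("ckd_cohort.csv")
df["log_upcr"] = np.log(df["upcr"])   # skewed covariates are log-transformed
df["log_crp"] = np.log(df["crp"])

covariates = ["glycosuria", "age", "sex", "egfr", "log_upcr", "cholesterol",
              "log_crp", "hypertension", "cv_disease", "mean_bp",
              "hemoglobin", "albumin", "bmi", "phosphorus"]

cph = CoxPHFitter()
cph.fit(df[["time_to_esrd", "esrd"] + covariates],
        duration_col="time_to_esrd", event_col="esrd")
cph.print_summary()   # adjusted hazard ratio for glycosuria vs. non-glycosuria
```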
The propensity score is the conditional probability of receiving an exposure given a set of measured covariates. We estimated propensity scores for glycosuria for each of the 769 patients using a non-parsimonious multivariable logistic regression model including all parameters shown in Table 1. The model was well calibrated (Hosmer-Lemeshow test: P = 0.167) with reasonable discrimination (c statistic = 0.68). We matched patients in the glycosuria group with patients in the non-glycosuria group who had similar propensity scores to five, four, three, two, and one decimal places in five repeated steps. In the first step, we multiplied the raw propensity scores by 100,000 and rounded the result to the nearest integer. This was repeated, multiplying by 10,000, 1,000, 100, and 10. Statistical analysis was performed using R 3.3.0 software (R Foundation for Statistical Computing, Vienna, Austria) and Statistical Package for Social Sciences, Version 21.0, for Windows (SPSS Inc., Chicago, IL, USA). | 2018-04-03T05:47:31.532Z | 2016-12-23T00:00:00.000 | {
"year": 2016,
"sha1": "da967e2a97aeeda275645b141f790cbfef0f0bda",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep39372.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "da967e2a97aeeda275645b141f790cbfef0f0bda",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
10845977 | pes2o/s2orc | v3-fos-license | A case report of prostate cancer metastasis to the stomach resembling undifferentiated-type early gastric cancer
Background Occurrence of metastatic cancer to the stomach is rare, particularly in patients with prostate cancer. Gastric metastasis generally presents as a solitary and submucosal lesion with a central depression. Case presentation We describe a case of gastric metastasis arising from prostate cancer that was almost indistinguishable from undifferentiated-type gastric cancer. A definitive diagnosis was not made until endoscopic resection. On both conventional and magnifying endoscopy, the lesion appeared as a slightly depressed, discolored area and could not be distinguished from undifferentiated early gastric cancer. A biopsy from the lesion was negative on immunohistochemical staining for prostate-specific antigen, a sensitive and specific marker for prostate cancer. Thus, a false initial diagnosis of an early primary gastric cancer was made and endoscopic submucosal dissection was performed. Pathological findings from the resected specimen aroused suspicion of a metastatic lesion. Consequently, immunostaining was performed. The lesion was positive for prostate-specific acid phosphatase and negative for prostate-specific antigen, cytokeratin 7, and cytokeratin 20. Accordingly, the final diagnosis was a metastatic gastric lesion originating from prostate cancer. Conclusion In this patient, the definitive diagnosis of a metastatic lesion was difficult because of its unusual endoscopic appearance and the negative stain for prostate-specific antigen. We postulate that both of these are consequences of hormonal therapy against prostate cancer.
Background
The prevalence of metastatic cancer to the stomach is low and ranges from 1.7% to 5.4% based on autopsy findings [1]. The most common sites of gastric metastasis are breast cancer, lung cancer, and malignant melanoma. Metastatic lesions are often more solitary than multiple occurrences and are frequently located on the greater curvature in the middle and upper third of the stomach. Endoscopically, a metastatic lesion is typically observed as a submucosal tumor with or without central depression. However, some of these metastatic lesions resemble primary gastric cancer and histological confirmation, including that by immunohistochemistry, is indispensable for differential diagnosis [2].
In prostate cancer, metastases to bones and lymph nodes are common, but metastasis to the stomach is extremely rare [3]. Prostate cancer can occasionally present as a metastatic carcinoma with unknown primary origin; however, the origin of metastasis in such a setting is easily identified by using immunohistochemistry for prostate-specific antigen (PSA) and prostate-specific acid phosphatase (PSAP).
Here, we report a case of prostate cancer metastasis to the stomach, which resembled undifferentiated-type early gastric cancer (UD-EGC), as observed on both conventional and magnifying endoscopies. Endoscopic biopsy from the lesion was negative for PSA staining and was not useful for facilitating a correct diagnosis. Caution must be applied in interpreting endoscopy findings in patients with malignancies, particularly those under treatment.
Case presentation
The patient was a 75-year-old Japanese male who had prostate cancer with bone metastasis and a high serum PSA level (7040 ng/mL, reference range < 4 ng/mL) that had responded well to a luteinizing hormone-releasing hormone (LH-RH) agonist for 8 months. Abdominal CT scan revealed no evidence of prostate cancer progression. He was referred to our department because of a 4-week history of epigastric discomfort. Physical examination was unremarkable. Laboratory work-up was not significant except for elevated ALP, LDH and PSA levels, which were improved compared with the values before hormone therapy (Table 1).
Esophagogastroduodenoscopy (EGD) was performed and revealed a slightly depressed, discolored lesion with a sharp margin against non-atrophic mucosa on the anterior wall of the middle gastric body (Fig. 1). Magnifying endoscopy (ME) with blue laser imaging (BLI) and linked color imaging (LCI) demonstrated a sparse and partially absent microsurface pattern with irregular microvessels in the depressed area. These findings are compatible with UD-EGC. Biopsy showed moderately differentiated adenocarcinoma, and immunohistochemistry for PSA was negative. Contrast-enhanced computed tomography demonstrated no significantly enlarged perigastric lymph nodes and no new sites of metastatic disease. Thus, we initially diagnosed the lesion as a primary early gastric cancer. Considering his prostate cancer and an estimated prognosis of several years, endoscopic submucosal dissection was performed. En bloc resection was successfully achieved without complication. Histopathologic findings from the resected specimen were remarkable for moderately to poorly differentiated adenocarcinoma, which predominantly existed in the superficial layer of the submucosa. Atrophy of the gastric fundic glands, which were replaced with fibrous tissue, was observed focally near the tumor infiltration site (Fig. 2). As metastasis was suspected, immunohistochemical staining was performed. The tumor was negative for PSA, cytokeratin (CK) 7, and CK 20, and positive for PSAP (Fig. 3). Consequently, the lesion was finally confirmed as a metastatic gastric lesion of the prostate cancer.
At the time when the pathological diagnosis of the gastric metastases was made, patient's extragastric lesions were responding to endocrine therapy, and because of this we did not change his systemic treatment for prostate cancer.
Discussion and conclusions
Prostate cancer metastasis to the stomach is very rare. As far as we know, only ten cases have been reported previously (Table 2). Most of the gastric metastases were detected at primary staging or at the time of progression. Common endoscopic features were nodules with ulceration, fold thickening, and multiple ulcerations. Notably, all previous cases were positive for PSA staining.
We initially failed to achieve the correct diagnosis because of two reasons. Features of both conventional and magnifying endoscopies of our case mimic those of UD-EGC, and biopsies from the gastric lesions were negative for PSA stain.
An endoscopic examination with conventional white light imaging (WLI) demonstrated a discolored and slightly depressed lesion with clear margin, which is recognized as the typical characteristic of UD-EGC and an uncommon manifestation of metastatic stomach lesions [4]. We presume that discoloration observed on WLI is related to histological improvement, which occurs in response to hormonal treatment. Histological changes, resulting from hormonal therapy for prostate cancer, include decreased number of cancer glands and increased periglandular collagenous stroma [5]. Therefore, we hypothesized the following mechanism of discoloration. Cancer infiltration resulted in atrophy of the fundic glands. Then, in response to hormonal therapy, malignant glands disappeared and were replaced with fibrous tissue. Consequently, the mucous layer became scarce, giving rise to the discolored appearance on WLI. Similar discoloration is observed in mucosa-associated lymphoid tissue (MALT) lymphoma at the site of tumor regression following Helicobacter pylori (H.pylori) eradication [6]. This change in color is considered to be due to a decreased number of gastric glands caused by neoplastic infiltration and elimination of lymphoid cell infiltration after H. pylori eradication. This histological change corresponds to our observation and may support our theory.
We also postulate that hormonal therapy contributed to the lesion's slightly depressed appearance. In primary gastrointestinal malignancies, flattening of elevated mucosa and ulceration are observed in response to chemotherapy [7]. Considering this, slight depression of the lesion may indicate a good response against hormonal therapy and is possibly preceded by more common endoscopic pattern, e.g., a bull's eye configuration.
On ME with BLI and LCI, we found a sparse microsurface pattern and an irregular microvessel pattern in the depressed area, which are nearly identical to those associated with UD-EGC [4,8]. BLI and LCI are novel technologies of image-enhanced endoscopy (IEE) and are considered to possess visibility as good as narrow-band imaging (NBI) [9]. The utility of magnifying IEE for metastatic lesions is not well studied. In our case, observation with ME suggested UD-EGC. This might occur as a consequence of histological change following hormonal treatment.
[Fig. 1 Endoscopic findings. a Conventional endoscopy with WLI. A slightly depressed, discolored lesion with a sharp margin was observed against non-atrophic mucosa on the anterior wall of the middle gastric body. b-e ME with BLI (b, c) and ME with LCI using indigo carmine dye spray (d, e). c and e are images with the highest-power optical magnification. In the depressed area, the microsurface pattern was sparse and partially absent. The microvascular pattern was irregularly irregular, that is, showing a variation in caliber, non-uniform shapes, and an asymmetric distribution. Both the microsurface and microvascular patterns were indistinguishable from UD-EGC. WLI, white-light imaging; ME, magnifying endoscopy; BLI, Blue Laser Imaging; LCI, Linked Color Imaging; UD-EGC, undifferentiated early gastric cancer.]
By negative staining for PSA, we reached a false initial diagnosis of primary gastric cancer for this patient. Both PSA and PSAP are highly sensitive and specific immunohistochemical markers of prostate cancer [10], which are useful for establishing the prostate origin of metastatic adenocarcinoma in diagnostic practice. However, PSA and PSAP are less frequently expressed in small cell or poorly differentiated prostate carcinoma and in pretreated carcinoma [5]. Unfortunately, we did not perform a biopsy of the prostate. The considerably good response to hormone therapy is incompatible with the clinical features of prostate cancer associated with aggressive histology [11]. Therefore, we suppose that the negative staining for PSA in this case is likely due to hormonal therapy, whereas we cannot explain why reactivity to PSAP was maintained. We used only PSA staining prior to endoscopic resection because we did not suspect metastasis based on the endoscopic findings. We might have avoided unnecessary endoscopic resection if we had included additional immunohistochemical stains, such as staining for PSAP on the biopsy specimen, after considering the patient's history of treatment.
[Table 2 abbreviations: IHC, immunohistochemistry; (+), positive; (−), negative; CK, cytokeratin; CG, chromogranin; PSA, prostate-specific antigen; PSAP, prostate-specific acid phosphatase; AMACR, alpha-methylacyl-coenzyme A racemase.]
To our knowledge, we are the first to describe a case of prostate cancer metastasis to the stomach that was indistinguishable from UD-EGC. We suggest that the alterations in morphology and immunohistochemical staining owing to hormonal treatment made this a challenging diagnosis. Caution should be applied in interpreting endoscopic findings in patients with malignancies, particularly those undergoing treatment. | 2017-10-19T05:31:04.605Z | 2017-08-07T00:00:00.000 | {
"year": 2017,
"sha1": "c5cfbed361247025152a412ac1f7005b60fe9e18",
"oa_license": "CCBY",
"oa_url": "https://bmcgastroenterol.biomedcentral.com/track/pdf/10.1186/s12876-017-0655-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c5cfbed361247025152a412ac1f7005b60fe9e18",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257583975 | pes2o/s2orc | v3-fos-license | Atlas of ticks (Acari: Argasidae, Ixodidae) in Germany: 1st data update
The first data update of the atlas of ticks in Germany published in 2021 is presented here. This atlas provides maps based on georeferenced tick locations of 21 species endemic in Germany as well as three tick species that are regularly imported to Germany. The data update includes the following numbers of newly georeferenced tick locations: 17 Argas reflexus, 79 Carios vespertilionis, 2 Dermacentor marginatus, 43 Dermacentor reticulatus, 4 Haemaphysalis concinna, 3 Haemaphysalis punctata, 3 Hyalomma rufipes, 3 Ixodes apronophorus, 9 Ixodes arboricola, 1 Ixodes ariadnae, 30 Ixodes canisuga, 3 Ixodes frontalis, 80 Ixodes hexagonus, 3 Ixodes lividus, 497 Ixodes ricinus/inopinatus, 1 Ixodes rugicollis, 17 Ixodes trianguliceps, 14 Ixodes vespertilionis, and 45 Rhipicephalus sanguineus sensu lato. Old and new tick findings were mapped, such as the northernmost occurrence of D. marginatus in Germany observed in 2021, but also the historical records from the first descriptions of I. apronophorus and I. arboricola, which were georeferenced here for the first time. The digital dataset of tick locations available for Germany is supplemented by 854 new tick locations. These records increase the number of tick species mapped in the federal states Bavaria, Brandenburg and Mecklenburg Western Pomerania by five each, those in Berlin and Schleswig-Holstein by four each, those in Hamburg by three, those in Baden-Wuerttemberg, Bremen, Lower Saxony, Northrhine-Westphalia, Rhineland Palatinate and Thuringia by two each, and those in Hesse, Saxony and Saxony-Anhalt by one each. Thus, the first data update of the tick atlas in Germany and the underlying digital dataset significantly improve our knowledge of the distribution of these tick species and helps to investigate the effects of climate change and habitat changes on them. Supplementary Information The online version contains supplementary material available at 10.1007/s10493-023-00784-5.
Knowledge gaps in Rubel et al. (2021) were due to the fact that some relevant papers on ticks have not been written in English and, moreover, old articles and former journals are often not available in digital form. Many of the references used here can therefore not be found through common database queries, but only through expert knowledge. In addition, the lockdown in 2020/2021 caused by the COVID-19 pandemic made it more difficult to obtain literature from libraries that restricted their services or were completely closed (Nicola et al. 2020). Therefore, for example, the locations of the short-legged bat tick C. vespertilionis (Rupp et al. 2004) could only be assigned to the federal state of Bavaria. Thanks to the work of Sándor et al. (2021), geographical coordinates are now available for these records. Some colleagues reacted to the atlas of ticks in Germany by pointing out tick occurrences that were not taken into account by Rubel et al. (2021) and by making their own studies accessible (Henkel et al. 1983). Last but not least, the historical work of Paul Schulze was included in the tick atlas. For example, the finding of I. apronophorus from the first description by Schulze (1924) and several findings of I. arboricola from the first description by Schulze and Schlottke (1929) have been georeferenced. Data from the publications mentioned, numerous other previously unavailable data sources and new tick findings of the authors justify this data update, with which further gaps in the mapped tick occurrence in Germany have been closed. For this purpose, the new georeferenced locations and the updated distribution maps are presented here.
Data and methods
The data used here are georeferenced tick locations in Germany described by Rubel et al. (2014, 2021), supplemented by 854 new records. The geographical coordinates of the new tick locations are provided in the supplement together with an indication of their accuracy and the sources. The coordinates are given in decimal degrees with a measure of accuracy identical to those previously introduced by Rubel et al. (2014, 2018, 2021). The tick locations are mapped using R, a language and environment for statistical computing (R Development Core Team 2019). Artificial data clusters caused by single studies were reduced using a random selection and a thinning algorithm (Aiello-Lammens et al. 2019). For example, the newly georeferenced tick locations of the studies by Centurier et al. (1979) and Hoffmann (1981) significantly increased the number of R. sanguineus sensu lato reports. However, only 42 out of 76 known locations were mapped to avoid overlapping location points.
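To illustrate the idea of reducing artificial data clusters (the authors used R and the thinning algorithm of Aiello-Lammens et al. 2019; the sketch below, including the 10 km threshold, is only our simplified stand-in):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points in decimal degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (p[0], p[1], q[0], q[1]))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def thin(locations, min_km=10.0):
    """Greedy spatial thinning: keep a point only if it lies at least min_km away
    from every point kept so far (reduces clusters caused by single studies)."""
    kept = []
    for loc in locations:
        if all(haversine_km(loc, k) >= min_km for k in kept):
            kept.append(loc)
    return kept

# Example: three nearby records collapse to one mapped point, the distant record is kept.
print(thin([(52.52, 13.40), (52.53, 13.41), (52.51, 13.39), (48.14, 11.58)]))
```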
Tick species, for which only a few locations are known, are grouped according to their host preferences as proposed by Hornok et al. (2020). For example, the bat ticks C. vespertilionis, I. ariadnae, I. simplex, and I. vespertilionis are shown in the same map.
Results
The outcomes of this study are updated geographical maps that depict the occurrence of all tick species that have so far been reported in Germany. It should be noted that the widespread I. inopinatus (Hauck et al. 2019) has been combined with I. ricinus, and they are called I. ricinus/inopinatus hereinafter, as in Rubel et al. (2021). The improvements resulting from the data update are summarized in Table 1. Accordingly, the first data update increases the number of tick species mapped in the federal states Bavaria, Brandenburg and Mecklenburg Western Pomerania by five each, those in Berlin and Schleswig-Holstein by four each, those in Hamburg by three, those in Baden-Wuerttemberg, Bremen, Lower Saxony, Northrhine-Westphalia, Rhineland Palatinate and Thuringia by two each, and those in Hesse, Saxony and Saxony-Anhalt by one each. All tick species are presented below with a brief summary of the numbers of updated locations compiled for this study. If the ticks were collected from hosts, these are also mentioned. For information on the global distribution, biology, hosts, as well as the medical and veterinary importance of the tick species identified in this paper, the reader is referred to Petney et al. (2012, 2015) and Rubel et al. (2014, 2021).
Dermacentor marginatus (Sulzer)
Two new locations of the ornate sheep tick D. marginatus were added to the distribution map. An adult male picked up by a woman south of Düsseldorf represents the new northern distribution limit of D. marginatus in Germany at a latitude of 51.02° N (determined by Olaf Kahl, 2021). Recent records from a citizen science study indicate that D. marginatus may occur even a bit further north. The second new tick location was reported by Weigand et al. (2023). A total of 95 out of 120 known locations is mapped in Fig. 3.
Dermacentor reticulatus (Fabricius)
The following 43 locations were added to the distribution map of the ornate dog tick D. reticulatus: 1 (Negrobov and Borodin 1964), 1 (Maasjost 2006), 19 (Liebisch and Liebisch 2007), 4 (Schreiber et al. 2014), 9 (Rehbein et al. 2016), 1 (Ott et al. 2020), 1 (leg. Ixodes ricinus (Maasjost 2006)). The updated distribution map shows D. reticulatus findings in all federal states except Hamburg and Schleswig-Holstein. However, new records from a citizen science study were recently published for these federal states, which prove the Germany-wide occurrence of D. reticulatus. Since D. reticulatus has been found in Hamburg and Schleswig-Holstein but no georeferences are available, its occurrence is marked as a circle in Table 1. A total of 228 out of 404 known locations is depicted in Fig. 4.
Haemaphysalis (Aboimisalis) punctata Canestrini and Fanzago
The following three locations were added to the distribution map of the red sheep tick Ha. punctata: 1 (Koch 1877), 2 (Hesse and Völker 1983). The tick findings of Hesse and Völker (1983) were reported from the Siegaue near Bonn. The meadow landscape of the Siegaue is known as a resting place for migrating coastal birds, which is probably how Ha. punctata was introduced there. The ticks were collected from a stone marten (Martes foina) and flagged from the vegetation. The infestation of a human was also documented. The ticks found by Koch (1877) were collected near a lake at Dutzendeich, Nuremberg. The tick species described under the synonym Rhipicephalus expositicius is clearly Ha. punctata (Schulze 1925). A total of six known locations is mapped in Fig. 5.
Hyalomma (Euhyalomma) marginatum Koch
No locations were added to the map of Hy. marginatum. A total of 14 known locations is depicted in Fig. 6.
Hyalomma (Euhyalomma) rufipes Koch
The following three locations were added to the map of the hairy or coarse bont-legged Hyalomma tick, Hy. rufipes: 3 . A total of 11 known locations is depicted in Fig. 6.
Ixodes (Ixodes) acuminatus Neumann
No locations were added to the distribution of I. acuminatus. A total of three known locations is depicted in Fig. 7.
Ixodes (Ixodes) apronophorus Schulze
The following three locations were added to the distribution map of I. apronophorus: 1 (Schulze 1924), 1 (Negrobov and Borodin 1964), 1 (Aeschlimann et al. 1970). The location of I. apronophorus from the first description by Schulze (1924) could be assigned to the Kremmener Luch nature reserve in Brandenburg. A total of five locations is depicted in Fig. 7.
Ixodes (Pholeoixodes) arboricola Schulze and Schlottke
The following nine locations were added to the distribution map of I. arboricola: 4 (Schulze and Schlottke 1929), 1 (Schulze 1937), 1 (Schilling et al. 1981), 3 (Walter 1992). Among other places, Schulze (1932) reported I. arboricola from the Harz. The Harz is the highest mountain range in northern Germany (elevation: 1141 m). It lies at the intersection of Lower Saxony, Saxony-Anhalt and Thuringia. Since there are no georeferenced locations of I. arboricola for Saxony-Anhalt and Thuringia, its occurrence is marked as a circle in Table 1. A total of 29 known locations is depicted in Fig. 8.
Ixodes ariadnae Hornok et al.
The following location was added to the distribution map of I. ariadnae: 1 Weigand et al. (2023). At this new location near Friedewald, Hesse, one nymph of I. ariadnae was found in each of the winters of 2021 and 2022. A total of two known locations is depicted in Fig. 2.
Ixodes (Trichotoixodes) frontalis (Panzer)
The following three locations were added to the distribution map of I. frontalis: 1 (Stadler and Schenkel 1940), 1 (Walter et al. 1979), 1 . A total of 65 out of 92 known locations is depicted in Fig. 8.
Ixodes (Pholeoixodes) hexagonus Leach
A total of 80 new locations of I. hexagonus were added; they cannot all be distinguished in the low-resolution map of Germany presented here, although the georeferenced coordinates are provided in the supplement. A total of 217 out of 397 known locations is depicted in Fig. 10.
Ixodes (Ixodes) ricinus/inopinatus
Since most previous studies have not differentiated between I. ricinus and I. inopinatus and reliable differentiation of both species is very difficult, the two species are combined herein and referred to as the I. ricinus/inopinatus species complex. Moreover, a recent study based on genomic data indicates that German I. inopinatus samples may represent I. ricinus (Rollins et al. 2023). Consequently, it seems that the morphological and mitochondrial genome-based methods used so far are not sufficient to distinguish between I. inopinatus and I. ricinus. A separate map for I. inopinatus was therefore not compiled.
Ixodes (Pholeoixodes) lividus Koch
The following three locations were added to the distribution map of the nest-dwelling bird parasite I. lividus: 2 (Schulze and Schlottke 1929), 1 (Stadler and Schenkel 1940). Müller (1977) described the occurrence of I. lividus in the former district of Magdeburg (former GDR). In the breeding periods 1972-1976 more than 1,800 sand martins Riparia riparia and some of their burrows were examined in various unspecified sand pits. The proportion of sand martins infested with I. lividus varied greatly from year to year between 0.7 and
Ixodes (Pholeoixodes) rugicollis Schulze and Schlottke
The historical Berlin location by Schulze and Schlottke (1929) was added to the distribution map of I. rugicollis. Since there is no exact location for this finding, the point in the map is to be interpreted symbolically for the occurrence of I. rugicollis in Berlin. Consequently, I. rugicollis was not mapped in the high-resolution city map of Berlin . A total of six known locations is depicted in Fig. 8.
Ixodes (Pomerantzevella) simplex Neumann
No location was added to the occurrence of I. simplex in Germany. The only up to now known location is depicted in Fig. 2.
Ixodes (Ceratixodes) uriae White
No location was added to the occurrence of the seabird tick I. uriae in Germany. One known location is depicted in Fig. 1.
Rhipicephalus sanguineus (Latreille)
The following 45 locations were added to the occurrence of the brown dog tick R. sanguineus sensu lato: 35 (Centurier et al. 1979), 10 (Hoffmann 1981). A further study concerning 60 dogs infested with R. sanguineus s.l. documented the occurrence of the brown dog tick in eight federal states in West Germany (Gothe 1999). No exact location information was given in this study. However, the study documents the occurrence of R. sanguineus s.l. in another federal state without georeferenced locations, namely Rhineland Palatinate (Table 1). A total of 42 out of 76 known locations is mapped in Fig. 6.
Discussion
The greatest progress compared to the first version of the atlas of ticks in Germany (Rubel et al. 2021) was made in mapping the bat ticks C. vespertilionis and I. vespertilionis. The previous atlas of ticks in Germany showed 32 locations of the short-legged bat tick C. vespertilionis, but these were almost all in the northwest of Germany. With another 79 locations of C. vespertilionis it could be shown that this tick is widespread almost all over Germany (Fig. 2). The five locations of the long-legged bat tick I. vespertilionis described by Walter and Kock (1985) could be supplemented by another 14 locations. Both the bats and the caves they inhabit are now under strict protection and scientific surveillance. However, there is little ongoing work on bat ticks in Germany, and ticks found in caves are only a side result of those investigations (Weigand et al. 2023). Nevertheless it can be assumed that I. vespertilionis is by no means rare, as the findings in numerous karst caves in the neighbouring countries Austria (Rubel and Brugger 2022) and Belgium (Obsomer et al. 2013) indicate.
With the georeferencing of 47 locations from Centurier et al. (1979), the map of R. sanguineus s.l. could also be significantly improved. Findings of R. sanguineus s.l. were reported from the metropolitan areas of Frankfurt/M., Hanover, Munich, Berlin and also from some other areas (Fig. 6). It is striking that all these findings of R. sanguineus s.l. are located in the former Federal Republic of Germany and in the former Berlin (West). The majority of these records date from before 1990, when people from the former East Germany were usually not allowed to visit Mediterranean or any subtropical countries. In contrast, the two Hyalomma species are brought to Germany via migratory birds in each spring and have been found all over the country. But because all these cases most probably reflect single (temporary) cases of importation, we do not talk about distribution. From an ongoing citizen science project (Fachet et al. 2020) 10 findings of R. sanguineus s.l. in Germany have been presented (not mapped), so even more current data are known. Because R. sanguineus s.l. in Germany usually occurs inside houses of people and quickly becomes irritating to the inhabitants, their presence might be in most cases only short-lived due to the control measures that have been introduced.
The data update also expands the knowledge of the distribution of those tick species for which only a few new locations have been georeferenced. For example, with a new finding in Pulheim south of Düsseldorf (leg. Olaf Kahl, 2021), the northern distribution limit of D. marginatus in Germany shifts to the geographical latitude of 51.02° N. With the georeferencing of a location from the map of Maasjost (2006), the occurrence of D. reticulatus in Bremen has been documented. Updated reports of the red sheep tick Ha. punctata (Koch 1877; Hesse and Völker 1983) document that this originally Mediterranean tick species occurs at the resting places of migratory birds on their way to Northern Europe. As a result, Ha. punctata is widespread on the North Sea coast of England (Tijsse-Klasen et al. 2013), the Netherlands (Hofmeester et al. 2016), and Germany (Fig. 5). The 17 newly georeferenced locations of the rarely investigated vole tick I. trianguliceps indicate its occurrence throughout Germany. Finally, the distribution map of the best-studied tick, I. ricinus, was updated and now comprises 915 plotted locations (Fig. 11). It seems quite certain that areas without any data points of I. ricinus in a few parts of Germany, e.g. parts of Schleswig-Holstein in northern Germany, probably mirror missing investigation rather than unsuitable areas for this tick species. Mountainous areas above an altitude of 1200 m might be an exception.
Looking at the tick data presented here from the perspective of climate change, it seems that there have been only minor effects on the German tick fauna, as yet. However, D. reticulatus has been found much more frequently and in much larger numbers in parts of northern Germany in the past 2-3 decades than before. It is unclear to what extent this effect is due to climate change or habitat modification. Kahl and Dautel (2013) suggested that D. reticulatus might profit from increasing temperatures at its northern edge of distribution because development from oviposition to the F1 adult stage must take place within only one growing season in this species. The tick D. marginatus, which was clearly restricted to the mild climate of the Rhine-Main area, also seems to have expanded its range somewhat to the north.
Conclusions
The first data update of the atlas of ticks in Germany is presented here. Greatest progress compared to the first version was made in mapping the occurrence of the ticks C. vespertilionis, R. sanguineus s.l., I. arboricola, I. hexagonus, I. trianguliceps, and I. vespertilionis. The data update also expands knowledge of the distribution of rare tick species. In individual federal states, the number of documented tick species has increased by up to five. Thus, the first data update of the tick atlas in Germany and the underlying digital dataset in the supplement significantly improves our knowledge of the distribution of tick species and may be useful for future investigations to determine the effects of climate change and habitat changes on them. | 2023-03-18T06:17:47.842Z | 2023-02-01T00:00:00.000 | {
"year": 2023,
"sha1": "ac7fc1fa06ab5b53e4a039fee68d52d9e53c71a2",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10493-023-00784-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "08eb85dfa673afbd3ed294a0ac4b1bf90677a178",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6200709 | pes2o/s2orc | v3-fos-license | A rich hierarchy of functionals of finite types
We are considering typed hierarchies of total, continuous functionals using complete, separable metric spaces at the base types. We pay special attention to the so called Urysohn space constructed by P. Urysohn. One of the properties of the Urysohn space is that every other separable metric space can be isometrically embedded into it. We discuss why the Urysohn space may be considered as the universal model of possibly infinitary outputs of algorithms. The main result is that all our typed hierarchies may be topologically embedded, type by type, into the corresponding hierarchy over the Urysohn space. As a preparation for this, we prove an effective density theorem that is also of independent interest.
1. Introduction 1.1. Discussion. One of the important paradigms of the theory of computing, and of that of computability, is that we may view algorithms and programs as data. We are not going to challenge this paradigm. The paradigm is important practically in the design of digital computers, where everything, input data, programs and output data deep down are just sets of bits and bytes. It is also important theoretically, as it makes the existence of a universal algorithm possible and the unsolvability of the halting problem a mathematical statement.
However, using almost any programming language in practice, we have to distinguish between input data and output data, or at least declare what is what, and the programs are considered as syntactical entities that for most cases are distinguished from other kinds of data.
In this paper we will be interested in models for computing where the input data and the output data may be infinite entities. As a simple, but basic example, let us discuss the integration operator, sending a (say, continuous) real-valued function f to the value of its integral over a fixed interval, and how we should construct mathematical models for the kinds of data involved in computing integrals. Of course, in the world of digital computers, what we will aim at is to compute the integral as a floating point value, and then the input function f has to be digitally represented in some way suitable for this aim. From the point of view of numerical analysis, this is not hard to achieve, and in fact, the computability of the integral is not a big issue. However, from the point of view of a conceptual analysis it is undesirable to make the leap all at once from the set theoretical world of mathematical analysis to the finitistic world of digital computers. There are several reasons for this. We will discuss two of them: (1) The step from the continuous to the discrete inevitably has to violate some of the geometrical, algebraic and analytical properties of the reals. Unless one shows some care, it is not obvious that, say, the integral of a sum f + g equals the sum of the integrals of f and g when the integrals are numerically calculated, and there are certainly going to be algebraically valid identities of this sort that are not identities in the numerical interpretation (see the small numerical example following this discussion). Though the practical harm of phenomena like this may be kept at a minimum, it will be nice to have a model of computability in analysis that does not suffer from such deficiencies. (2) Though technological standards for representing various kinds of data are important for the exchange of data and programs, a conceptual analysis of computability where data of the form reals and real valued functions appear should not be restricted to a particular standard for digitalization. It is of course impossible to view a real as the genuine output of an algorithm, since such outputs, even in a mathematical model, should be of a finitistic nature. An algebraic expression denoting a real may be considered to be such a finitistic entity, but then we will be facing the problem of the meaning of calculating the value of expressions like this. Thus, algebraic expressions are not satisfactory representations of outputs in the sense of this paper.
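A minimal numerical example of point (1), using nothing but floating point addition: associativity, an algebraically valid identity of the reals, already fails for the machine numbers, so exact identities between integrals need not survive a naive numerical interpretation (the particular numbers below are of course just an illustration).

```python
a, b, c = 0.1, 0.2, 0.3

# Associativity holds for the reals, but not for their floating point images.
print((a + b) + c == a + (b + c))   # False
print((a + b) + c, a + (b + c))     # 0.6000000000000001 0.6
```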
We will view output data as data of a particular kind, and we will advise some care in the choice of representing such data. Of course we have to consider more than just the set of data, we have to consider approximations to these data as well. But, and this is the core of our view, since it is the output data themselves that are of importance, the structure used to model the outputs of algorithms computing such data should contain the output data we are really interested in as a kind of substructure. We may view an algorithm computing a real as running in infinite time, producing better and better approximations as time passes, but in the end, in an ideal world, and after possibly an infinite elapse of time, the output should be the real itself.
If we consider the directed complete partial ordering (dcpo) of all closed intervals ordered by reversed inclusion, we may identify a real x with the closed interval [x, x], and in this way, R may be viewed as a substructure of the closed interval domain.
If we want to stick to finitistic representations of approximations of reals, e.g. as closed intervals [p, q] with rational endpoints or as closed intervals [n/2^k, m/2^k] with dyadic endpoints, and represent a real as an ideal of such approximations, we may canonically represent a real x as the ideal of all approximating intervals with x in the interior. This latter kind of representation is known as a retract domain representation, and we will come back to this.
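The dyadic-interval picture is easy to make concrete. The sketch below (ours, not the paper's) produces, for a given rational x, one approximating closed dyadic interval per level with x in its interior, and checks that the approximations refine each other under reverse inclusion, just as the ideal representation requires.

```python
from fractions import Fraction
from math import floor

def dyadic_approximations(x: Fraction, levels: int):
    """One closed dyadic interval [n/2^k, (n+2)/2^k] per level k, chosen so that
    x lies in the interior of each interval."""
    out = []
    for k in range(1, levels + 1):
        n = floor(x * 2 ** k) - 1
        out.append((Fraction(n, 2 ** k), Fraction(n + 2, 2 ** k)))
    return out

def refines(i, j):
    """i refines j in the interval domain iff i is a subinterval of j (reverse inclusion)."""
    return j[0] <= i[0] and i[1] <= j[1]

approx = dyadic_approximations(Fraction(1, 3), 5)
print([(str(a), str(b)) for a, b in approx])
print(all(refines(approx[k + 1], approx[k]) for k in range(len(approx) - 1)))  # True
```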
In Section 1.3 we will bring this discussion further, and draw the conclusion that the class of complete, separable metric spaces is a suitable choice of spaces modeling types of output data, or more generally, as types of ground data.
In our example of the integral, there are two other kinds of data that may concern us, those of the input function f and the integration operator itself. Here we will view functions from reals to reals as operators on ground data, and the integral as an operator at the next level, and we will use a convenient cartesian closed category containing the complete, separable metric spaces to model such classes of operators or functions.
1.2. Outline of the paper. We will address the following general problem: − Given (interpretations of the expressions) σ(X) and σ(Y ) where X and Y are complete, separable metric spaces and σ is a type, how will relationships between X and Y give rise to relationships between σ(X) and σ(Y )? In Section 2 we give a brief introduction to qcb-spaces and domain representations in general, and we define our "convenient" class Q of qcb-spaces. In Section 3 we introduce the Urysohn Space U [19,20], and survey some of the main properties.
One of the key results of the paper is that the universality of U extends to higher types. Let X = (X 1 , . . . , X n ) be a sequence of complete, separable metric spaces, and let U be the sequence of n occurrences of U . In Section 5 we will show that if σ is a type with n free variables for base types, then we have a topological embedding of σ( X) into σ( U ). If we replace occurrence no. i of U in U with a separable Banach space Y i and let each X i be homeomorphic to a closed subset of Y i , our proof can also be used to prove that σ( X) can be topologically embedded into σ( Y ).
An embedding-projection pair between spaces Y and X is normally a pair (ε, π) of continuous functions ε : Y → X and π : X → Y such that π(ε(y)) = y for all y ∈ Y . If we have two typed structures, one with base type Y and one with base type X, one standard way to show that we may embed the first into the second type by type is to establish an embedding-projection pair between Y and X and then show that this generates an embedding-projection pair at each type.
Sometimes it is topologically impossible to have a continuous projection from X to Y , for instance when X = R and Y = N. We will see that for many important cases, we can replace the use of the projection with a sequence of probabilistic approximations.
For spaces in Q, we introduce probabilistic embedding-projection pairs in Section 5 as a tool in the proof of the embedding theorems.
Prior to this, we introduce the concept of density with probabilistic selection in Section 4. In some sense, this is a warm-up for the more general concept, but it is also used as a tool for proving effective density theorems of independent interest.
The introduction of probabilistic embedding-projection pairs, and the simpler concept density with probabilistic selection can be seen as the main methodological contribution in the paper. The method first appeared in Normann [11] with N for Y and R for X.
In our setting, the proof of an effective density theorem will involve a construction of an enumeration of a topologically dense set. We will be more precise in the sequel.
The main result in Section 5 is a purely topological result, with no constructive or computable content. There is an effective, but restricted, version of the imbedding theorem from Section 5 in preparation, and the proof of the effective density theorem in Section 4 can be viewed as a preparation for this as well.
1.3. Representing output data. Blanck [4,5] carried out some pioneering work on the use of domain theory for representing topological spaces. Though we add some conceptual analysis, the technical definitions and results of this section are due to Blanck. We have to assume some familiarity with basic domain theory, see e.g. Abramsky and Jung [1], Stoltenberg-Hansen & al. [16] or Amadio and Curien [2] for introductions to the subject. Definition 1.1. In this paper, if X is a topological space, then a domain representation of X will consist of a separable algebraic domain (D, ⊑), a nonempty set D R ⊆ D of representing objects with the induced Scott topology and a continuous surjection δ : D R → X. The representation is dense if D R is a dense subset of D in the Scott topology.
If (D, D R , δ) is a domain representation of X, and we let D 0 be the set of compact or finitary elements of D, we may view the elements of D 0 as approximations to the elements of X. Now, if X is a set of ideal output data, the elements of D 0 may be chosen as the possible intermediate approximative values obtained through the computation of some element x of X. If we view this set of approximations as an extension of X, it is natural to identify each x ∈ X with some canonical set of approximations of x, preferably a set that in some abstract sense can "be computed" from x itself. This leads us to consider the retract representations, representations where there is a continuous right inverse ν : X → D R of δ. Finally, an output should be complete with no room for computing another output with strictly more information. This leads us to consider upwards closed representations, i.e. representations where, if α ∈ D R and α ⊑ β, then β ∈ D R and δ(α) = δ(β).
Blanck [5] proved that if a topological space X accepts an upwards closed retract representation, then X is a regular space, and in fact it is normal. Since we restrict our attention to separable domains, X will have a countable base. Then, as an application of the Urysohn metrization theorem, X will be metrizable.
We will bring this analysis a bit further. If we use a domain representation of a space of output data, it is reasonable to assume that the set of representing objects is a closed set in the Scott topology, simply because we then work with the completion of the approximating finitary data. This leads us to consider Polish spaces, topological spaces that can be induced from complete, separable metric spaces. In Section 3 we will introduce the Urysohn space U . This is universal in the sense that Polish spaces are exactly, up to homeomorphisms, the topological spaces that are closed subsets of U with the induced topology. Thus we consider U to be a suitable candidate for the universal datatype of output data, or of ground data in general.
Blanck [4] showed how we can construct a representation of each separable metric space, and this representation will indeed be an upwards closed retract representation. Since in later sections we will want to refer to Blanck's construction, we give some of the details here.
Definition 1.2. Let X, d be a nonempty separable metric space with a countable dense subset {a n | n ∈ N}. (a) For each n ∈ N and positive rational number r, let B n,r = {x ∈ X | d(x, a n ) ≤ r}, i.e. the closed ball of radius r around a n .
(b) Let E 0 be the set of finite sets of such closed balls, such that whenever B n,p and B m,q are in the set, then p + q ≥ d(a n , a m ). (The balls have at least a potential of a nonempty intersection.) (c) If K and L are in E 0 , we let K ⊑ L if for all balls B n,r in K there is a ball B m,s in L such that s + d(a n , a m ) ≤ r. In this case ⊑ will be a preorder. (This expresses that L has to be a subset of K, as a consequence of the triangle inequality.) (d) An ideal I in E 0 represents x ∈ X if: (i) x ∈ B n,r whenever B n,r ∈ K ∈ I.
(ii) For each ǫ > 0 there is a K ∈ I such that all balls in K have radii < ǫ. (e) We let D = D X be the ideal completion of E 0 , i.e. the set of ideals ordered by inclusion.
Then the set of finitary elements D 0 will be the set of prime ideals in D.
This construction may seem unnecessarily complicated, but something of this complexity is required if one wants to construct an effective domain representation uniformly from an effective metric space. Like all domains, D X is equipped with the Scott topology, where a typical element of the basis will consist of all ideals containing some fixed element of E 0 . Then the map sending a representative for x ∈ X to x will be continuous. Now, an element x ∈ X may have more than one representative, but there will always be a least one in the inclusion ordering of the set of ideals, and in fact, the function mapping an element x ∈ X to the least ideal representing x is continuous with respect to the Scott topology. Thus X is homeomorphic to a subspace of the representing space D X . The least ideal representing x ∈ X will consist of all K such that x is in the interior of each B n,r ∈ K. It is the fact that we restrict ourselves to clusters of neighborhoods where x is in the interior that makes this construction continuous.
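The finitary approximations of Definition 1.2 can be manipulated quite concretely. The following sketch (our illustration; the dense sequence and the distance function are supplied by the caller, and the toy data at the end are ours) checks the consistency condition for membership in E 0 and the refinement relation ⊑ exactly as stated in clauses (b) and (c).

```python
from itertools import combinations

# A formal closed ball is a pair (n, r): center a[n] from the dense sequence, radius r.

def in_E0(K, d, a):
    """K belongs to E_0 iff any two of its balls could still intersect:
    p + q >= d(a_n, a_m) for all pairs of balls B_{n,p}, B_{m,q} in K."""
    return all(p + q >= d(a[n], a[m]) for (n, p), (m, q) in combinations(K, 2))

def below(K, L, d, a):
    """K below L iff every ball B_{n,r} of K contains a ball B_{m,s} of L,
    witnessed by s + d(a_n, a_m) <= r (so L carries at least as much information)."""
    return all(any(s + d(a[n], a[m]) <= r for (m, s) in L) for (n, r) in K)

# Toy example on the real line with dense points a_0 = 0, a_1 = 1, a_2 = 2.
a = [0.0, 1.0, 2.0]
d = lambda x, y: abs(x - y)
K = [(0, 2.0), (2, 1.5)]   # two large balls whose intersection is still possible
L = [(1, 0.25)]            # a small ball around a_1 lying inside both balls of K
print(in_E0(K, d, a), below(K, L, d, a))   # True True
```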
Also observe that if I ⊆ J are two ideals, and if I represents x ∈ X, then J represents x. Moreover, due to the fact that metric spaces are Hausdorff, the same ideal may not represent two different elements of X. Blanck's construction is that of an upwards closed retract representation.
A simpler approach. If we are not concerned with effectivity, we may construct the representing domain based on nonempty finite intersections of closed balls. Then we automatically get a dense retract representation that is upwards closed. This approach will be taken in Section 5.
A category of qcb's
In this paper, we will assume that all spaces are nonempty.
Adopting the convention from Battenfeld, Schröder and Simpson [3] we say that a topological space X is a qcb-space if it is T 0 and can be viewed as the quotient space of an equivalence relation on a space with a countable base. The corresponding category QCB is, in some sense, the richest category of topological spaces that can be handled with decency using domain theory.
Definition 2.1. Let X be a topological space. A pseudobase for X is a family P of nonempty subsets of X closed under finite nonempty intersections such that whenever x = lim x n in X and O is an open set containing x, there are a p ∈ P and an N ∈ N such that x ∈ p ⊆ O and x n ∈ p for all n ≥ N.
A topological space is sequential if the topology is the finest one where the convergent sequences indeed are convergent. Schröder showed that all qcb-spaces will admit countable pseudobases and that every T 0 -space with a countable pseudobase will be a qcb-space. If we consider the Blanck representation of separable metric spaces, we may form a pseudobase from the set of finitary objects, which is a set of clusters of closed balls, by letting the pseudobase elements be all nonempty intersections of such clusters. These pseudobase elements will be closed.
In QCB we use continuous functions as morphisms. Since the spaces are sequential, a function f : X → Y is continuous if and only if it maps a convergent sequence and its limit point to a convergent sequence and its limit point.
We are going to work within a subcategory Q of QCB: Let Q be the class of sequential Hausdorff spaces that permit a countable pseudobase of closed sets.
By the observation above, every complete, separable metric space will be in Q. We will show that Q is closed under the function space operator used in QCB, and Q will then be a convenient subclass of qcb for us to work with. For our next result, we need the concept of an admissible domain representation due to Hamrin [6], based on a similar concept due to Schröder [12,13], see also Weihrauch [22]: Definition 2.4. Let D, D R , δ be a representation of the space X, see Definition 1.1. We call the representation admissible if for every dense representation E, E R , π of a space Y and every continuous function f : Y → X there is a continuous function φ : E → D such that φ maps E R into D R and such that f (π(e)) = δ(φ(e)) for all e ∈ E R . Remark 2.5. If D, D R , δ is an admissible representation of X and x = lim n→∞ x n , there will be a convergent sequence α = lim n→∞ α n in D R with x = δ(α) and x n = δ(α n ) for each n ∈ N.
We call this a lifting of the convergent sequence, and the existence of a lifting is easy to prove given an admissible representation. This is a standard observation. Lemma 2.6. Every space in Q has an upwards closed admissible representation.
Proof. Let X ∈ Q and let P be a countable pseudobase of closed subsets of X. We apply the argument from Hamrin [6], and assume w.l.o.g. that P is closed under finite unions. Then the ideal completion D, ⊑ of P, ⊇ offers an admissible representation of X, where each x ∈ X is represented by the elements of D R x , the set of ideals α such that x ∈ p for every p ∈ α and such that for every open set O containing x there is some p ∈ α with p ⊆ O. By Hamrin [6] this is an admissible representation, and we are left with showing that D R x is upwards closed. If α ∈ D R x and α ⊆ β ∈ D, the second requirement for β ∈ D R x is trivially satisfied. Now, let q ∈ β and assume that x ∉ q. Then x ∈ X \ q, which is open, so there is some p ∈ α with x ∈ p ⊆ X \ q. Then p ∩ q ∈ β since β is an ideal. But p ∩ q = ∅ and β will only contain nonempty sets. This is a contradiction, so x ∈ q.
These spaces are sequential, which means that the topology will be the finest topology where all convergent sequences do converge. This offers a natural topology on the function spaces X → Y of continuous functions, induced by the limit-space construction where f = lim n→∞ f n if f (x) = lim n→∞ f n (x n ) whenever x = lim n→∞ x n in X. Proposition 2.7. If X and Y are in Q, then X → Y is in Q. Proof. Let p 1 , . . . , p n be closed pseudobase elements in X and q 1 , . . . , q n be closed pseudobase elements in Y such that for all K ⊆ {1, . . . , n} , The nonempty such sets will form a pseudobase of closed sets for X → Y . X → Y is clearly Hausdorff.
Remark 2.8. We do not use that X is in Q, only that X is a qcb.
Still using continuous functions as morphisms, we may view Q as a category.
Our key examples will be the spaces we may obtain from complete, separable nonempty metric spaces closing under the function space construction. It is known, see Schröder [15], that these spaces need not be regular (or normal) spaces. We will be interested in the finest regular (or normal, this amounts to the same in this case) subtopology of the sequential one: Definition 2.9. Let X ∈ Q and let A ⊆ X. We say that A is functionally closed if there is a continuous map f : X → [0, 1] such that x ∈ A ⇔ f (x) = 0. The complement of a functionally closed set is functionally open.
Remark 2.10. This is standard terminology from general topology. Functionally closed sets are also known as zero-sets.
It is not hard to show that the functionally open sets form a regular subtopology on X.
The fact that the topology on X is hereditarily Lindelöf, i.e. that every open covering of a subset accepts a countable subcovering, is useful in showing that this class is closed under arbitrary unions. These concepts will be important in Section 5.
In the sequel we will use the fact that if X ∈ Q and P is a pseudobase for X consisting of closed sets, and Y ⊆ X, then {p ∩ Y | p ∈ P and p ∩ Y ≠ ∅} forms a pseudobase of closed sets for Y .
In this paper, we let V 1 , . . . , V k be formal variables for complete, separable metric spaces, and we define the formal types as the least set of expressions containing each variable V i and closed under the syntactical operation σ, τ ⊢ (σ → τ ).
If X 1 , . . . , X k are separable, complete metric spaces and σ is a type in the variables V 1 , . . . , V k , its interpretation σ(X 1 , . . . , X k ) is given in Q.
It is easy to see that if each X i is nonempty, then σ(X 1 , . . . , X k ) is nonempty.
The Urysohn Space
In Section 1 we were primarily interested in mathematical models for data-types where the data could be viewed as the ultimate outputs of algorithms running in infinite time, and we observed that we may use Polish spaces or separable, complete metric spaces for this purpose. Given some metric spaces as basic data-types, we will then be interested in derived data-types, where the objects in a sense are operators with ultimate values in metric spaces. In this paper, we will be mainly interested in hereditarily total objects of this kind, but of course, if one is interested in functional programming where such base types are involved, the hereditarily partial operators are essential for the construction of denotational semantics. Urysohn [19,20] showed that there is a richest separable metric space, the so-called Urysohn space, and the main aim of this paper is to show that any space of hereditarily total continuous functionals over any set of complete separable metric spaces can be topologically embedded into a space of functionals of the same type, but now over just the Urysohn space.
In order to be able to prove our results, we have to refer to the basic original properties of this space and to some of the more recent results about it.
Definition 3.1. Let X be a metric space. We call X finitely saturated if whenever K ⊆ L are finite metric spaces, and φ : K → X is a metric-preserving map, then φ can be extended to a metric-preserving map ψ from L to X.
Remark 3.2. The word saturated is common in model theory for this kind of phenomenon, so we adopt it here.
Urysohn proved that there exists a complete, separable metric space U that is finitely saturated, and that, up to isometric equivalence, there is exactly one such space. This space is known as the Urysohn space.
Urysohn gave an explicit construction of U , as the completion of a countable metric space where all distances are rational numbers, and which is saturated with respect to pairs of finite spaces with rational distances. He showed that if X is a metric space, x 1 , . . . , x n are elements of X and {x 1 , . . . , x n } is extended to a metric space {x 1 , . . . , x n , y} where y is a new element with distance d(x i , y) to each x i , we may consistently define a distance from y to any element x ∈ X. By iterating this construction, using some book-keeping that ensures that all rational one-point extensions of finite subspaces of the set under construction are taken care of, he constructed the dense subset U 0 of U .
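For concreteness, one consistent way to define such an extension distance — a standard formula for one-point metric extensions, given here only as a sketch and not necessarily in Urysohn's exact formulation — is the following, with a i = d(x i , y) the prescribed distances:

```latex
% Standard one-point extension of a metric (sketch).
% Given x_1,\dots,x_n \in X and prescribed distances a_i = d(x_i, y):
\[
  d(y, x) \;=\; \min_{1 \le i \le n} \bigl( a_i + d(x_i, x) \bigr),
  \qquad x \in X .
\]
% Consistency: d(y, x_j) = a_j since a_i + d(x_i, x_j) \ge a_j for all i,
% and the triangle inequality for the extended space follows from the
% triangle inequality in X together with a_i + a_j \ge d(x_i, x_j).
```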
There are both effective (Kamo [7]) and constructive (Lešnik [9,10]) versions of the main results of Urysohn. Since effectivity is essential for our results in Section 4, we will give a brief introduction to what we mean by effectivity.
Definition 3.3.
A real x is computable if there is a fast converging computable sequence {x i } i∈N of rationals with x as the limit, where fast converging means that |x n − x| ≤ 2 −n for all n.
A sequence {x n } n∈N of reals is computable if there is a computable map γ of N into the set of fast converging sequences of rational numbers such that x n is the limit of γ(n) for each n.
A metric space (X, d) is effective if there is an enumeration {r i } i∈N of a dense subset of X such that the map (i, j) ↦ d(r i , r j ) is a computable double sequence of reals. A careful reading of Urysohn's construction tells us that U is effective in this sense.
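Definition 3.3(1) can be made concrete with a small, purely illustrative sketch (not taken from the paper): the following Python fragment produces a fast-converging sequence of rationals for √2, with |x n − x| ≤ 2^{-n} guaranteed by exact integer arithmetic.

```python
from fractions import Fraction
from math import isqrt

def sqrt2_approx(n: int) -> Fraction:
    """Rational x_n with |x_n - sqrt(2)| <= 2**-n (fast convergence).

    Uses integer square roots only, so every step is exact.
    """
    m = isqrt(2 * 4**n)          # m <= sqrt(2) * 2**n < m + 1
    return Fraction(m, 2**n)

# A computable real in the sense of Definition 3.3(1) is exactly such a
# computable map n |-> x_n together with the guarantee |x_n - x| <= 2**-n.
if __name__ == "__main__":
    for n in (0, 5, 10, 20):
        print(n, float(sqrt2_approx(n)))
```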
In order to prove that the completion U of U 0 is saturated, we will start with elements u 1 , . . . , u k in U and requirements d(u i , x) = a i consistent with the axioms of metric spaces for i = 1, . . . , k, and we have to prove that there is some u ∈ U satisfying these requirements.
The proof can be made effective in the following sense: If we represent u 1 , . . . , u k with fast converging sequences from U 0 and a 1 , . . . , a k with fast converging sequences from Q, we can construct a fast converging sequence from U 0 converging to a desired u. There are details to be filled in here, of course.
Then, by an application of the recursion theorem, we see that every effective metric space (X, d) can be effectively embedded into U . Thus we have: Theorem 3.4. Every separable metric space X can be isometrically embedded into the Urysohn space, and if X is an effective space, the embedding can be made effective.
We of course have that the image of X will be functionally closed (i.e. just closed) in U exactly when X is complete, and this is the reason for why we restrict our attention to complete, separable metric spaces in the technical sections of the paper.
There has been a renewed interest in the Urysohn space over the last 25 years. One result in particular is of importance to us: Uspenskij [21] shows that U as a topological space is homeomorphic to the Hilbert space l 2 , and thus to any separable Hilbert space of infinite dimension. Uspenskij depends on a characterization of the class of topological spaces homeomorphic to Hilbert spaces due to Toruńczyk [17].
The combined Toruńczyk -Uspenskij proof gives us no information about whether this result is constructive in any sense.
In the case of choosing a domain representation for the Urysohn space, the two approaches discussed in Section 1 are equivalent. This can be seen from the following Observation 3.5. Let U be the Urysohn space and let B 1 , . . . , B n be a family of closed balls where B i has radius r i and center in a i , and assume that none of the balls are contained in the interior of any of the others.
Then the following are equivalent: (1) the balls B 1 , . . . , B n have a common element; (2) d(a i , a j ) ≤ r i + r j for all i, j ≤ n. (2) ⇒ (1) is a consequence of saturation: there is an element in the intersection of the spheres of radius r i around a i for i = 1, . . . , n.
Effective density theorems
The underlying problem in this section is when we may effectively enumerate a dense subset of the set of continuous functionals of a fixed type using effective, separable metric spaces at base types. We will not answer this problem completely, but that the answer is not "always" is demonstrated by the following example, where we construct an effective metric space A such that there is no effective enumeration of a dense subset of A → N: Example 4.1. Let A ⊆ N be recursively enumerable but not computable, and let f : N → N be a computable 1-1 enumeration of A.
We will construct an effective subspace of the Banach space l ∞ of all bounded sequences of reals.
Let a < b be reals, and let [a, b] n be those g ∈ l ∞ where g(n) ∈ [a, b] and g(m) = 0 for m ≠ n.
Let X consist of the constant 0 together with all [0, 3] n for n ∈ A and all [1, 3] n for n ∉ A.
It is easy to see that we can effectively enumerate a dense subset of X with a computable metric, using a stage m where f (m) = n to decide to extend the ongoing sub-enumeration of [1,3] n to a sub-enumeration of [0, 3] n . Thus X is an effective metric space.
If we have an effectively enumerated dense set {g n | n ∈ N} of total functions in X → N, we see from the obvious connectedness-properties of X that n ∉ A ⇔ ∃m(g m (λk.0) ≠ g m (n → 2)), where n → 2 is the element in [1, 3] n that takes the value 2 at n.
This would imply that A is computable, so there is no such sequence {g n } n∈N .
As a tool of independent interest, we develop the concept of density with probabilistic selection. Probabilistic selection from a dense set may replace the use of a continuous or even effective selection of a sequence from a dense set converging to a given point, when such selections are topologically impossible. Let A = {a 1 , . . . , a n } be a finite set. A probability distribution on A is a map m : A → [0, 1] such that ∑ k≤n m(a k ) = 1.
A probability distribution on a finite set A induces a probability measure on the powerset of A, and we will not distinguish between the distribution and the induced measure.
We let P D(A) be the set of probability distributions on A, where we assume that A comes with an enumeration. P D(A) can be viewed as a convex subspace of a finite dimensional Euclidean space, and thus P D(A) has a canonical topology. P D(A) can actually be identified with the standard simplex in R n . Definition 4.2. Let { A n , ν n , m n } n∈N be a sequence of finite sets A n , maps ν n : A n → X into a space X ∈ Q together with probability distributions m n on each A n . Let x ∈ X. We say that x = lim n→∞ ν n [A n ] mod m n if whenever we for each n ∈ N select an a n ∈ A n with m n (a n ) > 0 , then x = lim n→∞ ν n (a n ).
We write ν n [A n ] since it is actually the image of A n under ν n that converges modulo the sequence of measures. Definition 4.3. We say that X satisfies density with probabilistic selection if there is a sequence { A n , ν n , µ n } n∈N of finite sets A n , maps ν n : A n → X and continuous maps µ n : X → P D(A n ) such that for each x ∈ X: x = lim n→∞ ν n [A n ] mod µ n (x). When this is the case, we call { A n , ν n , µ n } n∈N a probabilistic selection on X.
If { A n , ν n , µ n } n∈N is a probabilistic selection on X, then ⋃ n∈N ν n [A n ] will be dense in X and for every x ∈ X, the set of sequences {a n } n∈N ∈ ∏ n∈N A n such that x = lim n→∞ ν n (a n ) will have measure 1 in the product measure ∏ n∈N µ n (x). Remark 4.4. This concept will be an important tool in showing density theorems. In order to prove embedding theorems, we will extend this concept in Section 5 to what we will call a probabilistic projection.
In our applications, X will be a space σ(X 1 , . . . , X k ) where each X i is a complete, separable metric space. Then A n will consist of finite functionals of the same type, where the base types are interpreted as finite subsets of the metric spaces in question. Then ν n represents a way to embed these finitary functionals into the space of continuous functionals.
Lemma 4.5. Let X be a separable metric space. Then X satisfies density with probabilistic selection.
Proof. Let d be the metric on X, and let {a 0 , a 1 , . . .} be a countable dense subset of X. Let A n = {a 0 , . . . , a n } and let ν n be the inclusion function from A n to X. − For any x ∈ X, let d(A n , x) = min{d(x, a i ) | i ≤ n}. − If u and v are non-negative reals, let u −̇ v = max{u − v, 0} (truncated subtraction).
− For each x ∈ X and a ∈ A n , let µ n (x)(a) be determined by d(x, a), d(A n , x) and δ n , where δ n is the minimum of 2 −n and all distances d(a, b) for a ≠ b in A n (one concrete choice of such weights is sketched, for illustration, after the proof of Lemma 4.7 below). The required properties are easy to verify. Definition 4.6. Let X ∈ Q. We say that X is semiconvex if for every finite set A = {a 1 , . . . , a n } and map ν : A → X, there is a continuous h A,ν : P D(A) → X such that the following holds: Whenever − A n is finite for each n ∈ N, − ν n : A n → X for each n ∈ N, − m n ∈ P D(A n ) for each n ∈ N, − x ∈ X is such that x = lim n→∞ ν n [A n ] mod m n , then x = lim n→∞ h A n ,ν n (m n ). Lemma 4.7. The Urysohn space U is semiconvex.
Proof. Let A = {a 1 , . . . , a n } be finite and let ν : A → U . Let v i = ν(a i ) and let V = {v 1 , . . . , v n }. We may let φ embed V isometrically into R n with the max-norm and we may let ψ embed R n with the max-norm isometrically into U such that ψ(φ(v i )) = v i for all i ≤ n. Then let h A,ν (m) = ψ( ∑ i≤n m(a i )φ(v i )), where the algebra takes place in R n . It is easy to see that this works.
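The constructions above can be sketched concretely. The weighting used below for µ n (x) is one natural choice with the required properties (supported only within d(A n , x) + δ n of x), not necessarily the exact formula of the proof of Lemma 4.5; the Euclidean metric, the grid of points and all names are illustrative assumptions, and h is the Banach-space convex combination of Remark 4.8 rather than the Urysohn-space version.

```python
import math

def d(x, y):
    """Euclidean metric on R^k (stand-in for a separable metric space)."""
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def mu_n(x, A_n, n):
    """One choice of selection weights mu_n(x) on A_n = {a_0, ..., a_n}.

    Positive weight is only given to points within d(A_n, x) + delta_n of x,
    so any sequence of positively weighted points converges to x as n grows.
    Illustrative choice, not necessarily the one used in the paper's proof.
    """
    dist_to_An = min(d(x, a) for a in A_n)
    pairwise = [d(a, b) for a in A_n for b in A_n if a != b]
    delta_n = min([2.0 ** -n] + pairwise)
    w = [max(0.0, (dist_to_An + delta_n) - d(x, a)) for a in A_n]
    total = sum(w)                      # total >= delta_n > 0
    return [wi / total for wi in w]

def h(A, nu, m):
    """Semiconvex combination as in Remark 4.8 (Banach-space case):
    the m-weighted average of the points nu(a), a in A."""
    pts = [nu(a) for a in A]
    k = len(pts[0])
    return tuple(sum(m[i] * pts[i][j] for i in range(len(A))) for j in range(k))

# Hypothetical dense grid in the plane; the selection concentrates near x.
dense = [(i / 3.0, j / 3.0) for i in range(-6, 7) for j in range(-6, 7)]
x = (0.4, -0.25)
n = len(dense) - 1
A_n = dense[: n + 1]
m = mu_n(x, A_n, n)
print(h(A_n, lambda a: a, m))   # a point within the resolution of A_n of x
```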
Remark 4.8. Clearly, every Banach space X is semiconvex. If A = {a 1 , . . . , a n } and ν : A → X we let h A,ν (m) = ∑ i≤n m(a i )ν(a i ). Theorem 4.9. Let X and Y be Q-spaces that satisfy density with probabilistic selection, and assume that Y is semiconvex. Then X → Y satisfies density with probabilistic selection.
Proof. Let {A n } n∈N be a sequence of finite sets with maps ν n : A n → X and continuous functions µ n : X → P D(A n ) forming a probabilistic selection.
Let C n with θ n : C n → Y and λ n : Y → P D(C n ) for each n ∈ N witness that Y satisfies density with probabilistic selection. Let h n : P D(C n ) → Y be derived from the map C → h C witnessing that Y is semiconvex.
Let B n = A n → C n and let φ ∈ B n . First we will see how to construct a continuous ν * n (φ) : X → Y : Let x ∈ X. For each c ∈ C n let µ −1 n,x,φ (c) be defined as µ −1 n,x,φ (c) = µ n (x)(φ −1 ({c})) and let ν * n (φ)(x) = h n (µ −1 n,x,φ ). We will see how the sets B n together with the maps ν * n from B n to X → Y can be organized to a probabilistic selection.
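The distribution µ −1 n,x,φ is simply the pushforward of µ n (x) along φ. A minimal sketch, with A n and C n as plain finite sets and the distributions as dictionaries (all of the concrete values below are hypothetical):

```python
def pushforward(mu, phi):
    """Pushforward of a finite distribution mu (dict a -> weight) along phi.

    Returns the distribution c -> mu(phi^{-1}({c})), i.e. the measure
    written mu^{-1}_{n,x,phi} in the proof of Theorem 4.9.
    """
    out = {}
    for a, w in mu.items():
        c = phi[a]
        out[c] = out.get(c, 0.0) + w
    return out

# Illustration with hypothetical finite data:
mu_n_x = {"a0": 0.5, "a1": 0.3, "a2": 0.2}      # mu_n(x) on A_n
phi = {"a0": "c1", "a1": "c1", "a2": "c0"}      # phi in B_n = A_n -> C_n
print(pushforward(mu_n_x, phi))                 # {'c1': 0.8, 'c0': 0.2}
# nu*_n(phi)(x) would then be h_n applied to this pushforward distribution.
```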
Let f : X → Y be continuous. We will define the probability distribution η n (f ) on B n as a product measure and prove the required properties. Let η n (f ) will be a probability distribution since it is the finite full product of probability distributions. We have to show Claim: Let f = lim n→∞ f n in X → Y and assume that η n (f n )(φ n ) > 0 for each n. Then f = lim n→∞ ν * n (φ n ). Proof of Claim: Since we are operating in the category of sequential topological spaces, this amounts to showing that if x = lim n→∞ x n in X, then f (x) = lim n→∞ ν * n (φ n )(x n ) in Y . This will follow from the construction of the ν * n 's, the properties of the h n 's and the following Subclaim: f (x) = lim n→∞ θ n [C n ] mod µ −1 n,xn,φn . Proof of Subclaim: Let µ −1 n,xn,φn (c n ) > 0 for each n. Then there is an a n ∈ A n with φ n (a n ) = c n and µ n (x n )(a n ) > 0.
x = lim n→∞ ν n (a n ) since we have probabilistic selection on X, so f (x) = lim n→∞ f n (ν n (a n )).
Since η n (f n )(φ n ) > 0 we must have that λ n (f n (a n ))(φ n (a n )) > 0 so f (x) = lim n→∞ θ(φ n (a n )), or, in other words This ends the proof of the subclaim, the claim and the theorem.
The proof of Theorem 4.9 is effective in the sense that we have given explicit constructions of all items involved. In particular this means that if we start with effective domain representations where the extra parameters (ν, µ etc.) are effective, then X → Y will be represented over an effective domain, with effective density with probabilistic selection.
We have not proved that X → Y will be semiconvex under the assumptions of Theorem 4.9. In order to make use of Theorem 4.9 as an induction step, we in addition need the following observation: Observation 4.10. If X and Y are in Q and satisfy density with probabilistic selection, then so does X × Y , where X × Y is the sequentialisation of the product topology on the set X × Y (i.e. the product in QCB).
Clearly this observation extends to finite cartesian products. Using standard currying of types, Observation 4.10 and Theorem 4.9 for the induction step, we then get Theorem 4.11. Let each of X 1 , . . . , X k be either an effective Banach space or the Urysohn space U , let σ be a type and let X = σ(X 1 , . . . , X k ). Then there is an effective sequence of finite sets A n , an effective sequence of finite maps ν n : A n → X and an effective sequence of continuous maps µ n : X → P D(A n ) such that { A n , ν n , µ n } n∈N is a probabilistic selection on X.
Our starting point was the search for an effective enumeration of a dense subset of some spaces of functionals of a given type. We have obtained Corollary 4.12. Let each of X 1 , . . . , X k be either an effective Banach space or the Urysohn space U . Let σ be a type and let X = σ(X 1 , . . . , X k ).
Then there is an effective enumeration of a dense subset of X.
Proof. Recall the comment after Definition 4.3 and then use Theorem 4.11.
An embedding theorem
In this section we will prove a theorem that is strictly topological in formulation, but where the motivation for proving it comes from the wish to understand the nature of the spaces used in the semantics of functional programming. We will prove the following: Theorem 5.1. Let σ be a type in the variables V 1 , . . . , V k and let X 1 , . . . , X k be complete, separable metric spaces. Then σ(X 1 , . . . , X k ) is homeomorphic to a functionally closed set in σ(U, . . . , U ), where U is the Urysohn space.
In order to prove this theorem, we have to work with a combination of the concept of an embedding-projection pair and probabilistic selection as defined in Section 4. Definition 5.2. A probabilistic embedding-projection pair between Y and X consists of a topological embedding ε : Y → X together with, for each n ∈ N, a finite set A n , a map ν n : A n → Y and a continuous map µ n : X → P D(A n ), such that: − When x = lim n→∞ x n in X with x = ε(y) for some y ∈ Y , and a n ∈ A n for each n ∈ N is such that µ n (x n )(a n ) > 0, we have that y = lim n→∞ ν n (a n ). We will call a sequence { A n , ν n , µ n } n∈N like this a probabilistic projection.
In a probabilistic embedding-projection pair as above, we clearly have that ε is injective. Lemma 5.3. Let X and Y be complete separable metric spaces, and let Y be isometric to a subspace of X via ε : Y → X. Then ε is the embedding-part of a probabilistic embeddingprojection pair between Y and X.
Proof. We use the construction from the proof of Lemma 4.5, replacing the enumeration of a dense subset of X with an enumeration of a dense subset of Y , and relating x ∈ X to the ε-range of finite parts of the dense subset of Y . There are no new technical aspects of the proof. Note that since Y is complete, the image of ε is closed in X, and thus functionally closed.
The key lemma in proving Theorem 5.1 is Lemma 5.4. Let X ∈ Q, Y homeomorphic to a functionally closed set in X via an embedding ε : Y → X. Let A ⊆ U be a closed subset of the Urysohn space U .
If ε is the embedding-part of a probabilistic embedding-projection pair between Y and X, then Y → A is homeomorphic to a functionally closed set Z in X → U admitting a probabilistic embedding-projection pair between Y → A and X → U .
Remark 5.5. We restrict ourselves to Q everywhere, also in cases where the proof works for qcb-spaces in general, or even in a greater generality.
Theorem 5.1 is proved by induction on the type, using Lemma 5.3 in the base case and Lemma 5.4 in the induction step. For the induction step, we will also need Lemma 5.6 handling cartesian products.
Proof of Lemma 5.4. For each n let A n ⊆ Y be finite, ν n : A n → Y and µ n : X → P D(A n ) be continuous such that the sequences form a probabilistic projection.
Let f : X → [0, 1] be continuous such that ε[Y ] = {x ∈ X | f (x) = 0}. First we will show how to embed Y → A into X → U . We will use that U is homeomorphic to l 2 , see Uspenskij [21], and the linear operations below are carried out via this homeomorphism.
Let g : Y → A be continuous and let x ∈ X. Let where n ∈ N and λ ∈ [0, 1) are unique such that f (x) = 1 n+λ , otherwise We have to show that ε * (g) ∈ X → U is continuous and that is continuous.
Since we are working with sequential spaces, this amounts to showing Claim 1: If g = lim n→∞ g n in Y → A and x = lim n→∞ x n in X then ε * (g)(x) = lim n→∞ ε * (g n )(x n ). Proof of Claim 1: There will be two cases. Case 1: x ∉ ε[Y ]. Then f (x) ≠ 0 and f (x n ) ≠ 0 for almost all n. Then, locally around x, everything is continuous.
: Then ε * (g)(x) = g(ε −1 (x)). We may, without serious loss of generality, assume that for every n ∈ N we have that x n ∈ ε[Y ] (since g is continuous on Y and g = lim n→∞ g n as functions defined on Y in the limit space sense). Then where m n ∈ N and λ n ∈ [0, 1) are such that f (x n ) = 1 mn+λn . Now, if we for each n select a n such that a n ∈ A mn and µ mn (x n )(a n ) > 0 or such that a n ∈ A mn+1 and µ mn+1 (x n )(a n ) > 0, we may use that x = lim n→∞ x n and the properties of probabilistic projections to see that ε −1 (x) = lim n→∞ ν mn/mn+1 (a n ), where we choose the index m n or m n + 1 that is relevant for a n .
Since ε * (g n )(x n ) is a weighted sum of values g n (ν mn (a)) for a ∈ A mn and g n (ν mn+1 (a)) for a ∈ A mn+1 , where the sum of the coefficients is 1 and the coefficients are given by the probabilities derived from x n , it follows from the consideration above that ε * (g)(x) = g(ε −1 (x)) = lim n→∞ ε * (g n )(x n ).
This ends the proof of Claim 1.
Claim 2: There is a continuous h : (X → U ) → [0, 1] such that h(γ) = 0 if and only if γ is in the range of ε * . Proof of Claim 2: Let {y n } n∈N be a dense subset of Y and {x m } m∈N a dense subset of X. Given γ : X → U we will let h(γ) measure to what extent γ does not map ε[Y ] into A and to what extent γ will differ from ε * ((ε * ) −1 (γ)).
Note that the definition of (ε * ) −1 (γ) makes sense since we never use that a function takes values in A in the definition or in the proof of Claim 1.
We simply let This ends the proof of Claim 2.
It remains to produce the probabilistic projection. Let P be a countable pseudobase for Y , see Section 2. Let {ξ n | n ∈ N} be a countable dense subset of U . For r > 0, r ∈ Q, we let B n,r = {a ∈ U | d A (a, ξ n ) ≤ r}. Let { p i , B i } i∈N be an enumeration of all pairs p, B where p ∈ P and B is a nonempty finite intersection of closed neighborhoods of the form B n,r .
We say that p i , B i approximates γ ∈ X → U if γ(y) ∈ B i whenever y ∈ p i , cf. the construction of pseudobase elements for function spaces. Let If K is relevant, let g K satisfy * .
If K is not relevant, let m be maximal such that K ∩ {1, . . . , m} is relevant, and let Now, we assume that the enumeration {y j } j∈N of the dense subset of Y used in the proof of Claim 2 is chosen such that for all p ∈ P, {y j | y j ∈ p} is a dense subset of p. Then, whenever p ∈ P, B ⊆ A is a closed set and g : Y → A is continuous we have that Now, let C n be the powerset of {1, . . . , n}. We will construct a sequence of continuous functions µ * n : (X → U ) → P D(C n ). Let k = k n be so large that for all i ≤ n there is a j ≤ k such that y j ∈ p i . − Let µ * n,i (γ)(∈) = 1 if γ(ε(y j )) ∈ B i for all j ≤ k with y j ∈ p i . − Let µ * n,i (γ)(∈) = 0 if d U (B i , γ(ε(y j ))) ≥ 2 −n for at least one j ≤ k with y j ∈ p i − Let µ * n,i (γ)(∈) = 1 − λ if 2 −n · λ = max{d U (B i , γ(ε(y j ))) | j ≤ k ∧ y j ∈ p i } otherwise. − Let µ * n,i (γ)( / ∈) = 1 − µ * n,i (γ)(∈). µ * n,i (γ) is a probability distribution on the two-point set {∈, / ∈} where the probability of ∈ is measuring how probable it is, given n, that p i , B i approximates γ. Let This gives us the n'th estimate of how likely it is that K is the set of indices of the approximations to γ.
Claim 3: Assume that g : Y → A, γ = ε * (g) and that γ = lim n→∞ γ n . Assume further that for each n ∈ N, K n ∈ C n is such that µ * n (γ n )(K n ) > 0. Then g = lim n→∞ g Kn . Proof of Claim 3: Using the lim-space characterization it is sufficient to show that whenever z = lim n→∞ z n ∈ Y , then g(z) = lim n→∞ g Kn (z n ) in U . We will use Lemma 2.6.
Let (D, D R , δ) be the admissible domain representation of Y , where D consists of ideals of pseudobase elements in P, and let (E, E R , δ 1 ) be the corresponding domain representation of Y → U , see the proof of Lemma 2.7 for the construction and the notation used below.
Let α = lim n→∞ α n be a convergent sequence from E R representing g = lim n→∞ (ε * ) −1 (γ n ) and let ζ = lim n→∞ ζ n be a convergent sequence from D R representing z = lim n→∞ z n , see Remark 2.5.
Let ǫ > 0. Since α represents g and ζ represents z, there is an m ∈ N such that P { pm,Bm } ∈ α, p m ∈ ζ and such that the diameter of B m is less than ǫ. We will show that for sufficiently large n we have that g Kn (z n ) ∈ B m . This will show the claim.
Let n 0 be such that for all n ≥ n 0 we have that P { pm,Bm } ∈ α n and that p m ∈ ζ n . Recall how we used k n in the construction of µ * n (g). Let n 1 be so large that for any i ≤ m, if g[p i ] ⊆ B i , then there is a j ≤ k n 1 such that y j ∈ p i and g(y j ) ∈ B i . Select one such j i for each relevant i ≤ m, and then choose n 2 so large that for each n ≥ n 2 and each relevant i ≤ m we have that This is possible since γ(ε(y j i )) = lim n→∞ γ n (ε(y j i )).
Let n ≥ max{n 0 , n 1 , n 2 } and let K ⊆ {1, . . . , n} be such that µ * n (γ n )(K) > 0. For i < m we have ensured that if γ n [ε[p i ]] ⊆ B i , then µ * n,i (γ n )(∈) = 0 and since P { pm,Bm } ∈ α n we also have that µ * n,m (γ n )(∈) = 1. It follows that g witnesses that K ∩ {1, . . . , m} is relevant and contains m. This holds in particular for K = K n , so g Kn (z n ) ∈ B m . This ends the proof of Claim 3. Now the proof of Lemma 5.4 is complete, but let us summarize what we have achieved. − We have defined the embedding ε * : (Y → A) → (X → U ) and proved that it is continuous and has a continuous inverse on its range. − We have proved that the range of ε * is a functionally closed set. − We have defined the finite set C n and the map For each γ ∈ X → U , we have defined the probability distribution µ * n (γ) on C n and proved that altogether, ε * and { C n , ν * n , µ * n } n∈N form a probabilistic embeddingprojection pair between Y → A and X → U .
We have not included cartesian products as one type constructor, but in order to handle types of the form σ = τ → δ in the reflection of Lemma 5.4 it will make life simpler if we view any type σ as a type σ = τ 1 , . . . , τ m → V i where V i is interpreted as some separable metric space. This means that we need an extra induction step in the proof of Theorem 5.1, the case of products.
If X 1 , . . . , X m are spaces in Q or in qcb in general, the product ∏ m i=1 X i is not just the standard topological product, but carries the finest topology accepting the induced convergent sequences in the product topology as convergent. We then have Lemma 5.6. Let Y 1 , . . . , Y m and X 1 , . . . , X m be two sequences of spaces in Q, and assume that there are probabilistic embedding-projection pairs between Y i and X i for each i ≤ m. Then there is a probabilistic embedding-projection pair between ∏ m i=1 Y i and ∏ m i=1 X i . Proof. This is more an observation than a lemma: − If ε i is the embedding for each i ≤ m, we let ε = ∏ m i=1 ε i .
− If f i witnesses that the range of ε i is a functionally closed set for each i ≤ m, we combine the f i coordinatewise to obtain a map witnessing that the range of ε is a functionally closed set. − If A i n and ν i n : A i n → Y i are the finite "approximations" to Y i used for the probabilistic projections, we let A n and ν n be obtained by just taking products. − The probability distributions of the product are just the products of the probability distributions of each coordinate. It is easy to verify that all properties are preserved in this construction. Now we have all the ingredients needed to prove Theorem 5.1: If X 1 , . . . , X k are complete, separable metric spaces, and σ is a type expression in the variables V 1 , . . . , V k , we prove by induction on σ that there is a probabilistic embedding-projection pair between σ(X 1 , . . . , X k ) and σ(U, . . . , U ), where the image of the embedding is a functionally closed set.
The induction start σ = V i is covered by Lemma 5.3. For the induction step, we let σ = τ 1 , . . . , τ m → V j . We then use Lemma 5.6 and the induction hypothesis to show that there is a probabilistic embedding-projection pair between ∏ m i=1 τ i (X 1 , . . . , X k ) and ∏ m i=1 τ i (U, . . . , U ).
We then use Lemma 5.4 to complete the induction step.
Remark 5.7. This proof is noneffective. We have used that U is homeomorphic to l 2 , and we do not know of any effective proof of that. There are likely to be methods that get us around this problem, using effective semiconvexity like we did in Section 4. However, the concept of a relevant set of natural numbers, and the choice of the functions g K in the proof of Lemma 5.4, are not effective in a general situation, even when the metric spaces X 1 , . . . , X k are effective. Thus we may as well use the topological characterization of U as homeomorphic to l 2 in this proof.
Remark 5.8. If we let ε V i be the isometric map from X i to U used in this proof, we in reality construct, in the proof of Theorem 5.1, an embedding ε σ : σ(X 1 , . . . , X k ) → σ(U, . . . , U ) by recursion on σ. Actually, we construct an embedding of the typed hierarchy over X 1 , . . . , X k to the corresponding hierarchy over U, . . . , U in the sense that our local embeddings commute with application in the two hierarchies. We did not stress this in the proof, and leave it as an observation.
Conclusions and further research
We have shown that the typed hierarchy of hereditarily continuous and total functionals over the Urysohn space U is rich enough to contain all typed hierarchies over separable metric spaces as topological sub-hierarchies. One problem is if this can be generalized to a situation where we do not consider only the full space of continuous functions at types σ = τ → δ, but also cases where we select a functionally closed subset of the set of all continuous functions. If we work within the category of qcb-spaces with a pseudobase of functionally closed sets, see Schröder [14], we may apply his result stating that functionally closed in functionally closed is functionally closed, and our embedding theorem should also be valid in this generalized context. We consider this as a conjecture since we have not worked out a detailed proof.
All our spaces σ( X) are homeomorphic to functionally closed subsets of spaces of the form X → U where X ∈ Q, but we have not studied this class, denoted by zero(Q → U ), more closely. (Recall that these sets are also called zero-sets.) P. K. Køber [8] has obtained some partial results related to strictly positive inductive definitions of topological spaces, and one consequence of his results is that there is a least fixed point of a strictly positive inductive definition with parameters from zero(Q → U ) in zero(Q → U ) itself.
We know that U is homeomorphic to l 2 , but we do not know if the l 2 -structure on U is effective in the sense that there are computable operations representing the l 2 -structure, or any other Banach space structure on U . It may be of interest to equip U with some structure offering an internal computability theory, e.g. by identifying subsets representing N, Z and R. | 2009-09-24T03:02:03.000Z | 2009-09-07T00:00:00.000 | {
"year": 2009,
"sha1": "87ba8f35ad05edded8ffc13cdf8f9a5712acdc1e",
"oa_license": "CCBYND",
"oa_url": "https://lmcs.episciences.org/954/pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "cedace3e73ade2a3acc2393a277a7817b2adeb5d",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
13833127 | pes2o/s2orc | v3-fos-license | Variation of Oscillation Mode Parameters over the Solar Cycle 23: An Analysis on Different Time Scales
We investigate the variation in the mode parameters obtained from time series of length 9, 36, 72 and 108 days to understand the changes occurring on different time-scales. The regression analysis between frequency shifts and activity proxies indicates that the correlation and slopes are correlated and both increase in going from time series of 9 to 108 days. We also observe that the energy of the mode is anti-correlated with solar activity while the rate at which the energy is supplied remains constant over the solar cycle.
Introduction
Observations show that the mode frequencies change with the solar cycle and the correlation between the frequencies and solar activity measured by different proxies is solar cycle phase dependent (see Jain, Tripathy, & Hill 2009, and references therein). There is also an indication that the correlation between frequency and activity depends on the length of the observation due to the finite lifetime of the modes (Tripathy et al. 2007). Therefore, it is important to understand the relation between mode frequencies and other mode parameters with solar activity on different time scales. With this objective, we have analyzed the Global Oscillation Network Group (GONG) time series data by splitting it into segments of length 9, 36, 72 and 108 days.
Data
The GONG data analyzed here cover the period between 7 May 1995 and 31 August 2007 and consist of 125 sets of time series of length 36 days. These are processed with a multi-taper spectral method to produce power spectra (Komm et al. 1999;Tripathy et al. 2007). The mode frequencies and other mode parameters are estimated by fitting the individual peaks (Anderson et al. 1990); not all modes are fitted successfully at every epoch due to the stochastic excitation nature of the modes. In this paper, we concentrate on the behavior of individual modes rather than the average quantities. Although each data set has a large number of multiplets, only 19 multiplets are found to be common to all of the data sets and have an inner turning point half-way between the tachocline and solar surface. These multiplets cover a frequency (ν) range of 2138 -3362 µHz and degree (ℓ) range between 50 and 81.
The mode parameters are correlated with four well-known surface activity indicators: the integrated radio flux at 10.7 cm (F 10 ) obtained from Solar Geophysical Data (SGD), the Magnetic Plage Strength Index (MPSI) from Mt. Wilson magnetograms (Ulrich 1991), the International Sunspot Number (R I ) obtained from SGD, and the Mg II core-to-wing ratio (Viereck et al. 2001).
Results
Figure 1(a) shows the temporal evolution of frequency shifts corresponding to the n = 8, ℓ = 72 multiplet with respect to the average frequency of the mode; different symbols correspond to time series of different lengths. A distinct temporal variation depicting the solar activity cycle is easily observed in each data set. Although the modes are well resolved in the 9-day time samples, a large scatter is seen for the frequencies, probably due to the lower frequency resolution and broader line widths. Figure 1(b) demonstrates the linear relationship between the frequency shifts and the 10.7 cm radio flux measurements.
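The regression underlying Figure 1(b) amounts to an ordinary least-squares fit of the frequency shifts against the activity proxy. A minimal sketch with synthetic (not observed) numbers:

```python
import numpy as np

# Hypothetical values; the real analysis uses the GONG frequency shifts and
# the observed 10.7 cm radio flux for each time segment.
f10 = np.array([70.0, 95.0, 150.0, 210.0, 180.0, 120.0, 80.0])   # sfu
dnu = np.array([0.02, 0.08, 0.21, 0.35, 0.30, 0.15, 0.05])       # microHz

slope, intercept = np.polyfit(f10, dnu, 1)      # linear fit dnu = a*f10 + b
r = np.corrcoef(f10, dnu)[0, 1]                 # Pearson correlation
print(f"slope = {slope:.4e} microHz/sfu, r = {r:.3f}")
```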
As an example of the relation between the frequency shifts and activity indices, the left panel of Figure 2 shows the correlation coefficients for four different multiplets as a function of the length of the time series. For all of the activity indices considered here, the correlation improves with the length but the nature of the increase is significantly different for different multiplets. For example the n = 8, ℓ = 81 multiplet shows a steep increase in correlation from 9 to 108 days while for the n = 8, ℓ = 63 multiplet, the correlation flattens after 36 days. The variations of the slope as a function of the correlation coefficients for different data sets are shown in the right panel of Figure 2. Each point in the figure represents a common multiplet. We find that the correlation and slopes are correlated and both increase from 9 days to 108 days. However, our earlier studies (Tripathy et al. 2007) involving average frequency shifts had shown that the slopes obtained from 9 days are higher than those from 108 days. Thus, the behavior of individual modes appear to be different from their average quantities. We correct the mode amplitudes and line widths for gaps in the temporal window function and then combine them to estimate the mode energy (∝ A × Γ) and energy supply rate (∝ A × Γ 2 ) assuming stochastic excitation (Goldreich et al. 1994). The temporal evolution of these parameters for the n = 8, ℓ = 72 multiplet is shown in Figure 3. As discussed in Komm et al. (2002), the plots demonstrate an annual modulation. We further observe that the energy of the mode (Fig. 3a) is anti-correlated with solar activity with correlation coefficients of −0.62, −0.83, −0.86, and −0.90 with F 10 for 9 to 108 days respectively. This again demonstrates that the correlation increases with the length of the time series. On the other hand, the energy supply rate (Fig. 3b) shows only short-term variations and no apparent correlation with the solar activity cycle. These general behaviors are consistent with those reported by Chaplin et al. (2000) for the low-degree modes. Further, we also note differences between energy and energy supply rate obtained from time series of different lengths; the values are higher for shorter time series. Similar to the mode frequency, it is also possible that different modes may behave differently and as a result the average behavior may be different than for individual multiplets.
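The energy proxies used here follow directly from the fitted amplitudes and widths; the sketch below uses hypothetical arrays and omits the proportionality constants, so only the functional forms A × Γ and A × Γ² are taken from the text:

```python
import numpy as np

# Hypothetical window-corrected mode amplitudes A and line widths Gamma
# for successive time segments, plus the matching radio-flux values.
A     = np.array([1.00, 0.95, 0.85, 0.78, 0.80, 0.90])
Gamma = np.array([1.10, 1.15, 1.25, 1.35, 1.30, 1.20])
f10   = np.array([70.0, 95.0, 150.0, 210.0, 180.0, 120.0])

energy      = A * Gamma        # proportional to mode energy
supply_rate = A * Gamma**2     # proportional to energy supply rate

print("corr(energy, F10)      =", np.corrcoef(energy, f10)[0, 1])
print("corr(supply rate, F10) =", np.corrcoef(supply_rate, f10)[0, 1])
```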
Conclusion
Analyzing mode parameters of individual multiplets for data sets of different lengths, we find that the slope and linear correlation coefficients between mode frequencies and activity indicators increase with the length of the time series. In all cases, the slopes are found to be correlated with the correlation coefficients. We also observe that the mode energy decreases with increasing solar activity while the energy supply rate is approximately constant over the entire solar cycle; the values of these parameters are progressively higher for shorter time series. The study also indicated that the behavior of individual modes may be different from their average behavior. Additional work is in progress to confirm these findings. | 2009-03-11T23:16:19.000Z | 2009-03-11T00:00:00.000 | {
"year": 2009,
"sha1": "a39113514de6dfa5703c529e65614e5ac49d02fe",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "870d03da2ac18af200dd41e8e46e7a1e4c62dd49",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
239170379 | pes2o/s2orc | v3-fos-license | Risk Prediction of Hepatocellular Carcinoma in Patients With Alcoholic Cirrhosis in an Area With Intermediate Prevalence for Hepatitis B Virus Infection: a Competing Risk Analysis
Background & Aims: The role of hepatocellular carcinoma (HCC) surveillance is being questioned in alcoholic cirrhosis because of the relative low incidence of HCC. Comorbid viral hepatitis may modify the risk, and competing outcomes may influence the actual incidence of HCC in alcoholic cirrhosis. This study aimed to assess the risk and predictors of HCC in patients with alcoholic cirrhosis by using competing risk analysis in an area with intermediate prevalence for hepatitis B virus.Methods: A total of 965 patients with alcoholic cirrhosis were recruited at a university-affiliated hospital in Korea and randomly assigned to either the derivation (n=643) and validation (n=322) cohort. Subdistribution hazards model of Fine and Gray was used with deaths and liver transplantation treated as competing risks. Death records were confirmed from Korean government databases. A nomogram was developed to calculate the Alcohol-associated Liver Cancer Estimation (ALICE) score.Results: Markers for viral hepatitis were positive in 21.0 % and 25.8 % of patients in derivation and validation cohort, respectively. The cumulative incidence of HCC was 13.5 and 14.9 % at 10 years for derivation and validation cohort, respectively. Age, positivity for viral hepatitis markers, alpha-fetoprotein level, and platelet count were identified as independent predictors of HCC and incorporated in the ALICE score, which discriminated low, intermediate, and high risk for HCC in alcoholic cirrhosis at the cut-off of 120 and 180. Conclusions: ALICE score reliably stratifies HCC risk of alcoholic cirrhosis in an area where the prevalence of viral hepatitis is substantial.
Introduction
Alcohol-related liver disease (ALD) poses a great global health burden. According to the Global Burden of Disease study 2017, 332,300 people died of ALD annually, which comprises approximately one fourth of mortalities associated with chronic liver disease [1]. Hepatocellular carcinoma (HCC), the most common form of primary liver cancer in ALD, is responsible for one-third of ALD-related mortality, and one-third of all HCC-related deaths are attributed to alcohol use globally [2]. Surveillance for HCC is recommended for high-risk groups in order to facilitate early detection and improve survival [3]. However, alcohol-related HCC is prone to insufficient surveillance and therefore delayed detection compared to viral hepatitis-associated HCC [4]. One of the reasons for under-surveillance may be related to the relatively low incidence of HCC in ALD. For example, a recent Swedish cohort study (n = 3,410) reported an HCC incidence rate of 6.2 per 1000 person-years and a 10-year cumulative incidence of only 5.0% in alcoholic cirrhosis [5], which was much lower than previously published (annual incidence of 2.6-2.9 %) [6][7][8][9]. Another recent Danish study showed a similar result (cumulative incidence of 6.0% after 10 years) [10]. These findings suggest that HCC screening for all alcoholic cirrhosis may not be cost-effective, and that further risk stratification is warranted to identify ideal candidates for surveillance in alcoholic cirrhosis.
Risk stratification strategies have been actively explored in chronic viral hepatitis [11,12]. However, risk prediction models in ALD have been less well known [13][14][15]. In building an HCC prediction model, deaths and liver transplantations should be considered as competing events because many ALD patients experience hepatic decompensations and deaths before HCC is detected. Conventional Kaplan-Meier and Cox analysis may over-estimate the actual risk of HCC in the presence of competing risks [16]. For competing-risk survival analysis, cause-specific hazards or the Fine-Gray model is recommended [17]. The aforementioned alcohol-related HCC prediction models, however, used conventional Cox regression without competing risk analysis.
Hepatitis B virus (HBV) and hepatitis C virus (HCV) may accelerate disease progression and increase the HCC risk in ALD [18]. Therefore, comorbid viral hepatitis needs to be considered in the generation of prognostic models for ALD. However, the effect of hepatitis virus infection has not been thoroughly assessed because patients with chronic viral hepatitis were usually excluded from the epidemiologic studies of alcoholic cirrhosis [9,10,15]. The prevalence of HBV is decreasing over time in Korea, but is still classified as intermediate, i.e., ≥ 2% of the population [19].
In this study, we aimed to assess the risk and predictors of HCC in Korean alcoholic cirrhosis patients by using competing risk analysis. For this aim, we linked the Korean national death registry data to hospital-based cohort data. An HCC risk prediction nomogram was developed, taking into consideration liver transplantations and deaths as competing risks.
Study population and design
In this retrospective cohort study, an e-cohort was generated by using the clinical data warehouse of Seoul National University Bundang Hospital, a university-affiliated hospital in Korea [20][21][22]. The inclusion criteria were: 1) ALD based on ICD-10 code K70 AND presence of cirrhosis (see below), 2) > 20 years of age, 3) screening liver ultrasonography (US) with or without serum alpha-fetoprotein (AFP) at baseline. Alcoholic cirrhosis was diagnosed based on histology, endoscopic confirmation of varices or radiologic demonstration of cirrhosis. Patients were excluded if the follow-up duration was < 6 months, or if outcomes (see below) or other malignancies developed before or within 6 months from the initial screening US. Child-Pugh class C patients at presentation were also excluded because HCC surveillance was generally not recommended unless they were on the transplant waiting list [3,23,24].
The primary outcome was development of HCC. Secondary outcomes were liver transplantation and death, which were assessed as competing risks. The death records were confirmed by using the Korean government database of vital statistics generated by Statistics Korea and the Ministry of the Interior and Safety.
HCC surveillance was performed by liver US with or without serum AFP at 6-12-month intervals at the discretion of the attending hepatologists. Multiphase CT or MRI were subsequently performed if the liver US exam showed nodule(s) with a diameter ≥ 10 mm, or portal vein thrombosis, or an increased AFP level. The diagnosis of hepatocellular carcinoma (HCC) was confirmed based on LI-RADS 5 criteria [25]. Liver biopsy was performed to make a definitive diagnosis if imaging studies showed atypical findings [23].
This study was approved by the institutional review board (IRB) and ethics committee of SNUBH (IRB No: B-1907-553-105). All clinical investigations have been conducted according to the principles expressed in the Declaration of Helsinki. The requirement of informed consent was waived due to the retrospective nature of this study and anonymous analysis of data.
Statistical analysis
Enrolled patients were randomly assigned to one of two cohorts in a 2:1 ratio: the derivation and validation cohorts. Competing risk regression models were used with deaths and liver transplantation being treated as competing risks to assess the absolute risk of HCC and to identify the predictors of alcohol-related HCC from the derivation cohort. For competing risk analysis, the cause-specific cumulative incidences were plotted by the non-parametric cumulative incidence function using STATA's stcurve cif, and the subdistribution hazards model of Fine and Gray was built by using STATA's stcrreg competing-risks regression [26,27]. The complete case analysis method was chosen for handling missing data. A nomogram was developed for calculating the HCC scoring system by using the R rms package. The calibration of the scoring system was evaluated by using calibration curves (R riskRegression package). The predictive power of the scoring system was compared by using area under time-dependent ROC analysis with the R timeROC package. Continuous variables were expressed as their median values and interquartile range (IQR), and compared using the Wilcoxon rank sum test. Categorical variables were expressed as percentages, and compared using the chi-square test. All statistical analyses were performed using STATA for Windows ver. 14 (STATA Corp., Texas, USA) and the R statistical package ver. 3.6.1 (The R Foundation for Statistical Computing, Vienna, Austria; http://R-project.org).
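The nonparametric cumulative incidence function under competing risks (the quantity plotted by stcurve cif) can also be computed directly. The following is a minimal self-contained sketch with made-up follow-up data, not the study dataset:

```python
import numpy as np

def cumulative_incidence(time, event, cause_of_interest=1):
    """Nonparametric cumulative incidence function under competing risks.

    time  : follow-up times
    event : 0 = censored, 1 = HCC, 2 = competing event (death / transplant)
    Returns (event_times, CIF values) for the cause of interest.
    """
    time, event = np.asarray(time, float), np.asarray(event, int)
    order = np.argsort(time)
    time, event = time[order], event[order]

    surv = 1.0          # overall (any-event) Kaplan-Meier survival S(t-)
    cif = 0.0
    out_t, out_cif = [], []
    for t in np.unique(time[event > 0]):
        at_risk = np.sum(time >= t)
        d_int = np.sum((time == t) & (event == cause_of_interest))
        d_all = np.sum((time == t) & (event > 0))
        cif += surv * d_int / at_risk
        surv *= 1.0 - d_all / at_risk
        out_t.append(t)
        out_cif.append(cif)
    return np.array(out_t), np.array(out_cif)

# Made-up example: times in years; 1 = HCC, 2 = death/LT, 0 = censored.
t  = [1.2, 2.5, 3.0, 3.0, 4.1, 5.0, 6.3, 7.0, 8.2, 10.0]
ev = [2,   1,   0,   2,   1,   0,   2,   1,   0,   0]
print(cumulative_incidence(t, ev, cause_of_interest=1))
```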
Baseline characteristics of study cohorts
We identified 4,980 patients with ALD who visited our institution and received screening US between April 1, 2004 and December 31, 2017. Among them, 965 patients with alcoholic cirrhosis were finally included in this study and randomly allocated to either the derivation (n = 643) or the validation cohort (n = 322). The baseline characteristics of the two cohorts were balanced without significant differences except for baseline AFP levels (Table 1). The markers for viral hepatitis were positive in 21.0 and 25.8% of patients in the derivation and validation cohorts, respectively (p = 0.106).
Predictors of HCC in alcoholic cirrhosis
Univariate subdistribution hazards model analysis of the derivation cohort demonstrated that older age, male sex, chronic viral hepatitis, higher AFP level, and lower platelet counts were significantly associated with an increased risk of HCC. Among them, four predictors were independently identified through multivariate analysis: age, positivity for viral hepatitis markers, AFP level, and platelet count (Table 2).
Development and validation of the alcohol-associated liver cancer estimation (ALICE) scoring system
A parsimonious HCC prediction model, the alcohol-associated liver cancer estimation (ALICE) scoring system, was developed from the result of the multivariate cumulative incidence function. A nomogram was constructed with the four predictors to calculate the ALICE score (Fig. 2). The calibration plots of the nomogram showed good agreement between the observed and predicted HCC risks (Supplementary Fig. 1). When patients were stratified by ALICE score, HCC risk was minimal with a cut-off ≤ 120, whereas patients with a cut-off of > 120 and < 180 showed a cumulative incidence exceeding 15% per 10 years, and patients with ≥ 180 had the highest risk for HCC (Fig. 3 and Table 3). Finally, we compared the predictive performance of the ALICE score with that of the model derived from the US Veterans Affairs health care system (US-VA model), which is an internally validated scoring system for predicting the risk of HCC in patients with alcoholic cirrhosis by using age, sex, BMI, diabetes, platelet count, serum albumin, and serum AST/√ALT ratio as predictors [15]. Time-dependent ROC curve analysis revealed that the ALICE score had comparable or higher AUC values than the US-VA score in the validation cohort (Fig. 4).
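Risk stratification with the score reduces to the two reported cut-offs. In the sketch below, the per-predictor point values are placeholders (in practice they are read off the published nomogram in Figure 2); only the 120 and 180 thresholds are taken from the text.

```python
def alice_score(age, viral_hepatitis, afp, platelets):
    """Placeholder scorer: the per-predictor weights below are hypothetical.

    The actual points per predictor come from the published nomogram
    (Fig. 2); only the risk cut-offs 120 and 180 are from the text.
    """
    points = 0.0
    points += age * 1.0                            # hypothetical weight
    points += 40.0 if viral_hepatitis else 0.0     # hypothetical weight
    points += min(afp, 100.0) * 0.5                # hypothetical cap/weight
    points += max(0.0, 250.0 - platelets) * 0.3    # hypothetical weight
    return points

def alice_risk_group(score):
    if score <= 120:
        return "low"
    elif score < 180:
        return "intermediate"
    return "high"

print(alice_risk_group(alice_score(age=62, viral_hepatitis=True,
                                   afp=8.0, platelets=110)))
```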
Discussion
In this study, we described HCC risk in alcoholic liver cirrhosis, and developed a risk stratification model for HCC (i.e., the ALICE score) in a hospital-based cohort. Unlike the recently developed prediction models [14,15], we employed competing-risk analysis by incorporating mortality data from causes other than HCC. Liver cirrhosis is typically a multistate disease complicated by discrete outcomes [28]. If patients with competing outcomes such as non-HCC deaths are simply treated as right-censored cases, the Kaplan-Meier method may overestimate the real cumulative risks [28,29]. Moreover, the predicted risk of HCC does not necessarily correlate with the rate predicted by Cox models of HCC prediction [29]. In our cohort, cases censored owing to non-HCC deaths were twice as many as those censored owing to HCC.
The estimated cumulative HCC risk in our cohort was ~ 1.5 % per year for overall patients (Fig. 1), and approximately 1.0 % for patients without markers of viral hepatitis. The latter figure fell in the range between the two recent European studies (0.7% [10] and 1.8% [9]) which excluded patients with chronic viral hepatitis. Comorbid viral hepatitis is of special interest in geographic areas where chronic hepatitis virus infection is prevalent. Interestingly, our cohort showed a higher prevalence of chronic viral hepatitis than the Korean general population: 2.9 % for HBV [19] and ~ 0.8 % for HCV [30]. This high prevalence may be explained by the synergistic effect of comorbid viral hepatitis on the accelerated progression of alcoholic fatty liver to alcoholic cirrhosis [31].
The role of HCC surveillance in alcoholic liver disease is still under debate. Practice guidelines recommend HCC surveillance in patients with cirrhosis due to alcohol and other etiologies on the grounds that a threshold HCC incidence of > 1.5 %/year may justify the cost-effectiveness of surveillance [3,23,32]. However, not only has the "1.5 %/year" cut-off itself been doubted [33], but the risk of HCC in alcoholic cirrhosis may also not be high enough to ensure cost-effectiveness [5,10]. Risk stratification may thus be necessary to enhance the effectiveness of HCC surveillance in alcoholic cirrhosis.
We have built our risk stratification model based on four independent predictors of HCC risk: age, chronic viral hepatitis, AFP level, and platelet count. AFP level was a significant predictor in addition to other well-established markers [14,15], and this finding is in concordance with the French cohort study [9]. These four factors are readily available in routine practice, and the nomogram-based ALICE score was able to discriminate the low, high, and super-high HCC risk groups in alcoholic cirrhosis. Patients with an ALICE score ≤ 120 carry minimal risk for HCC and may not be indicated for routine HCC surveillance, whereas those with a score ≥ 180 show the highest risk for HCC, and regular surveillance may be justified. In other words, the ALICE score may serve dual purposes: (1) to exclude ALD patients with low risk from HCC surveillance, and (2) to identify patients with very high risk for HCC in need of enhanced surveillance. Further studies will be necessary to assess whether risk-based surveillance is cost-effective in alcoholic cirrhosis.
Recently, an HCC risk stratification model was developed from US Veterans Affairs healthcare system (VAHS) data including a large number of patients with alcoholic cirrhosis (n = 16,175) [15]. The model was based on 8 parameters and the score is available through a web-based calculator. As mentioned above, competing risks were not considered in the US-VA model building. Moreover, a significant fraction of alcoholic cirrhosis patients with viral markers will not be properly assessed by the US-VA model. Time-dependent ROC analysis showed that the ALICE score had comparable or higher AUC values compared with the US-VA score (Fig. 4). Compared to the US-VA model, our score is more parsimonious, using only 4 readily available parameters. However, further validation of the clinical utility of the ALICE score by prospective studies would be warranted.
There are potential limitations in our study, mainly related to the retrospective design in a limited number of institutions. We tried to minimize selection bias by using our pre-defined EMR system [20,34] and validated the model in an internal validation cohort; however, further external validation in prospective studies is needed. Cost-effectiveness analyses of an ALICE score-guided surveillance strategy should also be conducted. Finally, the diagnosis of cirrhosis was mostly made clinically, and there was a possibility that liver cirrhosis was underdiagnosed and not included in our cohort [35,36]. Since liver biopsy is not generally required for the management of compensated alcoholic liver disease, however, we believe that our model can be applicable to real-world practice of clinically diagnosed alcoholic liver cirrhosis.
In conclusion, a novel HCC risk score, the ALICE score, which includes age, chronic viral hepatitis, AFP level, and platelet count, represents a reliable and easy-to-use method for predicting HCC development in patients with alcoholic cirrhosis in areas where the prevalence of viral hepatitis is substantial. Financial support:
This work was supported by a National Research Foundation of Korea (NRF) grant to J-W Kim, funded by the Korean Government (2017R1D1A1B03031483). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Consent to participate
Written consents were waived by the IRB due to the retrospective nature of study.
Consent for publication
All authors agree to publication if the paper is accepted.
Figure 1. Cumulative incidence functions for HCC in the derivation and validation cohorts.
Figure 2. A nomogram for the alcohol-associated liver cancer estimation (ALICE) score.
Figure 3. Cumulative incidence curves of HCC in the derivation and validation cohorts according to the Alcohol-associated Liver Cancer Estimation (ALICE) score (cumulative incidence function curves stratified by ALICE score).
Figure 4. Comparison of time-dependent receiver operating characteristic curves between the ALICE score and the US-VA score.
Supplementary Files
This is a list of supplementary files associated with this preprint. | 2021-10-20T16:14:53.281Z | 2021-09-14T00:00:00.000 | {
"year": 2021,
"sha1": "05b4b9a472f873d03b4dfd8e1c4d9d1d61d2d024",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-856622/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "2ad407d20cc5dc1d8429b95d27d14107bee38c11",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
202413109 | pes2o/s2orc | v3-fos-license | Immune and Inflammatory Cells in Thyroid Cancer Microenvironment
A hallmark of cancer is the ability of tumor cells to avoid immune destruction. Activated immune cells in tumor microenvironment (TME) secrete proinflammatory cytokines and chemokines which foster the proliferation of tumor cells. Specific antigens expressed by cancer cells are recognized by the main actors of immune response that are involved in their elimination (immunosurveillance). By the recruitment of immunosuppressive cells, decreasing the tumor immunogenicity, or through other immunosuppressive mechanisms, tumors can impair the host immune cells within the TME and escape their surveillance. Within the TME, cells of the innate (e.g., macrophages, mast cells, neutrophils) and the adaptive (e.g., lymphocytes) immune responses are interconnected with epithelial cancer cells, fibroblasts, and endothelial cells via cytokines, chemokines, and adipocytokines. The molecular pattern of cytokines and chemokines has a key role and could explain the involvement of the immune system in tumor initiation and progression. Thyroid cancer-related inflammation is an important target for diagnostic procedures and novel therapeutic strategies. Anticancer immunotherapy, especially immune checkpoint inhibitors, unleashes the immune system and activates cytotoxic lymphocytes to kill cancer cells. A better knowledge of the molecular and immunological characteristics of TME will allow novel and more effective immunotherapeutic strategies in advanced thyroid cancer.
Introduction
The characteristics that a normal cell must acquire to develop malignancy are the intrinsic ones of the tumor cell (e.g., cell-autonomous growth and resistance to apoptosis) and the susceptibility to an inflammatory tumor microenvironment (TME) [1,2]. In addition, cancer cells can evade the anticancer immune response and proceed to form tumors [1][2][3].
Myeloid Cells
Myeloid-derived suppressor cells (MDSCs) are phenotypically heterogeneous, not fully mature, myeloid cells [24]. MDSCs are rare in healthy subjects but are elevated in cancer patients, where they show a strong immunosuppressive potential [25] and are associated with a poor prognosis. Elevated preoperative levels of circulating MDSCs have been reported in TC patients in comparison to benign nodules or other noncancerous thyroidal diseases [26,27]. Moreover, a correlation has been found between the number of circulating MDSCs and aggressiveness of differentiated TC (DTC) [27]. In TCs patients, the prognosis could potentially be ameliorated with the differentiation of MDSCs into mature myeloid cells or through their depletion or functional inhibition by different treatments, such as nitric oxide inhibitors, chemotherapy, tyrosine kinase inhibitors (TKIs), and/or bisphosphonates [28]. In particular, the antitumor activity of the TKI sunitinib has been demonstrated in advanced differentiated TC [29]. Sunitinib seems to decrease MDSCs that correlates with increased tumor-specific responses produced by cancer vaccines in preclinical models [30].
The density of tumor-associated macrophages (TAMs) changes in the distinct subtypes of TCs [31]. In particular, anaplastic thyroid cancers (ATCs) have the highest density of TAMs in TME, and this correlates with poorer prognosis [31]. In papillary thyroid cancers (PTCs), the presence of TAMs is lower but a comparable correlation exists with clinical outcomes, as larger tumors, more lymph node metastases, and decreased survival [31][32][33][34]. In vitro studies reported that TAMs can promote invasiveness of human TC cell lines through CXCL8/IL-8 secretion [34], and this was confirmed by enhanced spreading of human PTC cells after treating immunodeficient mice with CXCL8/IL-8 [34]. On the contrary, a retrospective study reported a positive association between the number of tumor-infiltrating macrophages and enhanced disease-free survival in TCs patients [35]. Further studies will be necessary to clarify these contradictory results. In particular, it could be important to distinguish between inflammatory type 1 and suppressive type 2 TAMs, as the first ones are usually associated with better outcomes in other types of cancer [36,37].
DCs play a central role in presenting antigens and in regulating immune functions through the secretion of cytokines. For these reasons, they play key roles in the induction of immunity. DCs are infrequently found in the thyroid, but they are increased in human PTCs [38]. Tumor-infiltrating DCs often display an immature phenotype, expressing low levels of co-stimulatory molecules and high levels of regulatory molecules, leading to an altered antigen presentation [39,40]. Immature DCs (iDCs) poorly induce T cell and NK cell-mediated responses and they can even inhibit immune responses by producing suppressive cytokines (i.e., IL-10 and TGF-β) [41]. Tregs and DCs participate in the immunosuppressive conditioning of TME. In a model of pancreatic ductal adenocarcinoma, Tregs suppressed the function of tumor-infiltrating DCs, inhibiting the expression of co-stimulatory ligands and the activation of CD8 + T cells [42]. Tregs and DCs are elevated in human PTCs [43]; therefore, the interruption of Tregs and DCs interactions in TCs could be a good therapeutic strategy. The function of tumor-infiltrating DCs can be restored by blocking immunosuppressive pathways, such as those associated with PD-1, secretion of IL-10, and production of lactic acid [40,44].
Neutrophils
Neutrophils are involved in the acute phase of the inflammatory response and represent the first line of defense against extracellular microbes [45]. Quite recently, new roles have been reported for neutrophils in immune and inflammatory responses [46][47][48].
The prognosis of TC patients is still difficult to define, owing to the heterogeneity of the disease [64]. The "neutrophil-to-lymphocyte ratio" (NLR) was associated with tumor progression [65] since an elevated NLR correlated with larger tumor volume and a higher risk of recurrence.
Lee et al. evaluated 151 TC patients, reporting a significant decrease in NLR after treatment in those with low risk of recurrence, those with stage I disease, and those with an excellent response to therapy [66]. At follow-up, NLR significantly increased (p = 0.012) in patients with a structural incomplete response. On multivariate analysis, incomplete response to therapy was associated with an increased NLR (OR = 13.68). The authors concluded that an increase in systemic inflammation after treatment (measured by NLR) is independently associated with an incomplete response to therapy in DTC [66]. However, NLR does not discriminate malignant from benign lesions [67]. Furthermore, NLR does not correlate with the risk of occult metastasis or with patients' survival [68].
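Since the NLR is simply a ratio of absolute blood counts, its computation and its change across treatment can be illustrated with a minimal sketch (the counts, threshold logic, and function name below are hypothetical and purely illustrative, not taken from the cited studies):

```python
def neutrophil_to_lymphocyte_ratio(neutrophils_per_ul: float, lymphocytes_per_ul: float) -> float:
    """Compute the NLR from absolute neutrophil and lymphocyte counts."""
    if lymphocytes_per_ul <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils_per_ul / lymphocytes_per_ul

# Hypothetical pre- and post-treatment counts (cells/uL); values are illustrative only.
pre_nlr = neutrophil_to_lymphocyte_ratio(4200, 2100)   # 2.0
post_nlr = neutrophil_to_lymphocyte_ratio(5100, 1500)  # 3.4

# An increase in NLR after treatment is the pattern the cited study associated
# with an incomplete response; no clinical cutoff is implied here.
nlr_increased = post_nlr > pre_nlr
print(f"pre = {pre_nlr:.2f}, post = {post_nlr:.2f}, increased = {nlr_increased}")
```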
The presence of infiltrating neutrophils in human TC and the phenotypic and functional characteristics of "tumor-educated" neutrophils have been recently evaluated. Indeed, TC cells were able to recruit neutrophils through the release of CXCL8/IL-8 and to improve their survival through the release of granulocyte-macrophage colony-stimulating factor (GM-CSF). TC cells upregulated neutrophils' proinflammatory activities and the expression of factors able to promote tumor progression. Moreover, in human TC samples, neutrophil density correlated with tumor size, suggesting a potential tumor-promoting role of TANs in TC [69].
NK Cells
NK cells play a central role in cancer immunosurveillance by killing cancer cells [70,71]. However, few solid tumors respond to NK cell-mediated immunotherapy, owing to resistance to NK cell-induced lysis and to the reduced homing and infiltration of NK cells into tumors [72]. ATC cell lines in vitro are responsive to NK cell-mediated lysis, suggesting that TC could take advantage of immunotherapies that promote the recruitment of activated NK cells into the TME [72]. Furthermore, ATC cells secreted CXCL10/IP-10 after stimulation with interferon (IFN)-γ [73] and showed the capability to attract CXCR3 + NK cells [72]. The transfer of ex vivo-expanded NK cells into an in vivo animal model of ATC, with the appropriate cellular environment, could represent a promising therapeutic approach.
Tumor immunosuppression is an obstacle to effective immunotherapy with NK cells. Intratumoral NK cells have an inactive phenotype when compared to blood NK cells. When NK cells were cocultured with ATC cells expressing elevated levels of COX2, NKG2D (the activating receptor on NK cells that increases the lysis of tumor cells) was downregulated compared with NK cells cocultured with COX2-negative cell lines [72]. The administration of neutralizing antibodies to prostaglandin E2 (PGE 2 ) could rescue this downregulation, suggesting that this eicosanoid downregulates NK cell activity. Other studies reported NK dysfunction in tumor-bearing mice. Diminished splenocyte-mediated cytotoxicity was shown in thyroid tumor-bearing LSL-BrafV600E/TPO-Cre mice (which express mutant BrafV600E transcripts under the endogenous Braf promoter between 3 and 10 days postnatally and develop spontaneous PTC at about 5 weeks of age [74]) with respect to normal LSL-BrafWT/TPO-Cre mice [75]. NK and CD8 + T cells mediated this cytotoxicity, and treatment with exogenous IL-12 and anti-TGF-β partially restored the diminished cytotoxicity [75].
Additional studies are necessary to clarify the role of NK cell dysfunction in TC to obtain effective therapeutic strategies.
T Cells
Different types of cancers, such as metastatic melanomas [76], ovarian [77,78], colorectal [79,80], and breast cancers [81], show a good outcome in the presence of lymphocytic infiltration. In human PTC, the density of lymphocytes is correlated with improved overall survival and lower recurrences [82,83]. Another study showed that proliferating lymphocytes (identified by their expression of the nuclear antigen Ki-67) could predict enhanced disease-free survival in children and young adults [84]. Infiltration of CD8 + T cells in TCs was associated with enhanced disease-free survival [6,35]. CD8 + , CD4 + T cells, and B cells were positively correlated with reduced tumor sizes [35]. On the contrary, another study found a higher risk of relapse in the presence of elevated infiltration of CD8 + T cells [85]. IL-2 and IL-15 regulate the expression of the cytolytic proteins granzyme and perforin [86,87]. For this reason, a treatment inducing the overexpression of IL-2/IL-15 in the TC TME could activate T cells with cytotoxic activity. Although systemic delivery of IL-2 can be toxic, new approaches to deliver IL-2 into the tumor have been evaluated (i.e., IL-2 encoded by an oncolytic virus or linked to a tumor-associated ligand) [88,89].
Tumor-infiltrating T lymphocyte activity can be impaired by the cancer microenvironment. Inhibitory receptors, such as PD-1, are immune checkpoints, able to limit T cell responses, inducing anergy or apoptosis [90]. The physiological role of immune checkpoints is to downregulate the immunologic response after the initial induction of a protective response and to avoid the risk of autoimmunity. In tumors, autoreactive T cells specific for "cancerous self" are needed to eliminate cancer cells. Inhibitory receptors as PD-1 are upregulated on T cells in the same manner as the expression of elevated levels of cognate ligands (i.e., PD-L1 and PD-L2), leading to diminished production of IFN-γ and the cytotoxic potential [91]. Of note, the expression of PD-1 seems to be associated with the presence of TC-infiltrating CD8 + and CD4 + T cells [92], suggesting that immune checkpoint inhibitors (ICIs) treatment could reverse the cytotoxic T cell responses in TCs.
Tregs switch off immune responses, favoring disease progression and metastases to lymph nodes in different tumors [93]; their presence in PTCs is associated with more aggressive disease [94]. A paper evaluated the clinicopathologic significance and roles of Treg in PTC patients with/without Hashimoto's thyroiditis (HT) [95]. The percentage of CD4+CD25+CD127low/-Treg among CD4+ T cells was significantly more elevated in PTC patients than in multinodular goiter (MNG) patients. A higher number of tumor-infiltrating FoxP3+ Treg in primary PTC and metastatic lymph nodes tissues was present, and no FoxP3 expression in the MNG tissues was found. In peripheral blood and tumor tissues, a higher percentage of Treg was associated with extrathyroidal extension and lymph nodes metastasis. The percentage of CD4+CD25+CD127low/-Treg among CD4+ T cells in peripheral blood of PTC patients with HT was significantly lower, while the infiltration of FoxP3+ Treg in tissues of PTC with HT was increased. The authors concluded that the percentage of Treg increased in peripheral blood and in the tumor tissues of PTC patients in comparison to that of MNG patients, and it was associated with aggressiveness [95].
T helper 17 (Th17) cells and follicular helper T (Tfh) cells are regulatory subsets of CD4 + T lymphocytes [96], but their roles in TCs have not been studied exhaustively. Few data are present about the prognostic value of CD4 + T cells in TCs, even if it is possible that the evaluation of the CD8 + cytotoxic-regulatory T cell ratio in TCs might be important, as it has been shown in other tumors [21].
A higher density of T lymphocytes that did not express CD4 or CD8 has been reported in TC compared to patients with autoimmune thyroid diseases [97]. These intratumoral double-negative T cells seem to decrease the growth and cytokine production of neighboring activated effector T cells [92]. For this reason, reducing the number of these cells in TCs could help immune-mediated therapies.
Mast Cells (MCs)
Mast cells are tissue-resident immune cells which have a widespread distribution in nearly all tissues and human cancers [98,99]. These cells are in close proximity to epithelia, fibroblasts, and blood and lymphatic vessels, and are involved in wound healing, angiogenesis, lymphangiogenesis and tumor growth [100][101][102]. MCs are recruited into TME by several tumor-derived chemotactic factors such as stem cell factor (SCF), vascular endothelial growth factors (VEGFs), chemokines, and cytokines [103]. Moreover, tumor-associated mast cells (TAMCs) can be activated by several factors within TME such as hypoxia, adenosine, PGE2, chemokines, and immunoglobulin free light chains [103,104]. MCs play a protumorigenic role in the majority of solid and hematologic tumors, but their contribution to cancer varies according to stage of tumorigenesis [105][106][107][108] and to their microlocalization in tumors [109][110][111].
MCs in TC
Only a few papers have evaluated the relationship between MCs and TC [112]. MCs infiltration was reported in 95% of PTC samples, and its extent was associated with extrathyroidal extension of tumors, while normal thyroid tissues stained negative for tryptase, a specific MCs marker. The presence of MCs was also evaluated in poorly differentiated thyroid cancers (PDTCs) and ATCs by immunohistochemistry (IHC); MCs were present in both PDTC and ATC, and their density correlated with tumor invasiveness [113] (Table 1).
Table 1. Immune Cells, Reported Data, and References.

Tumor-associated macrophages
• ATCs have the highest density of TAMs in the tumor microenvironment, and this correlates with poorer prognosis. [31]
• In PTCs, the presence of TAMs is lower but a similar correlation exists with clinical outcomes, as more lymph node metastases, larger tumors, and decreased survival. [31-34]
• In vitro studies reported that TAMs can promote invasiveness of human TC cell lines through CXCL8/IL-8 secretion. [34]
• A retrospective study of TCs patients reported a positive association between the number of tumor-infiltrating macrophages and enhanced disease-free survival. [35]

Dendritic cells
• Immature DCs poorly induce T cell and NK cell-mediated responses and they can even inhibit immune responses producing suppressive cytokines, such as IL-10 and TGF-β. [41]
• Tregs and DCs are elevated in human PTCs. [43]

Tumor-associated neutrophils
• An independent association is found between NLR increase and an incomplete response to therapy in DTC. [66]
• In human TC samples, neutrophil density correlated with tumor size, suggesting a potential tumor-promoting role of TANs in TC. [69]

Natural killer cells
• ATC cell lines in vitro are responsive to NK cell-mediated lysis. Furthermore, the cells secreted CXCL10/IP-10 when stimulated by IFN-γ and demonstrated an ability to attract CXCR3+ NK cells. [72,73]
• Other studies reported NK dysfunction in tumor-bearing LSL-BrafV600E/TPO-Cre mice with diminished splenocyte-mediated cytotoxicity, due to NK and CD8+ T cells. The treatment with exogenous IL-12 and anti-TGF-β partially restored this diminished cytotoxicity. [75]

T cells
• In human PTC, lymphocyte density is associated with improved overall survival and lower recurrences. [82,83]
• A study showed that proliferating lymphocytes could predict improved disease-free survival in children and young adults. [84]
• Infiltration of CD8+ T cells into thyroid tumors was associated with improved disease-free survival. CD8+, CD4+ T cells, and B cells were positively correlated with reduced tumor sizes. [35]
• A study found a higher risk of relapse in the presence of elevated infiltration of CD8+ T cells. [85]
• Tregs switch off immune responses, favoring disease progression and metastases to lymph nodes in different tumors; their presence in PTCs is associated with a more aggressive disease. [93,94]
• The percentage of Treg increased in peripheral blood and in the tumor tissues of PTC patients compared to that of MNG patients, and it was associated with aggressiveness. [95]
• A higher density of double-negative T cells has been reported in TC patients. These T cells seem to reduce the proliferation and cytokine production of neighboring activated effector T cells. [94,97]

Mast cells
• MCs infiltration was reported in 95% of PTC samples whose extent correlated with extrathyroidal extension of tumors; they are also present in PDTC and ATC, and their density correlates with tumor invasiveness. [113]
• A study revealed a higher presence in the intratumoral and peritumoral areas of follicular variant of PTC in comparison to adenoma. [114]
• A protumorigenic role of MCs and their mediators in TC has been shown. [112]
• MCs, by releasing specific mediators such as CXCL8/IL-8, improve the acquisition of mesenchymal and stem-like characteristics of TC cells, therefore promoting cancer progression. [113]

A higher presence of MCs was shown in the intratumoral and peritumoral areas of follicular variant of PTC in comparison to adenoma [114]. Therefore, MCs density could help to discriminate between malignant and benign forms of follicular thyroid lesions [114].
In vitro studies in human MCs lines (HMC-1 and LAD2) and human primary MCs isolated from human lung (HLMC) reported that VEGF-A induced MCs chemotaxis, and it has been found that different TC cell lines release VEGF-A and other soluble factors activating MCs [112]. This activation was not mediated by IgE, but by mediators which are still unknown. Several mediators (i.e., IL-6, IL-1, TNF-α, histamine, and the chemokines IP-10, CXCL8/IL-8, and CXCL1/GRO-α) have been identified, analyzing MCs factors released after TC activation. TC cell proliferation, survival, and motility were stimulated by the mediators present in MCs-conditioned media. The binding of histamine to H1 and H2 receptors on PTC cells induced cell proliferation, even if with a lower effect than that induced by MCs-conditioned media. Interestingly, combining histamine with GRO-α and IP-10, an effect similar to that of MCs-conditioned media was exerted. These data were confirmed by immune-depletion experiments [112]. Importantly, the subcutaneous injection of MCs and TC cells in athymic mice expedited the growth of TC xenografts. TC cell xenografts recruited MCs injected in tumor site, and MCs injection induced an enhanced growth and vascularization of xenografts. Treating mice with sodium cromoglycate (cromolyn), a particular inhibitor of MCs degranulation, reduced these effects [112]. Collectively, these results indicate a protumorigenic role of MCs and their mediators in TC.
MCs in Epithelial-To-Mesenchymal Transition (EMT) and Stemness
The raised motility and invasiveness of tumoral cells derive from the EMT activation, which is essential in tumor progression [115]. EMT is a genetic program, occurring during embryonic development or in response to injury, through which epithelial cells transdifferentiate and obtain a mesenchymal and invasive phenotype. Comparable signaling pathways, effectors, and regulators are present in pathological and physiological EMTs. EMT frequently occurs at the invasive front of various carcinomas [116] and can be provoked by cellular signals derived from tumor cells and TME.
Human TC cell lines obtained from FTC, ATC, and PTC undergo EMT once exposed to activated MCs-conditioned media: they change morphology toward a mesenchymal phenotype, upregulate EMT markers, downregulate epithelial markers, and activate a functional EMT [113]. Among the different mediators produced by MCs, TNF-α, IL-6, and CXCL8/IL-8 efficiently induced a functional EMT. Interestingly, only immunodepletion of CXCL8/IL-8, but not of IL-6 or TNF-α, blocked MCs-conditioned media-mediated EMT induction in TC cells, and the addition of exogenous CXCL8/IL-8 restored it, suggesting that this mediator plays a key role in EMT [113].
The connection between EMT and cancer stem cells (CSCs) has been evaluated in the distinct types of tumors [117]. EMT inducers or regulators could also induce cancer cells to acquire stem cell-like features, suggesting the existence of a cross-talk between EMT and the pathways that regulate stemness [117]. To isolate cells with stem-like characteristics, their capacity to grow in low adherence conditions, giving rise to cell spheroids, can be used [118][119][120]. MC-conditioned media or recombinant CXCL8/IL-8 treatment of TC cells led to the achievement of stemness characteristics more efficiently in comparison to unstimulated cells, suggesting that MC-conditioned media and CXCL8/IL-8 enhance TC cell stemness. Blocking CXCR1 and CXCR2, the CXCL8/IL-8 receptors, with neutralizing antibodies, a strong reduction of the ability of TC cells to form spheroid was shown [113]. CXCL8/IL-8 stimulated EMT/stemness of TC cells through the Akt-SLUG pathway [113]. When human PTC samples were analyzed by IHC with antibodies anti-tryptase and anti-OCT-4 (a stem cell marker), a positive correlation between MCs density (tryptase + cells) and stemness features (OCT-4) was reported. The authors concluded that the release of certain mediators (i.e., CXCL8/IL-8) lead MCs to improve the acquisition of mesenchymal and stem-like characteristics of TC cells, therefore fostering cancer progression [113].
Conclusive Remarks
The association between chronic inflammation and TC involves several components of the innate and adaptive immune system: (a) cells of the innate immune response (macrophages, mast cells, and neutrophils); (b) cells of the adaptive (lymphocytes) immune responses. These cells interact with tumor cells via chemokines, adipocytokines, and cytokines [121] (Figure 2).
Chronic Lymphocytic Thyroiditis (CLT) and TC
CLT is the most common autoimmune disorder, reaching a prevalence of 10-20% in different populations, mainly in females over 50 [122][123][124]. Several studies demonstrated the epidemiologic association, up to 38%, of CLT and TC (in particular PTC) [38, 125,126]. In PTC, the lymphocytic infiltration seems to correlate with the severity of thyroiditis in normal tissues, indicating that immunologic mechanisms are involved in their pathogenesis [127]. In support of this hypothesis, the Warthin-like variant of PTC, constituted prevalently by papillae filled with a dense lympho-plasmacytic infiltrate and lined by oncocytic cells, is commonly associated with CLT. PTCs with associated CLT show less extensive disease at diagnosis and improved disease-free survival [82,128], whereas patients with tumor-infiltrating lymphocytes in PTC, without CLT, had a higher disease stage and an elevated incidence of invasion and lymph node metastasis compared to patients without lymphocytes [94]. Different prospective studies investigated the association between HT and thyroid malignancy, showing a higher rate of indeterminate cytology and a higher incidence of thyroid cancer in patients with HT [129,130].
Recently, it has been shown in vitro and in vivo that thyroid autoimmunity and TC (especially PTC) can be concomitant [131]. The exact mechanism underlying this association is unknown. Elevated thyroid-stimulating hormone (TSH) levels and thyroid autoimmunity were considered independent risk factors for TC; autoimmunity and inflammation, per se, are considered TC risk factors. Within the TME, inflammatory cells, of both the innate (macrophages) and the adaptive (lymphocytes) immune responses, are interconnected with endothelial cells, adipocytes, fibroblasts, and extracellular matrix through adipocytokines, cytokines, and chemokines. Under the influence of transcriptional regulators (i.e., phosphoinositide-3 kinase/protein kinase-B, mitogen-activated protein kinases, or nuclear factor-kappa B), oncogenes connected to the distinct subtypes of TC promote their effect on the TME [131].
The Role of TC Cells in Recruiting Inflammatory Cells in TC
In PTCs, RET/PTC rearrangements and activating mutations in the BRAF or RAS oncogenes activate a transcriptional program and lead to the upregulation of the IP-10 chemokine, which in turn stimulates proliferation and invasion [138]. Moreover, the presence of peroxisome proliferator-activated receptor (PPAR)-γ has been reported in thyroidal tissues, and PPAR-γ takes part in the modulation of inflammatory responses [144]. Treating normal thyroid follicular cells (TFC) with rosiglitazone (a PPAR-γ ligand), at near-therapeutical doses, inhibited IFN-γ-induced IP-10 secretion. These data indicated that PPAR-γ can be involved in the regulation of IFN-γ-stimulated chemokine expression in human thyroid autoimmunity [144].
PPAR-γ is considered a tumor suppressor gene, and the antiproliferative effects of the PPAR-γ ligands thiazolidinediones have been shown in human ATC and dedifferentiated PTC primary cell cultures [145].
In primary cultures of TFCs and PTCs, we reported that IP-10 was not released basally, but IFN-γ stimulated its secretion in a similar manner in both cell types, while TNF-α alone induced a slight but significant IP-10 secretion only in PTCs [146]. The cotreatment with IFN-γ and TNF-α had a synergistic effect on the IP-10 secretion from PTC cells, and to a lesser extent from TFC [146]. Moreover, thiazolidinediones had antiproliferative effects in PTC primary cells [146]. These data suggested dysregulation of IP-10 secretion in PTCs and the effects of thiazolidinediones on IP-10 were unrelated to the significant antiproliferative effect in PTC cells [146].
We evaluated also IP-10 levels in primary human ATC cell cultures and the effect of IFN-γ and/or TNF-α stimulation on its secretion [73,147]. Primary human ATC cells, but not primary TFC cells, spontaneously secreted IP-10. The treatment with IFN-γ induced the IP-10 secretion in a concentration-dependent manner, both in primary ATC and TFC cells, while TNF-α alone had no effect [73]. The cotreatment with IFN-γ and TNF-α induced a synergistic effect on the IP-10 secretion both in TFC and ATC cells, even if with a variable effect on the IP-10 release in different primary ATC cell preparations, while it was more reproducible in TFCs [73]. Moreover, primary ATC and TFC cells were treated with rosiglitazone in the presence of IFN-γ + TNF-α, and the effect on IP-10 secretion was inhibitory or stimulatory or nil in ATCs, and inhibitory in TFCs [73]. Rosiglitazone was able to reduce primary ATC cells proliferation. These results suggest that the pattern of modulation of IP-10 secretion by IFN-γ, TNF-α, or thiazolidinediones is extremely variable in ATC, indicating that the intracellular pathways involved in the chemokine modulation have different types of dysregulation [73].
We also investigated the pattern of secretion of CXCL9/MIG and CXCL11/ITAC in TFC and PTC primary cells in vitro [148]. MIG and ITAC were not secreted basally in either cell type; the treatment with IFN-γ induced chemokine secretion, while TNF-α alone induced it only in PTC. Cotreatment with IFN-γ and TNF-α induced a synergistic effect on chemokine release from PTC and, to a lesser extent, from TFC cells. Treatment with PPAR-γ ligands in the presence of IFN-γ and TNF-α suppressed chemokine secretion in TFCs in a concentration-dependent manner, while stimulating it in primary PTC cells. Knocking down PPAR-γ by RNA interference in PTC cells abolished the effect of PPAR-γ ligands on chemokine release. In PTC cells, PPAR-γ ligands reduced proliferation, and MIG or ITAC significantly reduced proliferation and migration. These results suggested further exploring the use of MIG or ITAC as antineoplastic agents in PTC [148].
The role of CCL2/MCP-1, the prototype Th2 chemokine, has been evaluated in primary human ATC cell cultures [149]. MCP-1 was secreted by tumor cells and exerted growth-promoting effects. Primary ATC cells released basally MCP-1 at a higher level than TFC cells. Among the 6 established ATC primary cell cultures, IFN-γ or TNF-α dose-dependently induced the MCP-1 secretion in 3/6 or 5/6, respectively, and in all TFC cells, while thiazolidinediones inhibited it in 3/6 ATC cells, and they had no effect in TFC cells. The treatment with pioglitazone, a PPAR-γ ligand, inhibited the proliferation of primary ATC cell cultures. We concluded that primary ATC cells release spontaneously MCP-1 and upon cytokine stimulation, with an extremely variable pattern of modulation, indicating a different type of dysregulation in the chemokine secretion. Additional studies are necessary to clarify whether MCP-1 could be considered as a biomarker in the follow-up of ATC patients [149]. On the whole, the above data underline the important role of TC cells in recruiting inflammatory cells into the TME.
The Inflammatory Role of Cancer-Associated Fibroblasts (CAFs) in Thyroid Cancer
CAFs surround the tumor cells and they participate in tumor initiation, tumor-stimulatory inflammation, metabolism, drug response, metastasis, and immune surveillance [150]. However, the role of CAFs in thyroid cancer is complex and somehow still contradictory.
A paper evaluated the expression of CAF-related proteins in relation to clinicopathologic factors in 339 PTCs [151]. It was shown that the expression of CAF-related proteins in stromal cells and cancer cells of PTC varied on the basis of histologic subtype, BrafV600E mutation, and subtype of stroma, and it was associated with shorter overall survival [151].
Furthermore, another paper studied the association between CAFs and cervical lymph node metastasis in PTC [152]. Among 78 PTC patients, 65 presented desmoplastic stromal reaction around the tumor. CAFs were found in 42 (64.6%) cases with desmoplastic stroma. At univariate analysis, it was shown that tumor size and CAFs were risk factors of lymph node metastasis. However, by a multivariate analysis, CAFs were the only independent risk factor of lymph node metastasis in these patients [152].
The Immune Landscape of TCs
Evaluating the specific patterns of immune cells infiltrating TCs, including not only their phenotypes but also their function, is crucial to understanding the immunological characteristics of different TCs. Tumors have been classified into 6 immune subtypes according to their transcriptomic and genomic data available in The Cancer Genome Atlas, through immunogenomics methods that allowed the TME immune components to be defined. Through the use of this technique, PTCs have been classified as "inflammatory" tumors [153]. PTCs are cancer types with a low mutational burden and low neoantigen expression, which indicates slight immunogenicity, but they have a substantial immune infiltrate accounting for the "inflammatory" immune subtype [17]. It is still unclear whether the PTC-associated inflammation depends on some intrinsic characteristics of the thyroid, such as the presence and abundance of tissue-specific antigens, or on the frequent disruption of immunological tolerance and the subsequent propensity to autoimmunity. In any case, the presence of autoimmunity or of CLT has been associated with a good prognosis in TC and in other tumors [154,155]. In contrast, the presence of immunosuppressive cell populations has also been reported in PTCs, and their density frequently correlated with a poor prognosis.
Moreover, in TC, the immune subtype has been associated with specific genetic lesions and with the differentiation score. For example, elevated scores for DCs, macrophages, and MCs correlated with low thyroid differentiation score or with BrafV600E mutation in PTCs, and the expression levels of CTLA-4 and PD-L1 were more elevated in BrafV600E+ and in dedifferentiated TCs [156]. These data confirm what was reported by earlier IHC studies showing that the BrafV600E mutation status was strictly associated with Tregs and immunosuppressive macrophage components, and with immunosuppressive markers, such as PD-L1 [157]. Furthermore, in PTCs, elevated expression levels of PD-L1 correlated with TAM and CD8 + , CD4 + , and Treg lymphocytic infiltrate [35,158]. PD-L1 positive expression in PTCs correlates with a higher risk of recurrence and reduced disease-free survival [159]. In PDTCs, the expression of PD-L1 was significantly associated with increased tumor size and multifocality. In metastatic PTCs, PD-1 + T-lymphocytes were present in lymph nodes, indicating their significant association with cancer lymph-nodal invasion and recurrent disease [160].
A recent paper presented data on PD-L1 expression in 407 primary TCs with a median follow-up of 13.7 years, analyzing the associations between PD-L1 expression and clinicopathologic factors (such as TERT promoter status, disease progression, and BRAF status). Tumoral PD-L1 was expressed in 6.1% of PTCs, 7.6% of follicular thyroid cancers (FTCs), and 22.2% of ATCs. The distribution of PD-L1 positivity varied (p < 0.001) according to cancer histology type. In PD-L1-positive ATCs, the proportion of positivity was more than 80%. PD-L1 in immune cells was positive in 28.5% of PTCs, 9.1% of FTCs, and 11.1% of ATCs. There was no significant association between PD-L1 expression and clinicopathologic variables, oncogenic mutations, or disease progression [161].
Another paper evaluated PD-L1 expression levels in medullary thyroid cancer (MTC), demonstrating almost no expression of PD-L1 in MTC and accompanying inflammatory cells [162].
The expression of IDO1 in TCs sustained the immunosuppressive context and it was associated with a raised Treg infiltrate and with more aggressive clinicopathologic characteristics, such as extrathyroidal extension or multifocality [163,164].
Conclusions
There is overwhelming evidence that chronic smoldering inflammation has a protumorigenic role in TC [23,165,166]. The association between chronic inflammation and TC involves several components of the innate and adaptive immune systems, extracellular matrix, stroma, and adipose tissue [133]. Within the TME, cells of the innate (macrophages, mast cells, and neutrophils) and the adaptive (lymphocytes) immune responses communicate with fibroblasts, adipocytes, endothelial cells, and extracellular matrix via chemokines, adipocytokines, and cytokines [121].
In TCs, oncogenes promote proliferative effects on the TME, influenced by transcriptional regulators such as NF-kB, PI3K-AKT, and MAPK.
There is increasing evidence that cancer-related inflammation could be a useful target for novel diagnostic and therapeutic strategies in TC [167]. There is now evidence that different immune cells (e.g., macrophages, mast cells, neutrophils, lymphocytes) play a protumorigenic role, whereas other types play a protective role in tumorigenesis. Single-cell analysis of peritumoral and intratumoral immune cells in different types of TC could be of paramount importance to elucidate the functions of immune cells in TC TME.
Anticancer immunotherapy, especially ICIs, promote lymphocyte activation against cancer cells and inhibit immune-suppressive signals, leading to a sustained anti-tumor response [158]. Preclinical and preliminary clinical studies have reported promising results on the efficacy of monoclonal antibodies targeting PD-1/PD-L1 network in combination with BRAF inhibitors [168,169]. The results arising from several ongoing experimental and clinical studies will contribute to elaborate novel targeted immunotherapies for advanced TCs. | 2019-09-11T13:06:33.022Z | 2019-09-01T00:00:00.000 | {
"year": 2019,
"sha1": "94bfa46bad9a343fb5d4f96463238a2c5b6c3e8c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/ijms20184413",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5f961c012bec3257404f5fe25973522a9ead9e51",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256433468 | pes2o/s2orc | v3-fos-license | Draft genome sequence of a nitrate-reducing, o-phthalate degrading bacterium, Azoarcus sp. strain PA01T
Azoarcus sp. strain PA01T belongs to the genus Azoarcus, of the family Rhodocyclaceae within the class Betaproteobacteria. It is a facultatively anaerobic, mesophilic, non-motile, Gram-stain negative, non-spore-forming, short rod-shaped bacterium that was isolated from a wastewater treatment plant in Constance, Germany. It is of interest because of its ability to degrade o-phthalate and a wide variety of aromatic compounds with nitrate as an electron acceptor. Elucidation of the o-phthalate degradation pathway may help to improve the treatment of phthalate-containing wastes in the future. Here, we describe the features of this organism, together with the draft genome sequence information and annotation. The draft genome consists of 4 contigs with 3,908,301 bp and an overall G + C content of 66.08 %. Out of 3,712 total genes predicted, 3,625 genes code for proteins and 87 genes for RNAs. The majority of the protein-encoding genes (83.51 %) were assigned a putative function while those remaining were annotated as hypothetical proteins.
Introduction
Phthalic acid consists of a benzene ring to which two carboxylic groups are attached. There are three isomers of phthalic acid (o-phthalic acid, m-phthalic acid and p-phthalic acid). Phthalic acid esters are widely used as additives in plastic resins such as polyvinyl resin, cellulosic and polyurethane polymers for the manufacture of building materials, home furnishings, transportation apparatus, clothing, and to a limited extent in food packaging materials and medical products [1,2]. Due to the widespread use of phthalates there has been great concern about their release into the environment [3,4]. In addition, phthalates and their metabolic intermediates have been found to be potentially harmful to humans due to their hepatotoxic, teratogenic and carcinogenic characteristics [5,6]. Phthalic acid is also an intermediate in the bacterial degradation of phthalic acid esters [7] as well as in degradation of certain fused-ring polycyclic aromatic compounds found in fossil fuel [8], such as phenanthrene [9], fluorene [10] and fluoranthene [11].
Azoarcus sp. strain PA01 T (=KCTC 15483) is a mesophilic, Gram-negative, nitrate-reducing bacterium that was isolated from a wastewater treatment plant in Constance, Germany, for its ability to completely degrade o-phthalate and a wide range of aromatic compounds. Strain PA01 T is also able to grow with a variety of organic substrates including short-chain fatty acids, alcohols, selected sugars and amino acids. These substrates are degraded completely to carbon dioxide coupled to nitrate reduction. The genus Azoarcus is comprised of nitrogen-fixing bacteria [12] and known for degradation of aromatic compounds. Currently, this genus consists of nine species with validly published names [13]. These species have been isolated from a wide range of environments, including anoxic wastewater sludge and grass root soil [12]. On the basis of 16S rRNA gene sequence similarity search, the closest relatives of strain PA01 T are Azoarcus buckelii DSM 14744 T (99 % gene similarity) [14,15] and Azoarcus anaerobius (98 %) [16]. A. buckelii DSM 14744 T was also isolated from a sewage treatment plant for its ability to degrade a wide range of aromatic compounds. But the biochemistry and genetics of anaerobic o-phthalate degradation had not been elucidated in detail. Here, we present a summary of the features for Azoarcus sp. strain PA01 T and its classification, together with the description of the genomic information and annotation.
Classification and features
Azoarcus sp. strain PA01 T is a member of the family Rhodocyclaceae in the phylum Proteobacteria. It was isolated from an activated sewage sludge sample collected (in 2012) from a wastewater treatment plant in Constance, Germany. Enrichment, isolation, purification and growth experiments were performed in anoxic, bicarbonatebuffered, non-reduced freshwater medium containing (g/l); NaCl, [18] and 1 ml seven-vitamin solution [19] were added. The initial pH of the medium was adjusted to 7.3 ± 0.2 with sterile 1 N NaOH or 1 N HCl. Cultivations and transfer of the strain were performed under N 2 :CO 2 (80:20) gas atmosphere. The strain was cultivated in the dark at 30°C. Enrichment cultures were started by inoculating approximately 2 ml of sludge sample in 50 ml freshwater medium (described above) containing 2 mM neutralized o-phthalic acid as sole carbon source and 10-12 mM NaNO 3 as an electron acceptor. Growth was observed after 3-4 weeks of incubation. Enrichment cultures were sub-cultured for several passages with o-phthalate as sole carbon source. Pure cultures were obtained in repeated agar (1 %) shake dilutions [20]. Single colonies obtained were retrieved by means of finely-drawn sterile Pasteur pipettes and transferred to fresh liquid medium. The strain was routinely examined for purity by light microscopy (Axiophot, Zeiss, Germany) also after growing the culture with 2 mM phthalate plus 1 % (w/v) yeast extract. For genetic and chemotaxonomic analysis, it was cultivated in the described medium containing 8 mM acetate as a carbon source.
Azoarcus sp. strain PA01 T is a mesophilic, non-motile, Gram-negative, short rod-shaped bacterium measuring 0.5-0.7 μm (wide), 1.6-1.8 μm (length) ( Fig. 1a and b) and divides by binary fission. Growth was observed from 25°C to 37°C with an optimum at 30°C and optimal pH of 7.3 ± 0. Initial identification and validation of strain PA01 T was performed by 16S rRNA gene amplification using a set of universal bacterial primers; 27 F (5′-AGA GTT TGA TCM TGG CTC AG-3′) and 1492R (5′-TAC GGY TAC CTT GTT ACG ACT T-3′) as described [21]. A phylogenetic tree was constructed from the 16S rRNA gene sequence together with the other representatives of the genus Azoarcus (Fig. 2) using the MEGA 4 software package [22]. Phylogenetic analysis indicated that strain PA01 T belongs to the genus Azoarcus and is closely related to Azoarcus buckelii (99 %) and Azoarcus anaerobius (98 %). Currently, 30 genome sequences are available for the members of the order Rhodocyclales. The closest neighbors of strain PA01 T whose genome sequence is available are Azoarcus sp. strain KH32C [23] and Azoarcus sp. strain BH72 [24] and Azoarcus toluclasticus ATCC
Genome sequencing information
Genome project history
Strain PA01 T was selected for genome sequencing on the basis of its phylogenetic position and its ability to grow on o-phthalate together with numerous aromatic compounds under nitrate-reducing conditions. Genome sequencing was performed at GATC Biotech AG, Konstanz (Germany). The high-quality genome draft sequence of Azoarcus sp. strain PA01 T is listed in the Genomes Online Database of the Joint Genome Institute under project ID Gp0109270 [25]. The Azoarcus sp. PA01 T whole genome shotgun (WGS) project has been deposited at DDBJ/EMBL/GenBank under the project accession LARU00000000. The version described in this paper has the accession number LARU01000000, and consists of sequences LARU01000001-LARU01000004. (Evidence codes for the features reported for the strain follow the Gene Ontology project [50]: IDA indicates a property directly observed by one of the authors or an expert mentioned in the acknowledgments, whereas other codes indicate properties not directly observed for the living, isolated sample but based on a generally accepted property for the species or anecdotal evidence.) The draft genome sequence was released on August 26, 2015. Annotation of the Azoarcus sp. strain PA01 T genome was performed by the DOE Joint Genome Institute using the state-of-the-art microbial genome annotation pipeline [29,30]. Table 2 presents the project information and its association with MIGS version 2.0 compliance [31].
Growth conditions and genomic DNA preparation
For the isolation of genomic DNA, cells were grown in one liter of medium with 8 mM acetate plus 10-12 mM nitrate. Cells were harvested in the late stationary phase and the cell pellet was stored frozen (−20°C) until DNA preparation. High-molecular-weight genomic DNA was prepared using the CTAB DNA extraction protocol [32] with some modifications. The chloroform:isoamyl alcohol (24:1) and phenol:chloroform:isoamyl alcohol (25:24:1) steps were repeated twice and RNase treatment was performed for 2 h. Finally, the DNA was dissolved in RNase- and DNase-free molecular grade water. Purity, quality and size of the genomic DNA preparation were analyzed by using a NanoDrop spectrophotometer (639 ng/μl, A 260/280 = 1.84, A 260/230 = 2.10) and agarose gel electrophoresis (1 % w/v) (see Fig. 1c).
Genome sequencing and assembly
The genome of Azoarcus sp. strain PA01 T was sequenced using a library size of 8-12 kb. Library construction, quantification and sequencing (Pacific Biosciences RS) were performed at GATC Biotech AG (Konstanz, Germany). The final high-quality draft assembly was based on 95,883 reads. The combined libraries provided a mean sequencing coverage of 97.42×. Final de novo assembly of the genome from the total reads was performed using the PacBio HGAP3 assembly pipeline with default filter parameters. The minimum read length and polymerase read quality were 500 bp and 0.80, respectively. The minimum seed read length was computed automatically and resulted in 5181 bp (length cutoff). The final polished assembly of the sequencing reads yielded 4 linear contigs generating a draft genome size of 3.9 Mb.
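As a rough illustration of how the reported assembly statistics relate to one another (a sketch only; the calculation and variable names are ours, and the implied read length is not a figure reported by the authors):

```python
# Mean coverage = total sequenced bases / assembled genome size.
genome_size_bp = 3_908_301      # draft genome size reported in the text
mean_coverage = 97.42           # mean coverage reported in the text
n_reads = 95_883                # reads used for the final assembly

total_bases = mean_coverage * genome_size_bp
implied_mean_read_length = total_bases / n_reads

print(f"total sequenced bases ~ {total_bases:.3e}")
print(f"implied mean read length ~ {implied_mean_read_length:,.0f} bp")
```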
Genome annotation
Annotation was carried out using the DOE-JGI annotation pipeline [30] and genes were identified using Prodigal [33]. The predicted CDSs were translated and used to search the NCBI non-redundant database, UniProt, TIGRFam, Pfam, PRIAM, KEGG, COG and InterPro databases. The tRNAScanSE tool [34] was used to find tRNA genes, whereas ribosomal RNA genes were found by searches against models of the ribosomal RNA genes built from SILVA [35]. Other non-coding RNAs such as the RNA components of the protein secretion complex and the RNase P were identified by searching the genome for the corresponding Rfam profiles using INFERNAL [36]. Additional gene prediction analysis and manual functional annotation was performed within the IMG-ER Platform [37].
Genome properties
The draft genome of Azoarcus sp. PA01 T is 3,908,301 bp long (with 4 linear contigs, see Fig. 3) with an overall GC content of 66.08 % (Table 3). Of a total 3,712 genes predicted, 3,625 were protein-coding genes, and 87 were RNA genes (15 rRNA genes and 59 tRNA genes); 525 genes without function were identified (pseudogenes). The majority of the protein-coding genes (83.51 %) were assigned a putative function while those remaining were annotated as hypothetical proteins. The properties and the statistics of the genome are summarized in Table 3, the distribution of genes into COGs functional categories is presented in Table 4. One CRISPR region was found in the genome of strain PA01 which is located in proximity to the CRISPR-associated endonucleases (Cas1 and Cas 2) proteins.
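Summary statistics such as the overall G + C content can be recomputed directly from the assembled contigs; the following minimal sketch (the FASTA file name is a placeholder, and this is not part of the original annotation workflow) illustrates the calculation:

```python
def gc_content(fasta_path: str) -> float:
    """Return the overall G+C content (%) across all sequences in a FASTA file."""
    gc = total = 0
    with open(fasta_path) as handle:
        for line in handle:
            if line.startswith(">"):
                continue  # skip record headers
            seq = line.strip().upper()
            gc += sum(seq.count(base) for base in "GC")
            total += sum(seq.count(base) for base in "ACGT")  # ignore ambiguous bases
    return 100.0 * gc / total if total else 0.0

# Hypothetical file name; the four PA01 contigs would be expected to give ~66.08 %.
print(f"G+C content: {gc_content('PA01_contigs.fasta'):.2f} %")
```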
Insight from the genome sequence
Azoarcus sp. strain PA01 T grows on a wide variety of aromatic compounds (Table 1) linked to nitrate reduction like other bacteria capable of growth via anaerobic degradation of aromatic compounds [38]. In the degradation pathway of most aromatic compounds (including o-phthalate), benzoate is a central intermediate and has also been used routinely as the model compound to study the anaerobic degradation of aromatic compounds via the benzoyl-CoA degradation pathway [39]. Annotation of the genome indicated that strain PA01 T has key enzymes for the degradation of aromatic compounds such as benzoate.
In the past decade, degradation of benzoate through the benzoyl-CoA pathway has been detailed at the molecular level in facultative anaerobes and in phototrophic strictly anaerobic bacteria, i.e. in the denitrifying bacterium Thauera aromatica and in Rhodopseudomonas palustris, respectively [40,41]. Unlike other benzoate and/or aromatic compound degrading bacteria, strain PA01 T has the genes for benzoate degradation, which involves a one-step reaction that activates benzoate to benzoyl-CoA by an ATP-dependent benzoate-CoA ligase. The genome of PA01 T contains in total two copies of benzoate-CoA ligase (EC 6.2.1.25) (locus tags PA01_01819 and PA01_03223), which are supposed to be involved in the initial activation of benzoate to benzoyl-CoA; they are located at different positions in the genome. These two genes show 68.11 % identity to each other and are also present in the genomes of other bacteria [23]. The subsequent enzyme of benzoate degradation, benzoyl-CoA reductase, is present in one copy with all its four subunits (locus tags PA01_00623, PA01_00625, PA01_00624, PA01_00626) in the genome of strain PA01. The presence of these gene clusters in the genome of Azoarcus sp. strain PA01 T provides evidence for the capacity of strain PA01 T to degrade aromatic compounds.
Most of the novel biochemistry of the anaerobic metabolism of aromatic compounds has been discovered with nitrate-reducing bacteria in the past two decades [42,43], and little is known about the biochemistry of phthalate degradation in nitrate-reducing and strictly anaerobic (fermenting and sulfate-reducing) bacteria. We are currently exploring the genome of strain PA01 T and the enzymes responsible for o-phthalate degradation by using differential proteomics and measuring enzyme activities (unpublished). Thus, the draft genome sequence of strain PA01 T provides an opportunity to study the biochemistry of o-phthalate degradation in depth.
Conclusions
Azoarcus sp. strain PA01 T harbors various genes required for the degradation of aromatic compounds (which are normally found in other aromatic compound degrading bacteria), e.g., genes for benzoate degradation. Further, the genome of Azoarcus sp. strain PA01 T will expand our view to understand the biochemistry of anaerobic degradation of various aromatic compounds, including o-phthalate, a priority pollutant. The genome sequence of strain PA01 T will provide insight into the putative genes involved in the degradation of all these compounds, mainly o-phthalate. | 2023-02-01T14:54:08.326Z | 2015-10-29T00:00:00.000 | {
"year": 2015,
"sha1": "9a84ffd229f17d930acc0335aced15e4b41b5003",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s40793-015-0079-9",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "9a84ffd229f17d930acc0335aced15e4b41b5003",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
125176929 | pes2o/s2orc | v3-fos-license | On the total irregularity strength of caterpillar with each internal vertex has degree three
Let G be a simple, connected and undirected graph with vertex set V and edge set E. A total k-labeling f : V ∪ E → {1, 2, . . . , k} is defined as a totally irregular total k-labeling if the weights of any two different vertices are distinct and the weights of any two different edges are distinct. The weight of a vertex x is defined as wt(x) = f(x) + ∑ xy∈E f(xy), while the weight of an edge xy is wt(xy) = f(x) + f(xy) + f(y). The minimum k for which G has a totally irregular total k-labeling is called the total irregularity strength of G and denoted by ts(G). This paper investigates totally irregular total k-labelings and determines the total irregularity strengths of caterpillar graphs in which each internal vertex between two stars has degree three. The results are ts(S n,3,n ) = ⌈2n/2⌉, ts(S n,3,3,n ) = ⌈(2n+1)/2⌉ and ts(S n,3,3,3,n ) = ⌈(2n+2)/2⌉ for n ≥ 4.
Introduction
Let us consider a connected, simple and undirected graph G with a vertex set (V (G)) and an edge set (E(G)). A labeling of a graph G is a mapping that carries a set of graph elements into a set of integers, called labels (see Wallis [10]). If the domain of mapping is a vertex set, or an edge set, or a union of vertex and edge sets, then the labeling is called vertex labeling, edge labeling, or total labeling, respectively. In his survey, Gallian [2] showed that there were various kinds of labelings on graphs, and one of them was an irregular total labeling.
Bača et al. [1] defined a labeling f : V (G) ∪ E(G) → {1, 2, . . . , k} of a graph G to be a vertex irregular total k-labeling if for every two different vertices x and y the vertex-weights satisfy wt f (x) ̸ = wt f (y), where the vertex-weight of a vertex x is wt f (x) = f (x) + ∑ xz∈E f (xz). The total vertex irregularity strength of G, denoted by tvs(G), is the minimum k for which G has a vertex irregular total k-labeling. They obtained the exact values of the total vertex irregularity strength for star, cycle, prisms and complete graphs. Moreover, Nurdin et al. [7] proved the exact value of the total vertex irregularity strength for any tree T with n pendant vertices and no vertex of degree two; this value is given in (1) below. Bača et al. [1] also defined a labeling g : V (G) ∪ E(G) → {1, 2, . . . , k} to be an edge irregular total k-labeling of the graph G if for every two different edges xy and x ′ y ′ of G, their edge weights are distinct. The edge-weight of xy is defined as wt g (xy) = g(x) + g(xy) + g(y) and that of x ′ y ′ as wt g (x ′ y ′ ) = g(x ′ )+g(x ′ y ′ )+g(y ′ ). The total edge irregularity strength, denoted by tes(G), is defined as the minimum k for which G has an edge irregular total k-labeling. They also obtained the exact values of the tes for path, cycle, star, wheel and friendship graphs. The tes of generalized web graphs has been determined by Indriati et al. [3]. Moreover, Ivančo and Jendrol [5] proved that any tree T satisfies the formula given in (2) below. Marzuki et al. [6] combined the ideas of vertex irregular total k-labeling and edge irregular total k-labeling, and introduced another irregular total k-labeling called the totally irregular total k-labeling.
They defined a labeling h : V (G) ∪ E(G) → {1, 2, . . . , k} to be a totally irregular total k-labeling of the graph G if for every two different vertices x and y, their vertex-weights satisfy wt h (x) ̸ = wt h (y), and also for every two different edges xy and x ′ y ′ of G, their edge-weights satisfy wt h (xy) ̸ = wt h (x ′ y ′ ). The minimum k for which G has a totally irregular total k-labeling is defined as the total irregularity strength, denoted by ts(G). For the total irregularity strength of a graph G, they observed the lower bound given in (3) below. Moreover, Marzuki et al. [6] also determined the total irregularity strength of cycles and paths. Ramdani and Salman [8] obtained the total irregularity strength of some cartesian product graphs, namely K 1,n □P 2 , P n □P 2 , (P n + P 1 )□P 2 , and C n □P 2 . In [9], Ramdani et al. determined the total irregularity strength of gear graphs G n , n ≥ 3, fungus graphs F gn , n ≥ 3 and disjoint unions of stars mS n , n, m ≥ 2.
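The displayed equations referred to as (1), (2) and (3) above and below appear to have been lost from the extracted text; based on the cited results of Nurdin et al. [7], Ivančo and Jendrol [5] and Marzuki et al. [6], they are presumably of the following form (a reconstruction, not a quotation of the original display):

```latex
% (1) Nurdin et al. [7]: for any tree T with n_1 pendant vertices and no vertex of degree two
\mathrm{tvs}(T) = \left\lceil \frac{n_1 + 1}{2} \right\rceil
% (2) Ivanco and Jendrol [5]: for any tree T with maximum degree \Delta(T)
\mathrm{tes}(T) = \max\left\{ \left\lceil \frac{\Delta(T) + 1}{2} \right\rceil,\ \left\lceil \frac{|E(T)| + 2}{3} \right\rceil \right\}
% (3) Marzuki et al. [6]: lower bound for the total irregularity strength of any graph G
\mathrm{ts}(G) \ge \max\{\mathrm{tes}(G),\ \mathrm{tvs}(G)\}
```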
In [4], the total irregularity strength of double stars S n,m and caterpillar S n,2,n has been determined. In this paper, we continue to investigate the total irregularity strength of caterpillar graphs S n,3,n , S n,3,3,n and S n,3,3,3,n for n ≥ 4.
Main Results
This section discusses the total irregularity strength of three caterpillar graphs: S n,3,n , S n,3,3,n and S n,3,3,3,n .
Caterpillar S n,3,n
A caterpillar S n,3,n is a class of graphs constructed from the double star S n,n by inserting one vertex on the bridge connecting the two centers of the two stars, so that the inserted vertex has degree three. This inserted vertex is called the internal vertex. The caterpillar contains three stars: the centers of the two end-stars have degree n, while the center of the middle star has degree three. Therefore, there is no vertex of degree two. This graph is a tree with 2n + 2 vertices, 2n + 1 edges and 2n − 1 pendant vertices. The maximal degree of the graph is ∆ = n. According to (3), the lower bound of its total irregularity strength is the maximum of its total edge irregularity strength and its total vertex irregularity strength. The total edge irregularity strength of the graph S n,3,n can be found by (2), and the total vertex irregularity strength of S n,3,n can be found by (1); the resulting lower bound on the total irregularity strength, obtained by (3), is stated in (6), as sketched below. In Theorem 2.1, the exact value of the total irregularity strength is determined; we will show that the upper bound of this parameter is equal to the lower bound.
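Assuming the reconstructed formulas (1)–(3) above, the elided lower-bound computation for S n,3,n would read roughly as follows (our arithmetic, not a quotation of the original equations (4)–(6)):

```latex
% S_{n,3,n}: |E| = 2n+1, \Delta = n, and 2n-1 pendant vertices, so
\mathrm{tes}(S_{n,3,n}) = \max\left\{\left\lceil\tfrac{n+1}{2}\right\rceil,\ \left\lceil\tfrac{2n+3}{3}\right\rceil\right\}
                        = \left\lceil\tfrac{2n+3}{3}\right\rceil \le n
\quad\text{and}\quad
\mathrm{tvs}(S_{n,3,n}) = \left\lceil\tfrac{(2n-1)+1}{2}\right\rceil = n \qquad (n \ge 4),
% hence the lower bound:
\mathrm{ts}(S_{n,3,n}) \ge \max\{\mathrm{tes}(S_{n,3,n}),\,\mathrm{tvs}(S_{n,3,n})\} = n = \left\lceil\tfrac{2n}{2}\right\rceil .
```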
To determine the exact value of ts, we construct a totally irregular total k-labeling h by assigning labels to the vertices and edges of S_{n,3,n}. Under the total labeling h, the greatest label used on any vertex or edge is k = n.
The weights of the pendant vertices v_i^j for j = 1, 3 form consecutive integers from 2 up to n and from n + 1 up to 2n − 1, respectively, while the weight of v_1^2 is 2n. The weights of the center vertices v^1, v^2 and v^3 are likewise distinct. Hence the weights of every pair of vertices are distinct. Furthermore, the edge weights of every two different edges are also distinct: the edge weights are i + 2, 2n − 1 + i, 2n − 1, 3n − 1 and 3n. Therefore, we conclude that h is a totally irregular total k-labeling, and the total irregularity strength is ts(S_{n,3,n}) = n.
Caterpillar S_{n,3,3,n}
A caterpillar S_{n,3,3,n} is the graph constructed from the double star S_{n,n} by inserting two vertices on the bridge connecting the two centers of the two stars, so that each inserted vertex has degree three. These inserted vertices are called the internal vertices. The caterpillar contains four stars: the centers of the two end stars each have degree n, while the center of each internal star has degree three. Therefore, there is no vertex of degree two. This graph is a tree with 2n + 4 vertices, 2n + 3 edges and 2n pendant vertices, and its maximum degree is Δ = n. The lower bound of its total irregularity strength is the maximum of its total edge irregularity strength and its total vertex irregularity strength (see (3)). According to (2), the total edge irregularity strength of S_{n,3,3,n} is tes(S_{n,3,3,n}) = max{⌈(n + 1)/2⌉, ⌈(2n + 5)/3⌉} = ⌈(2n + 5)/3⌉, while by (1) the total vertex irregularity strength is tvs(S_{n,3,3,n}) = ⌈(2n + 1)/2⌉ = n + 1. Moreover, the lower bound of the total irregularity strength obtained from (3) is ts(S_{n,3,3,n}) ≥ max{⌈(2n + 5)/3⌉, n + 1} = n + 1 for n ≥ 4. (9) The exact value of the total irregularity strength is given in Theorem 2.2 below; it is enough to show that the upper bound equals the lower bound in (9). Proof. S_{n,3,3,n} is a tree with 2n + 4 vertices, 2n + 3 edges and 2n pendant vertices. Let the vertex set of this graph be denoted analogously to that of S_{n,3,n}. To determine the exact value of ts, we construct a totally irregular total k-labeling f by assigning labels to the vertices and edges of S_{n,3,3,n}.
In particular, f(v^j v_1^j) = n + 1 for j = 2, 3. Under the total labeling f, the greatest label used on any vertex or edge is k = n + 1. Hence f is a total k-labeling with k = ⌈(2n + 1)/2⌉ = n + 1. The weights of the vertices and edges are as follows.
The weights of the pendant vertices v_i^j for j = 1, 4 form consecutive integers from 2 up to n and from n + 1 up to 2n − 1, respectively, while the weights of v_1^2 and v_1^3 are 2n and 2n + 1, respectively. The weights of the center vertices v^1, v^2, v^3 and v^4 are likewise distinct. Hence the weights of every pair of vertices are distinct. The edge weights of every two different edges are also distinct: the edge weights are i + 2, n + 2 + i, 2n + 2, 2n + 4, 3n + 1, 3n + 2 and 3n + 3. It can therefore be concluded that f is a totally irregular total k-labeling, and the total irregularity strength is ts(S_{n,3,3,n}) = n + 1. By the same reasoning as in Theorems 2.1 and 2.2, the total irregularity strength of the caterpillar S_{n,3,3,3,n} can be proved as in Section 2.3.
Caterpillar S_{n,3,3,3,n}
As before, a caterpillar S_{n,3,3,3,n} is the graph constructed from the double star S_{n,n} by inserting three vertices on the bridge connecting the two centers of the two stars, so that each inserted vertex has degree three. These inserted vertices are called the internal vertices. The caterpillar contains five stars: the centers of the two end stars each have degree n, while the center of each internal star has degree three. There is again no vertex of degree two. This graph is a tree with 2n + 6 vertices, 2n + 5 edges and 2n + 1 pendant vertices, and its maximum degree is Δ = n. The maximum of its total edge irregularity strength and its total vertex irregularity strength is a lower bound for its total irregularity strength (see (3)). By (2), the total edge irregularity strength of S_{n,3,3,3,n} is tes(S_{n,3,3,3,n}) = max{⌈(n + 1)/2⌉, ⌈(2n + 7)/3⌉} = ⌈(2n + 7)/3⌉.
The total vertex irregularity strength of S_{n,3,3,3,n} can also be found by (1), namely tvs(S_{n,3,3,3,n}) = ⌈(2n + 2)/2⌉ = n + 1. According to (3), the lower bound of its total irregularity strength is therefore ts(S_{n,3,3,3,n}) ≥ max{⌈(2n + 7)/3⌉, n + 1} = n + 1 for n ≥ 4. (12)
The exact value of the total irregularity strength is given in Theorem 2.3. The lower bound of this parameter is given in (12); we will show that the upper bound equals the lower bound.
In order to determine the exact value of ts, we construct a totally irregular total k-labeling g by assigning labels to the vertices and edges of S_{n,3,3,3,n}; the construction distinguishes the case n = 4 from larger n.
It is easy to see that under the total labeling g, the greatest label for all vertices and edges is k = n + 1.
Then, g is a total k-labeling with k = ⌈(2n + 2)/2⌉ = n + 1. The weights of the vertices and edges are as follows. | 2019-04-22T13:11:24.268Z | 2018-04-01T00:00:00.000 | {
"year": 2018,
"sha1": "d9ef2a87d3f64226236049f751f3acef3cb44b6e",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1008/1/012042",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "1d8ffc1b361a8ec2f9dfa0870770f8f32a618779",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Mathematics"
]
} |
236967025 | pes2o/s2orc | v3-fos-license | Covid-19 and tracing methodologies: A lesson for the future society
As the new coronavirus (SARS-CoV-2) surged across the globe, new technical solutions have supported policy makers and health authorities to plan and modulate containment measures. The introduction of these solutions provoked a large debate which has focused on risks for privacy and data protection. In this paper we offer an analysis of the available technical approaches and provide new arguments to move beyond the ongoing discussions. In particular, we argue that the past debate missed the opportunity to highlight the societal aspects of privacy and to stimulate a broader reflection on the actions needed to serve the good of society. With this paper, as well as providing an accessible review of the technical and legal aspects of the proposed solutions, we aim to offer new stimuli to reconsider contact tracing and its role in helping countries navigate the current pandemic.
Introduction
Since the beginning of the COVID-19 pandemic, the computer science community has been contributing ideas and practical solutions to tackle this global crisis. Significant efforts have been directed towards developing digital contact tracing applications to complement lockdown measures and, ultimately, curb the spread of the virus. In general, the goal of these apps is to notify people who have recently been in contact with a person diagnosed positive, and to provide them with guidance on how to proceed to avoid the further spreading of the disease, such as observing the quarantine period and getting in touch with public health authorities.
Proposals of this sort abound in the literature and vary in several respects, including the technology employed. Contact tracing has already been used in past epidemic diseases such as the Ebola virus [1]. Traditionally, it is conducted manually by human interviewers. Because of the magnitude of COVID-19, more recent methods rely on mobile applications and attempt to improve the extent and the efficiency of data collection and retrieval.
In the spectrum of tracing methodologies, the role of digital technology can vary significantly. At one extreme, the application only supports the work of human interviewers, replacing the paper forms [2]. At the other extreme, the system, infused with learning capabilities, acts as a well-informed orchestrator making risk predictions and recommendations at a personal level [3].
This variety is not only a matter of technological diversity but also a societal dilemma. Indeed, a more effective tracking process, enabled by mobile technology's pervasiveness, can also translate into privacy breaches and, at worst, systematic forms of societal control. 1 These possibilities raised several concerns about the practical effectiveness and the privacy guarantees of these apps. Also, many organisations put forward guidelines and principles for the design and deployment of digital contact tracing apps following data protection legislation and human rights (European Commission [4] and European Data Protection Board [5]).
In this paper we argue that the discussion was framed too narrowly in terms of technical requirements and competing architectures. An example of this narrow focus is given by the polarized discussion between the centralised and decentralised approaches (see Sect. 2.2), which differ in the way they generate the identifiers needed for the system to properly work, share information between devices and compute the risk score for each individual. We believe that this and similar debates, albeit essential, distracted from a broader reflection on the actions needed to put the tracing process at the service of the good of society, not only for the contingency of a large-scale crisis.
We first provide an overview of three different contact tracing methodologies and suggest to what extent humans and machines can (co)operate. We then survey the main principles put forward in legal literature and discuss some limitations which affect the ongoing discussions. Finally, we suggest three arguments highlighting the importance of the societal context and human oversight.
Tracing methodologies
Contact tracing methodologies were already in use over 500 years ago to control the great pox (also known as syphilis), when a group of Italian doctors started investigating the spread of the disease in the search for the "patient zero". 2 There are several examples over the history of medicine, from AIDS to Ebola, where tracing methods were implemented to identify symptomatic individuals and, when needed, apply strategies of isolation. The societal and ethical concerns raised by such techniques at the time are still largely present today. Those include the fear of disclosing personal information on our societal interactions, the lack of trust in the public institution tasked with the collection and further processing of the said data, the potential for discrimination and stigmatisation, and the necessity to partially bypass the democratic debate due to the urgency of the situation. The use and the efficacy of digital technologies for tracking and curbing the spread of the virus are still under review, and different institutions are monitoring their introduction around the world. 3 In this section we provide our own classification, which aims at suggesting different levels of human and machine computation. On the one hand, there are methods which rest on the human capability to collect information through interviews or self-reports. These have already been in use in past pandemic diseases and can also exploit digital devices to support and improve the data collection tasks of health professionals. On the other hand, there are methods using technology to warn users of a potential exposure either in the form of a binary signal (e.g. "being in contact with a positive case or not") or a risk score. We are aware that our taxonomy is far from exhaustive and, in certain respects, it may even be disputable, 4 but our goal is not to cover the plethora of all existing applications. Instead, the primary purpose is to suggest the continuity between digital solutions and human tracing and, secondly, to highlight the role played by machines and the different types of human-machine interaction. In some cases, the machine is a silent medium replacing pen and paper; in other cases, the machine is more active and sends alerts to the user automatically, often after consent has been given. We call this second type of application "machine-driven", 5 since the technological element takes an active role in eliciting a desired course of actions based on a "simple" warning alert or on a risk prediction and personalised messages.
Technically speaking, what is automated here is the notification process or the prediction of infection and humans can still take care of other important activities (e.g. instructing users who have been notified), but the human intervention is somehow dependent on the tech layer. Table 1 summarises the main benefits and limitations of each contact-tracing method that is described in the following subsections.
Human-driven tracing
Traditionally, contact tracing has been handled through personal interviews between health professionals and patients. The aim is to identify possible contacts of the infected person and monitor them for several days after the notification of infection. To be effective, protocols need to be put in place as soon as the case is confirmed, although they may vary among countries and viruses. In the case of COVID-19, health agencies try to identify contacts where transmission could have happened (e.g. interactions longer than 15 min and within a distance of 2 m over the last 14 days prior to the positive result). This allows the list of personal contacts to be drawn up, although it risks being very imprecise, as it relies on the imperfect recollection of the persons interviewed as well as on their own criteria for judging whether a contact was significant enough to be analysed rather than dismissed as a random contact. Some studies (e.g., Ferretti et al. [6]) suggest that this technique is not sufficient to control a pandemic such as COVID-19 and to identify the individuals at risk of transmission. Manual contact tracing can also make use of mobile applications. For example, in 2016 a group of researchers proposed software to improve data collection and storage for tuberculosis tracing in Botswana [2]. The intended users are health care workers who need to operate in settings with limited resources, and the interaction with the patients is still guided by humans. In the early days of the COVID-19 pandemic the technological intermediation moved towards more distributed forms of data collection. For example, several governments, research centres and institutions created online survey forms to gather health information from self-reports. The common structure of these surveys starts by asking for some personal information: gender, age range and location (the level of detail depends on each survey and country, going from the name of the city to the zip code or even the street name), but could also be extended to the professional sector, level of income and others. Next, a series of questions were asked to reveal possible COVID-19 symptoms.
Footnote 2: https://theconversation.com/contact-tracing-how-physicians-usedit-500-years-ago-to-control-the-bubonic-plague-139248
Footnote 3: For example, see the tracker systems provided by the Ada Lovelace Institute (https://www.adalovelaceinstitute.org/our-work/identities-liberties/covid-19-digital-contact-tracing-tracker/), Privacy International (https://privacyinternational.org/examples/tracking-global-response-covid-19?field_location_region_locale_target_id=Italy+%28238%29&sort_by=field_date_value&sort_order=DESC) and the report exploring the European landscape delivered by AlgorithmWatch and Bertelsmann Stiftung (https://algorithmwatch.org/en/publication/new-report-on-admsystems-in-the-covid19-pandemic/).
Footnote 4: For instance, one may contend that the layer of automation introduced by notification apps is limited and does not justify their attribution to a machine-driven methodology. But we believe that automated notifications have a serious impact on the whole tracing process, in particular with respect to the elicited behaviour. Indeed, any notification is supposed to trigger a course of action involving the user, their contacts and the health authority.
Footnote 5: Note that a machine-driven solution includes one or more (semi-)automated mechanisms but does not necessarily imply the use of Artificial Intelligence components.
All this information is collected and analysed in order to study the evolution of the pandemic and identify those areas where the pandemic was more active, and ultimately to help governments and health authorities to make decisions. Note that, while these tools are offered to large populations (not just a group of professionals), the interaction with the user is limited to (voluntary) self-reporting and is not intended to deliver specific messages or recommendations. 6
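To make the interview criteria mentioned above concrete, the sketch below shows how reported encounters might be screened to decide which contacts deserve follow-up. It is purely illustrative: the field names are invented, and the thresholds simply restate the 15-minute / 2-metre / 14-day rule quoted earlier.

```python
from datetime import date, timedelta

def contacts_to_follow_up(encounters, positive_test_date,
                          min_minutes=15, max_metres=2.0, lookback_days=14):
    """Return the reported encounters that match the screening rule:
    close enough, long enough, and within the lookback window before the
    positive test."""
    window_start = positive_test_date - timedelta(days=lookback_days)
    return [e for e in encounters
            if e["date"] >= window_start
            and e["minutes"] >= min_minutes
            and e["distance_m"] <= max_metres]

encounters = [
    {"name": "A", "date": date(2020, 11, 2), "minutes": 30, "distance_m": 1.0},
    {"name": "B", "date": date(2020, 10, 1), "minutes": 60, "distance_m": 0.5},  # too old
    {"name": "C", "date": date(2020, 11, 5), "minutes": 5,  "distance_m": 1.0},  # too short
]
print(contacts_to_follow_up(encounters, positive_test_date=date(2020, 11, 10)))
```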
Machine-driven contact tracing
For decades, economic, health or environmental emergencies have led to crisis-driven innovation. In the case of COVID-19 crisis, one of the main topics of discussion has been the digital contact tracing methodologies, generally deployed in mobile apps. While in human-driven methodologies people are notified by other people, with machine-driven approaches the notification mechanism is automated. On the one hand, this feature can significantly reduce costs in terms of time and people needed to deal with large pandemics. On the other, the active intermediation by digital tools in tracing processes can introduce further privacy and surveillance concerns (see Sect. 3). In addition, since people can have different reactions depending on the nature of the intermediation (i.e. human or technological), we may expect that, in the long run, different notification mechanisms can generate distinct behavioural patterns within the population. For example, machine-driven solutions, and in particular those incorporating learning capabilities, could favour the initiative and the autonomy of individual users with respect to actions to be taken after an exposure notification and, as a consequence, increase chaos in associated services (e.g. when a large number of users contacts health operators simultaneously these may not be able to serve all requests).
Location-based and proximity-based solutions
The main purpose of contact tracing apps is to reduce the spread of the pandemic and support policy makers in planning alternatives to stringent interventions such as lockdown measures. Indeed, lockdowns are not sustainable in the long term and cause significant impacts on our daily lives, both from an economic and a societal perspective. The objective of digital contact tracing is to monitor contacts among citizens and identify those at risk of being contagious. This methodology is designed to help governments and health authorities in making decisions more efficiently by sending prompt alerts to people who were in contact with a confirmed case and applying selective measures like isolation. The reaction time in these situations is crucial to tackle the contagion rate and avoid the spreading of the virus. We can differentiate between two prevailing means of digital contact tracing. The first one uses a location tracing methodology with GPS or network-based location tracking. 7 This option has been ruled out in many countries across the EU since, according to the recent Guidelines of the European Data Protection Board [5], other less privacy-invasive means can achieve the same goal. With respect to this option, the main concern regards privacy, since both GPS and network-based solutions can be active 24/7 in users' phones. Also, they can collect more data than strictly necessary to check whether an encounter could lead to an infection and to inform the concerned persons of such risk. Another critical aspect regards network-based solutions, which do not require active user participation (i.e. download and installation), thereby guaranteeing penetration. To the best of our knowledge, Israel, Iran, Cyprus, China, Indonesia, Bulgaria and Ghana have made use of location-based solutions.
The second option is based on proximity data usually collected via Bluetooth Low Energy (BLE), a technology used to transfer data from one device to another, mostly over a short distance. BLE is most commonly used to connect peripherals (e.g. headphones) to devices like smartphones and is omnipresent in almost all modern mobile devices, and thus accessible to a large number of people. The advantages of BLE over other technologies are (i) low power consumption, allowing contact tracing apps to run for hours without draining mobile batteries too fast, and (ii) indoor operation as a short-distance tracker. However, BLE is also sensitive to false positives, as proximity estimation does not always detect architectural obstructions between two individuals that have been identified as exposed [7].
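A rough sketch of how BLE sightings are typically turned into exposure events: signal strength (RSSI) is mapped to a coarse distance estimate and repeated sightings of the same device are accumulated into a contact duration. The constants and the sampling assumption below are illustrative only and do not come from any specific app; as noted above, obstructions make such estimates unreliable.

```python
def estimate_distance_m(rssi, tx_power=-59, path_loss_exponent=2.0):
    """Very rough log-distance estimate from a BLE RSSI reading (dBm).
    Real deployments calibrate per device model; walls and bodies add noise."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exponent))

def exposure_minutes(sightings, max_distance_m=2.0, sample_interval_min=1):
    """Total minutes during which the other device appeared closer than
    max_distance_m, assuming one RSSI sample per sample_interval_min."""
    close = sum(1 for rssi in sightings
                if estimate_distance_m(rssi) <= max_distance_m)
    return close * sample_interval_min

sightings = [-55, -60, -62, -80, -58]      # RSSI samples from one nearby device
print(exposure_minutes(sightings))         # an app might flag exposure if >= 15
```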
The EDPB has promoted proximity-based solutions as they adopt a privacy-preserving design following the basic principles of the General Data Protection Regulation (GDPR), such as data minimisation and purpose limitation. Among these solutions, two types of protocols have been proposed: (i) centralised solutions, such as the ROBERT protocol, that store the data in a central server [8,9], and (ii) decentralised options, such as the DP-3T protocol, that ensure that personal data and computation stay entirely on the user's phone [10,11]. The main purpose is to help epidemiologists build the network of contacted people that are potentially infected. No other personal or health information is collected or required. The DP-3T protocol enhances user control by giving users a choice to voluntarily share the information gathered by their mobile devices with health authorities. Note that the DP-3T solution is more properly acknowledged to be an exposure notification app, since contact data are stored on the user's phone.
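To give a flavour of the decentralised approach, the sketch below shows a deliberately simplified scheme in which each phone derives rotating ephemeral identifiers from a local secret and matching happens on the device. It is only loosely inspired by protocols such as DP-3T, omits essentially all of their cryptographic and epidemiological detail, and should not be read as the actual specification.

```python
import hashlib
import hmac
import os

def next_day_key(previous_key):
    # Rotate the local secret: tomorrow's key is a hash of today's.
    return hashlib.sha256(previous_key).digest()

def ephemeral_ids(day_key, per_day=96):
    # Derive short-lived broadcast identifiers for one day from the day key.
    return [hmac.new(day_key, i.to_bytes(2, "big"), hashlib.sha256).digest()[:16]
            for i in range(per_day)]

# Phone A generates and broadcasts ephemeral IDs; phone B only stores what it heard.
key_a = os.urandom(32)
heard_by_b = set(ephemeral_ids(key_a)[:3])

# If A later tests positive, A publishes its day key; B re-derives the IDs locally
# and checks for overlap with what it heard -- no central server sees the contacts.
published_key = key_a
matches = heard_by_b & set(ephemeral_ids(published_key))
print(len(matches) > 0)
```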
AI-based solutions
Another option to track and limit the spread of the virus is to incorporate Artificial Intelligence into tracing applications. Note that the role played by AI in the control of the COVID-19 pandemic is considerable and not limited to tracing applications. Significant efforts went into diagnosis from medical imaging or voice analysis, 8 drug discovery and societal simulations (see [12] for a review). Here we will focus on AI solutions for tracing purposes. 9 Usually, AI-based tracing applications make it possible to infer knowledge about the risk of infection and the spread of the virus in a geographical area. A proposal going in this direction came from Yoshua Bengio's team (at Mila, Canada) with the so-called COVI app, whose main functions are: (i) to inform individuals of their infection risks and (ii) to support governments in better understanding the disease transmission and planning containment policies [3]. Another mobile application proposes to use an AI algorithm for classifying users into four classes (no risk, minimal risk, moderate risk and high risk) and sending an alert with health recommendations to both the users and the health departments [13].
An important distinction between location- and proximity-based apps (such as those described in Sect. 2.2.2) and AI-based solutions is the type of information sent to the user. Instead of sending binary information about whether a user has been in contact with an infected person or not, AI-based applications inform the user of the risk of infection through the aggregation of thousands of data points. For example, COVI sends a message reflecting the probability that the person has been infected and recommending a specific course of actions. The prediction of the risk of infection is enabled by a machine learning model combining different information sources regarding both users' individual profiles (e.g. demographics, existing health issues and presence of new symptoms) and users' interactions, based on Bluetooth proximity detection. The machine learning model computes users' current and past contagiousness (their risk level) locally. When two phones with the app meet, they exchange information about each other's risk. As the app accumulates information, the risk estimate is revised and, if a revision is sufficiently important, an updated message is sent to the relevant contacts.
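As an illustration of how aggregated signals might be mapped onto the four risk classes mentioned above, consider the toy scoring function below. The features, weights and thresholds are invented for the example and are not those of COVI or of the application described in [13]; a real system would learn such a mapping from data rather than hard-code it.

```python
def risk_class(exposures_last_14d, symptoms, age_over_65, essential_worker):
    """Toy risk score combining exposure history and personal profile,
    mapped onto four classes (no / minimal / moderate / high risk)."""
    score = (2.0 * exposures_last_14d
             + 3.0 * len(symptoms)
             + (1.5 if age_over_65 else 0.0)
             + (1.0 if essential_worker else 0.0))
    if score == 0:
        return "no risk"
    if score < 3:
        return "minimal risk"
    if score < 8:
        return "moderate risk"
    return "high risk"

print(risk_class(exposures_last_14d=1, symptoms=["cough"],
                 age_over_65=False, essential_worker=True))  # -> "moderate risk"
```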
Some scholars claim that the introduction of AI and, in particular, machine learning models into tracing applications can help detect the early signals of the disease before they propagate throughout the population [3,14]. This argument is also reinforced by the lack of human tracers, whose number turns out to be insufficient to interview high volumes of positive cases and find out new potential infections. For example, a study claimed that last year in England "tracers typically reached less than half of the close contacts of people who'd had a positive COVID-19 test" [15].
In general, although their use is meant to support and complement manual tracing, AI-based applications are proposed to offer a greater automation level reducing human efforts in the early phases of the pandemic where symptoms are either absent or not clearly discernible [3]. Also, the proponents of AI-based solutions claim that, by offering risk predictions and customised recommendations, these apps promise to empower individuals with knowledge to protect themselves and take preventive measures [3]. To increase public trust, moreover, AI-based solutions can adopt a privacy-protecting approach [3], for example, requesting consent for all collection and use of personal information (see Sect. 3 for a review of data protection requirements). However, despite the positive inspiration motivating AI-based tracing apps, there is still a lack of evidence proving that increased levels of automation meet the expectations of end users and bring them greater empowerment. In addition to the final social impact, it is also necessary to take into account other dimensions to assess the benefits of deploying such solutions, such as the environmental impact related to the cost of storing huge amounts of data required to train AI models. As we will see in Sect. 4, design specifications can misalign with how users concretely approach and use a tracing application and this situation can lead an app to fail.
Legal implications of contact tracing for COVID-19
As the number of digital contact tracing applications increased over the past months, a lively debate took place on the impact that such solutions could have on individuals' fundamental rights to privacy, data protection, health and non-discrimination. 10 At the European level, this debate translated into concrete questions about the legal acceptability of machine-driven tracing applications and their compliance with the European regulatory framework. Does the General Data Protection Regulation (GDPR) apply in this context? What principles should guide the design of these apps? And what actions follow from them? To verify whether digital contact tracing apps fall within the material scope of application of the GDPR, it is necessary to check whether they involve the 'processing' (Art. 4(2) GDPR) of 'personal data', i.e. any information relating to an identifiable natural person (Art. 4(1) GDPR). Regardless of the technology used to perform contact tracing, the consensus is that they do. While it is rather obvious for contact tracing based on geolocation data - the privacy-invasive and repurposing potential of which is well-documented [16,17] - the same is true for BLE-based solutions, deemed as the most privacy-preserving alternatives. As highlighted in the privacy analysis of the DP-3T protocol, even decentralised options based on the sharing of emitted EphIDs are vulnerable to re-identification attacks and, therefore, would warrant the qualification of the data processed as 'personal' [10,11]. Without delving into the intricacies of the 'identifiability' threshold under Art. 4(1) GDPR, one can reasonably assume that contact tracing apps will fall under the scope of application of the GDPR and, as such, will need to abide by the various principles and rules prescribed therein.
Footnote 9 (continued): "[...]sion of an infectious disease and is thus an essential public health tool for controlling infectious disease outbreaks." https://www.who.int/publications/i/item/contact-tracing-in-the-context-of-covid-19
Footnote 10: Although this section focuses more on privacy and data protection, further considerations would include the right to health care and non-discrimination (see articles 21 and 35 of the Charter of Fundamental Rights of the European Union: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:12012P/TXT).
Another fundamental task is to identify and adequately qualify the actors involved in the processing operations, as this will determine the allocation of responsibility, liability and accountability under the Regulation. Under the GDPR, that entity is the 'controller', i.e. the one determining the 'purposes' and the 'means' of the processing activities. While various options can be considered, involving both public and private actors, the European Commission recommends a model where such responsibility would fall to national health authorities, or to the entity carrying out tasks of public interest in the field of health. This, as underlined by the European Commission [4] and the European Data Protection Board [5], is essential to foster public trust and guarantee sufficient adoption.
Besides the applicability of the GDPR, it is to be noted that the data produced via the smart devices are also protected under the ePrivacy Directive, which prescribes that storing information on a user's device or gaining access to information already stored is allowed only with the consent of the user or if the storage and/or access is strictly necessary for the app installed or activated by the user (Article 5(3)). In the same vein, location data can only be transmitted to authorities or other third parties if they have been anonymised or, for data indicating the geographic position of the terminal equipment, with the prior consent of the users (Article 9(1)).
Operators of mobile apps offering contact tracing functionalities will need to follow a security and data protection by design approach. Table 2 summarises some of the most important principles and rules laid down in the GDPR and the ePrivacy Directive that should be taken into account to develop, design, select and use applications that are based on the processing of personal data.
Table 2 Key GDPR and ePrivacy principles and their application to contact tracing apps:
- Access to terminal equipment (Art. 5(3) ePrivacy Directive): requires either (i) the user's freely given, specific, informed and unambiguous consent, or (ii) a justification that the storage and access is strictly necessary to ensure the proper functioning of a service explicitly requested by the user. In the context of contact tracing apps, it could be argued that access to terminal equipment is strictly necessary for the functioning of BLE-based digital contact tracing solutions.
- Lawfulness (Art. 5(1)a GDPR): requires controllers to justify personal data processing using one of the lawful grounds listed in Art. 6(1). As suggested by the EDPB, the appropriate lawful ground would be, in most cases, Art. 6(1)e (task carried out in the public interest).
- Special categories of personal data (Art. 9(1) GDPR): prohibits the processing of special categories of personal data unless one of the exemptions listed in Art. 9(2) applies. As suggested by the EDPB, the relevant exemption would be, in most cases, Art. 9(2)i (reasons of public interest in the area of public health) or Art. 9(2)h (preventive or occupational medicine).
- Transparency (Arts. 12, 13 and 14 GDPR): require controllers to report about their processing activities in a concise, transparent, intelligible and easily accessible form, using clear and plain language. In practice: provide the identity and contact details of the controller, the purposes and lawful ground of the processing, the recipients of personal data if any, the retention period and the existence of the multiple prerogatives granted to data subjects, such as the right to access and erasure; ensure transparent and verifiable development through open-source code, external audits and publicly available Data Protection Impact Assessments.
- Purpose limitation (Art. 5(1)b GDPR): requires personal data to be (i) collected for explicit, specified and legitimate purposes (purpose specification) and (ii) not further processed in a manner that is incompatible with those purposes (compatibility assessment). In practice: only collect personal data the repurposing potential of which is limited, such as ephemeral identifiers; avoid the bundling of functionalities within the same app (e.g., a single app providing general information, symptom checker features and contact tracing) or grant users granular control over which of them they wish to opt in to.
- Data minimisation (Art. 5(1)c GDPR): requires controllers to only collect and further process personal data that is necessary for the purposes that have been specified. In practice: avoid the use of geolocation and/or movement data (BLE is less privacy-invasive); avoid storing the exact time of contact or any type of metadata that is not specific to the contact or its duration.
- Storage limitation (Art. 5(1)e GDPR): tailor the retention period according to the purposes of the processing. Proximity data (or EphIDs in the case of BLE-based solutions), as well as any personal data stored in the backend server, should be deleted as soon as they are no longer necessary for alerting individuals.
The debate on contact tracing applications: A missed opportunity?
So far, the debate has highlighted important implications of digital tracing applications, thereby suggesting that using technological advances to tackle societal issues is laudable but insufficient by itself. Technological intermediation can offer great opportunities in reaching out to large populations and reducing the time for tracing the whole chain of contacts. However, it can also justify invasive practices of data collection and favour surveillance mechanisms for societal control and profiling. Furthermore, if not careful, machine-driven contact tracing methodologies might result in a purely consequentialist and technologically deterministic attitude, in which solutionist tech (cf. [18]) is perceived as a necessity in tackling this crisis and privacy is treated as a good that can be traded rather than as a fundamental right. Contact tracing methodologies should be treated as systems producing value and giving meaning, not merely neutral technical artefacts. They are developed with certain goals in mind and are thus calibrated to reach that specific goal as efficiently as possible [19]. The GDPR provides essential safeguards to address privacy and data protection issues (see the principles described in our table) along with other existing legislation. Based on that, several organisations and researchers provided useful guidelines for assessing digital contact tracing apps (e.g. [20]). It could also be argued that the debate, at least in Western European countries, has been mindful of privacy issues and data protection. So, rather than focusing only on the urgency of the crisis and the pressing need to flatten the curve of contagion, there has been room for legal and human rights considerations which have promoted the discussion of fundamental principles such as data minimization, consent and voluntary use. In addition, in various countries it was explicitly described how digital tracing methodologies have a supportive role and will not replace manual tracing efforts, thereby opposing a view in which technology occupies the driver's seat.
However, the current debates have largely ignored the societal context of these apps. In the following sections we propose three arguments that, in our opinion, move beyond the current debate and offer new stimuli to reconsider contact tracing and its role in helping countries navigate the current pandemic.
Control, secrecy and appropriateness
First, we argue that the current discussions on privacy and tracing methodologies are too narrowly focused on control and access restriction. Scanning through existing applications and protocols, it is noticeable how two prominent perspectives on privacy are put forward and translated into the design, namely 'privacy as secrecy' and 'privacy as control'. The former perspective ensures full anonymity, or at least tries to. For example, those who employ GPS or network-based location tracking put forward solutions to anonymize personal data. Others, like DP-3T, propose a decentralized design that shifts the processing operations from a central entity to end-users' devices. The latter perspective offers users control options to regulate and manage their information flows. For example, the exposure notification system of Google and Apple allows one to opt in to use exposure notifications after the public health authority app is downloaded. One can also decide, when diagnosed positive for Covid-19, to share random IDs with the application. Although both secrecy and control are important parts of privacy, they should not be treated as one and the same thing [21]. Rather, we want to underline the importance of a contextual approach to privacy (cf. contextual integrity [22,23]). Contextual integrity (CI) does not frame privacy expectations in terms of 'control' or 'secrecy' but in terms of 'appropriateness'.
According to CI, the focus on the control of personal data and on increased exposure captures only part of the anxiety and is limited in itself. In her framework, the author of [23] argues for focusing on informational norms (what is appropriate and what is not within and between contexts), which consist of three parameters: actors, information types, and transmission principles. Contextual integrity is achieved when a particular flow, or transmission of information from one party to another, is appropriate in terms of the type of information that is shared, the identity of the sender, how it is shared and the receiver of the data. CI moves beyond an individual perspective on privacy and denies a false contradiction between privacy and using personal information for various reasons, including tracking location or monitoring everyday behaviours. If the flow is appropriate (not necessarily 'controlled' or 'secret'), then contact tracing does not necessarily reduce privacy expectations.
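One way to make the three CI parameters operational is to represent each information flow explicitly and compare it against a list of accepted norms. The sketch below is our own illustration of that idea, not an implementation of the CI framework itself; the example actors, information types and transmission principles are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str              # actor originating the information
    recipient: str           # actor receiving it
    info_type: str           # e.g. "proximity event", "diagnosis"
    transmission: str        # e.g. "user-consented upload", "automatic"

# Norms judged appropriate in the contact-tracing context (illustrative only).
ACCEPTED_NORMS = {
    Flow("user", "health authority", "diagnosis", "user-consented upload"),
    Flow("user", "nearby phones", "proximity event", "automatic"),
}

def respects_contextual_integrity(flow):
    return flow in ACCEPTED_NORMS

print(respects_contextual_integrity(
    Flow("user", "advertising network", "proximity event", "automatic")))  # False
```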
Contact tracing apps, like other digital services, build upon existing practices and rules which influence design choices more or less implicitly. This means that the elements characterizing the context of this technology, such as the actors involved with their tasks and responsibilities, or the types of exchanged information, depend to some extent on the features of administrative routines and protocols operating in an organization. More specific rules can be detailed in national pandemic strategies and applied in particular times of crisis to coordinate and improve efforts. For example, the entity to which a positive test is reported, or from which advice is sought in case of a possible contagion, can be a health agency which operates according to organizational and social norms. These may regard the type of information collected and the communication chains to be followed in a time of pandemic, as well as the commitment to alleviating illness and promoting health. Identifying and understanding such norms is a valuable exercise not only to anticipate which patterns of flow can harm people's privacy and rights but also to identify the responsibilities of the actors involved. This would be even more essential when communication flows through an intricate web of connections such as those arising in complex institutions (for example, national health care services), where decisions are distributed across multiple actors.
Note that the discussion of CI spans both human- and machine-driven tracing methodologies, since the organizational and social norms governing the flows of information can be independent of the substrate used to provide a service (in our case, to notify users at risk of carrying the virus as early as possible). On the one hand, this challenges the naïve assumption that risks for privacy and discrimination originate only from computer-mediated communication. On the other hand, the design of communication technologies can obfuscate the meaning of certain roles or norms characterizing the context of application. For example, the ownership of the server storing information (think of the centralized versions of exposure notification apps), with the associated powers and duties, might be unclear or poorly communicated to the user.
Arguably, the current pandemic is quite unique in its impact and affecting societies at their core. It is difficult to imagine what life will be like, let alone one's opinion about the usage of surveillance technology and tracing methodologies to limit the spread of Covid-19 and other pandemics. It is therefore necessary to negotiate this relatively new context that we are in to identify practices that defy privacy norms, which requires a shift beyond privacy as control or secrecy.
Technical specifications versus technology usage
A second issue concerns the misalignment between how technical artifacts work and how end-users imagine these to work. Every individual forms a specific idea of how a technological system works and why it works in a certain way (cf. algorithmic imaginary [24]). It is this algorithmic imagination that fuels how people form opinions and whether or not they want to use a specific system and under what circumstances. Even if only imagined, this image is real in its consequences.
In the context of machine-driven contact tracing, the information people received was, at best, sparse and inconsistent. To date, some countries have designated institutions in charge of controlling personal data related to contact tracing apps. 11 However, other governments have not yet clearly communicated who would be in charge of the development and management of these applications and of the collected data, how data would be processed (Bluetooth vs GPS, AI vs algorithms), or what specific goals would be pursued by the application. There is a significant difference between a contact-tracing app that merely informs users (i.e., that they may have been in contact with someone who had COVID-19), one that enforces quarantine measures, and one that could be accessed and used by governments for purposes other than the containment of the virus' spread. In the context of the COVID-19 tracing apps, the general goal is to track the spread and contaminations of the virus. A close physical encounter is considered a risk; thus, when short-range technology (i.e., BLE) detects close proximity of personal devices (i.e., smartphones), the app will interpret this as a risky encounter. These include completely safe contacts (e.g., behind a window or with sufficient precautions). These imperfections could decrease the perceived accuracy of and trust in these systems. Additional concerns would come when dealing with more nuanced information such as risk predictions and behavioural recommendations. How would the user interpret a risk measure? Would it make sense of the risk estimates automatically revised by the machine learning model?
It is not entirely clear who will develop and maintain these COVID-19 tracing applications, nor how and why exactly. Based on this inaccurate and incomplete information, people could imagine and supplement their algorithmic imaginaries with erroneous insights (e.g. that the government will use these applications to spy on its citizens). It is then fair to ask not only what kind of technology will be used, but also why [25] and how. These questions go beyond the technical requirements; governments should also justify whether the applications are necessary, proportional, scientifically sound and time-bounded to solve the main problem [26].
Discussing the impact of design choices
A third and final issue concerns how considerably more importance is attached to the development of contact tracing apps as opposed to discussions on adoption and appropriation. It is crucial to discuss and assess the impact of choices made during development. However, the privacy negotiation process must continue in the deployment stage. Users will evaluate the appropriateness of data flows and adjust their algorithmic imaginary while using the application.
Another important consideration regards the assessment of the effectiveness of the adopted solutions. Among others, this includes the discussion as to whether the applications employed operate as expected. Follow-up studies suggested that certain proximity-based approaches suffer from important technical flaws. For instance, it has been shown that Australia's app has worked only 25% of the time on some devices because "the Bluetooth "handshake" necessary to register proximity between two phones doesn't work if the phone screen is locked" [27]. Another study testing the Italian, Swiss and German apps in a tram reported that the technology was very inaccurate and no better than a random notification system [28].
In addition to the comprehensiveness of the data being collected, the ways users adopt and use contact tracing apps will likely influence their potency (e.g. if end-users do not trust the organization storing the data, they could decide to sporadically or indefinitely turn off their Bluetooth or delete the application). Also, mechanisms allowing citizens to provide feedback and flag issues related to data protection or to specific app functionalities are a fundamental step to promote the adoption of tracing apps across the population. Note that collecting and taking care of citizens' opinions serves not only an important social function but also a scientific and technological purpose. Solving problems by means of scientific and technical tools is in fact an attractive option, even in social and political domains. However, the adoption of tech solutions for solving social and policy problems is exposed to several ideologies (e.g. tech solutionism) and to the seduction of quantification, especially in times of global uncertainty [29]. So, it is critical that tech-based solutions, such as contact tracing apps, once deployed, keep being tested by experts and remain open to public opinion so as to collect as large a spectrum of observations as possible. Regular testing with participatory assessment practices 12 (e.g. through citizens' assemblies and public deliberation) would contribute to creating better narratives of our tech solutions and more elements to exercise trust or distrust in this technology.
Concluding remarks
In this paper we gave an overview of the technology proposed to control the spread of the COVID-19 pandemic and the public debates that originated from it. Our discussion suggested that past and present debates could miss an important opportunity. The COVID-19 pandemic gives many stimuli to think about how a global crisis can be tackled through a large technological infrastructure. Although many countries moved towards privacy-friendly solutions, such as the DP-3T protocol, the effect of these notification mechanisms on the whole population is still unexplored: does the technological intermediation achieve the intended goal? Does it work as expected? What is the impact of such technology in the long run? These simple questions point to significant technical and societal considerations that are essential to investigate the effectiveness of the chosen technology. For example, it has been suggested that proximity-based apps need a high level of adoption to be effective for decision-making and representative of the population, although it has been observed that lower rates would be enough to have a protective effect. 13 Expecting a high uptake would not be realistic if we think of groups of vulnerable people with no access to smartphones or with poor digital skills. The consequences become more complicated as we add AI components to the digital intermediation. For example, in AI-based tracing it is crucial to assess the accuracy of predictions and explain to the users where these come from, so they can make sense of the messages received and of the actions they can take.
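The dependence on uptake can be illustrated with a back-of-the-envelope calculation: if installs were spread uniformly and independently across the population, a contact would only be traceable when both parties run the app, so coverage of contacts would scale roughly with the square of the adoption rate. The snippet below encodes only that simplification, which is our own illustration; real adoption is clustered, and the protective effect observed at lower rates is correspondingly harder to estimate.

```python
def traceable_contact_share(adoption_rate):
    """Share of contacts in which both parties carry the app, under the
    (strong) assumption of uniform, independent adoption."""
    return adoption_rate ** 2

for rate in (0.2, 0.4, 0.6, 0.8):
    print(f"{rate:.0%} adoption -> ~{traceable_contact_share(rate):.0%} of contacts traceable")
```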
Unfortunately, the debates at the beginning of the pandemic triggered opposite positions that have distracted from the complexity of the problem and the need to set up long-term efforts dedicated to the study of similar scenarios. For example, the popular centralised vs. decentralised dispute, while opening important technical details to a broad audience, has implicitly encouraged the idea that a societal problem, like privacy and surveillance, can be fixed by a technical strategy (technical solutionism). Also, this and similar discussions hide the fact that privacy concerns arise in human-driven tracing as well and should be supervised in all circumstances. While the debates on technical requirements and their privacy guarantees abounded, clear evidence of the efficacy and the effectiveness of contact tracing apps is still missing and in need of an in-depth policy evaluation (see, for example, the technical flaws cited in [30]).
The challenges raised by this new technical mediation can only be partially tackled by design, and their larger effects are still poorly understood. As we suggested, to assess the effectiveness and the appropriateness of adopted solutions there are many aspects that should be considered, including real-world performance and users' understanding. In addition, it would be important to evaluate how tech solutions interact with existing apparatuses such as the health care system and governmental bodies.
The problems surrounding contact tracing apps and similar technologies do not regard isolated efforts but encompass views and ideologies on how technology can serve society. All this needs a long-term discussion engaging different stakeholders, from health experts and engineers to politicians, and allowing citizens to actively contribute with feedback and comments. Similar work could be carried out by dedicated entities, like living labs or research hubs, where both public and private institutions can collaborate to investigate possible scenarios and study the impact on society, the ethical consequences and the interaction with existing laws.
Author contributions All authors contributed equally.
Funding Open access funding provided by Università Ca' Foscari Venezia within the CRUI-CARE Agreement. Atia Cortés and Teresa Scantamburlo are supported by the project A European AI On Demand Platform and Ecosystem (AI4EU) H2020-ICT-26 #825619. The views expressed in this paper are not necessarily those of the consortium AI4EU.
Conflict of interest
The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/. | 2021-08-11T05:25:22.331Z | 2021-08-07T00:00:00.000 | {
"year": 2021,
"sha1": "23e62a34e262ada33c788ee75b6e2a597217bfd8",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12553-021-00575-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "962b1b11b497c31d40d9f7db3d60295e5bfacccc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
208639460 | pes2o/s2orc | v3-fos-license | Expedited Removal of a Radial Hemostatic Compression Device Following Cardiac Catheterization Is Safe and Associated With Reduced Time to Discharge
Background Radial access for cardiac catheterization has become increasingly adopted, owing much of its popularity to decreased bleeding complications compared with the femoral approach. Hemostatic compression devices (HCDs) for radial catheterization play a key role in this advantage, but the optimal duration of compression is unknown. A shorter duration of compression is encouraged by guidelines, but removing an HCD too quickly could result in serious bleeding. We aimed to evaluate the safety and effectiveness of expedited removal of a radial HCD after cardiac catheterization. Methods We conducted a prospective study of patients undergoing radial cardiac catheterization and/or percutaneous coronary intervention at a tertiary care academic medical center. Patients underwent HCD application using a TR Band® (Terumo Interventional Systems) which was removed after a prespecified amount of time in each of three sequential temporal cohorts: 2-h, 1-h, or 0.5-h. Each patient was monitored for development of bleeding or hematoma and for serious complications. Results A total of 354 patients participated in our study, with similar numbers in each group. There was a greater rate of minor bleeding in the 0.5-h (12%) and 1-h (19%) groups compared with the 2-h group (8%), but there were no serious complications (need for surgical consultation, transfusion, or unplanned admission) in any group. The average time to discharge was shorter in the 0.5-h and 1-h groups compared with the 2-h group. Conclusions Deflating the radial HCD at 0.5 h is safe with no increase in the observed rate of major complications and is associated with reduced time to discharge after coronary angiography or percutaneous coronary intervention using the radial arterial approach.
Introduction
Transradial vascular access has become an increasingly adopted approach for cardiac catheterization in the USA, with use of radial arterial access among percutaneous coronary interventions (PCIs) rising in the National Cardiovascular Data Registry (NCDR ® ) CathPCI Registry from 10.9% in 2011 to 25.2% in 2014 to 39.5% in 2017 [1,2]. One of the drivers behind this trend is the lower rate of bleeding complications with the radial approach compared with the femoral approach [3].
A key factor in the decreased bleeding rates associated with radial access is the use of hemostatic compression devices (HCDs) placed post-procedurally. However, despite their importance, the optimal duration of post-procedural compression with an HCD is unclear. One benefit of a shorter duration of compression is lower rates of radial artery occlusion [4], prompting expert consensus guidelines to advocate for this strategy [5]. If faster removal of an HCD leads to less time required for patients to be observed in the hospital, this could also result in decreased length of stay and cost savings. However, a shorter duration of hemostatic compression would be detrimental if it leads to increased rates of bleeding. Thus, the ideal duration of compression that balances these factors remains unclear.
An informal survey of North American cardiac catheterization laboratory policies that we conducted reflects this uncertainty, with institutionally stipulated compression times varying from 10 min to 4 h. At the outset of this study, our institution's cardiac catheterization laboratory protocol mandated application of an HCD (the TR Band® from Terumo Interventional Systems, Somerset, NJ) for 2 h. The Removal Guidelines from Terumo suggest compression for 1-2 h depending on the amount of heparin used during the procedure [6]. However, the brochure indicates that these guidelines are consensus opinion only. Because of the potential benefits of shorter duration hemostatic compression after radial approach cardiac catheterization and the lack of data to guide practice, we sought to ascertain whether an expedited HCD removal protocol would be safe to implement at our institution.
Study design
We conducted a prospective cohort study of consecutive patients undergoing coronary angiography, left heart catheterization and/or PCI via the radial artery approach at the Beth Israel Deaconess Medical Center (BIDMC) in Boston, Massachusetts, aiming to ascertain the safety of expedited removal of an HCD post-procedure compared with our standard practice of 120 min of compression. Most patients undergoing cardiac catheterization and PCI procedures receive immediate post-procedure care (including HCD or femoral arterial sheath management) in the Cardiac Catheterization Laboratory Holding Area. Patients with radial arterial access were excluded from this study if they were to be transferred from our holding area to an inpatient care area (intensive care unit or cardiology ward) prior to removal of the HCD. This ensured that our participants' HCDs would be managed and removed exclusively by our catheterization laboratory holding area nursing staff, who were educated on our study design and its data collection form. Patients were also excluded if they had a hematoma prior to HCD application, or if they were felt to be at excessive bleeding risk in the opinion of the interventional cardiologist who performed the procedure. Data collection took place from January 2017 through October 2017. This study was conducted under the supervision of our Cardiac Catheterization Laboratory Quality Improvement Committee and was approved by the BIDMC Institutional Review Board as a quality improvement initiative. All participants underwent HCD application with a TR Band® as the arterial sheath (most often a 10-cm 6 French GlideSheath Slender®, Terumo Interventional Systems, Somerset, NJ) was removed, in adherence with the device's instructions for use provided by the manufacturer [6], with inflation of the device's air cushion until hemostasis was achieved. After application of the HCD, participants were transferred to our post-procedure holding area and, if needed, immediately had air removed from the HCD cushion until patent hemostasis was achieved, as determined using Barbeau's test [7]. Following a pre-defined duration of compression (120 min, 60 min, or 30 min), the HCD cushion was weaned off by removing one-third of the air content at three 15-min intervals, constituting phases 1-3 of our weaning protocol. Patients were subsequently observed for 15 min with the TR Band® in place, but completely deflated, after which time it was removed. Following removal of the TR Band®, patients were observed for a specified interval before being eligible for discharge. In the 2-h and 1-h groups, the post-removal observation interval was 30 min, and for the 0.5-h group, the post-removal observation time was 60 min (Fig. 1). The patients were then observed for a period of time in the holding area before being discharged home or transferred to an inpatient care area. If any bleeding occurred during cushion deflation, re-inflation with an additional 1-2 mL of air was performed until hemostasis was achieved, and the weaning period was extended by an additional 15-min interval. If any new hematoma was evident during cushion deflation, manual pressure was applied, and a cardiology fellow was called to assess the patient and determine subsequent management.
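The weaning protocol is simple enough to express as a timetable. The function below is our own illustration (times in minutes from TR Band application, ignoring any re-inflation extensions) of the schedule described above for the three study groups.

```python
def weaning_schedule(compression_min):
    """Earliest event times (minutes after HCD application) under the protocol:
    full compression, three partial deflations 15 min apart, 15 min fully
    deflated, band removal, then post-removal observation (60 min for the
    0.5-h group, 30 min for the 1-h and 2-h groups)."""
    t = compression_min
    events = [("deflate one-third", t),
              ("deflate two-thirds", t + 15),
              ("fully deflated", t + 30)]
    removal = t + 45                       # 15 min observed while fully deflated
    observation = 60 if compression_min == 30 else 30
    events += [("band removed", removal),
               ("eligible for discharge", removal + observation)]
    return events

for minutes in (120, 60, 30):
    print(minutes, weaning_schedule(minutes))
```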
Data collection
For each participant, we recorded basic demographic information, the type of procedure performed (diagnostic coronary angiography only, PCI, or other) and what anticoagulant was used. If an activated clotting time (ACT) was performed as part of PCI or to guide safe removal of sheaths at additional access sites, we recorded the time of measurement and the value. Timestamps were recorded for initial application of the HCD, each phase of deflation of the HCD cushion, the time of removal of the HCD, and the time of discharge or transfer. If any bleeding or hematoma occurred, the time of this event was also recorded. Lastly, we recorded occurrence of large hematoma, severe discomfort, transfusion, surgical consult or unplanned admission for radial arterial access site complications.
Using the above protocol, we collected data in three sequential stages. During stage 1 (January through February 2017), we utilized our pre-existing standard timeframe of 2 h of compression prior to deflation of the HCD cushion in order to ascertain our baseline level of events. During stage 2 (February through April 2017), we used a compression time of 30 min. During stage 3 (September through October 2017), we used 1 h. The weaning protocol was identical in all stages. However, post-weaning observation differed: it was mandated as at least 60 min in the 0.5-h weaning group and 30 min in the 1-h weaning group to ensure patient safety, since this degree of rapid removal had not previously been studied in this population.
Statistical methods
We performed statistical analyses using JMP/13 software (SAS Institute Incorporated, Cary, North Carolina). For patient characteristics and outcome events, proportions were compared using Fisher's exact tests, and continuous variables were compared using analysis of variance and t-tests. Elapsed time variables were compared using Wilcoxon tests and the Kruskal-Wallis test.
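As a rough illustration of this analysis strategy (not the authors' JMP scripts), the Python/SciPy snippet below compares event proportions with Fisher's exact test and elapsed times across three groups with the Kruskal-Wallis test. The 2×2 counts are only approximate reconstructions from the rounded group sizes and bleeding percentages reported in the Results, and the elapsed-time vectors are invented placeholders.

from scipy.stats import fisher_exact, kruskal

# Approximate 2x2 table: bleeding vs. no bleeding (reconstructed from rounded percentages).
events_vs_none = [[8, 91],    # ~8% of 99 patients
                  [25, 107]]  # ~19% of 132 patients
odds_ratio, p_fisher = fisher_exact(events_vs_none)

# Invented elapsed times to discharge (hours) for three groups, for illustration only.
times_2h = [3.4, 3.6, 3.5, 3.8, 3.3]
times_1h = [2.9, 3.1, 3.0, 3.2, 2.8]
times_05h = [3.0, 2.9, 3.1, 3.0, 2.8]
h_stat, p_kruskal = kruskal(times_2h, times_1h, times_05h)

print(f"Fisher exact: OR = {odds_ratio:.2f}, p = {p_fisher:.3f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kruskal:.3f}")

Because the counts are reconstructed from rounded values, the Fisher p-value will only approximate, not reproduce, the one reported in the Results.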
Results
A total of 354 patients were included in our study; there were 99 patients in the 2-h weaning group, 132 patients in the 1-h weaning group, and 123 patients in the 0.5-h weaning group. Baseline demographic and clinical characteristics are shown in Table 1 and were similar across all three groups. Notably, there were no differences in proportion of PCI, heparin use, or mean ACT (if performed) across the three groups.
As shown in Table 2, the rate of bleeding in the baseline 2-h weaning group (8%) was lower than in the 1-h weaning group (19%, P = 0.01), and was not significantly different than in the 0.5-h weaning group (12%, P = 0.08.) There was no statistically significant difference in the rate of hematoma alone or the composite rate of bleeding or hematoma across all three groups. Notably, the serious complications endpoint (a composite of severe discomfort, severe hematoma, need for blood transfusion, surgical consultation, or unplanned admission) did not occur in any of the three groups.
We theorized that minor bleeding events occurring early during the weaning process were less dangerous than bleeding events occurring late in the weaning process or after removal of the HCD, since the closer a bleeding event occurred to a patient's discharge, the greater the risk another bleeding event could occur after discharge and be unable to be immediately acted upon by medical personnel. For this reason, we analyzed our data as depicted in Figure 2 and grouped bleeding and hematoma events by the phase of weaning during which they occurred among our three groups. Reassuringly, only two total bleeding events occurred after the HCD was removed; both of these events were adjudicated with manual medical record review and it was discovered that in both cases, the patients had developed bleeding immediately after using the bathroom, and it is suspected that they may have been nonadherent to the prescribed range of motion and activity restrictions for their involved wrist. The time from HCD application to discharge was shorter in the 0.5-h weaning group (median: 3.0 h, P < 0.01) and 1-h weaning group (median: 3.0 h, P < 0.001) compared with the baseline 2-h weaning group (median: 3.5 h.) There was no significant difference in time to discharge between the 0.5-h weaning group and the 1-h weaning group (Fig. 3).
Because anticoagulants and antiplatelet agents are used routinely during and after PCI procedures, and could contribute to bleeding or hematoma formation, we also examined these data in aggregate, detailed in Table 3. There were no significant differences in rates of bleeding or hematoma across diagnostic coronary angiography, PCI, or other cardiac catheterization procedures.
Discussion
Although Carrington et al previously performed a similar analysis of accelerated removal of an HCD following cardiac catheterization [8], to our knowledge, our study is unique in that it included patients undergoing PCI and utilized modern patent hemostasis technique. Our study showed that reducing the duration of HCD application, although associated with increased rates of minor bleeding, was not associated with any increase in serious complications. The vast majority of bleeding and hematoma events occurred during serial deflations of the HCD air cushion, and were thus immediately recognized and abated with prompt re-inflation of the air cushion and waiting an additional 15 min before continuing; only four patients experienced recurrent bleeding (1%.) Of note, the lack of a difference in time to discharge between the 1-h compression group and the 0.5-h compression group (Fig. 3) is likely explained by the different mandated post-removal observation times. In the 1-h group, patients were mandated to be observed for at least 30 min after HCD removal prior to discharge. In the 0.5-h group, this interval was 60 min. This longer interval was chosen to ensure patient safety and may explain the lack of a difference between discharge times of the 0.5-h and 1-h compression groups.
Our results confirm that reducing the duration of HCD application is associated with expedited time to discharge. Reducing time to discharge is beneficial for a number of reasons provided it is safe, particularly if it improves patient satisfaction and reduces healthcare costs by making a catheterization laboratory more efficient. A recent NCDR study showed that same-day discharge for PCI patients was associated with cost savings averaging $3,497 per procedure compared with non same-day discharge [9]. Although not evaluated in our study, an additional expected benefit of reduced duration of HCD compression is a decreased rate of radial artery occlusion [4]. It is worth noting that we do not advocate for same-day discharge for all PCI patients; however, in appropriately selected patients this is a safe and less costly disposition.
Our study has several limitations. With roughly 100 participants in each of our three groups, our study was underpowered to detect small differences in event rates. The lack of randomization and blinding also limited definitive causal inference. However, these same characteristics enabled us to perform the study relatively quickly and with limited resources. As a result, our hospital has implemented a 0.5-h weaning protocol for all transradial cardiac catheterization procedures. In post-implementation surveillance, as was the case during our study, there have been no serious complications. We hope that the reassuring results of this study will serve as a pilot for a more definitive randomized controlled trial.
Conclusions
Compared with 120 min of hemostatic device compression, an accelerated protocol of 60 min of compression or 30 min of compression yielded no serious complications in our cohort of patients status post transradial cardiac catheterization. As expected, the accelerated protocols were also associated with reduced time to discharge.
Figure 2. Bleeding and hematoma events grouped by weaning phase. As described in the methods section, hemostatic compression devices (HCDs) were removed with three serial deflations of one-third of the volume of air in the HCD cushion, 15 min apart, constituting phases 1-3 of the weaning protocol. Bleeding or hematoma could be noted during each of these phases, once the HCD was physically removed, or after the HCD was removed but before the patient was discharged.
Figure 3. Time to discharge from HCD application. These cumulative frequency distributions illustrate how much time elapsed between hemostatic compression device (HCD) application and discharge for each of the three groups evaluated. Patients in the 1-h and 0.5-h weaning groups were discharged sooner than in the 2-h weaning group (P < 0.001 and P < 0.01, respectively, by Wilcoxon test). There was no significant difference between the 1-h and 0.5-h groups (P = NS by Wilcoxon test). NS: not significant.
Supplementary Material
Supplementary Material 1. Radial Artery TR Band Removal Protocol and Data Collection Tool. | 2019-11-28T12:25:24.161Z | 2019-11-23T00:00:00.000 | {
"year": 2019,
"sha1": "a97a918555299daa318b46bc4255291872b2e563",
"oa_license": "CCBYNC",
"oa_url": "https://cardiologyres.org/index.php/Cardiologyres/article/download/953/989",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "68c2787ff8e16b304eed5420560524b0986b7880",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14138978 | pes2o/s2orc | v3-fos-license | Sudden Death and Birth of Entanglement Effects for Kerr-Nonlinear Coupler
We analyse the entanglement dynamics in a nonlinear Kerr-like coupler interacting with an external environment. Whenever the reservoir is in a thermal vacuum state the entanglement (measured by concurrence for a two-qubit system) exhibits regular oscillations of decreasing amplitude. In contrast, for thermal reservoirs we can observe dark periods in concurrence oscillations (which can be called a "sudden death" of the entanglement) and the entanglement rebuild (which can be named the "sudden birth" of entanglement). We show that these features can be observed when we deal with a 2-qubit system as well as a $2\otimes 3$ system.
Introduction
One of the fundamental areas of interest in quantum information theory concerns the generation of entanglement in various quantum systems and the influence of the external environment on disentanglement processes. As decoherence processes in entanglement evolution are unavoidable consequences of system-environment interactions, this problem has to be considered seriously in any real physical realisation of quantum computing devices. Especially interesting in this context are the recently described phenomenon of sudden death of entanglement [1,2], which consists in the disappearance of entanglement in finite time rather than the exponential decoherence decay of an individual qubit, and the sudden birth of entanglement [3], characterised by the reappearance of entanglement after its death. Both these phenomena are caused by the interaction of the pair of entangled qubits with the environment. For the systems described by Yu and Eberly [2] and also by Ficek and Tanaś [3], the entangled pair was formed by two 2-level atoms. The initially entangled atoms were located in separate cavities [2] (initially in the vacuum state) and had no direct interactions with each other during the evolution. The finite-time decoherence of this system with non-local properties is therefore caused by spontaneous emission processes. Similarly, the sudden birth of entanglement in the system considered in [3] (composed of two initially entangled 2-level atoms inside a single cavity in a vacuum state) is a consequence of interaction with the environment. A particularly crucial role is played by the formation of collective states of the two atoms, followed by collective damping, which cannot be neglected whenever the atoms are close to each other. Moreover, a phenomenon of delayed birth of entanglement in the previously mentioned system was described in [4]. It was also shown that a thermal reservoir [5,6] or a reservoir in a squeezed vacuum state [5] always causes sudden death of entanglement of a two-qubit system (two 2-level atoms) for certain classes of initial entangled states.
The decoherence process of entanglement between two harmonic oscillators coupled to a reservoir with a spectral density of the Ohmic type was considered in [7]. Various kinds of disentanglement evolution, depending on the characteristic features of that environment, were considered. Both regimes of evolution, (i) sudden death and (ii) sudden death followed by entanglement revival, were identified, as well as the usually observed regime with no sudden-death behaviour.
Some systems consisting of harmonic oscillators were also described previously in order to identify their various disentanglement properties. A sudden-death phenomenon can be obtained within the Markovian approximation [8] and, in some cases, under non-Markovian dynamics [9]; in other cases a non-Markovian reservoir allows the entanglement to persist for longer times than a Markovian one. Under some conditions, the final state of the initially entangled pair can stay entangled (but with a lower value of entanglement). It is also possible to obtain entanglement of an initially disentangled pair via the interaction with a Markovian reservoir [10].
In the present paper we will focus on the problem of evolution of entanglement generated between two anharmonic oscillators placed inside a two-mode cavity. We will show that it is possible to obtain a finite-time disentanglement (a sudden death) of initially entangled qubits and, for some cases, also a revival of entanglement (a sudden birth). We will consider not only the zero-temperature reservoir but the thermal one as well, and we will show that the latter type of external environment is more suitable for observing the phenomena of sudden death of entanglement and its revival.
The model
We deal with the model of Kerr coupler described in [11]. It was shown in that paper that when the initial state of the system was the excited state $|2\rangle_a|0\rangle_b$, the system could be used as a generator of maximally entangled states (MES). From the quantum information theory point of view it was a qubit-qutrit system. The MES were generated most efficiently when the damping process of the generated photon states was negligible. The damping process usually causes decoherence in the system. This fact was shown on the basis of analysis of the time dependence of the fidelity between the quantum state generated and the Bell-like states.
In the present considerations we will focus on the dynamics of the initially generated MES (in the above-described system), which is then left in a damped cavity. We will show that despite the damping processes the system is able to spontaneously regenerate entanglement after its sudden vanishing. In a two-atom system such a phenomenon is called a "sudden birth" of entanglement and it was discussed in [3]. A "sudden death" of the entanglement between two atoms was studied, for example, in [?]. In our case the situation is qualitatively different as we deal with the entanglement between photon states and damping takes place within these photon states.
The system under consideration is described by the hamiltonian (3). As usual, $\hat{a}$ ($\hat{a}^\dagger$) and $\hat{b}$ ($\hat{b}^\dagger$) are photon annihilation (creation) operators in modes $a$ and $b$, respectively, $\chi_a$ ($\chi_b$) are the nonlinearity parameters, $\alpha$ is the strength of the coupling between the external coherent field and the cavity mode $a$, and $\epsilon$ describes the coupling between the two oscillators. The non-zero nonlinear coupling between the modes of the coupler is responsible for generation of the entanglement between the two-mode photon states. When the initial state of the system is an excited two-mode one, for values of $\epsilon$ and $\alpha$ small when compared with $\chi$, the form of hamiltonian (3) indicates that the states $|2\rangle_a|0\rangle_b$ and $|0\rangle_a|2\rangle_b$ are involved in the system dynamics. Additionally, as a result of the action of the external field, the state $|1\rangle_a|2\rangle_b$ must also be considered. As has been proved in [11], the maximum entanglement between the states $|2\rangle_a|0\rangle_b$ and $|0\rangle_a|2\rangle_b$ can be formed, and entanglement between the $|2\rangle_a|0\rangle_b$ and $|1\rangle_a|2\rangle_b$ states is also possible.
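Hamiltonian (3) itself is not reproduced in this text. Judging from the parameters enumerated above, it presumably has the standard form of two Kerr oscillators with a linear inter-mode coupling and an external drive of mode $a$; the expression below is a reconstruction, so prefactors (for instance factors of $\hbar$ or $1/2$ in the Kerr terms) may differ from the original:

$$\hat{H} = \chi_a \left(\hat{a}^{\dagger}\right)^{2}\hat{a}^{2} + \chi_b \left(\hat{b}^{\dagger}\right)^{2}\hat{b}^{2} + \epsilon\,\hat{a}^{\dagger}\hat{b} + \epsilon^{*}\,\hat{a}\hat{b}^{\dagger} + \alpha\,\hat{a}^{\dagger} + \alpha^{*}\,\hat{a}.$$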
The influence of damping
As we want to describe the system in a damping environment we should use the density matrix approach. In the standard (Born and Markov) approximations, the adequate master equation can be expressed in the well-known Lindblad form, where the operators $\hat{C}_k$ describe the damping in the modes $a$ and $b$. We have introduced the interaction with thermal baths of non-zero temperature via the $\hat{C}_{1n}$ and $\hat{C}_{2n}$ terms, which include the mean photon numbers $n_a$ and $n_b$ of the system's environment, whereas $\hat{C}_1$ and $\hat{C}_2$ describe the interaction with a zero-temperature bath.
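The master equation and the definitions of the operators $\hat{C}_k$ likewise did not survive extraction. A reconstruction consistent with the description above is the standard Lindblad form, with $\gamma_a$ and $\gamma_b$ introduced here as assumed notation for the cavity damping rates:

$$\frac{d\hat{\rho}}{dt} = -\frac{i}{\hbar}\left[\hat{H},\hat{\rho}\right] + \sum_{k}\left(\hat{C}_{k}\hat{\rho}\hat{C}_{k}^{\dagger} - \frac{1}{2}\left\{\hat{C}_{k}^{\dagger}\hat{C}_{k},\hat{\rho}\right\}\right),$$

$$\hat{C}_{1} = \sqrt{\gamma_a\left(n_a+1\right)}\,\hat{a}, \qquad \hat{C}_{1n} = \sqrt{\gamma_a n_a}\,\hat{a}^{\dagger}, \qquad \hat{C}_{2} = \sqrt{\gamma_b\left(n_b+1\right)}\,\hat{b}, \qquad \hat{C}_{2n} = \sqrt{\gamma_b n_b}\,\hat{b}^{\dagger}.$$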
The degree of entanglement can be described via various parameters. One of them is the concurrence, defined for a two-qubit system by Wootters [12]. The value of the concurrence changes from 0 for completely unentangled states to 1 for maximally entangled ones. This quantity is defined as $C = \max\{0,\, \sqrt{\lambda_1} - \sqrt{\lambda_2} - \sqrt{\lambda_3} - \sqrt{\lambda_4}\}$, where the parameters $\lambda_i$ (taken in decreasing order) are the eigenvalues of the matrix $R$ constructed from the density matrix $\rho$ (obtained directly from the master equation) via the relation $R = \hat{\rho}_c \tilde{\rho}_c$. Here $\hat{\rho}_c$ is the density matrix of the two-qubit system, and $\tilde{\rho}_c$ can be obtained from the relation $\tilde{\rho}_c = (\sigma_y \otimes \sigma_y)\, \hat{\rho}_c^{\,*}\, (\sigma_y \otimes \sigma_y)$, where $\sigma_y$ is the well-known $2\times 2$ Pauli matrix.
While for the two-qubit system the definition of the concurrence can be applied straightforwardly, for a qubit-qutrit system one first has to extract a two-qubit matrix from the system's density matrix using a projection operator, according to [13]. In this way we obtain the reduced density operator $\hat{\rho}_c$ and will investigate the entanglement for the two-qubit case via inspection of the formation of the two Bell-like states of Eq. (11). One should remember that apart from these two states there is also a possibility of formation of other entangled states, which in general influence the total amount of entanglement in the system.
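The Bell-like states referred to as Eq. (11) are also missing from this text. Since the entanglement of interest forms between the $|2\rangle_a|0\rangle_b$ and $|0\rangle_a|2\rangle_b$ states, they are presumably the equally weighted superpositions written below; this is an assumption, and the relative phase convention may differ in the original:

$$|B_{1,2}\rangle = \frac{1}{\sqrt{2}}\left(|2\rangle_a|0\rangle_b \pm |0\rangle_a|2\rangle_b\right).$$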
We will analyse the process of sudden disentanglement and the possible reappearance of entanglement in two different situations. At first, we assume that the coupler, initially prepared in an entangled state, is located inside a cavity and we can ignore the external coherent field (α = 0). In the second case, we allow the external field to interact with the coupler; in such a situation we supply the coupler with energy, which leaks through the cavity mirrors (one should remember that it is not the external field that produces the entanglement between the coupler parts).
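A minimal numerical sketch of this kind of calculation is given below in Python with the QuTiP library; it is an illustration written for this text rather than the software used in the original work, and the Fock cutoff, coupling and damping rates, drive strength and initial state are placeholder assumptions. It builds a driven, damped Kerr coupler along the lines described above, evolves an initially entangled two-mode state, keeps the logical-qubit block spanned by $\{|0\rangle, |2\rangle\}$ in each mode, and evaluates the Wootters concurrence.

import numpy as np
from qutip import destroy, qeye, tensor, basis, mesolve

def wootters_concurrence(rho4):
    # Concurrence of a 4x4 two-qubit density matrix (standard Wootters formula).
    sy = np.array([[0.0, -1j], [1j, 0.0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho4.conj() @ yy
    evals = np.linalg.eigvals(rho4 @ rho_tilde).real
    lam = np.sqrt(np.clip(np.sort(evals)[::-1], 0.0, None))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

N = 4                       # Fock cutoff per mode (assumption)
chi_a = chi_b = 1.0         # Kerr nonlinearities (set the time unit)
eps, alpha = 0.01, 0.0      # inter-mode coupling and external drive (illustrative)
gamma_a = gamma_b = 0.01    # damping rates (illustrative)
n_a, n_b = 1.0, 1.0         # mean thermal photon numbers of the baths

a = tensor(destroy(N), qeye(N))
b = tensor(qeye(N), destroy(N))
H = (chi_a * a.dag()**2 * a**2 + chi_b * b.dag()**2 * b**2
     + eps * (a.dag() * b + a * b.dag()) + alpha * (a.dag() + a))
c_ops = [np.sqrt(gamma_a * (n_a + 1)) * a, np.sqrt(gamma_a * n_a) * a.dag(),
         np.sqrt(gamma_b * (n_b + 1)) * b, np.sqrt(gamma_b * n_b) * b.dag()]

# Start from an already entangled two-mode state instead of generating it dynamically.
psi0 = (tensor(basis(N, 2), basis(N, 0)) + tensor(basis(N, 0), basis(N, 2))).unit()
result = mesolve(H, psi0, np.linspace(0.0, 50.0, 501), c_ops)

def logical_block(rho_full):
    # Keep the {|0>,|2>}_a x {|0>,|2>}_b block of the full density matrix and renormalise it.
    idx = [0 * N + 0, 0 * N + 2, 2 * N + 0, 2 * N + 2]
    block = rho_full.full()[np.ix_(idx, idx)]
    tr = np.trace(block).real
    return block / tr if tr > 1e-12 else block

C_t = [wootters_concurrence(logical_block(rho)) for rho in result.states]
print(f"C(0) = {C_t[0]:.3f}, C(t_final) = {C_t[-1]:.3f}")

Plotting C_t against time should display the kind of damped concurrence oscillations discussed below, although the quantitative behaviour depends entirely on the placeholder parameters chosen here.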
System without excitation (α = 0)
Before we deal with the problem of entanglement in a qutrit-qubit system, first we will focus on the situation when the interaction with external coherent field is not present (α = 0).
Physically, this corresponds to a faster (than in the α ≠ 0 case) leakage of energy from the coupler system through the cavity mirrors; moreover, it means that there are only a few states involved in the dynamics. For the case without damping these are the states $|0\rangle|2\rangle$ and $|2\rangle|0\rangle$, whereas when damping is present we also have to deal with the states $|0\rangle|0\rangle$, $|0\rangle|1\rangle$, $|0\rangle|2\rangle$, $|1\rangle|0\rangle$, and $|2\rangle|0\rangle$.
When we include the damping processes, for the initially entangled state and for n_a = n_b = 0 we can observe a continuous process of concurrence C(t) oscillations with decreasing amplitude, as expected (see Fig. 1). We can identify this decoherence behaviour as a non-sudden-death process, and it is preserved for the vacuum reservoir case only. The whole population is finally transferred to the product state $|0\rangle_a|0\rangle_b$. For the thermal reservoirs we can observe that the decoherence changes in a different way (see Fig. 2). We cannot observe the continuous and periodical vanishing of the concurrence amplitude any more. The inset of Fig. 2 shows that it is the interaction between the system and the thermal reservoir that causes a sudden disentanglement in the system, and moreover, it may cause a revival of the entanglement after that death. It can be seen from Fig. 3, showing maps of the concurrence for various numbers of photons. The plots are generated in such a way that dark areas correspond to the value C = 0 and the bright ones to any other (higher than zero) values of concurrence. The individual thin dark lines appearing in the maps reflect the fact that C(t) reaches zero from time to time during the oscillations (in such a way as in Fig. 1, for example), and as such are not interesting for our considerations. Only dark areas surviving for some time indicate that a sudden death has occurred. Any bright areas after that indicate a revival of the entanglement in the system. We can inspect this fact in more detail by analysing Fig. 4, which shows the concurrence for n_b = 3 and for various n_a; with increasing values of n_a and n_b the time between the death and the sudden birth of the entanglement becomes longer.
System excited by an external field (α ≠ 0)
The second situation we want to focus on is that in which the external coherent field is switched on. This field's role is to preserve the energy in the system composed of the two oscillators and the cavity, and to diminish the role of the leakage of energy through the cavity mirrors. One should remember that it is not the external field that causes the entanglement in the system.
The entanglement arises due to the nonlinear interaction between the oscillators, which is present during the whole process [11]. An external coherent field, apart from reducing the role of damping, is also responsible for engaging other two-mode states in the dynamics. The entanglement in the coupler system without damping appears as a result of formation of the two Bell-like states (11) and, additionally, of an entangled state (denoted $|B_3\rangle$ below) formed between the $|2\rangle_a|0\rangle_b$ and $|1\rangle_a|2\rangle_b$ states. These states are formed alternately and the details can be found in [11]. When the coupler interacts with the reservoir, the whole population is finally transferred to a product state; for the thermal reservoirs the previously mentioned features of sudden "death" and sudden "birth" of the entanglement may be revealed, and in fact they can even be enhanced as the coupler is continuously supplied with energy.
As the reservoir is assumed to be a zero-temperature bath (in both modes $a$ and $b$), we can observe concurrence changes of a slightly different character. Two types of concurrence maxima appear; see the inset (a$_1$) of Fig. 5. These are two successive maxima of larger amplitude (they correspond to the formation of the Bell-like states (11)), and between two subsequent pairs of these maxima there are groups of maxima with significantly smaller amplitude (they are the effect of formation of other entangled states, between the $|2\rangle|0\rangle$ and $|1\rangle|2\rangle$ states). In this way we deal with a physical system in which the previous entanglement is transferred out of the two-qubit system from time to time, because one of the states of the entangled pair, $|2\rangle|0\rangle$, is coupled to the remaining state $|1\rangle|2\rangle$ of the qubit-qutrit system. One should keep in mind that the system interacts with the thermal bath, which can significantly change the concurrence dynamics. This feature cannot be observed when the values of the damping constants increase (see the inset (a$_2$) of Fig. 5).
We have investigated the influence of the external field strength on the C(t) evolution for a thermal bath with n_a ≠ 0 and n_b ≠ 0. An exemplary map is presented in Fig. 6. We can see that even for relatively small values of α there are explicit areas of no entanglement, and after some time the entanglement suddenly appears again. Moreover, in Fig. 7 we can clearly see the effect of the thermal bath on the entanglement formation. For n_a = n_b = 1, when including the external coupling α, we can obtain a significantly enhanced period of disentanglement in the two-qubit system. The period of time when C(t) = 0 becomes longer and longer with increasing values of α. For instance, when α ≈ 0.3 (solid line in Fig. 7), between t = 9 and t = 14 the system is disentangled, and after t = 14 the entanglement appears again. It means that during that period of time the entanglement is transferred to the state $|B_3\rangle$. Afterwards, the population returns to the previously populated two-qubit system.
Conclusions
The possibility of changing the character of entanglement evolution caused by the external system's environment was considered. In particular, the entanglement obtained in a Kerr-coupler system with nonlinear interaction between its parts, and the process of decoherence caused by a reservoir were analysed. At first the maximally entangled state in the system was generated, and after that the system started to interact with its external environment.
The two cases, namely that of the cavity interactions with an environment of zero temperature (n_a = n_b = 0) and the second one, when n_a and (or) n_b are different from zero, were discussed. The degree of entanglement was studied via the concurrence for an appropriately chosen two-qubit system. Moreover, analysis of the entanglement evolution for two physically different situations was performed. The first one was when the weak interaction with the external coherent field was absent during the whole decoherence process, and the second one, when such an excitation was present (for recollection, an external field does not generate entanglement in the system). We have found for both cases that for a cavity in a vacuum state the process of entanglement decoherence is continuous and manifests itself as oscillations of decreasing amplitude. Qualitative changes in the entanglement evolution for non-zero-temperature external bath modes were observed. When there are additional photons inside the cavity, it was found possible to observe a sudden disappearance of entanglement in a two-qubit system, which can be called a "sudden death" of entanglement, and after some time its reappearance, which can be called a "sudden birth" of entanglement.
When α ≠ 0 it is possible that, due to the interactions with photons inside the cavity, the entanglement can be transferred for some time to another entangled state, $|B_3\rangle$, in which one of the two-qubit states is involved, and after some time it returns to the system.
For the case when α = 0 there is no additional coupling to the $|B_3\rangle$ state. A non-zero number of photons in the thermal bath leads to population of other two-mode coupler states which (due to the interactions between the nonlinear oscillators) change the population of the two-qubit system during the evolution. In such a way the interactions with these additional two-mode states (and between them) lead to population transfers from them to the Bell states $|B_1\rangle$ and $|B_2\rangle$, causing the revival of the entanglement in the system after its disappearance.
"year": 2009,
"sha1": "0a01c0b28ddabfff75806e15970d891aa5e8bcac",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0905.4652",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0a01c0b28ddabfff75806e15970d891aa5e8bcac",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
16138848 | pes2o/s2orc | v3-fos-license | Distinct but Spatially Overlapping Intestinal Niches for Vancomycin-Resistant Enterococcus faecium and Carbapenem-Resistant Klebsiella pneumoniae
Antibiotic resistance among enterococci and γ-proteobacteria is an increasing problem in healthcare settings. Dense colonization of the gut by antibiotic-resistant bacteria facilitates their spread between patients and also leads to bloodstream and other systemic infections. Antibiotic-mediated destruction of the intestinal microbiota and consequent loss of colonization resistance are critical factors leading to persistence and spread of antibiotic-resistant bacteria. The mechanisms underlying microbiota-mediated colonization resistance remain incompletely defined and are likely distinct for different antibiotic-resistant bacterial species. It is unclear whether enterococci or γ-proteobacteria, upon expanding to high density in the gut, confer colonization resistance against competing bacterial species. Herein, we demonstrate that dense intestinal colonization with vancomycin-resistant Enterococcus faecium (VRE) does not reduce in vivo growth of carbapenem-resistant Klebsiella pneumoniae. Reciprocally, K. pneumoniae does not impair intestinal colonization by VRE. In contrast, transplantation of a diverse fecal microbiota eliminates both VRE and K. pneumoniae from the gut. Fluorescence in situ hybridization demonstrates that VRE and K. pneumoniae localize to the same regions in the colon but differ with respect to stimulation and invasion of the colonic mucus layer. While VRE and K. pneumoniae occupy the same three-dimensional space within the gut lumen, their independent growth and persistence in the gut suggests that they reside in distinct niches that satisfy their specific in vivo metabolic needs.
Introduction
Antibiotic-resistant bacteria such as vancomycin-resistant Enterococcus faecium (VRE) and multi-drug resistant Klebsiella pneumoniae represent a growing concern in hospitals worldwide. In the United States, Enterococcus spp. and K. pneumoniae account for nearly 10% of all hospitalacquired infections and are common causes of bacteremia [1]. Vancomycin resistance among enterococci has markedly increased since it was first described in the mid-1980s [2]. Even more alarming, however, is the increasing prevalence of carbapenem-resistant Enterobacteriaceae, primarily K. pneumoniae, rendering treatment of these infections very challenging [3]. Broad-spectrum antibiotic exposure, immune suppression and intravascular devices increase the risk for colonization and infection with one or more antibiotic-resistant bacteria [4]. In hospitalized patients, the intestine can become densely colonized with drug-resistant organisms. While colonization by itself does not directly cause disease, in the event of injury to the mucosal barrier colonizing bacteria may translocate beyond the intestinal tract, leading to deep tissue and bloodstream infections [5]. Consistent with this, studies have shown that intestinal colonization precedes bloodstream infection with VRE or K. pneumoniae, suggesting that loss of colonization resistance represents an early step in the progression of these infections [6][7][8].
Colonization resistance refers to the ability of the microbiota to prevent expansion and persistence of exogenously acquired bacterial species, a pivotal defense mechanism that can be impaired by antibiotic treatment [9,10]. Changes in microbiota composition, mainly the elimination of specific groups of anaerobic bacteria, lead to VRE domination of the gastrointestinal tract in antibiotic-treated mice [7,11]. Similarly, colonization of mice with K. pneumoniae is facilitated by antibiotic administration [12,13]. Antibiotic-mediated depletion of commensals also decreases production of mucus and antimicrobial effector molecules, potentially increasing the risk for bacterial invasion of the intestinal epithelium [14][15][16][17][18].
The interactions between different bacterial species in the gut are complex. Studies of mice singly colonized with Bacteroides thetaiotaomicron, Bifidobacterium longum or co-colonized with both demonstrated that these distinct bacterial species impact each other's transcriptional profile [19]. In some circumstances, bacterial species work together in assembly-line fashion to dismantle complex carbohydrates, where the product of one species becomes the substrate for another [20]. In other circumstances, however, microbial species within the intestine compete for limited nutrients and space in order to persist. For example, similarities in carbon utilization by Escherichia coli and Citrobacter rodentium lead to competition between these species. Niche specialization also results in competition between the same, but not different, strains of Bacteroides spp. [21,22]. Metabolic overlap and niche restriction, however, can be shared by distinct bacterial species as is the case with Salmonella typhimurium and Clostridium difficile which benefit from the transient abundance of the same sugars following antibiotic treatment [23]. Yet, the extent to which metabolic competition between antibiotic-resistant microbes contributes to their prevalence and persistence is unknown.
In this study, we investigated the interactions between VRE, K. pneumoniae and the host in a murine model of intestinal colonization. Our goal was to determine whether intestinal domination by either VRE or K. pneumoniae would provide colonization resistance against the other. We found that VRE and K. pneumoniae were able to co-exist despite occupying the same intestinal sites, while restoration of a normal microbiota by means of a fecal transplant displaced both organisms effectively but at different rates. Antibiotic treatment of mice reduced the thickness of the colonic mucin layer, a defect that was corrected by K. pneumoniae but not VRE colonization. However, K. pneumoniae was more effective than VRE at invading the mucus layer and translocating to mesenteric lymph nodes. Our findings demonstrate that highly antibiotic-resistant bacteria such as VRE and K. pneumoniae non-competitively cooccupy the colon but establish distinct relationships with the mucosal epithelium.
VRE and K. pneumoniae coexist in the gastrointestinal tract
We previously showed that treatment with ampicillin, a broad-spectrum antibiotic, renders C57BL/6 mice from Jackson Laboratories highly susceptible to VRE colonization [7]. Inoculation of ampicillin-treated mice with K. pneumoniae also resulted in dense colonization of the colon, with approximately 10^10 colony-forming units (CFU) per gram of feces (S1B Fig). To assess the impact of VRE and K. pneumoniae on each other's ability to colonize the gut, we performed a series of experiments in which ampicillin-treated mice were pre-colonized with VRE or K. pneumoniae by gastric lavage and challenged with the other species 3 days later, at which point the initial colonizer had become established in the intestine and reached maximum density (S1B Fig). Fecal levels of K. pneumoniae and VRE were determined by culture beginning one day and continuing through 21 days post infection (p.i.) with the challenge species. We found that dense colonization with VRE did not significantly impact K. pneumoniae colonization levels at any time point over the course of the experiment. However, fewer K. pneumoniae were recovered from the feces of co-colonized mice on day 1 p.i., suggesting a lag in establishment of K. pneumoniae in the presence of VRE shortly after infection (Fig 1B). On day 21, mono- and co-colonized animals had comparable K. pneumoniae burden in the proximal and distal small intestine (duodenum and ileum, respectively) as well as in the large intestine (cecum) (Fig 1D). Under this experimental condition, introduction of K. pneumoniae did not reduce VRE density in the intestine. Rather, VRE CFU levels were, for the most part, similar in mono- and co-colonized mice, although a trend toward increased VRE burden was observed on days 1, 14 and 21 post challenge (Fig 1C and 1E). In the converse experiment, challenge of K. pneumoniae-dominated mice with VRE resulted in a transient but significant increase in VRE density 1 day p.i. but then equivalent density in all intestinal compartments when compared to antibiotic-treated mice lacking K. pneumoniae (Fig 2B and 2D). In addition, we observed no difference in K. pneumoniae colonization between mice challenged with VRE and mice monocolonized with K. pneumoniae (Fig 2C and 2E). Overall, our results demonstrate that VRE and K. pneumoniae neither compete nor synergize with each other upon dense colonization of the murine gut.
VRE and K. pneumoniae achieve similar densities in the large intestine of co-colonized mice
Because VRE becomes the dominant member of the intestinal microbiota within days after administration to antibiotic-treated mice [7], we examined how colonization by K. pneumoniae and subsequent VRE challenge would impact their relative proportions at different time points p.i. On day 1 post challenge, VRE represented 30% of the total fecal bacteria in mono-colonized mice, with the remaining 70% constituting bacteria that remained and expanded following antibiotic treatment. In mice previously colonized with K. pneumoniae, VRE represented 10% of the microbiota and K. pneumoniae dominated. However, VRE rapidly expanded in the colon of co-colonized animals and within 4 days of inoculation, VRE and K. pneumoniae achieved roughly equal densities and remained stable for up to 21 days (Fig 3A). This ratio was specific to the large intestine since both bacterial species were equally abundant in the cecum and colon but VRE dominated over K. pneumoniae in the ileum (Fig 3B).
Fecal bacteriotherapy eliminates established VRE and K. pneumoniae intestinal domination
Transplantation of feces from donor mice that have not been treated with antibiotics can eliminate VRE from the intestine of densely colonized mice and, in humans, fecal transplantation from healthy donors cures patients with recurrent Clostridium difficile infection [11,24]. To determine whether the kinetics of VRE and K. pneumoniae clearance from the murine intestine following fecal transplantation are similar or distinct, we colonized ampicillin-treated mice with VRE and K. pneumoniae concurrently, terminated ampicillin treatment and treated mice with fecal microbiota transplants (FMT) or PBS on three consecutive days ( Fig 4A). VRE and K. pneumoniae colonization levels were similar in the feces before FMT administration and remained elevated in mice that received PBS instead of FMT ( Fig 4B). However, following FMT treatment, K. pneumoniae density in fecal pellets decreased within one day and became undetectable within 7 days in all mice ( Fig 4C). VRE, on the other hand, was cleared in 60% of the mice but reduced by 3 logs in the remaining 40%. Increased colonization resistance against K. pneumoniae as opposed to VRE was also observed in mice that had not been treated with antibiotics (S1A Fig). K. pneumoniae was also cleared more effectively than VRE from the duodenum, ileum and cecum of FMT-treated animals while the density of these bacterial species remained high in PBS-treated mice (Fig 4D and 4E). These findings suggest that the mechanisms of microbiota-mediated colonization resistance against VRE and K. pneumoniae are distinct or that K. pneumoniae is more susceptible to colonization resistance.
K. pneumoniae and VRE reside within the same intestinal regions but occupy distinct metabolic niches
The findings that K. pneumoniae and VRE do not interfere with each other's ability to colonize the gut lumen and that their elimination from the intestine following FMT differs suggest that these bacterial species occupy distinct intestinal niches. To localize bacteria within the colons of mice we performed fluorescence in situ hybridization (FISH) with a universal probe targeting bacterial 16S rRNA genes. In mice that had not been treated with antibiotics, we detected a dense and morphologically diverse bacterial microbiota that was almost completely depleted by ampicillin-treatment (Figs 5A, 5B, S2A and S2B). FISH analysis of antibiotic-treated mice colonized with K. pneumoniae (Kpn), VRE or both with species-specific oligonucleotide probes revealed that K. pneumoniae and VRE were most abundant in luminal areas adjacent to the colonic epithelial layer and that both organisms localized to the same intestinal sites (Figs 5C, 5D and 6A-6D).
Confocal microscopy-based quantification of VRE and K. pneumoniae in mono-colonized mice demonstrated that K. pneumoniae and VRE each only achieved 10% of the bacterial density detected in antibiotic-naive mice with a diverse microbiota (Fig 5E). In co-colonized mice, the densities of VRE and K. pneumoniae in the colonic lumen were additive, supporting the idea that VRE and K. pneumoniae do not interfere with one another and that their metabolic needs may differ (Fig 5E). Furthermore, K. pneumoniae and VRE were generally occupying overlapping regions within luminal areas closest to the colonic epithelium, although islands of increased bacterial density were also detected more centrally in the colonic lumen (Fig 6C and 6D). To further assess whether VRE and K. pneumoniae compete for space within the regions they occupy in the colon, we measured distances between neighboring VRE bacteria, between neighboring K. pneumoniae bacteria and between neighboring VRE and K. pneumoniae bacteria. Supporting the notion that VRE and K. pneumoniae occupy different metabolic niches, intraspecies distances were significantly greater than interspecies distances between VRE and K. pneumoniae, suggesting that localized nutrient depletion may promote spatial avoidance among competing bacteria while non-competing bacteria ignore each other.
Antibiotic treatment and colonization with K. pneumoniae or VRE influences the thickness and integrity of the inner mucus layer
The murine colonic mucus layer consists almost exclusively of the mucin Muc2 [25]. Upon release from goblet cells, Muc2 forms a gel-like layer that coats the intestinal epithelium and that is largely impenetrable to bacteria. Beyond the thick, epithelium-associated mucin layer is a loosely attached and less dense mucin layer that is readily penetrated by intestinal bacteria and that serves as a microbial habitat [25,26]. Under normal homeostatic conditions, bacterial molecules released by the microbiota and host factors regulate the production and secretion of mucus [27] and thinning of the colonic mucus layer occurs following antibiotic treatment [18]. The infectivity of some intestinal pathogens, such as C. rodentium and Salmonella enterica sv Typhimurium, is enhanced by disruption of the mucus layers in the colon [15,28]. To explore the impact of K. pneumoniae and VRE colonization on the colonic mucus layer, we stained colonic sections for Muc2 and measured the thickness of the inner mucus layer. While the dense mucus layer was significantly reduced in mice treated with ampicillin (Fig 7A, 7B and 7E), mono-colonization of ampicillin-treated mice with K. pneumoniae but not VRE resulted in recovery of normal mucus layer thickness (Fig 7C-7E). This indicates that bacterial species differ in their ability to promote mucin production and mucus layer generation in the murine colon.
K. pneumoniae and VRE differ in their ability to invade the colonic mucus barrier and translocate to extra-intestinal sites
To investigate the interactions between K. pneumoniae and VRE with the colonic mucin layer, we mono-colonized ampicillin-treated mice and performed FISH in conjunction with Muc2 immunostaining to localize bacteria in relation to the inner mucus layer (IML). Very few bacteria were detected within the IML in antibiotic-naïve mice harboring a diverse microbiota (Fig 8A). On the other hand, K. pneumoniae was detected within the IML in mono-colonized mice, with the extent of mucin penetration varying in different regions of a colonic cross-section (Fig 8B). VRE also penetrated the IML at different regions but to a lesser extent than K. pneumoniae (Fig 8C). Image analysis of entire colon cross-sections revealed substantially higher bacterial numbers in the IML of K. pneumoniae mono-colonized mice compared to antibiotic-naïve or VRE mono-colonized animals (Fig 8D). Within the colonic lumen, however, K. pneumoniae and VRE were similarly associated with detached islands of mucus (Fig 8B and 8C).
To determine whether increased mucus layer penetration by K. pneumoniae is associated with increased invasion of deeper tissues, we cultured mesenteric lymph nodes (mLNs). In mono-colonized mice, we detected up to 10^4 live K. pneumoniae in mLNs three weeks p.i. (Fig 8E). Notably, infection of mLNs by K. pneumoniae was not affected by co-colonization with VRE, regardless of the order of microbial administration. In contrast, we did not recover any live bacteria from mLNs of VRE mono-colonized animals, perhaps reflecting their reduced penetration of the IML. However, challenge of VRE mono-colonized mice with K. pneumoniae resulted in VRE translocation to mLNs in 60% of the mice, suggesting that introduction of K. pneumoniae may enhance the ability of VRE to traverse the mucin layer and gain access to the intestinal epithelium (Fig 8E). Administration of fecal microbiota transplants to colonized mice reduced K. pneumoniae infection of mLNs (Fig 8F), suggesting that isolation of live bacteria from mLNs of K. pneumoniae-colonized mice results from continuous seeding of the node rather than bacterial growth and persistence in mLNs. Our findings demonstrate that K. pneumoniae, upon achieving high density in the intestine, can penetrate the dense mucin layer of the colon. In contrast, VRE access to the epithelial layer appears to be more restricted by the mucin layer. Although the ability to penetrate mucus has been associated with pathogenic bacteria, our results suggest that among commensal bacterial species mucus penetration may correlate with their ability to disseminate beyond the gut lumen.
Discussion
VRE and K. pneumoniae are two of the most common highly antibiotic-resistant bacterial species that cause infections in hospitalized patients. Both organisms are oxygen-tolerant facultative anaerobes and principally reside in the lower gastrointestinal tract where, under normal circumstances, they are minor contributors to a colonic microbiota composed predominantly of oxygen-intolerant obligate anaerobes. Microbiota analyses have revealed that some patients undergoing allogeneic hematopoietic stem cell transplantation have marked expansion of VRE or K. pneumoniae in their colonic microbiota [7,8], to the point where in many cases these species constitute the overwhelming majority of the intestinal bacterial taxa. Our experiments with murine models demonstrate that both organisms can undergo massive expansion in the gastrointestinal tract of antibiotic-treated mice, occupy the same intestinal regions and coexist without exerting any colonization resistance against each other.
The mechanisms that determine bacterial density in the colon remain incompletely defined. During broad-spectrum antibiotic treatment the density of bacteria in the colon, as determined by quantitative 16S rRNA gene PCR, is reduced over 1,000-fold. Nevertheless, upon termination of antibiotic treatment, the compositionally-restricted residual flora re-expands to a density that is similar to pre-treatment levels [7,29]. Along similar lines, we find that VRE and K. pneumoniae achieve densities of 10^10 CFU per gram of colonic content in antibiotic-treated mice, at which point their growth stops, potentially resulting from nutrient depletion or quorum sensing mechanisms that remain undefined [30]. Because VRE and K. pneumoniae grow independently in the colons of antibiotic-treated co-colonized mice, our findings suggest that distinct mechanisms determine maximal bacterial density of these two bacterial species.
Competition for resources has been demonstrated in several in vivo colonization studies involving bacteria of the same or closely related species [21,22,31,32]. In the intestine, heavily glycosylated mucus is a major source of energy for bacteria. Commensals that express mucus-degrading enzymes, such as Bacteroides thetaiotaomicron, can cleave sugars from host glycans and support the growth of other members of the colonic microbiota [23,33]. Enterococci and γ-Proteobacteria do not express glycosidases that degrade mucosal polysaccharides, and thus their carbohydrate utilization is limited to less complex sugars [34,35]. Some studies have demonstrated that mucus-derived carbohydrates, including sialic acid and fucose, transiently accumulate following antibiotic treatment, presumably as a result of hydrolases released during bacterial cell lysis, and promote the growth of Salmonella and Clostridium difficile [23,36]. However, given the transient release of these carbohydrates following initiation of antibiotic treatment, it is unlikely that this process is sustaining dense and prolonged colonization by either VRE or K. pneumoniae. Although E. faecium and K. pneumoniae can metabolize monosaccharides, and the range of potential carbon sources for K. pneumoniae is broad [37], the in vivo carbohydrate dependencies of these two bacterial species remain largely uncharacterized. The lack of competition between VRE and K. pneumoniae could potentially be explained by a difference in metabolic requirements (i.e. different sugar utilization) and the observation that bacteria can switch to alternative nutrient sources in the presence of competing strains [38]. Further studies that examine the metabolic profiles of VRE and K. pneumoniae during mono- and co-colonization will be necessary to identify the mechanisms determining their density in the intestinal tract.
In addition to serving as a nutrient source for intestinal bacteria, mucus also provides a barrier that keeps bacteria away from the colonic epithelium. The intestinal microbiota induces mucin production and antibiotic administration results in thinning of the mucus barrier, thereby increasing susceptibility to bacterial invasion [18]. Pathogens such as Salmonella and C. rodentium can penetrate the mucus layer and in the process induce increased mucin production, a host defense mechanism that limits bacterial invasion [15,28]. Consistent with these studies, we find that K. pneumoniae infiltrated the mucus layer while also restoring its thickness to pre-antibiotic treatment levels. VRE colonization of antibiotic-treated mice, on the other hand, did not induce mucus layer thickening. In addition, VRE did not invade the mucus layer to the same degree as K. pneumoniae. It remains to be determined whether enhanced mucus production is a consequence of bacterial invasion or rather, molecular differences between Gram-negative and Gram-positive bacteria.
In mice with a normal flora and intact TLR signaling, the dense mucus layer is largely devoid of bacteria [17,25]. Antibiotic treatment, however, reduces the expression of antimicrobial molecules such as RegIIIγ [14], potentially enabling bacteria to gain access to and survive within the mucus layer. Therefore, it is possible that a paucity of host-derived antimicrobial proteins in the mucin layer renders it more penetrable. A recent study showed that mice harboring high levels of bacteria belonging to the Proteobacteria and TM7 phyla have an inner mucus layer that is normal in thickness but penetrable to bacteria [39]. Thus, it is also possible that K. pneumoniae may induce mucins with abnormal glycosylation or that are structurally disorganized.
Although intestinal colonization with VRE and K. pneumoniae is asymptomatic, it increases the risk of extra-intestinal infection, including bacteremia [8,40]. Consistent with the observed differences in mucus infiltration, we found increased K. pneumoniae translocation to mesenteric lymph nodes (mLNs) relative to VRE. Co-colonization with K. pneumoniae resulted in increased VRE translocation to mLNs, suggesting that K. pneumoniae may have opened barriers for the less invasive VRE. The high levels of K. pneumoniae observed in mLNs of intestinally dominated mice and the complete reduction of bacteria following FMT-mediated intestinal clearance of K. pneumoniae suggests that bacteria are not replicating in the mLNs but rather are delivered on a continuous basis. Although it is unclear how live bacteria are delivered to draining mLNs, recent studies with other bacterial pathogens suggest that cells within the colonic lamina propria, including CD103 + and CX3CR1 + dendritic cells, may capture K. pneumoniae from the luminal environment and carry them to mLNs [41,42].
Enterococci and γ-Proteobacteria constitute a minor population within the human and murine microbiota and are, for the most part, harmless to the host unless intestinal homeostasis is perturbed [26]. In our murine model, intestinal colonization with VRE and K. pneumoniae did not result in detectable inflammation of the intestinal wall. In clinical scenarios, however, dense intestinal colonization with antibiotic-resistant bacteria is an important risk factor for systemic infection and for patient-to-patient spread. We found that introduction of a normal microbiota into VRE and K. pneumoniae co-colonized mice resulted in reduction and clearance of both bacterial species, albeit at different rates. Current studies are focusing on the identification of commensal bacterial species that mediate clearance of antibiotic-resistant bacteria. Overall, the findings presented here uncovered previously unrecognized features of VRE and K. pneumoniae colonization and provide insight into the nature of pathogen coexistence, dissemination and ways to eradicate colonization.
Mice, bacterial strains and infection
All experiments were carried out using 6-8 week-old C57BL/6 female mice purchased from Jackson Laboratories and housed in sterile cages with irradiated food and acidified water. For experiments involving antibiotic treatment, 0.5 g/L ampicillin (Fisher) was administered to animals in the drinking water and changed every 4 days. Vancomycin-resistant Enterococcus faecium was purchased from ATCC (700221). Carbapenem-resistant Klebsiella pneumoniae was obtained from the Clinical Microbiology Laboratory at Memorial Sloan-Kettering Cancer Center and isolated from blood cultures of a patient. Both bacterial strains were grown at 37°C in Brain Heart Infusion (VRE) or Luria-Bertani (K. pneumoniae) broth to early stationary phase and diluted in phosphate buffered solution (PBS) to 10^5 colony-forming units (CFU). For infection experiments, 5×10^4 CFU of VRE and/or K. pneumoniae were administered by oral gavage in a 200 μl volume on the fifth day of ampicillin treatment. For simultaneous infection of VRE and K. pneumoniae (Fig 4), inocula were mixed in a 1:1 ratio prior to administration. Mice were single-housed at the time of infection and kept on ampicillin until the end of the experiment unless otherwise specified. Animals were maintained in a specific pathogen-free facility at Memorial Sloan-Kettering Cancer Center. All mouse handling, cage changes and tissue collection were performed in a biosafety level 2 facility wearing sterile gowns, masks and gloves.
Quantification of VRE and K. pneumoniae burden
Fecal samples and intestinal contents from the duodenum, ileum and cecum were weighed and resuspended in 1 ml of PBS. Tenfold dilutions were plated on Difco Enterococcosel (supplemented with 8 μg/ml vancomycin; Novaplus and 100 μg/ml streptomycin; Fisher) and Luria-Bertani (supplemented with 50 μg/ml neomycin; Sigma-Aldrich and 100 μg/ml carbenicillin; LabScientific) agar plates for the specific detection of VRE and K. pneumoniae, respectively. Mesenteric lymph nodes were mashed through a 40 μm filter, resuspended in PBS and plated directly onto selective plates.
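As a small illustration of the colony-count arithmetic implied by this plating scheme (not the laboratory's own software; the plated volume and the example numbers are assumptions made for the sketch), CFU per gram can be back-calculated from the colonies counted on a single countable dilution plate:

def cfu_per_gram(colonies: int, dilution_factor: float,
                 plated_volume_ml: float, resuspension_volume_ml: float,
                 sample_weight_g: float) -> float:
    """Back-calculate CFU/g of sample from one countable plate.

    colonies: colonies counted on the plate
    dilution_factor: e.g. 1e-4 for the fourth ten-fold dilution
    plated_volume_ml: volume spread on the plate (assumed 0.1 ml here)
    resuspension_volume_ml: volume the sample was resuspended in (1 ml above)
    sample_weight_g: weight of the fecal sample or intestinal content
    """
    cfu_per_ml = colonies / (dilution_factor * plated_volume_ml)
    return cfu_per_ml * resuspension_volume_ml / sample_weight_g

# Example with invented numbers: 150 colonies on the 10^-6 dilution plate, 0.1 ml plated,
# from a 0.02 g pellet resuspended in 1 ml PBS.
print(f"{cfu_per_gram(150, 1e-6, 0.1, 1.0, 0.02):.2e} CFU/g")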
Fecal microbiota transplantation (FMT)
A fecal pellet from an untreated C57BL/6 mouse was resuspended in 1ml of PBS under anaerobic conditions and 200μl of the fecal suspension was administered per mouse via oral gavage starting on the third day of ampicillin cessation. This process was repeated on the fourth and fifth days for a total of three consecutive FMT doses.
DNA extraction, V4-V5 16S rRNA gene amplification, multiparallel sequencing and sequence analysis
Fecal samples and intestinal contents were frozen immediately after collection in a dry ice/ethanol slurry and stored at -80°C. DNA was extracted using the phenol-chloroform and bead beating method described previously [7]. The V4-V5 region of the 16S rRNA gene was amplified and sequenced with the Illumina Miseq platform as described previously [43]. Sequences were analyzed using version 1.33.3 of the MOTHUR pipeline [44] as described in Buffie et al., 2015. Sequences with distance-based similarity of at least 97% were assigned the same OTU (operational taxonomic unit) and representative OTUs were classified using a modified Greengenes reference database [45].
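To make the downstream composition figures concrete (for example, the VRE and K. pneumoniae percentages reported in the Results), a minimal sketch of the post-classification step is shown below: it simply collapses an OTU count table into per-sample relative abundances. The counts, OTU names and samples are invented placeholders, not study data, and the actual pipeline used MOTHUR as described above.

# Minimal sketch: convert OTU counts to relative abundances per sample (placeholder data).
otu_counts = {
    "sample_day1": {"Enterococcus_OTU1": 3000, "Klebsiella_OTU2": 6500, "other": 500},
    "sample_day4": {"Enterococcus_OTU1": 4800, "Klebsiella_OTU2": 4900, "other": 300},
}

def relative_abundance(counts: dict) -> dict:
    total = sum(counts.values())
    return {otu: n / total for otu, n in counts.items()}

for sample, counts in otu_counts.items():
    fractions = relative_abundance(counts)
    summary = ", ".join(f"{otu}: {frac:.1%}" for otu, frac in fractions.items())
    print(f"{sample}: {summary}")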
Tissue preparation for histology analysis
Intestinal tissues with luminal contents were carefully excised and fixed in freshly made nonaqueous Methacarn solution (60% methanol, 30% chloroform and 10% glacial acetic acid) as previously described [17,46] for 6 hours at 4°C. Tissues were washed in 70% ethanol, processed with Leica ASP6025 processor (Leica Microsystems) and paraffin-embedded by standard techniques. 5-μm sections were baked at 56°C for 1 hour prior to staining.
Muc2 immunofluorescence
Muc2 immunostaining was performed as previously described [25,50] with some modifications. Briefly, deparaffinized sections were incubated in 0.9 M NaCl, 20 mM Tris-HCl (pH 7.2) and 0.1% SDS at 50°C for 3 hours, rinsed in PBS and blocked with 5% goat serum in PBS for 30 min at room temperature to minimize non-specific binding. Sections were then washed in PBS for 10 min prior to overnight incubation at 4°C with an anti-Muc2 rabbit polyclonal antibody (H300, Santa Cruz; 1:200 in PBS) [51]. Following incubation with the primary antibody, tissues were washed 3 times in PBS for 10 min and incubated with a goat anti-rabbit Alexa 488 secondary antibody (Life Technologies, 1:1000 in PBS) for 1 hour at room temperature. Sections were washed twice in PBS for 10 min and counterstained with Hoechst (1:3000 in PBS). For FISH–Muc2 dual staining, sections were briefly rinsed in wash buffer after FISH hybridization and incubated directly with the anti-Muc2 primary antibody diluted in wash buffer. Incubation with the secondary antibody was carried out at 4°C for 2 hours. A single 10 min PBS wash was performed after incubation with the primary and secondary antibodies before Hoechst nuclear staining and mounting with Mowiol solution.
Microscopy
Images were acquired with a Leica TCS SP5-II upright confocal microscope using a 63x oil immersion lens (NA 1.4, HCX PL APO) with or without digital zoom as a series of short Z-stacks. Maximum intensity projection processing of Z-stacks was done in Fiji (ImageJ) software. Mucus layer thickness was measured using the Leica distance measurement tool (LASAF). The width of the inner mucus layer was determined by the average of 4 measurements per field with 4 fields measured per section. Whole tissue images were digitally scanned using the Zeiss Mirax Desk Scanner with 20x/0.8NA objective. Bacterial distance analysis was performed on colon images taken at 63x magnification with a 4x digital zoom by determining the XY coordinates of each bacterial cell in MetaMorph (Molecular Devices) software and measuring the distance from their center. For quantification of bacterial density and invasion into the mucus layer, whole tissue cross-sections were tile scanned in short Z-stacks using an inverted laser scanning confocal Zeiss LSM 5-Live microscope at 63x magnification. For bacterial quantification, a threshold based on the RGB color combination and intensity of each bacterial species was generated with the color thresholding option in MetaMorph. Thresholded objects of 1μm in size were counted as a single bacterial cell with MetaMorph's integrated morphometric analysis tool.
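The threshold-and-count step described above can be approximated with open-source tools; the sketch below labels thresholded objects with scipy.ndimage and keeps those of at least 1 μm equivalent diameter. The pixel size, intensity threshold, and size criterion are illustrative assumptions, not the MetaMorph settings used in the study.

# Illustrative re-implementation of threshold-and-count on a single-channel image.
# Pixel size and intensity threshold are arbitrary illustrative values.
import numpy as np
from scipy import ndimage

def count_bacteria(channel: np.ndarray, intensity_threshold: float,
                   um_per_pixel: float, min_diameter_um: float = 1.0) -> int:
    """Count connected objects above the threshold whose equivalent diameter
    is at least `min_diameter_um` microns."""
    binary = channel > intensity_threshold
    labels, n_objects = ndimage.label(binary)
    if n_objects == 0:
        return 0
    areas_px = ndimage.sum(binary, labels, index=range(1, n_objects + 1))
    diameters_um = 2.0 * np.sqrt(np.asarray(areas_px) / np.pi) * um_per_pixel
    return int(np.sum(diameters_um >= min_diameter_um))

rng = np.random.default_rng(0)
fake_image = rng.random((512, 512))          # stand-in for one confocal channel
print(count_bacteria(fake_image, intensity_threshold=0.995, um_per_pixel=0.1))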
Statistics
The Mann-Whitney test was used for statistical analysis of intestinal CFU as well as image-based quantification of bacterial distance, density and infiltration into the mucus layer. Differences in mucus layer thickness were analyzed with the unpaired Student's t test. All statistical tests were performed using the GraphPad Prism software package (version 6.0). P values less than 0.05 were considered significant.
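For readers reproducing these comparisons outside Prism, equivalent tests are available in SciPy; the sketch below applies a two-sided Mann-Whitney U test and an unpaired t test to made-up CFU and mucus-thickness values.

# Illustrative equivalents of the Prism analyses in SciPy; the data are made up.
import numpy as np
from scipy import stats

cfu_group_a = np.array([2.1e6, 8.4e5, 5.5e6, 3.0e6, 1.2e7])   # placeholder CFU/g values
cfu_group_b = np.array([4.0e3, 9.1e3, 2.2e4, 1.5e4, 6.7e3])

# Non-parametric comparison of intestinal CFU (as in the section above).
u_stat, p_mw = stats.mannwhitneyu(cfu_group_a, cfu_group_b, alternative="two-sided")

thickness_a = np.array([48.2, 51.0, 44.7, 52.3])               # placeholder mucus thickness, micrometres
thickness_b = np.array([30.1, 27.5, 33.8, 29.9])

# Unpaired Student's t test for mucus layer thickness.
t_stat, p_t = stats.ttest_ind(thickness_a, thickness_b)

print(f"Mann-Whitney P = {p_mw:.4f}; t test P = {p_t:.4f}  (significant if < 0.05)")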
Ethics statement
All mouse experiments were performed in accordance with and approved by the Memorial Sloan-Kettering Institutional Animal Care and Use Committee (IACUC) under protocol 00-05-066. The Memorial Sloan-Kettering IACUC adheres to provisions of the Animal Welfare Act.
"year": 2015,
"sha1": "32bc58c30115f9356a76a719e50be8c75edc38d7",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.1005132&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "32bc58c30115f9356a76a719e50be8c75edc38d7",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Potential response of the rumen microbiome to mode of delivery from birth through weaning
© The Author(s) 2018. Published by Oxford University Press on behalf of the American Society of Animal Science. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact journals.permissions@oup.com. Transl. Anim. Sci. 2018.2:S35–S38 doi: 10.1093/tas/txy029
INTRODUCTION
Increasing evidence in mice (Ley et al., 2005), humans (Biasucci et al., 2008), and ruminants (Cannon et al., 2010) suggests that maternal influences and the preparturition environment may affect the infant gastrointestinal tract (GIT) microbiome. Early GIT colonization is critical to the development of the GIT and the immune system (Suárez et al., 2006; Malmuthuge et al., 2012). In addition, the colonization phase may be suitable for intervention strategies for improved host performance (Yáñez-Ruiz et al., 2015).
In humans, research focuses on increased autoimmune disorders in children born via cesarean section (Neu and Rushing, 2011). Dominguez-Bello et al. (2010) reported that the gut microbiome of infants born via cesarean more closely resembles the microbiome of the mother's skin rather than the vaginal microbiome. This research suggests that mode of delivery can alter the gut microbiome with potential long-term impacts for the host. In ruminants, the frequency of cesarean delivery is not of concern; however, understanding the potential influence of mode of delivery on the calf microbiome may bring to light new intervention strategies to optimize the rumen microbiome.
Although the rumen is not functional until nearly 4 to 6 wk of age (Church, 1988), and the rumen microbiome shifts rapidly during this period, the early microbiome is responsible for production of volatile fatty acids that affect rumen development (Flatt, 1958;Suárez et al., 2006) and ensures proper absorptive capacity for the mature ruminant. Thus, we hypothesized that the rumen microbiome of calves would be altered by mode of delivery, and these changes would persist through weaning. Our objective was to determine whether cesarean section delivery would affect the early calf microbiome compared with vaginal delivery and whether these differences would be evident through weaning.
MATERIALS AND METHODS
All animal procedures were approved by the University of Wyoming Animal Care and Use Committee.
Cow Management and Diet
Mature Charolais cows (n = 24) from the University of Wyoming (UW) beef herd were used in this study. Cows were bred via natural service, and their expected calving date was calculated as 250 d after the date the bull was introduced. Cows were fed ad libitum grass hay (6.8% CP, 40.2% ADF, 56.8% TDN, 1.2 NEm Mcal kg⁻¹, 0.64 NEg Mcal kg⁻¹) and 2 lb d⁻¹ dried distillers grains (29.9% CP, 12.3% ADF, 75.0% TDN, 1.79 NEm Mcal kg⁻¹, 1.16 NEg Mcal kg⁻¹). Prior to parturition, cows were moved into pens and monitored closely for signs of parturition. Cows were randomly assigned to either the control group (CON; n = 12) or the cesarean section group (CSECT; n = 12). The CON cows were allowed to calve naturally with no intervention. The CSECT group was monitored closely for signs of parturition, and a veterinarian was summoned to perform the cesarean section using a standard protocol including pain management and postsurgical care. Cows in both the CON and CSECT groups reared their respective calf until weaning at 180 d; each treatment group was housed in a separate pen.
Calf Management and Calf Rumen Fluid Sample Collection
At parturition, calves were monitored to ensure survivability. Calves were allowed ad libitum access to their dam's colostrum and hay. From approximately 1.5 mo of age, calves were fed Purina Stocker Grower (Purina Mills/Land O'Lakes, Inc.) at the rate of 2 lb head⁻¹ d⁻¹ through weaning (180 d of age). At 24 ± 4 h of age, rumen fluid was collected from calves via oral lavage using methods described by Lodge-Ivey et al. (2009). Briefly, a flexible vinyl tube (0.5 cm inner diameter, 3 feet in length) was lubricated and passed through the mouth into the rumen; suction via an attached syringe was used to collect the rumen fluid. Samples were aliquoted, flash frozen, and stored at −80 °C for subsequent analysis. Samples were collected again on day 3, day 28, and at weaning.
Rumen Microbial DNA Extraction
Rumen fluid samples were used for shotgun metagenomic sequencing. First, DNA was isolated from 8 calves per treatment group using methods described by Yu and Morrison (2004). Briefly, a 0.25-g sample of rumen fluid (thawed immediately prior) was added to sterilized zirconia (0.3 g of 0.1 mm) and silicon (0.1 g of 0.5 mm) beads along with 1 mL of lysis buffer (500 mM NaCl, 400 mM Tris-HCl, 50 mM EDTA, 4% SDS). Tubes were then homogenized using a Mini-Beadbeater-8 at maximum speed for 3 min, incubated at 70 °C for 15 min with gentle mixing, and centrifuged at 4 °C for 5 min. Supernatant (~1 mL) was transferred to a new 2-mL flat cap tube, and 300-µL fresh lysis buffer was added to the pelleted beads. The homogenization, incubation, and centrifugation steps described previously were repeated, and the supernatant was pooled. Precipitated DNA was purified further using the QIAamp DNA Stool Mini Kit (Qiagen, Santa Clarita, CA), and the manufacturer's protocol except that buffer EB was used for elution of purified DNA. The DNA was precipitated in ethanol and resuspended to 80 ng µL −1 (2-µg aliquots) and shipped to the University of Missouri DNA Core Facility, Columbia, MO, for sequencing.
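As a small arithmetic aid for the final normalization step (resuspension to 80 ng µL⁻¹ in 2-µg aliquots), the sketch below computes the required volume and the number of full aliquots from a measured yield; the concentration and starting volume are hypothetical values, not measurements from this study.

# Illustrative normalization of purified DNA to 80 ng/uL with 2-ug aliquots.
# The measured concentration is a hypothetical fluorometric reading.

def normalization_plan(measured_ng_per_ul: float, total_volume_ul: float,
                       target_ng_per_ul: float = 80.0, aliquot_ng: float = 2000.0):
    total_ng = measured_ng_per_ul * total_volume_ul
    final_volume_ul = total_ng / target_ng_per_ul          # volume at target concentration
    n_aliquots = int(total_ng // aliquot_ng)               # full 2-ug aliquots available
    return final_volume_ul, n_aliquots

volume, aliquots = normalization_plan(measured_ng_per_ul=145.0, total_volume_ul=100.0)
print(f"Resuspend in {volume:.1f} uL total; enough DNA for {aliquots} aliquots of 2 ug")
# 145 ng/uL * 100 uL = 14,500 ng -> 14,500 / 80 = 181.25 uL; 7 full 2-ug aliquots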
Library Preparation and Metagenomic Sequencing
Libraries were constructed using manufacturer's (Illumina) protocol with reagents supplied in Illumina's TruSeq DNA PCR-Free sample preparation kit. Briefly, 1 µg of genomic DNA was sheared using standard Covaris methods to generate an average insert size of 350 bp. The 3′ and 5′ overhangs were converted to blunt ends by an end repair reaction utilizing 3′ to 5′ exonuclease/polymerase activity. Using purification beads (AMPure XP), the desired size fragment was selected. Then, a single adenosine nucleotide was attached to the 3′ ends of the blunt fragments followed by ligation of Illumina indexed paired-end adapters. The library was purified twice using sample purification beads. This purified library was then quantified with a Qubit assay, and library fragment size was confirmed by the Fragment Analyzer (Advanced Analytical Technologies, Inc.). The library was then diluted and sequenced according to Illumina's standard sequencing protocol for HiSeq.
Metagenomic Sequencing Analysis and Identification of 16S rDNA Genes
Metagenomic sequences were quality filtered before 16S rDNA genes were identified using Metaxa2. Briefly, hidden Markov models using HMMER identified the conserved regions of the small subunit by aligning to the SILVA database and then were subjected to a BLAST search. Taxonomic classification occurred by taking each rRNA entry and comparing the top 5 BLAST matches until a reliability score of 80 was achieved; this resulted in accurate taxonomic classification but may not allow for specific classification (Bengtsson-Palme et al., 2015). These taxonomic profiles were further analyzed to assess diversity among and between samples using QIIME 1 (Caporaso et al., 2010).
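Because the richness estimates reported below are Chao1 values, a short worked example of the estimator may be useful; this sketch uses a common bias-corrected form of Chao1 and an invented OTU count vector, and is not taken from the study's QIIME output.

# Illustrative Chao1 richness estimate from a vector of OTU counts in one sample.
# Uses a common bias-corrected form of the estimator; the counts are invented.

def chao1(otu_counts):
    """Chao1 = S_obs + F1*(F1-1) / (2*(F2+1)), where F1 and F2 are the numbers of
    OTUs observed exactly once (singletons) or exactly twice (doubletons)."""
    observed = [c for c in otu_counts if c > 0]
    s_obs = len(observed)
    f1 = sum(1 for c in observed if c == 1)
    f2 = sum(1 for c in observed if c == 2)
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

sample = [34, 1, 1, 2, 9, 1, 250, 2, 1, 7, 1]   # invented OTU counts
print(f"Observed OTUs: {sum(c > 0 for c in sample)}, Chao1 estimate: {chao1(sample):.1f}")
# F1 = 5 singletons, F2 = 2 doubletons -> Chao1 = 11 + 5*4/(2*3) = 14.3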
RESULTS
A total of 117 taxa differed significantly (P < 0.05) in abundance between CON and CSECT; 981 taxa differed by day (P < 0.05), and 910 taxa differed significantly (P < 0.05) by day within treatment. Microbial richness (Chao1) was unaffected (P = 0.97) by treatment group when averaged across all collection days. Days 1 and 3 had lower richness scores than day 28 (P = 0.006), and day 28 had greater richness than weaning. Microbial richness was increased (P = 0.03) for CSECT day 28 compared with CSECT day 3 and for CON day 28 compared with CON day 1, and tended to be greater (P = 0.054) for CSECT day 28 compared with CON day 3. No differences (P > 0.50) in beta-diversity were detected between CON and CSECT. However, beta-diversity differences were detected (P < 0.05) for each sampling day and for several day-within-treatment comparisons: CSECT day 1 tended (P = 0.06) to differ from CSECT day 3, CSECT day 28, and CON day 28 and differed significantly (P = 0.03) from CON day 3. CSECT day 3 differed significantly (P < 0.05) from CSECT day 28, CON day 1, CON at weaning, and CON day 28. Significant differences (P < 0.05) were also detected between CSECT day 28 and CSECT day 3, CON day 1, CON day 3, and CON at weaning. CON day 1 differed (P < 0.05) from CSECT at weaning, CSECT day 3, CON at weaning, and CON day 28 and tended (P = 0.06) to differ from CON day 3. Samples from CON day 3 tended (P = 0.06) to differ from CSECT day 1 and differed significantly (P < 0.05) from CSECT at weaning, CSECT day 28, CON day 1, CON at weaning, and CON day 28. Beta-diversity tended (P = 0.07) to differ between CON day 28 and both CSECT at weaning and CON at weaning, and CON day 28 differed significantly (P < 0.05) from CSECT day 1, CSECT day 3, CON day 1, and CON day 3.
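Since the beta-diversity comparisons above rest on a pairwise dissimilarity measure, a small worked example may help make the quantity concrete; Bray-Curtis, shown below, is a common choice in QIIME-based analyses, although the specific metric used in this study is not stated, and the abundance vectors are invented.

# Illustrative Bray-Curtis dissimilarity between two samples' OTU abundance vectors.
# The abundance vectors are invented for illustration.

def bray_curtis(u, v):
    """BC = sum(|u_i - v_i|) / sum(u_i + v_i); 0 = identical composition, 1 = no shared taxa."""
    numerator = sum(abs(a - b) for a, b in zip(u, v))
    denominator = sum(a + b for a, b in zip(u, v))
    return numerator / denominator

day1  = [120, 30, 0, 5, 400]     # hypothetical day-1 calf sample
day28 = [20, 300, 80, 60, 100]   # hypothetical day-28 calf sample
print(f"Bray-Curtis dissimilarity: {bray_curtis(day1, day28):.3f}")
# (100 + 270 + 80 + 55 + 300) / 1115 = 0.722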
DISCUSSION
Overall, fewer differences in the rumen microbiome were associated with mode of delivery (CON vs. CSECT) than with sampling day; some distinct alpha- and beta-diversity differences were detected when comparing day within treatment group. This suggests an interaction of mode of delivery and stage of maturity with the microbiome in terms of richness and composition. Richness increased with age and was greater for CSECT day 28 compared with CON day 3. The microbiome of human infants born via cesarean was less diverse than that of infants delivered naturally (Biasucci et al., 2008). It is possible that our contradictory data are a result of differences in age (day 28 vs. day 3). A multitude of beta-diversity differences were detected with day and day within treatment, suggesting compositional differences between modes of delivery and stages of development.
The most prominent effect on the microbiome in our data resulted from sampling day where day 28 samples had the greatest richness compared with other day and day within treatment group, even compared with samples at weaning. At day 28, calves are transitioning from a preruminant to a functioning ruminant, and the rumen grows 4 to 8 times in weight (Church, 1988). Calves are consuming more solid feed, including hay and grain in addition to milk from their dam. Richness at day 28 is greater than at weaning, suggesting that the microbiome stabilizes as the calf matures. This is in agreement with other data that report the microbial profile begins to stabilize at weaning (Benson et al., 2010) into maturity (Jami et al., 2013). The microbial profiles at days 1 and 3 were similar in terms of taxa, but differences in abundance were detected (Jami et al., 2013). In this study, although alpha-diversity was not different between days 1 and 3, several comparisons of day within treatment indicate differences in days 1 and 3. As the rumen begins to shift from microbes associated with aerobic and facultative fermentation to strictly anaerobic fermentation the microbial profiles shift as well, with distinct clustering according to stage of development (Bath et al., 2013;Jami et al., 2013).
Although distinct microbial profile differences were not evident between CSECT and CON when averaged across all days, several differences in both abundance and composition were highlighted at specific day and day within treatment group. Stage of development appeared to have the largest impact on microbial profiles, which is in agreement with other literature across multiple species. Thus, we can conclude that mode of delivery and stage of development affect the rumen microbial profiles and differences are more distinct in the preruminant phase.
IMPLICATIONS
The rumen microbiome is critical to host performance. Understanding factors that contribute to variation in the microbiome may be key to identifying opportunities for optimizing the microbiome to improve efficiency. Although cesarean sections are uncommon in livestock situations, these data provide insight into the influence that the birth canal has on colonization of the microbiome and whether detected differences persist through weaning. These data may allow for identification of critical stages and intervention strategies that may improve host performance later in life.
"year": 2018,
"sha1": "c066dce272beb5149f3ce5828850b802f9011d5c",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1093/tas/txy029",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "457ab1d51dbd4fd0dad075e7e463c69c586795d6",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Brain–immune interaction mechanisms: Implications for cognitive dysfunction in psychiatric disorders
Abstract Objectives Cognitive dysfunction has been identified as a major symptom of a series of psychiatric disorders. Multidisciplinary studies have shown that cognitive dysfunction is modulated by a two-way interaction between the neural and immune systems. However, the specific mechanisms by which the immune response and brain–immune interactions contribute to cognitive dysfunction remain unclear. Materials and methods In this review, we summarized the relevant research to improve our comprehension of the brain–immune interaction mechanisms underlying cognitive decline. Results The pathophysiological mechanisms of brain–immune interactions in cognitive dysfunction associated with psychiatric disorders involve several specific immune molecules and their associated signaling pathways, impairments in neural and synaptic plasticity, and the potential neuro-immunological mechanism of stress. Conclusions This review may therefore provide a better theoretical basis for integrative therapeutic considerations for psychiatric disorders associated with cognitive dysfunction.
Studies have demonstrated that various psychiatric disorders involving cognitive dysfunction can stimulate the brain's immune cells, microglia and astrocytes, to trigger signalling pathways through the activation of innate immune receptors such as Toll-like receptors (TLRs) and NOD-like receptors (NLRs), ultimately producing pro-inflammatory cytokines and chemokines and leading to neuroinflammation. 23,24 This neuroinflammation may lead to cognitive dysfunction and increased susceptibility to psychiatric disorders. Research in the late 1980s clearly showed that cells can produce nitric oxide (NO), a gas molecule that has a range of functions in the CNS, such as regulating synaptic plasticity, and is also involved in cardiovascular, immune, and neurological regulation, rather than being merely a toxic pollutant. 25 NO, CO, and hydrogen sulphide are gaseous signalling molecules, and many studies now suggest that certain physiological levels of these gases have neuroprotective effects. 25 In the CNS, NO and CO perform many functions, such as regulating synaptic plasticity, sleep-wake cycles, and hormone secretion; more interestingly, nitric oxide plays an important role in the mechanisms of cell death or survival. 25 It has been reported that cellular stress is related to various psychiatric disorders. Under stressful conditions, intracellular homeostatic processes may be disrupted, which may induce the unfolded protein response (UPR) in the lumen of the endoplasmic reticulum (ER). 26 Some experimental studies have shown that the ER stress pathway is involved in cognitive decline and cognitive dysfunction. 27 In this review, we aim to summarize the mechanisms of brain-immune interactions underlying cognitive dysfunction in psychiatric disorders at the individual, cellular and molecular levels, to improve our comprehension of the brain-immune interaction in cognitive dysfunction and to offer new insights and potential treatments for a variety of psychiatric disorders.
| THE RELATIONSHIP BETWEEN COGNITIVE DYSFUNCTION, BRAIN IMMUNE, AND PSYCHIATRIC DISORDERS
In the broad sense of immunology, 'immune' refers to a series of behavioural changes by which a living organism recognizes 'self', eliminates 'non-self' and keeps itself stable. 28 The immune system is distributed throughout the entire body, destroying and rejecting antigens (e.g. pathogens and their products, tumour cells) or heterogeneous substances (e.g. ageing self-cells, damaged cells, etc.) that enter the body, to maintain the balance and stability of internal homeostasis. 28 Currently, the CNS (brain) is regarded as an immune-privileged site because it can tolerate the introduction of antigens without mounting an inflammatory immune response. In the past, peripheral immune cells were reported in the brain only under pathological conditions, such as acute brain inflammation or acute infections; however, current studies have found that bone marrow-derived immune cells play an instrumental role in neuroprotection, repair of brain injury, and healthy brain synaptic plasticity. 29 It is becoming increasingly clear that there is an intricate connection between the brain and the immune system. Depression, for example, is closely linked to two-way communication between the immune system and the brain. 30,31 Experiments by Kipnis et al. showed that the cognitive abilities of mice were impaired in the absence of mature T cells, but this effect could be restored by passive T-cell transfer, suggesting a link between brain maintenance and peripheral adaptive immunity and that the ability to cope with neurodegenerative psychiatric disorders may depend on CD4+ T cells. 32 There is growing evidence that healthy CNS cognition, plasticity, neurogenesis and coping with psychiatric stress depend on the immune system, and the mechanisms of immune effects in the brain are diverse, involving a wide variety of cells and signalling pathways. In addition to lymphocytes, microglia, and the intrinsic myeloid cells of the brain, neural cells (neurons, oligodendrocytes, and astrocytes) also play key immune functions in the brain. 33,34 Cytokines are among the most important signalling molecules of the immune system. Cytokines, when recognized by neuronal cells, may influence higher brain functions, such as cognition and memory, and in turn, neurotransmitters and neuropeptides, mediators of the brain's nervous system, may influence immune cells. 29,35 The expression of pattern recognition receptors (PRRs; TLRs and NLRs) and cytokine receptors on neurons has been shown to provide molecular substrates through which pathogen-associated molecular patterns (PAMPs) regulate both immune and neuronal functions. 36,37 Inflammation is a protective response of the organism to invasion by pathogens or other injuries and is a common response of the immune system. Although transient acute inflammation is beneficial to the organism, a sustained inflammatory response is tightly associated with tissue dysfunction and the pathology of many psychiatric disorders. Numerous studies have shown that regular physical exercise not only improves immune surveillance and immunity but also allows immune cells in the CNS to acquire an anti-inflammatory phenotype, and the anti-inflammatory effects induced by physical activity are significant in improving cognitive function and dementia. 38,39 Thus, the role of the triad of cognitive dysfunction, brain immunity, and psychiatric disorders merits further discussion.
| COGNITIVE DYSFUNCTION AND IMMUNE MOLECULAR AS WELL AS RELEVANT SIGNALLING PATHWAY
In recent years, the views on the pathogenesis of cognitive dysfunction have been changing, and now it is believed that the immune molecular and relevant signalling circuits perform a pivotal function in the etiopathogenesis of cognitive dysfunction. 40,41 Findings in animals and patients with psychiatric disorders suggest that neuroinflammation activates immune molecular and relevant signalling pathways, which contribute to behavioural symptoms and changes in psychiatric disorders. 42 Several experiments have shown that the inflammatory response of the CNS (brain) can not only cause or increase tissue damage but also promote neuroprotection and repair. This dual effect, the counterbalance between the damaging and protective aspects, ultimately dictates the consequences of immune interactions in the brain. [43][44][45] Two-way communication between the immune system and the brain occurs in different organs, involving a wide range of cells and mediators. 46,47 Brain-immune interaction between immune molecular and cognitive impairment may contribute to our understanding of the pathophysiology of psychiatric disorders. It has now been proved that many of the neuromodulators and immunoreactive substances (immune molecules, complement, interferon, interleukin, tumour necrosis factor, etc.) are produced by the immune system. 46,48 Insight into the effects of immune molecules and their associated pathways on cognitive dysfunction may provide promising therapeutic targets for the treatment and prevention of psychiatric disorders.
3.1 | Cognitive dysfunction, oxidative stress, vitagenes, inflammatory cytokines, and psychiatric disorders

As mediators of our immune system, cytokines are abundant in the brain, transmitting messages to the brain, coordinating appropriate physiological and behavioural responses and protecting neurons from potentially toxic circulating substances. 49 Cytokines also regulate the initiation, proliferation and suppression of inflammatory responses by regulating the activation, migration and proliferation of immune cells and by generating other damage-inducing molecules. 50 Several cytokines are approved for the treatment of diverse neuropsychiatric disorders, improving emotional and cognitive symptoms and other behavioural abnormalities in patients. 13 Recently, extensive investigations have demonstrated that cytokines are involved in various pathophysiological processes as key inflammatory mediators linking the brain and the immune system, but the exact mechanisms remain unclear. 51,52 There is accumulating evidence that neuroinflammation is a vector of the neuroendocrine, neurotransmission, neurogenesis and stress responses, and these interactions are thought to be risk factors for the onset of cognitive dysfunction. Inflammation activates adaptive immune responses primarily by stimulating antigen-specific T and B lymphocytes and their regulatory immune transmitters (pro/anti-inflammatory cytokines). 53 Neuroinflammation refers to the inflammatory response within the brain or spinal cord, mediated primarily by cytokines, chemokines and reactive oxygen species (ROS). 54 Cytokines induce pro-inflammatory cytokines that interact with the CNS and brain in a cytokine network, influencing almost all aspects of behaviour, from neurotransmitter metabolism, neuroendocrine function and synaptic plasticity to emotion regulation, motor activity and motivational apprehension. 55 Cytokines are classified in many ways and are generally grouped into six categories based on their structure and biological function: interleukins (IL), tumour necrosis factors (TNF), interferons (IFN), chemokines, growth factors (GF) and colony-stimulating factors (CSF). [56][57][58] The detailed classification and functions of each cytokine are summarized in Figure 1. T-helper (Th) lymphocytes are divided into Th subsets based on the type of cytokines they produce. 59 Cytokines also can be classified into Th-1 and Th-2 types according to the T-helper lymphocyte source that produces them. 60 Cytokines released by Th-1 lymphocytes and their inhibitors activate macrophages, NK cells, neutrophils and cytotoxic lymphocytes, enhancing cell-mediated immune responses, whereas Th-2 lymphocytes boost humoral responses by activating cells to express antibodies. 60 Macrophages are the most common mononuclear phagocytes; they perform a vital function in the immune system and exert a critical impact on inflammatory conditions. 61 The two types of macrophages were initially named classically activated macrophages (M1) and alternatively activated macrophages (M2). 62,63 Within the macrophage lineage there are two activation states: classical activation, the pro-inflammatory M1 state, and alternative activation, the repair-associated M2 state. Typically, 'classically activated' myeloid cells (LPS ± IFN-γ) have been referred to as 'M1', whilst 'alternatively activated' cells are referred to as 'M2'.
64,65 M1 macrophages boost pro-inflammatory cytokine synthesis and reduce anti-inflammatory cytokine production via activating the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB); however, the alternative M2 activation causes activation of IκB, resulting in the inactivation of NF-κB and subsequent inhibition of pro-inflammatory genes, as well as increased expression of TGF-β and other anti-inflammatory molecules. 66,67 M1 macrophages, also known as classical macrophages, are macrophages that can produce pro-inflammatory cytokines and have strong characteristics of microbicidal properties, but this specificity is also easy to cause tissue destruction. 68 Activation of M1 microglia also leads to the upregulation of CD16/32 and CD68, which adversely affects neurons and further exacerbates tissue inflammation and damage. 67 M2 macrophages, also known as alternatively activated macrophages, are stimulated by several factors such as CSF-1, IL-4, IL-10, IL-13, TGF-β, VEGF, EGF, arginase-1 (Arg-1), CD206, and fungi and helminth infections and other factors, which can promote the M2 subpopulation polarization. 62,69 M2 macrophages promote inflammatory remissions through anti-inflammatory factors that inactivate pro-inflammatory cell phenotypes and re-establish homeostasis, which lead to neuroprotection and promote recovery and remodelling. Numerous studies have shown that the M2 phenotype facilitates restorative actions, including neurogenesis, remodelling of the axon, angiogenesis, oligodendrogenesis, as well as myelin regeneration. 70 Later, macrophages were divided into pro-and anti-inflammatory phenotypes according to the differences in functional modification of macrophages by T-helper 1 (Th1) cytokines and Th2 cytokines. 71,72 Th1 cells can cause macrophage activation, inflammation, and tissue damage, whilst Th2 cells mediate humoral immune response and inhibit a variety of inflammatory functions of macrophages. Generally, Th-1 cells mainly produce pro-inflammatory cytokines, whereas Th-2 cells primarily produce anti-inflammatory cytokines. 53 Pro-inflammatory cytokines are secreted by activated immune cells (monocytes and macrophages) by activating additional cellular constituents in the inflammatory response, whereas anti-inflammatory cytokines help to reduce the inflammatory immune reaction. 53 Th-1 lymphocytes are characterized by intense phagocytic activity and secreting IL-1β, IFN-γ, TNF-α, IL-2, IL-6 as well as IL-12, Th-2 lymphocytes are characterized by secreting IL-4, IL-9, IL-10, IL-13 as well as TGF-β and stimulating type 2 immunity. 73 CD4+ T-helper (Th) cell activation in the CNS serves an instrumental function in the regulation of immune responses, inflammation, and eventually the remedy of various psychiatric disorders. 74 Clinical studies have shown that inflammatory markers (IL-1β, TNF-α, and IL-6) are significantly elevated in the blood and cerebrospinal fluid of patients with psychiatric disorders such as MDD. 75 The inflammatory response induced by elevated inflammatory markers can be involved in synapse formation and mapping, as well as long-term potentiation (LTP) and neurogenesis. 76,77 The balance between M1/M2 activation is important for maintaining health and resistance to disease. It has been suggested that the overactivation of M1 and inactivation of M2 are important in the aetiology and pathogenesis of psychiatric disorders. 
FIGURE 1 Detailed classification of cytokines and possible mechanisms by which inflammatory cytokines may play a role in cognitive dysfunction (based on the balance between M1 and M2 phenotypes of macrophages). Naïve CD4+ helper T (Th) cells (CD4+ Th0 cells) are presented with antigen by antigen-presenting cells (APCs) such as macrophages, dendritic cells, and B cells and become CD4+ helper T (Th) cells. CD4+ helper T (Th) cells further differentiate into Th1 or Th2 effector cell subsets, which produce different types of (pro/anti-inflammatory) cytokines and regulate contrasting immune responses. The production of reactive oxygen species (ROS) and reactive nitrogen species (RNS) and the uncoupling of nitric oxide synthase (NOS) may be the two main mechanisms of oxidative stress, and this mechanism is closely associated with psychiatric disorders caused by cognitive dysfunction. APC, antigen-presenting cells; DC, dendritic cells; Mφ, macrophages; NOS, nitric oxide synthase; RNS, reactive nitrogen species; ROS, reactive oxygen species

The immune system is generally regulated through suppression of microglial M1 activation and promotion of microglial M2 activation. M1 phenotype impairment and M2 phenotype recovery provide a prospective therapy for modulating microglia/macrophage polarization to treat cognitive dysfunction in psychiatric disorders. 78 Clarifying the mechanisms that facilitate the transition from M1 to M2 in cognitive dysfunction may offer novel approaches to treatment with better outcomes. 78 The early mediators in the inflammatory process are triggered by IL-1, IL-6 and TNF-α, whilst effector mediators are produced by proteases, ROS, arachidonic acid metabolites, nitric oxide (NO), carbon monoxide (CO), etc. Three NO synthase isoforms of the NOS family (iNOS, eNOS and nNOS) are responsible for the synthesis of NO; these enzymes catalyse the conversion of L-arginine to L-citrulline plus NO in the presence of oxygen. 25 Levels of inducible NO synthase (iNOS) are elevated upon exposure to IFN-γ, TNF-α and cellular or bacterial debris. 79,80 NO is a short-lived lipophilic gas molecule that is critical in various physiological and developmental processes; it can cross all biological membrane barriers and react with different intracellular/extracellular targets, forming a series of reactive nitrogen species (RNS). 81 The RNS include free radicals, such as NO and nitrogen dioxide (NO2), as well as non-radical species, such as nitrous acid (HNO2) and dinitrogen tetroxide (N2O4). 82 Physiological amounts of NO are neuroprotective, whilst higher concentrations are neurotoxic. If the cell is in a pro-oxidant state, NO can undergo redox reactions to form toxic compounds that ultimately lead to cell damage. 25 Abnormal vascular NO production and transport lead to endothelial dysfunction, which ultimately contributes to various psychiatric disorders. A wide range of disorders of the nervous and immune systems have been associated with the excessive production of NO. 83 Numerous studies have shown that NO and RNS are involved in the pathogenesis of neurodegenerative diseases. The physiological effect on cells of an imbalance between the production and elimination of reactive oxygen species (ROS) is called 'oxidative stress'. More importantly, various ROS-mediated effects protect cells from ROS-induced oxidative stress and re-establish or maintain 'redox homeostasis'.
84 Mitochondria are thought to be a correlated origin of ROS and are exposed to RNS, and there is growing evidence that mitochondrial dysfunction and ROS/RNS levels are interrelated, though in a celland environment-dependent manner. 85 The salutogenic effects of ROS/RNS occur at a low/moderate concentration and are associated with physiological roles in the cellular response to nitrogen-oxygen, such as defence against infectious agents and in the function of various cellular signalling pathways. The redox state of the body is maintained through a balance between a series of antioxidant systems and the production of ROS/RNS, which are used to regulate a variety of CNS function. 84 Poorly studied redox systems of enzymes are associated with the plasma membrane and may be involved in regulating oxidative stress levels. 86 The expression and activity of enzymes in the plasma membrane redox system (PMRS) alter the response to physiological challenges. PMRS is well regulated in the hormetic response of neurons and other cells to a range of stimuli that increase oxidative stress. 87 Oxidative stress leads to the accumulation of damaged macromolecules and a series of physiological changes in agerelated neurodegenerative diseases, whereas the PMRS appears to attenuate oxidative stress as a compensatory mechanism during ageing. 86 Mitochondrial dysfunction may contribute to the development of neurodegenerative diseases. During mitochondrial dysfunction, the PMRS appears to act as a protective effect, providing a survival mechanism for cells by reducing oxidative stress. 86 The upregulation of PMRS activity contributes to cell survival and membrane homeostasis under stress conditions. 86 Further research on PMRS might offer not only information about neurodegenerative diseases in the elderly but also therapeutic targets for the prevention and treatment of agerelated neurodegenerative diseases.
The discoveries in cellular stress signalling have provided a new understanding of the various processes that regulate the cellular stress response, and it is believed that the brain detects and defeats oxidative stress through a complicated network of 'longevity assurance processes' that are linked to the expression of genes known as vitagenes. 88,89 The cellular stress response entails the activated prosurvival pathways and the generation of molecules (heat shock proteins, glutathione, bilirubin) with anti-oxidant and anti-apoptotic activities, which are under the control of protective genes-vitagenes. 90,91 The cellular resistance to stress and the strongly conserved cytoprotective mechanisms are also known as the heat shock response. Heat shock proteins (HSP) play a pivotal role in the folding and repair of damaged proteins, and they promote cell survival and prevent apoptosis. HSP is delivered between cell types in the nervous system, and the provision of exogenous HSP at the site of neural injury may be an effective tactic for maintaining neuronal viability, a strategy that has been validated in many model systems. 90,92 Heat shock proteins serve as important members of the vitagene network in neuroprotection and redox proteomics as a useful tool to study redox-regulated stressresponsive vitagenes. 89 The vitagene system, one of the major intracellular redox systems involved in neuroprotection, is emerging as a potential target for novel cytoprotective interventions. 93 Studies have shown that vitagenes can encode the Hsp32, Hsp70, thioredoxin and sirtuin protein systems and heme oxygenase-1. 91,94 Endogenous cellular defence pathways (sirtuin and Nrf2 [Nuclear factor-erythroid 2 p45-related factor 2] and related pathways) mediate hormetic dose responses and integrate adaptive stress responses in the prevention of neurodegenerative diseases. 87 Heme oxygenase, located within the endoplasmic reticulum (ER), is a dynamic sensor of cellular oxidative stress and regulator of redox homeostasis in the entire phylogenetic spectrum, acting in association with NADPH cytochrome P450 reductase to oxidize heme to bilirubin, CO and free ferrous iron. 87 CO is a gaseous second messenger that is produced in biological systems when heme oxygenase is oxidative catabolism. The promotion of ROS-mediated signalling by CO may depend on CO concentration and exposure time, the localization of heme proteins, or the specificity of the redox response. 87 Even though high concentrations of CO are toxic, ROS produced in response to exogenous low doses of CO may affect cellular respiration and eventually lead to adaptation. 95 NO is produced in small quantities to regulate local brain metabolism, neurotransmitter release, gene expression and exerts a crucial role in synaptic plasticity and morphogenesis, but in the case of excessive formation, NO is an essential mediator of neurotoxicity in various diseases of the nervous system. 87,96 CO and NO are essential in the regulation of CNS functions, and impaired CO and NO metabolism lead to abnormal brain functions. Hormesis is the most effective endogenous protective mechanism against antioxidant damage, and it is a dose-response phenomenon featuring low-dose stimulation and highdose inhibition, which can be graphically indicated by an inverted Ushaped dose-response/J-shaped or U-shaped dose-response. 
90 Hormesis can be induced either by direct stimulation or by overcompensation for disruptions in homeostasis, and the induction of adaptive responses by previous exposure to a variety of low-level stressors has been widely investigated and has been proven to be reliable in protecting the nervous system from a variety of neurodegenerative diseases. 97,98 Hormetic dose-response has been widely studied in neuroscience research, such as neurodegenerative diseases, 90 in addition to various antidepressants, 99 anxiolytic drugs, 100 and memoryenhancing drugs. The hormetic dose-response provides a new interpretation of the dose-response, where the typical endpoints measured at high doses in a toxicological setting show cellular damage; however, when the dose is reduced below the threshold, the low-dose stimulation is more probably a manifestation of an adaptive response that is consistent with a measure of biological performance as may be seen in the cases of modest increases in longevity, cognition and other biomedical endpoints of interest. 87,101 The consistency of the hormetic dose-response of various biological models suggests that this dose-response may be a manifestation of plasticity in biological systems and that essentially all biological models have the same quantitative characteristics as the dose-response in response to imposed stress. 87 The hormetic dose-response provides a quantitative indicator of biological plasticity at multiple levels of biological organization, making it a core biological element for improving biological cell survival. 87 Indeed, the preconditioning signal leading to cellular protection through hormesis is an important redox-dependent ageingassociated to free radicals species accumulation, 25 The free radical species NO not only regulates neuronal proliferation, survival and differentiation, but it is also involved in synaptic activity, neural plasticity as well as memory function, and it also regulates cell differentiation and survival in the brain by modulating transcription factors and genes. 102 NO activates important survival pathways involved in Akt (protein kinase B) and CREB (cyclic AMP-responsive-element-binding protein). 25 S-nitrosylation of NMDA (N-methyl-D-aspartate) receptor subunit or the active site of caspases is responsible for the neuroprotective effect of physiological amounts of NO. 103 NO also induces heme oxygenase 1, a crucial enzyme in the cellular stress response. 104 NO becomes detrimental when cells are under oxidative stress or when there is excessive production of NO, which may undergo redox reactions and form toxic RNS that ultimately lead to cellular damage.
The intracellular redox state may be a key factor in determining whether NO is toxic or protective in brain cells. The mechanism of action of NO as a pro-or anti-inflammatory agent depends on the complexity of the chemical nature of NO in biological systems. 87,105 For example, natural antioxidants, such as polyphenols, have been proposed for the prevention or treatment of the neurodegenerative disease-Alzheimer's disease because of their ability to counteract NO-induced damage in vitro. 106 The vitagene system, an important intracellular redox system, is emerging as a potential neurostimulatory target for new cytoprotective interventions. Besides, vitagenes play a crucial role in the cellular pathway of protection against oxidative stress. 91 Oxidative stress is defined as elevated intracellular levels of ROS, leading to lipid, protein, and DNA damage; however, elevated ROS also acts as signalling molecules to maintain physiological functions, a process known as redox biology. 107 In general, all redox biology is interconnected and regulated in various ways, including mainly the antioxidant networks and vitagene networks. A variety of nutrients, such as taurine, vitamin E, selenium, carotenoids, etc., have been shown to affect antioxidant defence and vitagene networks, helping to maintain redox balance. 108 The regulation of endogenous cellular defence mechanisms through the vitagene system may represent an innovative approach to the treatment of psychiatric disorders. 90 During periods of oxidative stress, the vitagene network operates as a defence system in the brain, which opens new perspectives for the treatment of brain ageing and neurodegenerative disorders. 25,87,109 There is growing evidence for the concept of oxidative stressdriven neuroinflammation as an early pathological feature of neurodegenerative diseases. 110 Neuroinflammation, a complicated host response to tissue damage or infection, is believed to be a key mechanism leading to CNS damage and has been identified to be an appealing therapeutic target for the prevention of cognitive impairment in psychiatric disorders. 111 In other words, neuroinflammation is typical of all neuropsychiatric disorders and is a response to the CNS homeostasis involved in normal brain development and neuropathological processes, and therefore inflammation can be either beneficial or harmful. The imbalance that exists between pro-and antiinflammatory activity is the most prominent feature in defending and restoring the integrity of the injured CNS and protecting neurons from damage. Under normal physiological conditions, cytokines supply nutritional support for neurons and reinforce neuronal integrity, but under stressful conditions (oxidative stress), overproduction of cytokines can lead in part to depression-like behaviours and cognitive dysfunction in vivo. 55,77 There is increasing evidence that the imbalance between pro-and anti-inflammation is involved in the pathogenesis of many human diseases so the two balance also plays a major role in brain-immune interactions in humans. Inflammatory cytokines are generally considered to be markers of activation of brain-immune interactions. 112 It is suggested that the cytokine-mediated activation of the inflammatory response system might take an active part in the pathogenesis of cognitive dysfunction.
As a component of the cell membrane of Gram-negative bacteria, lipopolysaccharide (LPS) has been used in many studies to mimic infection because it triggers a rapid and well-characterized immune response and induces depressive behaviour in animal models. 113 The current research suggests that the bacterial endotoxin LPS has multiple effects on cognition and neuronal integrity and that multiple inflammatory cytokines would be responsible for these changes. 114 Experimental studies from rodents have shown that inflammatory mediators and pro-inflammatory cytokines are upregulated in peripheral tissues and the CNS. 115,116 Preliminary studies suggest that inhibition of pro-inflammatory cytokines or their signalling circuits ameliorates depressive symptoms and enhances response to traditional antidepressant treatments. 75,117,118 The hippocampus is one of the brain regions closely associated with psychiatric disorders. Numerous studies have concluded that the hippocampus has many cytokine receptors, which makes it vulnerable to high concentrations of pro-inflammatory cytokines during neuroinflammation. 119 Pro-inflammatory cytokines have a detrimental effect on the regulation of hippocampal neurotransmitter signalling, ultimately leading to excitatory neuronal damage and cognitive dysfunction. 118 Recent studies have used the mice behavioural tests, western blot and immunofluorescence experiments, respectively, to detect changes in cognitive behaviour, protein expression and proinflammatory cytokines associated with cognitive function in the mice hippocampus, as well as activation of microglia in the hippocampal dentate gyrus region. 120,121 The results suggest that cognitive dysfunction is caused by overproduction of proinflammatory cytokines and hyperactivation of microglia. Anti-inflammatory cytokines regulate the duration and intensity of behavioural symptoms of the disease, possibly via inhibition of pro-inflammatory cytokine production and attenuation of their signalling. 122,123 Anti-inflammatory therapy would be an important treatment for cognitive dysfunction. Future treatment of cognitive impairment in psychiatric disorders may be aimed at the immune system through the action of inflammatory cytokines.
| Cognitive dysfunction, immune molecular signalling pathway as well as psychiatric disorders
Studies of the complex interactions between the brain and the immune system have shown that cognitive impairment in LPS-induced neuroinflammatory mice is associated with TLR4. 125 The NLRs have an important role in the regulation of cognition, anxiety and hypothalamic-pituitary-adrenal (HPA) axis activation. 126 The complement cascade is an imperative component of the innate and adaptive immune systems.
Complement is a critical intermediate link between innate and acquired immune defences. Complement activation occurs mainly through three different pathways (the classical, alternative, and lectin pathways), each leading to a common terminal pathway. 127 Complement activation leads to the release of a variety of biologically active molecules that contribute to immune surveillance and tissue homeostasis. Complement activation is also implicated in the pathogenesis of various psychiatric disorders, and studies have found that the brain is also susceptible to complement-mediated damage. The complement cascade has an essential function in microglia-mediated synaptic refinement during brain development. 128 Synaptic loss in Alzheimer's disease (AD) is associated with cognitive decline, and studies suggest that complement C3, which is elevated in AD, colocalizes with neuritic plaques and may contribute to the clearance of Aβ by microglia in the brain. 128,129 Cytokines are critical modulators of immunity. Cytokines act as intercellular transmitters to regulate the immune system as well as the inflammatory response. Four general types of cytokine receptor have been described, including receptors that signal through the JAK-STAT pathway 130 and transforming growth factor (TGF)-β receptors.

FIGURE 2 A diagram of the three most common cytokine pathways concerning cognitive dysfunction in psychiatric disorders. The PI3K/Akt/mTOR signalling pathway and the Ras/Raf/MEK/ERK signalling pathway have been identified as promising therapeutic targets for psychiatric disorder therapy. Amongst them, mTOR can promote the synthesis of synaptic proteins, and PI3K stimulation can enhance NMDA-dependent LTD and stimulate the exocytosis of AMPA receptors, contributing to the enhancement of excitatory synaptic transmission. AMPA, alpha-amino-3-hydroxy-5-methyl-4-isoxazole-propionic acid; Akt, protein kinase B; ERK, extracellular signal-regulated kinase; FADD, Fas-associated death domain protein; GF, growth factor; GSK, glycogen synthase kinase; IKK, IκB kinase; JAK, Janus kinase; LTD, long-term depression; LTP, long-term potentiation; mTOR, mechanistic target of rapamycin; NMDA, N-methyl-D-aspartate; NF-κB, nuclear factor kappa-light-chain-enhancer of activated B cells; PI3K, phosphatidylinositol-4,5-bisphosphate 3-kinase; SOCS, suppressor of cytokine signalling; STAT, signal transducers and activators of transcription; TRADD, TNF-receptor 1-associated death domain protein; TRAF2, TNF receptor-associated factor 2
Studies suggest that over 50 cytokines signal through the JAK/STAT pathway to induce inflammation and regulate immune responses. 131 The JAK-STAT pathway, also known as the IL-6 signalling circuit, is a signal transduction circuit stimulated by cytokines. 132 Most cytokines promote gene transcriptional regulation through the JAK-STAT pathway, whilst their signalling is generally suppressed by the suppressors of the cytokine signalling (SOCS) family of proteins. 133 The STAT family in the cytoplasm is a downstream target of JAKs, which are amongst the transcription factors essential for cytokine activation in the immune response. 132 The SOCS proteins modify the activation of downstream genes by regulating the output of activated STATs and modifying extra diversity into the JAK-STAT pathway. 133 Recently, M1 macrophages were proven to upregulate the expression of an intracellular protein called SOCS3 and initiate inducible nitric oxide synthase (NOS2) or iNOS to produce NO. 68 There is evidence that M2 polarization downregulates the activity of NF-κB and STAT1, thereby acting as a common mechanism to limit inflammatory development( Figure 2). 134 A growing number of studies have shown that PI3K/Akt/mTOR pathway and Ras/Raf/MEK/ERK signalling pathway are associated with GF and GF receptors. 135,136 The PI3K/ Akt signalling pathway also can inhibit GSK-3β activity. Stimulation of PI3K can enhance NMDA-dependent LTD, stimulate AMPAR exocytosis, and enhance excitatory synaptic transmission ( Figure 2). More importantly, activation of the mechanistic target of rapamycin (mTOR) increases the synthesis of proteins required by existing synapse maturation and new synapse formation. 136 Synaptic plasticity performs a central position in both short-term and long-term memory, and the mechanisms behind its changes are relevant to the pathophysiology and treatment of various psychiatric diseases. 136
| COGNITIVE DYSFUNCTION AND NEURON-GLIA INTERACTION
As the complexity, importance and diversity of glial cell functions are increasingly understood, there is growing interest in the potential for glial cell specialization and heterogeneity. Glial cells represent a range of non-neuronal cells in the nervous system and comprise half of the volume of the human brain. A large body of evidence suggests that glial cells have an important influence on the development and structure of local neural networks, proving that higher brain functions are wrongly assumed to be exclusively neuronal activity. It is now believed that neuron-glia interactions are necessary for brain function. 137 In the CNS, a large number of glial cells and neurons communicate with each other to perform elaborate brain structures and functions. 138 140 The pathophysiology of cognitive dysfunction is thought to be related to impairments in neural and synaptic plasticity, which in turn is modulated by the integrity of neuron-glial cells. 141,142 Alterations in glial cell number and/or function will influence the integrity and activity of the neuron-glial network, thereby affecting various behavioural outputs (e.g., cognitive function). Disruption of the glial signalling pathway can lead to synaptic and cognitive damage in disease. 141 Glial cells restore vascularization, re-establish the blood-brain barrier(BBB), stimulate synapse formation and complete remyelination mainly by promoting homeostasis of neural tissue after injury. 143 Previous studies have shown that there is a cytokine network in the brain produced by neurons, microglia and astrocytes that deliver cytokine receptors and magnify cytokine signals. 144
| Neural and synaptic plasticity in cognitive dysfunction
Synaptic plasticity is the most fundamental function of the brain to perceive, evaluate and store complex information and to respond appropriately and adaptively to subsequent relevant stimuli. 148,149 Normal brain development is inextricably linked to the maintenance of neuronal plasticity and is highly dependent on the accurate response of neuronal cells to various stresses. Synapses are highly dynamic structures that can rapidly modify synaptic connections following neuronal activity and play an important role in improving cognitive performance. Synaptic deficits are the strongest correlative factor for cognitive decline in many neuropsychiatric diseases. The sustained increase and decrease in synaptic strength, known as LTP and long-term inhibition (LTD), respectively, are generally considered to be associated with learning and memory and essential for normal cognitive function. 150 The capability of weakening or strengthening synaptic connections between neurons, especially LTP and LTD, is one of the most dynamic areas of investigation in neuroscience. 151 Studies have demonstrated that LTP is caused by the temporal and spatial coordination of multiple postsynaptic processes, including reorganization of the actin cytoskeleton, exocytosis of endosomes and insertion of AMPAR in synapses. 152
| Cognitive dysfunction and microglia
Glial cells comprise a variety of cell types in the nervous system. In the mammalian brain, glial cells outnumber neurons and account for about half of the volume of the CNS. In the CNS, glial cells are usually divided into macroglia (mainly astrocytes and oligodendrocytes) and microglia. 158 A century ago, microglia were identified as a distinct group of cells. It has long been thought that microglia are primarily mononuclear phagocytes derived from the mesoderm/mesenchyme, responsible for removing debris during CNS development and disease. 159 When the CNS or the brain is damaged or inflamed, the number of microglia increases and they then act like phagocytes to remove diseased cells. In addition to their phagocytic function, microglia are also involved in synaptogenesis (synapse formation and plasticity) and in the death of developing neurons. 160 Pattern recognition receptors (PRRs) expressed on the surface of microglia appear to be a common pathway for ROS production triggered by multiple toxin signals. TLR-induced microglial activation is responsible for the neurotoxicity seen in various CNS diseases. TLRs have been demonstrated to identify pathogen-associated molecular patterns and trigger innate immune responses in their interactions with infectious agents. 169 Studies have shown that LPS treatment leads to behavioural and cognitive impairments and neuroinflammation in mice, which are associated with microglial activation and loss of hippocampal neuronal cells. 170 Results indicate that when LPS induced a proinflammatory response in the hippocampus of prenatally stressed mice, cytokine mRNA levels were significantly increased. 171 Furthermore, prenatally stressed mice showed a greater proportion of hippocampal Iba1-immunoreactive cells with morphological features of activated microglia than non-stressed mice. 171 CD11c+ microglia are a recently identified subpopulation, mainly found in the primary myelinated regions of the developing brain, expressing genes involved in neuronal and glial cell survival, migration, and differentiation. 172 Parkhurst et al. suggested that microglia exert an influential physiological role in learning and memory via the brain-derived neurotrophic factor (BDNF) signalling pathway, which promotes the formation of learning-related synapses. 173 As the function of microglia in detecting brain alterations becomes better understood, microglial activation in disease progression and pathogenesis has been increasingly discussed.
Insights into the mechanisms of brain-immune interactions can help us develop innovative strategies to activate microglia more efficiently whilst avoiding their deleterious effects on neuronal components. There is no doubt that, with the assistance of modern imaging and analytical techniques, the physiological functions of microglia in cognitive dysfunction will be illuminated in the years to come.
| Cognitive dysfunction and astroglia
Astrocytes, derived from neural progenitors, are the most abundant and functionally diverse of the various glial cells. 116 Astrocytes are also amongst the most specialized glial cells, outnumbering neurons by more than fivefold. 174 Astrocytes are essential for regulating glucose metabolism, providing structural support, neurotransmitter uptake (especially glutamate), synaptic development and the BBB. 140,175 In rodent model experiments, changes in the number and morphological phenotype of astrocytes have been shown to trigger cognitive dysfunction. 176 Astrocytes are found throughout all regions of the CNS, in both grey matter and white matter. They are the most numerous and voluminous of the glial cells and are named after their stellate morphology. 167 Based on their morphology and anatomical location, astrocytes can be divided into protoplasmic astrocytes and fibrous astrocytes. 174,177 Protoplasmic astrocytes are found throughout grey matter, whereas fibrous astrocytes are found throughout white matter and take the form of many long fibrous projections. About 60% of the synapses in the CA1 region of the hippocampus are wrapped by protoplasmic astrocyte membranes. 178 Astrocytes have the ability to regulate the structural remodelling and functional plasticity of synapses, so they are often regarded as universal contributors to synaptic functions. 179 Research has shown that stress impairs LTP by limiting shuttling within the astrocyte network, thereby impeding neuronal access to astrocyte energy depots in the hippocampus and neocortex. 181 Alterations in glial cell synapse or network function and abnormalities in astrocyte signalling may cause or contribute to synaptic and network imbalances, leading to cognitive impairment. 182,183 Astrocytes are directly involved in synaptic transmission by regulating intracellular calcium ion concentration and calcium signalling. 184 Astrocytes express many neurotransmitter receptors and transporters, which may be activated by neurotransmitters released at synapses.
Astrocytes possess many neurotransmitter receptors that are coupled to intracellular calcium mobilization. Astrocytes can integrate neuronal inputs, exhibit calcium excitability, and modulate neighbouring neurons via calcium-dependent release of the chemical transmitter glutamate. 185 Some evidence suggests that the astrocytic Ca2+ response to excitatory neuronal activity is largely mediated by the activation of type I and type V metabotropic glutamate receptors (mGluRs). 186 Studies of experimental mouse hippocampal slices have shown that neuronal activity leads to glutamate-dependent calcium increases in astrocytes. Recent studies suggest that delayed and deficient maturation of astrocytes may simultaneously disrupt glutamate, potassium and neuroregulatory homeostasis, leading to impaired synaptic transmission. 187 Understanding the role of astrocytes in sustaining glutamate homeostasis and modulating the balance between glutamate uptake and release in the CNS contributes to a better understanding of the mechanisms of glutamate excitotoxicity in psychiatric disorders. 188 Studies suggest that pathological processes targeting astrocytes may cause neurodegeneration in specific brain regions, which in turn may have an impact on cognitive function in mood disorders. 176 The prefrontal cortex (PFC) plays a key part in cognition, memory and emotional processing. Astrocyte loss (with reduced glutamate uptake) in the PFC is sufficient to induce anhedonia and depression-like behaviour in rats. 189,190 Several drugs that enhance the functional activity of astrocytic glutamate transporters have been identified, such as the β-lactam antibiotic ceftriaxone sodium, which shows neuroprotective potential and positive results in several animal models of neurodegenerative and psychiatric disorders. 191,192 Changes in astrocytes are usually coordinated with alterations in spines, and dynamic structural changes in astrocytes contribute to controlling the extent of neuron-glia interaction at hippocampal synapses. 193 Clinicopathological studies have shown that astrocyte density is decreased in the PFC of patients with major depression. 194 Prevalent astrocyte markers in brain tissue (based on antibody localization of various proteins) include GFAP, the water channel aquaporin-4 (AQP4), the calcium-binding protein, the glutamatergic markers (excitatory amino acid transporters 1 and 2 [EAAT1, EAAT2]) and glutamine synthetase. 175,195 There is evidence that each of these astrocyte markers is altered in psychiatric disorders. In addition, the lamina-specific GFAP response in dlPFC astrocytes is reduced in patients with schizophrenia. 196 Reduced astrocyte numbers, morphological atrophy, decreased GFAP expression, as well as inadequate glutamate uptake, are manifestations of cognitive dysfunction in a variety of psychiatric disorders. [197][198][199] Astrocytes are intensively implicated in several aspects of CNS function, including oxidative stress regulation. In certain pathological conditions, astrocytes may be one of the main sources of detrimental ROS and RNS. Increasing evidence suggests that astrocytes may be a prospective target for the modulation of oxidative stress in the CNS, which may provide a future approach for the treatment of these conditions. Understanding the emerging role of astrocytes in cognitive dysfunction will provide us with some new therapeutic opportunities.
| Cognitive dysfunction and oligodendrocytes (OLs)
The most numerous glial cells are oligodendrocytes and NG2 cells (40%-60% of glial cells). 201 Oligodendrocytes are the CNS counterparts of myelinating Schwann cells. 185 Oligodendrocytes are best known for their role in myelination. Myelination is a dynamic process that responds to external stimuli, consistent with the activity of neural networks that continue to mature long after ontogeny. 187 Several experiments in rodents have shown that oligodendrocytes are particularly sensitive to alterations in the neural environment during critical windows of individual development, when developing myelin formation is most likely to be disrupted. 187 Steadman's study showed that disrupting oligodendrogenesis reduces myelin formation and impairs spatial memory consolidation. 202 Oligodendrocytes and myelin are vital for rapid neuronal signalling and also supply metabolic and nutritional support for axons, and their damage causes axonal death and neurodegeneration, which are critical characteristics of AD. 203 Oligodendrocytes maintain normal axon function by myelinating the axons, and oligodendrocyte injury leads to Wallerian degeneration, which inevitably leads to axonal demise. 139 Myelinated axons not only transmit information more rapidly than unmyelinated axons but also transmit information more coherently. Oligodendrocytes wrap axons with myelin sheaths to support the local metabolic structure and homeostasis of axons. 204 Oligodendrocytes are a dominant cell type of white matter (approximately 50% of the total volume of the human brain) and express glutamate receptors and transporters. 139,205 Altered expression of NMDA receptors (NMDARs) and enzymes involved in glutamate metabolism has been found in schizophrenic patients. 206 Neuronal activity is a key determinant of myelination; therefore, myelination can be reduced by inhibiting neuronal NMDARs and inhibiting action potentials. 207 NG-2 glial cells, also named oligodendrocyte precursor cells (OPCs) or polymorphic glial cells, are homeostatic and facilitate myelination in adulthood, although their functions remain incompletely characterized. 208 OPCs originate in specific regions of the developing CNS and, once generated, migrate, differentiate and mature into oligodendrocytes in specific regions. OPCs retain the ability to proliferate, migrate and differentiate into oligodendrocytes. 209 OPCs in the adult CNS form synapses with neurons, support the integrity of the BBB and mediate neuroinflammation. There is still a long way to go, and many exciting cellular mysteries remain to be unravelled. 212
| COGNITIVE DYSFUNCTION AND STRESS
Stress is a significant contributor to the risk of psychiatric disorders, and the progression and aggravation of depression and anxiety disorders are associated with recurrent psychosocial stress. 213 Psychological stress is a very prevalent life event that often affects brain function and cognitive performance, and the immune system has a supportive role in brain plasticity and homeostasis. The brain is a crucial organ for interpreting and coping with potential stress and serves an essential role in controlling the generation of stress and its behavioural and physiological reactions. Changes in the brain and immune system induced by acute and chronic stress, and their underlying mechanisms, have been shown to have unintended clinical consequences. Studies have shown that stress affects cognitive function in multiple ways and that the effects of stress on cognition involve multiple mechanisms and different time courses. 214 Preclinical studies have shown that stress paradigms such as chronic unpredictable stress, social defeat and learned helplessness induce pro-inflammatory cytokines centrally and peripherally, and that these changes are associated with depression-like symptoms in psychiatric disorders. 215,216 Activation of the inflammasome and elevated cytokines during stress and depression suggest that inhibition of the inflammatory response can alleviate depressive symptoms. 217 In an experimental study, adult male rats exposed to prenatal stress showed increased expression of cytokines in the hippocampus, and these cytokines were more likely to activate microglial and astrocyte responses in adulthood. 218 A person's ability to cope with stress is primarily regulated by the HPA axis, which is self-regulating under normal physiological conditions and whose activity is reduced by negative feedback inhibition. Elevated stress hormones directly affect the motility of various types of cells in the immune system. 219 In particular, stress hormones and glucocorticoids may impair cognitive function and contribute to impairment of brain structures. 220 In general, when physiological stress causes neuroinflammation, glucocorticoids exert anti-inflammatory effects by increasing levels of anti-inflammatory cytokines and decreasing levels of pro-inflammatory cytokines. 221 Stress can reduce the number of axo-spinous excitatory synapses and may change synaptic morphology. 222 The stress response stimulates specific neural circuits in the hippocampus, the PFC, and the amygdala. Even very mild and acute stress can cause rapid cognitive impairment in the PFC (the region most sensitive to the harmful effects of stress exposure), whilst prolonged stress exposure can lead to structural changes in the dendrites of the PFC. 223 Treatment with fluoxetine prevented stress-induced reductions in astrocyte numbers, whereas in non-stressed animals, fluoxetine did not affect astrocyte numbers.
| Cognitive dysfunction and chronic stress
Since the early 1990s, several neurobiological studies have emphasized the effects of chronic stress on the developing brain. 224 Unavoidable chronic stress will eventually lead to changes in mental status and pathological changes in the function of the immune system, ultimately causing irreversible damage to the body. 225 The risk of chronic exposure to stress hormones, whether occurring prenatally or in infancy, childhood, adolescence, adulthood or old age, affects brain-immune interactions involving cognitive and mental health. 226 Chronic unpredictable stress (CUS) has been shown to induce changes in amino acid neurotransmitter metabolism in glial cells. 189 Studies have shown that chronic inflammation is caused by stress and by disturbances in the activity of pro-inflammatory cytokines. Under severe or chronic stress, the immune system is intensely stimulated, and glial and other immune cells in the brain alter their morphology and function and secrete high levels of pro-inflammatory cytokines. CUS is a rodent model of depression, and a series of experiments by Banasr et al. shows that chronic stress can damage glial function in the rat PFC. 189 Immunohistochemical results show that chronic stress dramatically reduced the overall number of GFAP-positive astrocytes in the mouse/rat hippocampus. There is considerable evidence that long-term stress exposure results in permanent loss of neurons in the hippocampal region, as well as long-term changes in dendritic structure. 214 There is also evidence that chronic stress-induced glial cell dysfunction leads to elevated extracellular glutamate levels with neurotoxic effects, resulting in reduced astrocyte density and a decrease in GABAergic interneurons. 195,227 Interventions such as regular physical activity and a regular diet, as well as positive social support, as adjuncts to medication, can reduce the long-term burden of stress and benefit brain and body health and resilience.
| Cognitive dysfunction and post-traumatic stress
Traumatic stress leads to a variety of psychiatric disorders, including depression, anxiety, and trauma-related disorders, especially post-traumatic stress disorder (PTSD). PTSD is highly correlated with immune dysregulation, and several studies have demonstrated that prior stress exposure enhances fear learning (stress-enhanced fear learning, SEFL), which is related to increased hippocampal IL-1β in preclinical animal models. 227 Research by Imai et al. suggests that serum levels of the inflammatory cytokine IL-6 are dramatically higher in female individuals with PTSD than in healthy controls, 228 which supports the hypothesis of neuroinflammation in cognitive dysfunction. The results of meta-analyses showed that IFN-γ, TNF-α, and C-reactive protein were elevated in PTSD. 229 Postoperative cognitive dysfunction (POCD) is a neurological disease associated with neuroinflammation; it is the most common type of cognitive dysfunction, has been widely reported after various surgical operations, and is particularly common in elderly patients. [230][231][232] Postoperative systemic and hippocampal inflammation is a key factor in the pathogenesis of POCD. [233][234][235][236][237] It has been shown that markedly increased expression of GFAP, that is, activation of astrocytes, in the hippocampus of mice in POCD animal models can cause cognitive dysfunction. There are three main strategies for the treatment of POCD: blocking inflammation by inhibiting inflammatory mediators (anti-inflammatory), counteracting the oxidative components of inflammation (antioxidant), and protecting neurons during surgery.
| Cognitive dysfunction and early-life stress (potential neuro-immunological mechanisms)
The sensitive window of time early in life is a critical period for brain development. Both animal and human studies have found that stressful life events, exposure to natural disasters, and symptoms of maternal anxiety and depression can affect the brain and increase the risk of offspring facing a range of emotional, behavioural and cognitive problems later in life. 238 During this sensitive time window in early life, stress may influence patterns of emotional and stress responses throughout life, altering the rate of brain and body ageing and ultimately leading to psychiatric disorders and neurodegenerative diseases. Research suggests that even minor activation of the immune system early in life may increase susceptibility to a range of psychiatric disorders and lead to physiological abnormalities in the body. 239 Early-life stress (ELS) significantly increases the risk of developing psychiatric disorders such as MDD, schizophrenia, and PTSD, all of which are characterized by cognitive dysfunction. 240 Early adversity is linked to deficits in a wide variety of cognitive and emotional functions. 241 Specifically, early life experiences, such as maternal deprivation, exposure to infections, and medicine use, can lead to permanent alterations of neural and immune system function and can affect brain structure and function, which are critical for cognitive function, emotional behaviour, and neuroplasticity in adolescence and adulthood. 242 Epidemiological data indicate that ELS occurs in different age groups, with the youngest age group of 1-3 years being the most affected. 243 Studies have shown that nearly 32% of psychiatric disorders are related to adverse childhood experiences. 244 Experiments in rodents have shown that ELS during the postnatal sensitive period increases stress sensitivity in adult female mice. 245 ELS is a vulnerability factor in the progression of psychiatric disorders, modulating inflammatory responses and influencing neuroplasticity mechanisms throughout the life span.
High levels of pro-inflammatory cytokines produced by the maternal or foetal immune system during perinatal infection are related to an elevated risk of foetal brain abnormalities as well as neurodevelopmental disorders. Inflammation appears to play an important role in many forms of ELS, and several research groups have suggested that elevated levels of pro-inflammatory cytokines mediate the long-term consequences of many forms of ELS. 246,247 Studies by Diz-Chaves et al.
showed that prenatal stress influences the inflammatory response in the hippocampus of adult male mice. 218 Prenatally stressed mice show higher levels of IL-1β and TNF-α in the hippocampus and an elevated proportion of microglia with reactive morphology in hippocampal CA1. 218 Besides, prenatally stressed mice treated with LPS exhibited elevated TNF-α immunoreactivity in CA1, as well as elevated numbers of ionized calcium-binding adaptor protein-1 (Iba-1)-immunoreactive microglia and GFAP-immunoreactive astrocytes in the dentate gyrus. 218 ELS has been shown to interfere with microglial developmental processes, including their proliferation, death and phagocytic activity. Elevated corticosterone levels in an ELS mouse model (mice briefly exposed to daily maternal absence in the first weeks after birth) interfere with microglial function during a critical time in brain development. 247
| COGNITIVE DYSFUNCTION AND PSYCHIATRIC DISORDERS (FOCUS ON TREATMENT)
Traditionally, research on psychiatric disorders has mainly concentrated on psychological symptoms, such as depression, pessimism, anxiety and delusions. 257 An increasing number of researchers now regard cognitive dysfunction as a primary feature of a range of psychiatric disorders in addition to emotional symptoms, 1,2,258 underscoring the necessity of treating cognitive dysfunction separately.
There is considerable evidence that persistent cognitive impairment may affect the resilience of patients with psychiatric disorders, 259 and it is generally accepted that individuals with cognitive dysfunction tend to suffer from long-lasting distress. 1 Although the aetiology and pathogenesis of cognitive dysfunction remain incompletely elucidated, the pathological processes of brain-immune interaction appear to offer targets for its treatment (Table 1). Therefore, studying brain-immune interactions may help us to understand the pathogenesis of cognitive disorders and ultimately to develop more effective treatment strategies for various psychiatric disorders.
| Cognitive dysfunction and MDD
Cognitive dysfunction is increasingly recognized as a core feature of MDD, with deficits observed in several domains (e.g., learning, memory, attention, executive function). 259 Although the specific neurophysiological characteristics of cognitive dysfunction in MDD have not yet been fully elucidated, a host of research suggests that patients with MDD have neurocognitive deficits in these cognitive domains. 258 There is ample evidence that cognitive dysfunction may persist following the remission of a major depressive episode. 10,175 A growing number of researchers believe that depression and inflammation are intertwined, with depression leading to inflammation and inflammation promoting depression, and that this bidirectional cycle has a major impact on maintaining the health of patients with psychiatric disorders. 271 There is a growing body of compelling evidence that communication between the peripheral and brain immune systems may lead to brain inflammation, which in turn leads to impaired neurogenesis and neuroplasticity damage, ultimately leading to MDD. Preliminary data from patients with autoinflammatory disorders and depression suggest that inhibition of pro-inflammatory cytokines or their signalling circuits leads to improved depressed mood and an increased response to traditional antidepressant treatment. 18 Many meta-analyses have reported higher concentrations of pro-inflammatory cytokines in patients with depression than in controls. [272][273][274] Growing evidence from preclinical and post-mortem studies shows decreased glial cell number or density and morphological and functional glial atrophy in MDD patients and animal models of depression. [275][276][277][278] Current preclinical studies suggest that astrocytes and glial fibrillary acidic protein (GFAP) are involved in animal models of stress and depression-like behaviour. 175,271 The diverse astrocyte deficiencies noted in MDD suggest that astrocytes may be a new target for antidepressant effects. Researchers have suggested astrocytes as a target for therapeutic intervention in depression, and several animal studies have shown that different classes of antidepressants affect astrocytes; fluoxetine, for example, prevents psychological stress-induced astrocyte reduction. 279 Thus, there is increasing interest in treating depression from the perspective of brain-immune interaction mechanisms.

| Cognitive dysfunction and SCZ

SCZ is a complex, heterogeneous behavioural and cognitive psychiatric disorder with distinct damage to psychosocial functioning. 280,281 Although cognitive dysfunction is considered a central feature of SCZ, little is known about its underlying pathophysiology. SCZ may be linked to an imbalance of inflammatory cytokines, resulting in a decrease in Th1 and an increase in Th2 cytokine secretion. 282 Mounting evidence shows that inflammation contributes to the pathogenesis and progression of SCZ, and therapeutic agents with anti-inflammatory or neurotrophic effects may be beneficial in the treatment of SCZ. 283 Numerous studies have shown that plasma levels of pro-inflammatory factors are considerably elevated in patients with schizophrenia compared to controls. 283,284 Meta-analysis of anti-inflammatory treatment of mood disorders has shown that such treatment reduces post-treatment manic and depressive symptom scores in patients receiving anti-inflammatory drugs.
285,286 Studies also suggest that learning and memory impairment in schizophrenia may be caused by pro-inflammatory cytokine activity produced by microglia, [287][288][289][290][291] and the microglial inhibitor minocycline may block cytokine damage to the hippocampus and reverse cognitive dysfunction in SCZ. 292 The microglia hypothesis of SCZ may provide new ideas for treatment strategies for SCZ. 290 Recent research by Yilmaz et al. has shown that significant loss of grey matter and dendritic spines occurs during the onset of SCZ, suggesting that decreased synaptic connections may contribute to behavioural and cognitive dysfunction. 293 However, it remains unclear when the loss of synapses in the cerebral cortex occurs and how the loss of synaptic connections relates to the proteins involved. It has been suggested that SCZ may be caused by dysregulation of the normal function of astrocytes at synapses. Furthermore, there is substantial evidence for dysregulated expression of myelin-related genes and altered oligodendrocyte numbers in the brains of SCZ patients. 294 Defects in oligodendrocytes in SCZ patients may be the result of maturation failure and disturbed regeneration, which may underlie cognitive deficits in the disease and are closely associated with impaired long-term outcomes. 294 Therefore, treating SCZ from the perspective of synaptic plasticity and neuron-glia interaction, namely the mechanism of brain-immune interaction, may be one of the future directions.
| Cognitive dysfunction and AD
AD is one of the most common diseases of old age and can lead to a decline in intellectual function with cognitive impairment and memory loss, which can seriously affect patients' quality of life. However, there are few treatment strategies for AD and it is difficult to cure, so it is critical to understand its pathogenesis. Accumulation of amyloid-beta plaques, hyperphosphorylation of tau proteins in neurofibrillary tangles, and neuroinflammation are possible mechanisms of AD. 295 There is growing evidence that the pathogenesis of AD is not limited to the prevailing amyloid cascade hypothesis and the tau (a microtubule-associated protein) dysfunction hypothesis, but also involves immunological mechanisms in the brain. 296 Neuroinflammation is one of the fundamental features of AD, but its exact role in disease progression remains unclear. A growing body of brain imaging data supports the contribution of neuroinflammation to AD progression. 297 Several studies have also shown elevated levels of cytokines and chemokines in the brains of AD individuals and proliferation of microglia in damaged regions. 298 Several studies have also shown higher levels of inflammatory cytokines in AD patients compared to controls. 299,300 In addition, recent studies suggest that novel anti-inflammatory agents, for instance minocycline (a tetracycline derivative), may exert neuroprotective action in the treatment of AD. 301 More and more studies also suggest that glial cells are one of the main drivers of neuroinflammatory processes, 302 and regulating glial cells could be a strategy for the treatment of AD. 303 In AD, clusters of reactive microglia surround senile plaques, and there is growing evidence that microglial proliferation and activation in the brain is a prominent feature of AD, and that impaired microglial activity and altered microglial responses to β-amyloid (Aβ) are correlated with an increased risk of AD. 304 Aβ proteins are also widely used to activate microglial cells in vitro. Krasemann et al. demonstrated that the TREM2-APOE circuit is a principal modulator of the functional phenotypes of microglia in psychiatric diseases and a novel target to assist in restoring microglial homeostasis. 305 The nucleotide-binding oligomerization domain-like receptor family, pyrin domain-containing 3 (NLRP3) inflammasome is one of the features of the neurodegenerative disease AD. 306 Recent studies suggest that NLRP3 may be used for targeted treatment to reduce neuroinflammation and that modulating endogenous cellular defence mechanisms might be an innovative method for AD intervention. 307 Studies also suggest that microglia, as the major participants in neuroinflammation, may be modulated differently in AD to prevent or slow disease progression. 296 Enhancing autophagy in microglia may be a promising new therapeutic strategy for AD. 308
| Cognitive dysfunction and PD
PD is considered one of the most prevalent age-related neurodegenerative disorders of the CNS. 315 PD is closely related to neurobehavioural disorders (anxiety, depression), cognitive dysfunction (dementia), and autonomic dysfunction (such as hyperhidrosis). 316 Several longitudinal studies have indicated that mild cognitive dysfunction is a harbinger of PD. 315 Recent investigations have revealed that higher levels of inflammatory biomarkers correlate with cognitive impairment in patients with PD. 317 Some data suggest that PD is connected with a pro-inflammatory profile, and soluble TNF receptors (sTNFR) are recognized biomarkers of cognitive performance. 318 Molecules that can activate the anti-inflammatory M2 phenotype or facilitate the conversion of the pro-inflammatory M1 phenotype to the anti-inflammatory M2 phenotype can be used to treat PD. 319 Anti-inflammatory molecules exert neuroprotective effects by modifying the balance of M1 and M2. In a mouse PD model of MPTP (a neurotoxin formed as a byproduct during synthesis of a meperidine analogue), infusion of human AAV expressing IL-10 reduced the expression of pro-inflammatory iNOS and significantly increased the levels of anti-inflammatory mediators. 319,320 MPTP triggers an inflammatory response that leads to neurodegeneration, causing microglial activation and M1-related increases in pro-inflammatory cytokines. 319 There is growing evidence that PD is associated with elevated TNF-α levels, which are markedly elevated in the cerebrospinal fluid and post-mortem brain of PD sufferers. Inflammation is thought to be an early-stage event in PD, so it may be more productive to target inflammatory pathways to prevent disease progression. Therefore, immunoregulatory therapies are currently recognized as a neuroprotective strategy for PD. 321 The pathogenesis of PD is characterized by the activation of microglia. Increased microglial activation is considered to lead to cell death in PD, and inflammation markers may be biomarkers for its prognosis. 322 Recent studies have shown that the NLRP3 inhibitor MCC950 effectively inhibits the activation of the microglial inflammasome and reduces motor defects in a variety of PD models. 321 A healthy lifestyle (such as the Mediterranean diet or a plant-based diet and regular exercise) may reduce chronic inflammation and positively affect PD symptoms and even disease progression; this may be an effective, non-pharmacological treatment strategy. 270
| CONCLUSION
Neuropsychiatric disorders involve complex cellular and molecular processes, and cognitive dysfunction is commonly considered to be a pivotal part of many psychiatric disorders. As understanding of the molecular and cellular mechanisms underlying cognitive dysfunction has increased, many researchers have come to believe that cognitive dysfunction depends on the bidirectional regulation of the neural and immune systems.
Research on cognitive decline has focused on the regulation of several crucial neuro-immune physiological processes, including immune molecules, neuroglia, oxidative stress, and early-life stress. A growing body of evidence suggests that brain-immune interaction mechanisms play a pivotal role in cognitive dysfunction. This article reviews how various factors involved in brain-immune interaction mechanisms modulate cognitive function and ultimately lead to cognitive dysfunction in the brain.
Although multiple factors may partially explain the brain-immune interaction mechanisms underlying cognitive dysfunction, the specific mechanisms of brain-immune interaction in cognitive dysfunction remain unclear and need to be further investigated. In conclusion, we aimed to briefly summarize the role of brain-immune interaction mechanisms behind cognitive dysfunction in order to inform effective treatment strategies for various psychiatric disorders.
AUTHOR CONTRIBUTIONS
Fangyi Zhao wrote the manuscript. Wei Yang and Tongtong Ge provided the construction. Bingjin Li and Ranji Cui provided the final revision. | 2022-07-22T06:19:32.903Z | 2022-07-20T00:00:00.000 | {
"year": 2022,
"sha1": "c04fac062515acd0956868b206dfb84a6dfdc9b0",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Wiley",
"pdf_hash": "ffa60fdcad78b10c48390db7c08f87885a748adf",
"s2fieldsofstudy": [
"Psychology",
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52073793 | pes2o/s2orc | v3-fos-license | Individual- and community-level determinants of child immunization in the Democratic Republic of Congo: A multilevel analysis
Understanding modifiable determinants of full immunization of children provides a valuable contribution to immunization programs and helps reduce disease, disability, and death. This study aimed to assess the individual- and community-level determinants of full immunization coverage among children in the Democratic Republic of Congo. This study used data from the Demographic and Health Survey 2013–14 from the Democratic Republic of Congo. Data regarding a total of 3,366 children between 12 and 23 months of age were used in this study. Children who were immunized with one dose of BCG, three doses of polio, three doses of DPT, and a dose of measles vaccine were considered fully immunized. Descriptive statistics were calculated for the prevalence and distribution of full immunization coverage. Two-level multilevel logistic regression analysis, with individual-level (level 1) characteristics nested within community-level (level 2) characteristics, was used to assess the individual- and community-level determinants of full immunization coverage. This study found that about 45.3% [95%CI: 42.02, 48.52] of children aged 12–23 months were fully immunized in the DRC. The results confirmed that immunization coverage varied, ranging from 5.8% in Mongala province to 70.6% in Nord-Kivu province. Results from the multilevel analysis revealed that four Antenatal Care (ANC) visits [AOR: 1.64; 95%CI: 1.23, 2.18], institutional delivery [AOR: 2.37; 95%CI: 1.52, 3.72], and Postnatal Care (PNC) service utilization [AOR: 1.43; 95%CI: 1.04, 1.95] were statistically significantly associated with full immunization coverage. Similarly, children of mothers with secondary or higher education [AOR: 1.32; 95%CI: 1.00, 1.81] and from the richest wealth quintile [AOR: 1.96; 95%CI: 1.18, 3.27] had significantly higher odds of being fully immunized compared to their counterparts whose mothers were relatively poorer and less educated. Among the community-level characteristics, residence in a community with a higher rate of institutional delivery [AOR: 2.36; 95%CI: 1.59, 3.51] was found to be positively associated with full immunization coverage. Also, the random effect result found that about 35% of the variation in immunization coverage among the communities was attributed to community-level factors. The Democratic Republic of Congo has a noteworthy gap in full immunization coverage. Modifiable factors, particularly health service utilization including four ANC visits, institutional delivery, and postnatal visits, had a strong positive effect on full immunization coverage. The study underlines the importance of promoting immunization programs tailored to the poor and women with little education.
Background
Sub-Saharan Africa (SSA) has the world's highest risk of neonatal deaths, accounting for 40% of under-five deaths globally [1]. With an under-five mortality rate of 94 per 1000 live births, the Democratic Republic of Congo (DRC) has one of the highest child death rates in sub-Saharan Africa [2,3]. Vaccine-preventable diseases such as tuberculosis and lower respiratory tract infection are still the leading causes of death in children in the DRC [4]. Childhood vaccination is, therefore, one of the most effective ways to reduce child mortality rates [5]. The World Health Organization (WHO) estimates that immunization averts 2 to 3 million deaths annually, and there is still scope to save an additional 1.5 million lives [6].
Global immunization coverage has increased in the past few decades [7,8]. The Expanded Programme on Immunization (EPI) has played a central role in improving immunization coverage. The proportion of children receiving the third dose of the vaccine against diphtheria, tetanus, and pertussis (DTP3) is typically used as a major indicator of a country's capability to provide immunization services. In 2000, global DTP3 coverage was 72%, and it increased to 86% by 2016 [9]. There has also been an improvement in measles vaccination. During the 1990s, measles coverage was about 71%, but since 2000 there has been a marked increase, and in 2016, nearly 85% of children had received one dose of measles vaccine by their second birthday [9]. However, an estimated 19.4 million infants worldwide still missed out on basic vaccines in 2015 [9]. About two-thirds of children without immunization coverage live in the DRC, Angola, Ethiopia, India, Indonesia, Iraq, Nigeria, Pakistan, the Philippines, and Ukraine [9].
Studies have shown that maternal education, socioeconomic status, and maternal health service utilization during antenatal care (ANC), delivery, and postnatal care are associated with child immunization [10][11][12][13][14]. Additionally, factors such as media exposure, perceptions of vaccination, the child's place of birth (e.g., health facility), and region of residence also influence immunization coverage among children [10][11][12][13]. Furthermore, the application of multilevel modeling has shown that community/contextual characteristics such as region or province of residence [12,15,16], community maternal education [15,16], level of ANC utilization [12], level of institutional delivery [15], and community poverty level [12] are important determinants of immunization service utilization. The DHS surveys are designed in such a way that individual-level characteristics are nested within community-level (or primary sampling unit level) characteristics. Therefore, multilevel analysis is a highly recommended tool for further analysis of such data. In fact, omitting the community-level characteristics while estimating the determinants of immunization coverage using DHS data could bias the results [17]. The first DHS survey in the DRC, in 2007, found that only 31% of 12-23-month-old children were fully immunized [18]. The Centers for Disease Control and Prevention (CDC) estimates that more than 764,400 children were unimmunized in the DRC in 2014 [19]. The gap in immunization coverage poses a threat to the DRC's global commitment to eliminate measles by 2020 [20]; therefore, some researchers have called for better EPI coverage and mass vaccination campaigns [21]. Similarly, previous local studies from Kinshasa and Goma in the DRC have examined factors determining child immunization. These studies have found that the fullness and timeliness of immunization are determined by various factors such as the type of clinic in which an infant is enrolled for immunization, socio-economic status, social and family support to mothers, maternal education, and marital status [22][23][24]. To date, there is no national-level study examining factors associated with full immunization coverage in the DRC. Therefore, there is a need to better understand the level of immunization and the variables related to the prevalence of immunization at the national level.
This study aims to assess the individual- and community-level determinants of child immunization coverage in the DRC using nationally representative, population-based Demographic and Health Survey data from 2013-14.
Study area
The DRC is one of the Central African countries with extensive natural resources. The total area of the DRC is 2.3 million square kilometers, which is administratively divided into 26 provinces. The estimated total population of the DRC was 77.8 million in 2012, of which 70% lived in rural areas.
At the national level, EPI administers the immunization program. The organization produces five-year plans, and these plans estimate the vaccine needs, cold chain supplies, and equipment needed to operate the immunization program. The EPI works in coordination with local partners to quantify the needs of the health zones. This information is then shared with UNICEF, which purchases immunization supplies [25].
The survey
A multistage cluster sampling method was used for the DHS survey (EDS-RDC II). At the first stage, the national territory was divided into twenty-six sample domains corresponding to the DRC's provinces. For urban areas, neighborhoods of cities and towns were sampled, whereas in rural areas, villages and chiefdoms were sampled. The final sampling unit selected was the cluster (neighborhood or village), and a total of 540 clusters were randomly selected as primary sampling units (PSUs). Subsequently, a fixed number of households were chosen from each of the selected clusters based on the probability proportional to size technique. A total of 18,360 households (5,474 in urban areas in 161 clusters and 12,886 in rural areas in 379 clusters) was drawn. DHS follows standard sampling procedures, and detailed information can be obtained from the Measure DHS webpage [26]. Country-specific information is elaborated in the final report [27], which can be downloaded from the DHS website [28].
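To illustrate the probability-proportional-to-size step described above, the following is a minimal sketch in Python of systematic PPS selection. The function name, the use of household counts as the size measure, and the random seed are illustrative assumptions, not the DHS implementation.

    import numpy as np

    def systematic_pps(sizes, n_select, seed=0):
        # Systematic probability-proportional-to-size selection:
        # a unit is chosen when a selection point falls inside its
        # band of the cumulative size scale.
        sizes = np.asarray(sizes, dtype=float)
        cum = np.cumsum(sizes)                       # cumulative size totals
        interval = cum[-1] / n_select                # fixed sampling interval
        start = np.random.default_rng(seed).uniform(0, interval)
        points = start + interval * np.arange(n_select)
        return np.searchsorted(cum, points, side="right")

    # e.g., selecting 540 clusters from a frame of enumeration areas,
    # with the number of households in each area as the size measure:
    # selected = systematic_pps(household_counts, 540)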
Participant selection and exclusion criteria
A total of 3,441 children (unweighted number), aged 12-23 months, from 535 primary sampling units (DHS did not collect data from 5 sampling units) were included in this analysis. After adjusting for the sample weight and cluster sample design, the sample size was equivalent to 3,366. The mothers who were interviewed were aged 15-49 years.
Study variables
Dependent variable. The dependent variable for this study is full immunization coverage among children aged between 12 and 23 months. Information on immunization was collected from the children's immunization cards and interviews with their mothers. Children who had received a dose of BCG, three doses of DPT, three doses of polio, and a dose of measles vaccine were considered fully immunized, and the rest were considered not fully immunized.
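As a minimal sketch of how this binary outcome could be derived from per-antigen dose counts, assuming the child records have been loaded into a pandas DataFrame df with illustrative column names (bcg, dpt, polio, measles):

    import pandas as pd

    # 1 = fully immunized (BCG, 3 x DPT, 3 x polio, measles), 0 = otherwise
    df["full_imm"] = ((df["bcg"] >= 1) & (df["dpt"] >= 3) &
                      (df["polio"] >= 3) & (df["measles"] >= 1)).astype(int)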
Individual-level characteristics. Child-specific characteristics such as sex, birth order, preceding birth interval, and pregnancy intention were included in the analysis. Similarly, the mother/parent-specific characteristics included were mother's age, father's age, education, marital status, mother's occupational status, mother's autonomy, father's occupational status, and household wealth status measured in wealth quintiles. Mother's autonomy represents the level of a woman's participation in making household decisions such as spending money, purchasing household goods/property, visiting friends/relatives, and her ability to make decisions regarding her own health care. A woman was categorized as autonomous if she could decide solely or jointly with her husband on all the above-mentioned issues. The socioeconomic variable wealth quintile was calculated from household assets using principal component analysis and was divided into five categories (poorest, poorer, middle, richer and richest), each comprising 20% of the population [29].
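A minimal sketch of the principal-component wealth index described above, assuming the household asset indicators are already coded numerically in a DataFrame; the official DHS wealth index involves additional steps (separate urban/rural indices and household-member weighting), so this is only an approximation, and the file and column names are hypothetical.

    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    assets = pd.read_csv("household_assets.csv")      # one row per household

    # first principal component of the standardized asset indicators
    score = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(assets))
    assets["wealth_index"] = score[:, 0]

    # split the index into five equal-sized groups (quintiles)
    assets["wealth_quintile"] = pd.qcut(
        assets["wealth_index"], 5,
        labels=["poorest", "poorer", "middle", "richer", "richest"])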
Sex of household head, family size, and religion were also included in the analysis. Utilization of health services, including four ANC visits, institutional delivery, and postnatal care visits, was also considered for analysis.
Community-level characteristics. Primary sampling units (PSUs) were considered proxies for the community level. Place of residence, distance from the health facility, community poverty rate, community ANC utilization rate, community institutional delivery rate, community postnatal visit rate, community maternal education, community media exposure rate, and community maternal unemployment rate were used as community-level characteristics. The community poverty rate is the proportion of individuals within the community living in the bottom 40% of wealth quintiles (the poorer and poorest quintiles collectively). Regarding the distance from the health facility, women were asked whether they had experienced any difficulties in obtaining medical advice or seeking treatment because of the distance between their home and the health facility. Their responses were categorized as either a "big problem" or "not a big problem"; no physical distance in meters was measured. Every community-level characteristic, except place of residence and distance to a health facility, was dichotomized into high and low based on the median value.
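The community-level rates and their median split could be built from the child-level records roughly as follows. This is a sketch under the assumption that df holds one row per child, with illustrative columns psu, anc4, inst_delivery and bottom40 coded 0/1.

    import pandas as pd

    community = (df.groupby("psu")
                   .agg(anc_rate=("anc4", "mean"),                    # community ANC utilization rate
                        inst_delivery_rate=("inst_delivery", "mean"), # community institutional delivery rate
                        poverty_rate=("bottom40", "mean"))            # community poverty rate
                   .reset_index())

    # dichotomize each rate at its median across communities (high = 1, low = 0)
    for col in ["anc_rate", "inst_delivery_rate", "poverty_rate"]:
        community[col + "_high"] = (community[col] > community[col].median()).astype(int)

    df = df.merge(community, on="psu", how="left")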
Statistical analysis
The distribution of full immunization coverage according to the children's background characteristics was cross-tabulated, and the association was measured using the chi-square test. The DHS-assigned sample weight was incorporated throughout the analysis. Similarly, bivariate logistic regression was performed to assess the unadjusted association between full immunization coverage and individual- and community-level predictors. Statistical significance was determined at a significance level of .05 with 95% confidence intervals (CI). The independent variables that were statistically significantly associated with immunization coverage in the bivariate analysis were considered eligible for multivariate multilevel regression analysis.
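A minimal sketch of the weighted cross-tabulation step, assuming df contains the DHS sample weight in an illustrative column wt (the raw weight divided by 1,000,000) and the variable names used earlier. Note that a plain chi-square on weighted counts ignores the survey design; design-based software (e.g., Stata's svy commands) applies a Rao-Scott type correction.

    import pandas as pd
    from scipy.stats import chi2_contingency

    # weighted cross-tabulation of full immunization by wealth quintile
    tab = pd.crosstab(df["wealth_quintile"], df["full_imm"],
                      values=df["wt"], aggfunc="sum")

    # weighted row percentages (coverage within each background category)
    row_pct = tab.div(tab.sum(axis=1), axis=0) * 100

    chi2, p, dof, _ = chi2_contingency(tab)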
A two-level multilevel modeling technique was used, as the individual-level characteristics were nested within the community-level characteristics.Level 1 modeling determined the association between individual-level predictors and childhood full immunization, while level 2 modeling assessed community-level determinants of full immunization.
The multilevel analysis consisted of four regression models. The first model (Model 1) is an empty model without any individual- or community-level variables. This model measured the variation among the communities (primary sampling units). Model 2 consists of the individual-level characteristics that were significant (p < .05) in the bivariate model. A stepwise backward elimination method was used to restrict the model to variables significantly associated with immunization coverage. Variables significantly associated in this model were considered eligible for adjustment in the final model. Similarly, community-level characteristics significantly associated (p < .05) with the outcome variable in the bivariate logistic regression analysis were entered in Model 3. This model was also based on a stepwise backward elimination technique. In the final multivariate model (Model 4), individual- and community-level characteristics statistically significant in Models 2 or 3 were included in the analysis, and the final model was determined using the stepwise backward elimination technique.
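The random-intercept structure of these models could be sketched in Python as below. This is only an approximation of the paper's approach: statsmodels fits mixed logistic models by variational Bayes rather than the maximum-likelihood estimation used by Stata's melogit, the formula is a hypothetical Model 2-style specification with illustrative variable names, and the DHS sample weights are not incorporated.

    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    # individual-level fixed effects with a random intercept for the PSU (community)
    model = BinomialBayesMixedGLM.from_formula(
        "full_imm ~ anc4 + inst_delivery + pnc + C(educ) + C(wealth_quintile)",
        vc_formulas={"community": "0 + C(psu)"},
        data=df)
    result = model.fit_vb()    # variational Bayes fit
    print(result.summary())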
Stata's "melogit" command was used for the multilevel logistic regression modeling to estimate fixed and random effect parameters. Fixed effects are reported as adjusted odds ratios with their 95% confidence intervals. Similarly, the random effects refer to the community-level variance (the level within which the individual-level predictors are nested). The random effect is measured in terms of the Intra Community Correlation Coefficient (ICC) and the Proportional Change in Community Level Variance (PCV). The ICC measures the percentage of variance attributable to the community level, while the PCV measures the proportional change in the community-level variance between the empty model and the subsequent models [30].
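For reference, one common formulation of these two quantities for a two-level logistic model is the latent-response approach shown below; the paper cites reference 30 for its exact definitions, so this parameterization is an assumption rather than a quotation.

    \mathrm{ICC} = \frac{\sigma^{2}_{u}}{\sigma^{2}_{u} + \pi^{2}/3},
    \qquad
    \mathrm{PCV} = \frac{V_{\text{empty}} - V_{\text{model}}}{V_{\text{empty}}}

where \sigma^{2}_{u} (reported as τ in the tables) is the community-level random-intercept variance, \pi^{2}/3 \approx 3.29 is the level-1 variance of the standard logistic distribution, and V denotes the community-level variance in the empty and adjusted models, respectively.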
We also assessed two-way and three-way interaction terms; however, there was no statistically significant interaction within and between community- and individual-level predictors. Log-likelihood and Akaike's Information Criterion (AIC) were used as model fit statistics.
Results
About 45% of children aged between 12 and 23 months were fully immunized in the DRC. About 83% of the children had received the BCG vaccine. Polio coverage was about 91% for the first dose (Polio 1), but it fell to 65.7% for the third dose (Polio 3). Similarly, about 72% of the children were vaccinated against measles. In the DRC, about 6% of the children aged 12-23 months were never vaccinated. Table 1 shows the coverage for each individual vaccine in the DRC. (Table 1)

Regarding the participant characteristics, there were almost equal numbers of male and female children. Four in every five children were born in a health facility. The same proportion of children had a sibling less than 24 months older than themselves. About 19% of children were born to teenage mothers. About 40% of mothers had a minimum of secondary level education, and about 15.2% of the children were from unmarried or single mothers. Four of every five children were from households headed by men. (Table 2)

The sex difference in immunization coverage among the children was not significant. According to the place of delivery, only one in every five children born at home was fully immunized, whereas every second child born in a health facility was fully immunized. The percentage of full immunization coverage was higher among the children of mothers who had four ANC visits, gave birth in a health facility, and made postnatal visits. Similarly, the percentage of children fully immunized was higher in the upper wealth quintiles in comparison to the lower quintiles. The coverage ranged from 36.1% among the poorest to 65% among the richest wealth quintile. The proportion of fully immunized children was higher if the mothers had autonomy in household decision making or if they had mass media exposure. (Table 2)

Regarding community-level characteristics, about two-thirds of the children were from rural areas. Distance to the health facility was a challenge in accessing health care for about two in every five (41.2%) mothers. Similarly, about 45% of the children were from communities with low ANC utilization. On the contrary, a majority (69.3%) of the children were from communities with high institutional delivery rates. (Table 3)

Full immunization coverage was slightly lower in rural areas compared to urban areas (41.6% vs 53%). Similarly, about 40% of children from areas with low ANC utilization and about 50% from areas with high ANC utilization were fully immunized. According to institutional delivery utilization, a quarter of children from low-coverage areas (25.9%) and more than half (53.9%) of children from high-coverage areas were fully immunized. Similarly, about 39% of the children from communities with lower maternal education rates were fully immunized, whereas 52% of the children from communities with higher maternal education were fully immunized. (Table 3)

Across the provinces, full immunization coverage ranged from 5.8% in Mongala to 70.6% in Nord-Kivu. The Nord-Kivu province was the only one with higher coverage than the capital Kinshasa (67.7%) (Fig 1, S1 Fig).
Table 4 shows the results of the multilevel multivariate logistic regression analysis. The null model (Model 1) revealed significant variability in full immunization coverage across communities [τ = 1.82; p<0.001]. Similarly, the ICC showed that 50% of the variability in the odds of being fully immunized was due to community-level factors. (Table 4)

After adjusting for individual-level characteristics, the variation in the odds of a child having full immunization remained statistically significant [τ = 1.40; p<0.001] across the communities. At the same time, about 37% of the variance in full immunization among the children was due to community-level factors. Similarly, about 23% of the variation in the odds of children being fully immunized between communities was attributed to the individual-level factors adjusted for in the model (Model 2).
After adjusting for the community-level characteristics, Model 3 found a slightly reduced variance in a child being fully immunized [τ = 1.29] across the communities compared to the variance reported in Model 2, and the variance was no longer statistically significant (p > .05). Model 3 further identified that about 33% of the variability in the odds of a child being fully vaccinated was due to community-level characteristics (ICC = 33.4%). Similarly, about 29% of the variability in the odds of children being fully immunized between communities could be explained by the community-level characteristics included in Model 3 (PCV = 29.3%). In the final model (Model 4), four ANC visits [AOR: 1.64; 95%CI: 1.23, 2.18], institutional delivery [AOR: 2.37; 95%CI: 1.52, 3.72], and PNC service utilization [AOR: 1.43; 95%CI: 1.04, 1.95] were significantly associated with higher odds of full immunization among children. Similarly, children of mothers with secondary or higher education [AOR: 1.32; 95% CI: 1.00, 1.81] and from the richest wealth quintile [AOR: 1.96; 95%CI: 1.18, 3.27] had significantly higher odds of being fully immunized compared to their counterparts. Among the community-level characteristics, a higher rate of institutional delivery [AOR: 2.36; 95%CI: 1.59, 3.51] was found to be positively associated with immunization coverage.
Discussion
This study found a significant gap in immunization coverage in the DRC; only about 45% of children were fully immunized, an increase from 30.6% in 2007 [18]. The coverage of full immunization reported here was found to be higher than in some other sub-Saharan countries, including Ethiopia (24%) and Nigeria (25%) [31].
The study showed that there are significant regional differences in full immunization coverage. The results revealed that coverage is especially poor in provinces such as Mongala and Sankuru. For example, Mongala, Sankuru, and Tshuapa provinces have poor health infrastructure, including a weak cold chain system, which is a major barrier to accessing the immunization program. Similarly, Tanganyika also has low vaccination coverage. Many local militias operate in this province, so political instability may partly explain the province's poor vaccination coverage. Despite the decades-long armed conflict, immunization coverage in North and South Kivu is higher than in many other provinces. This might be because of the continuous effort of several organizations working in the immunization program. Our study is in line with other studies that have found regional differences in vaccine coverage [13]. While we have highlighted political factors that create geographical inequalities in vaccination coverage, other studies have highlighted factors such as cultural beliefs, health service capacity, modes of vaccine procurement, supply, and cold-chain management as determinants of immunization coverage [13,25,32].
The higher odds of being fully immunized found among children whose mothers did four ANC visits can be explained by the fact that ANC visits provide an opportunity to promote health care utilization, including institutional delivery, PNC, immunization, and family planning [33][34][35].The association between ANC visits and child immunization is consistent with a study from India, which showed that ANC visits provide a platform for making mothers aware of child immunization [36].This finding is consistent with several other studies [35,37,38].
Similarly, children born at a health facility were more likely to be fully immunized.The results corroborate with studies from Ethiopia [37], Uganda [39] Kenya [40], other SSA countries [35], and India [41,42].The finding that institutional delivery increased the chances of children being fully immunized can be explained by the fact that women who give birth at a facility are probably more likely to be aware of their own and their children's health status.Women who utilized institutional delivery service, might also, be more confident in utilizing preventive services like child immunization.Also, administration of the BCG vaccine quickly after childbirth and vaccination counseling at a health facility might have contributed to the higher odds of full immunization.
This study found higher odds of being fully immunized when PNC was given by a skilled provider within two months after childbirth.A similar association between postnatal visits and full immunization coverage has been reported in several other studies, including a study of 14 LMICs [38], regional studies in Africa [31,35] and a systematic review [43].The association between PNC visits and immunization could be explained by the fact that an early postnatal visit provides an opportunity to initiate BCG vaccination.Also, DPT and polio vaccinations can be administered during PNC visits which could increase compliance with the immunization program and create an opportunity to initiate vaccination among children who are not immunized [12].This study found that the ANC, delivery care, and PNC visits were independently associated with the full immunization coverage.A study from SSA countries reported similar findings [35].An explanation for this might be that a continuum of care, with a four, recommended ANC visits, motivates pregnant women to give birth in a health facility.During the institutional delivery, a woman receives, not only the skilled care but also counseling and education to use the postnatal care and immunization services.ANC and institutional delivery open the door of opportunity for frequent contacts between health workers and a pregnant woman or recently delivered mothers, which eventually is more likely to result in higher compliance towards recommended immunization schedule [35].
This study showed that families with relatively higher wealth were more likely to fully immunize their children.Similar findings were published in studies from Nigeria [15,31], Burkina Faso [44], Swaziland [45] and Ethiopia [12].This finding was also consistent with a synthesis of DHS data in sub-Saharan Africa [16] and studies from South Asia, including Bangladesh [46] and India [41,47].The positive correlation between wealth and immunization can be explained by the fact that wealthier people tend to make better use of health services and thus regularly receive information about the benefit of child immunization [48].On the other hand, high travel costs and long distances to health facilities can restrict poor people's willingness to immunize their children [49].
Children from mothers with secondary or higher education had higher odds of being fully immunized.Similar findings were reported in studies from Ethiopia [50], Nigeria [31,51], Kenya [40] and in India [47,52].Education was identified as a strong predictor of full immunization in children in several other studies [16,37,41,42,53].Educated mothers are generally more aware of the importance of available health and immunization services, have better communication skills, and tend to better utilize available services [54,55].
In regard to community characteristics, this study showed that there was a positive correlation between community institutional delivery and full immunization coverage.In line with this observation, research from Nigeria [15] and Ethiopia [50], also has found that children residing in communities with high institutional delivery service utilization were likely to be fully immunized.
Significance and policy implications of this study
The link between full immunization coverage and child mortality has been well demonstrated by several researchers. Full immunization coverage as low as 45% suggests that children in the DRC are at high risk of avoidable death. A recent study by Doshi et al. showed that Congolese children are already at high risk of measles [56].

A landscape analysis of the DRC's immunization program conducted by PATH provided major recommendations around supply chain management and raised awareness about the guiding policy, activities, and responsibilities across the national, provincial, and local health facility levels [25]. Similarly, UNICEF appealed for action against the critical pitfalls of the DRC's immunization program, including the unreliability of government funding for vaccines, ineffective cold chain management, lack of proper support from health workers in certain regions, and insufficient coordination of immunization stakeholders at the provincial level [32].

This study demonstrates the links between ANC, institutional delivery, PNC visits, and immunization coverage. Programs that aim to improve ANC utilization, institutional delivery rates, and PNC coverage should therefore draw attention to these links. Our study also demonstrates the need to properly target vulnerable groups. Vaccination programs must pay special attention to women with low or no education. Similarly, programs must also direct their interventions toward women and children living in poorer households. These modifiable determinants of full immunization could be integrated with supply-side factors to make a comprehensive package to improve immunization coverage in the DRC.
Strengths, weaknesses, and limitations
This study identified important predictors of full childhood immunization in the DRC. It described community-level variables in a multilevel model; therefore, unlike studies that use only individual characteristics, it has minimized bias due to omitted variables. Furthermore, the results are based on a nationally representative survey, making the findings generalizable, with potentially important policy implications.

The study has some limitations. It is based on cross-sectional survey data, which makes it difficult to establish causal relationships between predictors and outcome variables. Data on vaccination coverage are based on the records found in children's immunization cards and interviews with mothers; therefore, some bias might have been introduced because of self-reporting. Also, because full immunization coverage (yes, no) is used as a binary outcome variable, children who were partially immunized were not analyzed separately, as that was outside the scope of this study, but were considered not fully immunized. Moreover, community-level characteristics were categorized as low or high based on the median value; hence, there was some loss of information as we dichotomized these continuous variables. Due to the retrospective nature of the data, findings might be subject to recall bias. Furthermore, this study has not fully taken into consideration potential influential factors for immunization coverage, such as quality of service, counseling provided, follow-up and reminder communication, and distance to the immunization clinic.
Conclusion
The DRC has a noteworthy gap in full immunization coverage. Individual-level characteristics, particularly health service utilization such as four ANC visits, institutional delivery, and postnatal visits, had a strong positive effect on full immunization coverage. Institutional delivery was also identified as a community-level determinant. These findings suggest that maternal service contacts might be effective in promoting full immunization coverage. Similarly, relatively higher wealth status and a minimum of secondary education had a positive effect on full immunization coverage. The study underlines the importance of promoting immunization programs tailored to the poor and to women with little education. A major finding is the huge variation in immunization coverage across provinces. Efforts should be made to better understand the situation in provinces with low coverage.
Ethical approval
Data from the Demographic and Health Survey, DRC (2013-14), were used for this study with the permission of the Measure DHS program. Ethical approval for the survey was obtained from the Ethics Committee of the School of Public Health (ESP) of the University of Kinshasa and the ICF International Institutional Review Board. Informed consent to participate in the survey was obtained verbally from each respondent before the interview was conducted. A special statement explaining the purpose of the study was included at the beginning of the household and individual questionnaires. Participation in the survey was completely voluntary, and the respondents were informed that they had the right to refuse to answer any question or to stop the interview at any point. The informed consent statement was read aloud exactly as written before the respondents were asked to participate in the interview. For participants under 18 years of age, verbal consent was obtained from a parent or legal guardian. After this, the interviewer signed his or her name, attesting to the fact that he or she had read the consent statement to the respondent. Given the low level of literacy, and as the information requested was neither controversial nor sensitive, no written consent was obtained. The Ethics Committee of the School of Public Health of the University of Kinshasa and the ICF International Institutional Review Board waived the requirement for written consent of participants and approved the consent procedure used in this study.
Table 4. Multivariate multilevel logistic regression analysis of individual- and community-level factors associated with full childhood immunization among 12-23-month-old children in the DRC in 2013-14 (n = 3,366). AIC: Akaike's information criterion. Model 1: Null/baseline model without any predictor variable. Model 2: Adjusted for individual-level predictor variables only. Model 3: Adjusted for community-level predictor variables only. Model 4: Adjusted for the individual- and community-level predictor variables.
"year": 2018,
"sha1": "7feb768261c4219ec0d9e0704259b72c8579b873",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0202742&type=printable",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "b3d747b193288079219eb43e82c2e402045c94fb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Line geometry and electromagnetism IV: electromagnetic fields as infinitesimal Lorentz transformations
It is first shown that the scalar product on any orthogonal space (V, g) allows one to define linear isomorphisms of the vector spaces of bivectors and 2-forms on V with the underlying vector spaces of the Lie algebra so(p, q) and its dual, respectively. When those isomorphisms are applied to the electromagnetic excitation bivector and field strength 2-form, resp., one can associate various algebraic constructions that pertain to them as bivector fields and 2-forms with corresponding constructions in terms of so(1, 3) and its dual. The subsequent association with corresponding things in line geometry will then become straightforward. In particular, the fields can be represented by motors, such as screws and wrenches, while the Cartan-Killing form on so(1, 3) is isometric to the scalar product on bivectors that gives the Klein quadric. When the space of bivectors (and therefore the space of 2-forms) is given an almost-complex structure (and therefore, a complex structure), one can also represent most of the constructions on the former and its dual in terms of so(3; C) and its dual.
1. Introduction. - This article is a continuation of a series of papers [1] that have each examined the extent to which one can think of line geometry as the geometry of electromagnetism, in the same way that metric geometry is the geometry of gravitation. Hence, a certain familiarity with the elements of the subject will be assumed, although some attempt has been made to make the discussion here relatively self-contained.
If one goes back to the basic definitions of the electric field strength E and magnetic field strength H then one must recall that they are defined in terms of dynamical concepts. That is, E represents the force F on a unit charge q, or more precisely:

E = lim_{q -> 0} F / q.   (1.1)

As for the magnetic field strength, one actually has a choice: Typically, one thinks of B as something that exerts a force on a unit current I, as the Lorentz force law (1.2) would suggest (of course, the direction of the force will be perpendicular to the direction of B as a result of this), or rather, the corresponding limit as the current goes to zero.

However, since a magnetic field B also exerts a torque on a magnetic dipole mu:

tau = mu x B,   (1.3)

and a magnetic dipole is (as far as any experimenter has managed to show) the most elementary source of a magnetic field, one can also think of (the magnitude of) B as the limiting (magnitude of the) torque on a magnetic dipole mu as its strength mu goes to zero:

B = lim_{mu -> 0} tau / mu.   (1.4)

Once again, the direction of B will be perpendicular to that of tau.
The reason that one might wish to go the latter route in one's basic definitions is that bivectors and 2-forms can be used for representing kinematical and dynamical concepts, in such a way that a bivector can represent an infinitesimal rigid motion in three-dimensional Euclidian space (which is a clearly non-relativistic notion) or an infinitesimal Lorentz transformation. Similarly, a 2-form can represent an element of the dual vector space of either of the aforementioned Lie algebras. When one looks at the time+space decomposition of the electromagnetic bivector field H or the 2-form F, one will see that it is more natural to associate electric fields with forces and magnetic fields with torques.
One also has a choice in terms of how one regards the vector cross product on R 3 : 1. Due to its antisymmetry, one can think of it as a precursor to the more general exterior product that takes advantage of certain "accidental" isomorphisms of vector spaces that are true only for three dimensions.
2. One can regard it as defining a Lie algebra on R 3 , namely, the Lie algebra so (3).
In this article, we will not choose one or the other option, but take advantage of the fact that the linear isomorphism that associates bivectors with elements of orthogonal Lie algebras is more general than it appears for a three-dimensional Euclidian space. Hence, we will be considering both aspects of the cross product.
The second section of this article will then exhibit that isomorphism in its general form and then specialize it to the cases that are relevant to the electromagnetic fields on four-dimensional Minkowski space. The third section will then examine that very association of electromagnetic fields with corresponding elements of so(1, 3) and its dual vector space. In the fourth section, that association will then be combined with the association of bivectors and 2-forms with objects in line geometry that has been the main focus of this series of papers, and in the final section, the main results of this article will be summarized.

2. Representing so(1, 3) by bivectors. - The basic fact that we shall elaborate upon in this section is that when one lowers an index of the components ω^μν of a bivector ω ∈ Λ^2 V for some frame on an n-dimensional orthogonal space (V, g) (or raises an index of the components ω_μν of a 2-form, resp.) by using the metric g_μν on V then, due to the antisymmetry of ω in the index pair μν, the resulting matrix:

ω^μ_ν = ω^μλ g_λν   (2.1)

will belong to so(p, q) [its dual so(p, q)*, resp.]. Here, we are assuming that the signature type of the metric g is (p, q); hence, in an orthonormal frame on V, one will have:

g_μν = η_μν = diag [+1, ..., +1, −1, ..., −1],   (2.2)

with p plus signs and q minus signs.
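As a quick check of this claim, the following sketch is illustrative only: it takes the Minkowski case η = diag(+1, −1, −1, −1) and a randomly chosen antisymmetric matrix of components ω^μν, and verifies numerically that lowering one index produces a matrix t satisfying η t + t^T η = 0, which is the defining condition for an infinitesimal Lorentz transformation.

    import numpy as np

    eta = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric, signature (1, 3)

    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 4))
    omega = A - A.T                             # antisymmetric components omega^{mu nu}

    t = omega @ eta                             # lower the second index: t^mu_nu = omega^{mu lam} eta_{lam nu}
    # An infinitesimal Lorentz transformation satisfies eta t + t^T eta = 0.
    print(np.allclose(eta @ t + t.T @ eta, 0))  # True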
After that, since the association is a linear isomorphism of vector spaces, one can transfer all of the algebraic machinery on so(p, q) to Λ 2 V in a manner that makes many of the common constructions in terms of bivectors and 2-forms take on a corresponding significance in terms of the Lie algebra of infinitesimal orthogonal transformations for the chosen scalar product.
Since this is essentially a corollary to a more general result that involves gl(n), we shall start at that more "pre-metric" level of multilinear algebra [2].
a. Representing Hom V and its dual by second-rank tensors. -The basic fact that we shall start with is that if V is an n-dimensional vector space (whether real or complex) and V * is its dual then the elements of the tensor product V * ⊗ V can be associated with linear transformations of V to itself. If one calls that vector space of linear transformations Hom V then the way that one can define the linear isomorphism However, not all elements of V * ⊗ V are decomposable, but more generally they are finite linear combinations of decomposable elements. Hence, one "extends the map by linearity" to all finite linear combinations of decomposable elements in order to get the full isomorphism. That is, if T = ∑ v is a finite linear combination of decomposable elements then the action of T on a vector x will be: c. The algebras that can be defined on second-rank tensors. -The vector space Hom V has another natural structure beyond its vector space structure, namely, a bilinear binary operation Hom V × Hom V → Hom V, (S, T) ֏ ST that is defined by the composition of linear maps to give linear maps. That bilinear binary operation then gives Hom V the structure of an algebra. In order to distinguish the vector space Hom V from the algebra that it defines, we shall denote the latter by hom(n).
One can use the existence of the linear isomorphism V * ⊗ V ≅ Hom V to define a corresponding algebra on V * ⊗ V. For our purposes, it is easiest to describe in terms of matrices, since it basically amounts to matrix multiplication for the component matrices of tensors in V * ⊗ V. One is then dealing with the bilinear pairing: (2.11) One can then polarize this algebra product by commutation: Both of these brackets define algebra products by themselves, and we shall denote the algebra on Hom V that the commutator bracket [.,.] defines by gl(n), since it is the general linear Lie algebra. Its elements are infinitesimal generators of invertible linear transformations of V.
Typically, since the dual space (Hom V) * does not have a natural algebra product, one does not define one. Of course, if one has a linear isomorphism of V with V * then one can define a corresponding linear isomorphism of Hom V with its dual and give the latter vector space the induced algebra product, but that is not a natural construction, since it depends upon the choice of isomorphism of V with V * .
d. Lowering and raising the indices. -Suppose that one does have such a linear isomorphism C : V → V * , v ֏ v * . One can then define a bilinear functional on V: (2.14) Dually, one has the inverse isomorphism C −1 : V * → V, α ֏ α *−1 , which then defines a bilinear functional on V * : (2. 15) In terms of components, if the matrix of C relative to the aforementioned choice of frame and coframe is C ij then: Dually, the matrix of the inverse isomorphism is C ij , and: (2.17) One can then combine the isomorphisms that were just defined with the ones above and get isomorphisms: (2.18) From the standpoint of components, these isomorphisms amount to lowering and raising one index of the component matrix, respectively: (2.19) (We have suppressed the second isomorphism and its dual, since they amount to identities when one looks at component matrices.) One can also define the multiplication of elements in V ⊗ V in a manner that will correspond to the multiplication of elements in Hom V; i.e., matrix multiplication. Hence, if: S = S ij e i ⊗ e j , T = T ij e i ⊗ e j then the components of ST will be: (2.20) One can put this into a form that does not involve the components by forming: which will become: when we define the left and right interior products by the coframe elements by: Of course, the comment that was made at the end of the last subsection still applies: This construction of an induced multiplication on bivectors is not natural, since it depends upon the choice of isomorphism C. However, one sees that the product ST in (2.21) will be invariant under any change of coframe i θ = i j j A θ , since C kl will transform contragrediently to θ k : Another isomorphism of vector spaces that is canonical and simple to explain in terms of components is the isomorphism of V * ⊗ V with V ⊗ V * = (V * ⊗ V) * . One starts by associating every decomposable element α ⊗ v ∈ V * ⊗ V with the corresponding element v ⊗ α ∈ V ⊗ V * and then extends to all finite linear combinations of decomposable elements by linearity. In particular, if θ i ⊗ e j gives a basis for V * ⊗ V then e j ⊗ θ i will give a basis for V ⊗ V * , and the element T = j i T ⋅ θ i ⊗ e j ∈ V * ⊗ V will go the element: This shows that the linear isomorphism that we have defined essentially amounts to the transposition of a linear operator L : U → W to obtain a linear operator L T : W * → U * whose component matrix will then be the transpose of the component matrix of L.
e. The specialization to so(p, q). -So far, nothing has been said about the symmetry of components. In particular, suppose that the isomorphism C makes the corresponding bilinear form symmetric: (2.25) C will then define a scalar product on V, while C −1 will define a scalar product on V * . In order to make things more familiar to the people who deal in the Lorentzian structure on space-time, we then replace the symbol C with g, and introduce the notation η ij such that when the frame e i is orthonormal, one will have: g (e i , e j ) = η ij = diag [+ 1, …, + 1, − 1, …, − 1], (2.26) in which there are p plus signs and q negative signs. Thus, the group of orthogonal transformations of the orthogonal space (V, g) will be O (p, q), and the ones that preserve volume elements on V, as well, will be SO(p, q). The corresponding Lie algebra of infinitesimal orthogonal transformations will be so(p, q), in either case.
The condition for a transformation T to be orthogonal -so T ∈ O(p, q) -is that for every pair of vectors v, w in V, one must have: (2.27) In terms of components, this will become: (2.28) In order to get the corresponding condition for an infinitesimal transformation t to be orthogonal -i.e., t ∈ so(p, q) -one assumes that the matrix i j T belongs to a differentiable curve ( ) i j T s in gl(n) through the identity matrix, so (0) i j T = i j δ . Furthermore, we let: (2.29) If we differentiate (2.28) at s = 0 then we will get: since we are not varying g ij , and this will simplify to: The last condition says that t ij is an antisymmetric matrix, so t ij can also be used as the component matrix for a 2-form: The dual of the argument that led from (2.27) to (2.32) that starts with the condition for an orthogonal transformation of V * , which will give: i j kl k l T T g = g ij , (2.33) will conclude with the bivector: t = 1 2 t ij e i ^ e j .
(2.34) Putting all of this together, we get:
Theorem:
For any orthogonal space (V, g), there are linear isomorphisms Λ^2 V ≅ so(p, q) (for bivectors) and Λ_2 V ≅ so(p, q)* (for 2-forms).
Proof:
One needs only to make the t matrices more specific. In the first case, one has j i t ⋅ ⋅ ⋅ ⋅ , and in the second, one has i j t ⋅ . The first isomorphism lowers the index i in t ij , while the second one raises the i in t ij .
One can also think of the isomorphism Λ 2 ≅ so(p, q) * as being essentially the transpose of the isomorphism Λ 2 ≅ so(p, q). That is, if ι : Λ 2 → so(p, q) is the latter linear isomorphism then its transpose will be the linear isomorphism ι T : so(p, q) * → (Λ 2 )* = Λ 2 ; hence, the isomorphism Λ 2 ≅ so(p, q) * will be just the inverse of that.
Corollary:
a) One can define a Lie bracket on Λ 2 that makes the components of [s, t] equal to: (2.35) b) One then has: (2.36)
Proof:
a) This is an anti-symmetrization of (2.20).
b) This is an anti-symmetrization of (2.21), when one takes into account the symmetry of g kl and the anti-symmetry of t, which makes: The basic effect of our theorem is that it allows one to think of a bivector as a representative of an infinitesimal orthogonal transformation and a 2-form as a representative of the dual to such a thing.
Corollary:
The bilinear pairing of 2-forms and bivectors corresponds to the bilinear pairing of so(p, q) * with so(p, q).
Proof:
Of course, since the indices of both F and G are antisymmetric, one will have: (2.38) so one can also say that: (2.39) The bilinear form: <a, b> = Tr ad a ad b (2.40) can be defined on any Lie algebra g, and it is referred to as the Cartan-Killing form. In this, ad a is the linear transformation ad a : g → g that takes any b ∈ g to: (2.41) By choosing a basis {ε ε ε ε a , a = 1, …, N} for g, one can associate ad a with a matrix (2.44) If we abbreviate the notation [ad ] a b a to simply a b a then the Cartan-Killing form will look like: which is clearly the same form as (2.37). Although the bilinear form <a, b> is symmetric (from a basic property of the trace), it does not have to be non-degenerate; i.e., it does not have to define a scalar product on g.
However, it is a basic theorem of Lie algebras [3,4] that it will be non-degenerate iff g is semi-simple; i.e., it has no non-trivial Abelian ideals.
For example, that is true for so (3). In fact, if one regards so(3) as R 3 with the vector cross product then the Cartan-Killing form will amount to the Euclidian dot product on R 3 . It is also true for so(1, 3), and we shall see that the Cartan-Killing form on so(1, 3) is isometric to the scalar product on Λ 2 that gives one the Klein quadric.
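To make this concrete, one can build the adjoint matrices of so(3) directly from the cross product and evaluate Tr(ad_a ad_b) on the standard basis. The sketch below is illustrative only; the overall factor (here −2) depends on the normalization one adopts, e.g. the convention <a, b> = ½ Tr ab used later in this article, but the result is in any case proportional to the Euclidian dot product.

    import numpy as np

    basis = np.eye(3)                              # e_1, e_2, e_3, a basis for so(3) ~ R^3

    def ad(a):
        # ad_a(b) = a x b, written as the 3x3 matrix that sends b to a x b.
        return np.array([np.cross(a, e) for e in basis]).T

    K = np.array([[np.trace(ad(a) @ ad(b)) for b in basis] for a in basis])
    print(K)                                       # -2 * identity: proportional to the dot product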
3. Application to electromagnetic fields. - In the case of electromagnetic fields, one has the Minkowski field strength 2-form F and the excitation bivector field H. Typically, one also assumes that there is a Lorentzian structure g on the space-time manifold M (naturally, that statement will no longer be true when one deals with the pre-metric form of electromagnetism), which we assume to have the signature type (1, 3); i.e., η = diag [1, −1, −1, −1]. Hence, the relevant orthogonal group will be O(1, 3), and its Lie algebra will be so(1, 3).

a. The action of the Lorentz group on bivectors and 2-forms. - If L^μ_ν ∈ O(1, 3) then it, along with its inverse L̃^μ_ν, will act upon any frame e_μ on Minkowski space and its reciprocal coframe θ^μ:
Those matrices and their inverses will then act upon the components of bivectors and 2-forms by way of: One assumes that L µ ν and its inverse belong to differentiable curves through the identity matrix, as above, with: When one differentiates both actions in (3.2), one will get: If we lower and raise an index on H and F, resp., then we will see that an application of the theorem above and its corollaries will say that we can also express these infinitesimal actions of the Lorentz group on bivectors and 2-forms in the form: Hence, one can just as well regard F and H as elements of so(1, 3), although since they are dual objects, it would be better to regard F as an element of so(1, 3) * and H as an element of so(1, 3). From (2.38), the difference essentially amounts to a sign.
( 1 ) From now on, the specialization to four-dimensional Minkowski space will entail the convention that Greek indices range from 0 to 3 and Latin ones range from 1 to 3. then one can also speak of an adapted frame for M 4 , which amounts to a frame {e µ , µ = 0, …, 3} such that e 0 generates [t] and {e i , i = 1, 2, 3} spans Σ.
Dually, the decomposition (3.7) will imply a corresponding decomposition of M 4* : although this time, [t] is composed of all linear functionals on M 4 that annihilate Σ, while the subspace Σ * is composed of all linear functionals that annihilate t: As a consequence, the reciprocal coframe {θ µ , µ = 0, …, 3} to e µ will be adapted to the decomposition (3.8). In particular, θ 0 will generate [t], and {θ i , i = 1, 2, 3} will span Σ * . The decomposition (3.7) implies corresponding decompositions of the spaces of bivectors and 2-forms over M 4 . (3.10) Any 2-form that belongs to [t] ^ Σ * will annihilate any bivector that belongs to Λ 2 Σ, and any 2-form that belongs to Λ 2 Σ will annihilate any bivector that belongs to [t] ^ Σ.
As a result of (3.10), any bivector b and any 2-form b can be expressed uniquely in the form: in which a ∈ Σ, c ∈ Λ 2 Σ, a ∈ Σ * , and c ∈ Λ 2 Σ.
If {e µ , µ = 0, …, 3} is an adapted frame for M 4 relative to the decomposition (3.7) and {θ µ , µ = 0, …, 3} is its reciprocal coframe then {e 0 ^ e i , i = 1, 2, 3} will be a frame for [t] ^ Σ, {ε ijk e j ^ e k , i, j, k = 1, 2, 3} will be a frame for Λ 2 Σ , {θ 0 ^ θ i , i = 1, 2, 3} will be a coframe for [t] ^ Σ * , and {ε ijk θ j ^ θ k , i = 1, 2, 3} will be a coframe for Λ 2 Σ. Hence, if we use e 0 for t and θ 0 for t then the decompositions in (3.11) can be expressed in the In particular, the electromagnetic excitation bivector H and the field strength 2-form F can be expressed as: In order to relate the spatial bivector H to the usual vector that goes by that name and the spatial 2-form B to the spatial vector B, one must first define a volume element on M 4 and then split it into a temporal and spatial part. Since we already have an adapted frame and coframe, we define the 4-form: which can then be split into: The spatial 3-form V s then defines a spatial volume element. One can then define the Poincaré isomorphism # : Λ k → Λ 4−k , k = 0, …, 4, which takes any k-vector a to the 4−k-form: Similarly, the spatial volume element V s defines a Poincaré isomorphism # s : Λ k Σ → Λ 4−k Σ, k = 0, 1, 2, 3, which takes a to: Under that spatial isomorphism, vectors go to 2-forms and bivectors go to 1-forms. In particular, one can define the vector B and the 1-form H that make: That explains where the usual vector fields of Maxwell's equations come from, when one adds that typically if one is dealing with R 3 and the Euclidian metric (whose components in an orthonormal frame will be δ ij ) then one will have "accidental" isomorphisms between the four three-dimensional vector spaces that consist of R 3 , its dual, its space of bivectors, and its space of 2-forms. When one does not distinguish between contravariant and covariant indices, they can all be represented by vectors, in effect. Of course, one must distinguish between "polar" vectors, which belong to R 3 or its dual, and "axial" vectors, which are really bivectors or 1-forms, and will involve the volume element in their transformations. One then sees from (3.13) that there is something more fundamental about regarding B as a 2-form and H as a bivector field.
When one applies the time+space splitting to the basic isomorphisms that we have established of Λ 2 with so(1, 3) and Λ 2 with so(1, 3) * , one will see that the bivectors of the form e 0 ^ D correspond to infinitesimal boosts, while the ones of the form H will corresponding to infinitesimal rotations. Dually (as we shall see in the next subsection), the 2-forms of the form θ 0 ^ E correspond to forces, while the ones of the form B correspond to torques.
One can make this explicit by introducing the standard basis matrices J i and K i for so(1, 3): Under the association: one gets a linear isomorphism Λ 2 ≅ so(1, 3) with the desired association of D with an infinitesimal boost and H with an infinitesimal rotation. Note that if one includes a factor of 1/2 in the Cartan-Killing form: <a, b> = 1 2 Tr ab (3.23) and introduces the general notation for the basis elements {ε a , a = 1, …, 6} with ε a = K a for a = 1, 2, 3 and ε a = J a−3 for a = 4, 5, 6 then: Hence, the basis ε a for so(1, 3) is orthogonal for a scalar product of signature type (3, 3) that is defined by the Cartan-Killing form. As we shall see in the next section, there is a natural scalar product on Λ 2 that makes the linear isomorphism Λ 2 ≅ so(1, 3) an isometry of orthogonal spaces. The dual isomorphism Λ 2 ≅ so(1, 3) * can be defined by using the basic matrices J i , K i , i = 1, 2, 3 as a reciprocal basis for so(1, 3) * , although one must prefix the J i 's with a minus sign in order to account for the fact that < J i , J j > = − δ ij . However, if one wishes to define a dual basis that is orthonormal for the Cartan-Killing form, as it gets defined on so(1, 3) * , then one can simply use the same basis matrices J i , K i with no change of sign.
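One standard matrix realization of the basis J_i, K_i (an assumption for the purposes of illustration; the article's own equation (3.21) may fix signs differently) makes the orthogonality claim easy to check numerically. The sketch below verifies that, with <a, b> = ½ Tr ab, the boosts have norm +1 and the rotations norm −1, i.e. the basis is orthogonal for a scalar product of signature (3, 3):

    import numpy as np

    def boost(i):
        # Infinitesimal boost K_i along the spatial axis i (row/column 0 is time).
        K = np.zeros((4, 4))
        K[0, i] = K[i, 0] = 1.0
        return K

    def rotation(i):
        # Infinitesimal rotation J_i about the spatial axis i.
        J = np.zeros((4, 4))
        j, k = [(2, 3), (3, 1), (1, 2)][i - 1]
        J[j, k], J[k, j] = -1.0, 1.0
        return J

    basis = [rotation(i) for i in (1, 2, 3)] + [boost(i) for i in (1, 2, 3)]
    gram = np.array([[0.5 * np.trace(a @ b) for b in basis] for a in basis])
    print(np.diag(gram))      # [-1, -1, -1, +1, +1, +1]: signature (3, 3)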
c. Work and energy. -In mechanics, so(1, 3) can represent infinitesimal "displacements" of points in Minkowski space, while the elements of its dual so(1, 3) * can represent "generalized forces." Hence, the bilinear pairing of an element δx ∈ so(1, 3) and an F ∈ so(1, 3) * will give a scalar: that represents the virtual work that that is done by F during the virtual displacement δx.
If the electric field strength E is to represent a force per unit charge (at least, in terms of units), and D is to represent a spatial density of electric dipole moments (which have the units of charge times distance, in their own right) then one can see that the scalar E(D) should represent an energy density. This is probably why D was usually referred to as the "electric displacement," although it is probably empirically more consistent to think of it in terms of the response of a dielectric medium to the imposition of E; i.e., the electric excitation of the medium by the formation of electric dipoles.
Similarly, if H has the units of torque per unit dipole and B has the units of a spatial density of magnetic dipoles then the scalar H(B) will have the units of torque density, or really energy density, since the torque is acting through a unitless angle of rotation.
All of this lends credence to the idea that we can think of the 2-form F as a generalized relativistic force that is associated with an element of so(1, 3)*, while H is a generalized displacement that is associated with an element of so(1, 3), in such a way that F(H) = F_μν H^μν will become an energy density. In fact, it is typically used as the Lagrangian density for the action functional that will give Maxwell's equations as the Euler-Lagrange equations.
If one decomposes F and H according to a time+space decomposition, as in (3.13), then one will get a corresponding decomposition of F(H): (3.26) which will give: when θ 0 ^ E annihilates the space that H belongs to and B annihilates the space that e 0 ^ D belongs to.
As we shall see shortly, (3.27) is not the most general expression that we can encounter in the electrodynamics of continuous media. In particular, D and H might each depend upon both E and B, which would have the effect of making the middle two terms in (3.26) non-vanishing. In order to get from the most general case of an electromagnetic constitutive law to a linear isomorphism, one must first note that 2-forms and bivector fields on M actually belong to infinite-dimensional linear spaces of functions, so the most general association would take functions to functions. There are three basic types of operators on functions: algebraic, differential, and integral. Differential operators are not commonly used as electromagnetic constitutive laws, so we shall rule them out directly. Integral constitutive laws are associated with "dispersive" media, so if one wishes to consider only algebraic operators then one must be dealing with only non-dispersive media.
Among the algebraic operators, which are the ones that can be associated with maps of fibers of Λ 2 M with fibers of Λ 2 M, the two basic types are linear and nonlinear. Certainly, it would be naïve to suggest that the nonlinear constitutive laws have no use in physics, since they are, in effect, the ultimate form of all constitutive laws empirically. However, for small enough field strengths, most electromagnetic media will behave linearly, so that is why one usually deals with the linear case first and then considers how things might change when the field strengths increase beyond the bounds in which the response of the medium is approximately linear.
Hence, when one is looking at a linear, non-dispersive electromagnetic medium, one can define a (linear, non-dispersive) electromagnetic constitutive law by an invertible map C : Λ 2 M → Λ 2 M such that: One can further analyze the possible manifestations of linear, non-dispersive constitutive laws according to the way that the medium in question responds to the imposition of electric and magnetic fields, such as whether it is a conductor or insulator, which might even depend upon the type of fields, and whether electric or magnetic dipoles form. Often those two conditions are mutually exclusive: That is, magnetic materials are often conductors, while dielectrics are often insulators. Most optical materials tend to be non-magnetic insulators, such as the various glasses and quartz.
One then sees that what a linear, non-dispersive, electromagnetic constitutive law C (or rather, its inverse C −1 ) defines is a correlation between each vector space Λ 2, x and its dual vector space 2 x Λ . If one introduces the notations (i, j, k = 1, 2, 3): then one can represent H and F in the forms: 29) which also makes: (3.30) Of course, one must pause to note that, just as the fields H and F seem to be mixing field strengths with excitations, they also seem to be mixing contravariant objects with covariant ones.
The components of the linear isomorphism C x relative to the bases above on Λ 2, x and 2 x Λ , with either four or two components, are defined by: resp. Clearly, the matrix C ab seems more concise, although sometimes one might also want to keep track of the antisymmetries in the index-pairs κλ and µν.
One can put the matrix into C ab block form relative to a time+space decomposition of the fibers Λ 2, x and 2 x Λ : which allows one to write the constitutive law as a system of linear equations: So far, there is no particular symmetry to the indices ab. However, one can polarize C ab accordingly: A further reduction is based upon the fact that the volume element already defines one linear isomorphism # −1 : 2 x Λ → Λ 2, x whose component matrix with respect to the chosen bases above is: Hence, since this matrix is symmetric in ab, it will be included in ab C + , multiplied by a scalar factor α. That means that we ultimately have a decomposition of C ab into: (3.36) In the terminology of Hehl and Obukhov [5], the components 0 ab C , ab C − , α # ab of C ab are referred to as the fundamental, skewon, and axion parts of C ab , respectively. The matrices of the first two can be put into the form: The reason for the minus sign before the ij µ ɶ is to make the signature type of 0 ab C consistent with that of # ab , which is (3, 3), although a frame that diagonalizes # ab will not generally diagonalize 0 ab C . Indeed, when the medium is isotropic, in addition to nondispersive and linear, 0 ab C will take the form: and the equations in (3.33) will take the form: which is how they are usually presented in physics literature [6,7]. The classical electromagnetic vacuum is then defined by making the medium homogeneous, in addition to everything else, so ε 0 and µ 0 will then be assumed to be constants. Of course, that condition of constancy is frame-dependent, and the principle of Lorentz invariance only requires that the product ε 0 µ 0 must be independent of a choice of Lorentz frame, or rather: It is indeed intriguing that introducing only a volume element on R 4 , but not a scalar product, will allow one to define a scalar product on Λ 2 and Λ 2 that has the same signature type as the Cartan-Killing form for so (1, 3), which does require one to define a scalar product on R 4 . Of course, the scalar product that is defined by 0 ab C has a more empirical nature in the eyes of physics, although it presumably has the same signature type as # ab . Hence:
Theorem:
When one has defined a scalar product of the signature type (3, 3) on Λ 2 , the linear isomorphism that takes bivectors in Λ 2 to infinitesimal Lorentz transformations in so(1, 3) will also be an isometry of orthogonal spaces relative to the Cartan-Killing form on so(1, 3).
Because the electromagnetic constitutive law of a medium in which electromagnetic fields exist is so fundamental to pre-metric electromagnetism, it would take us too far afield to give a more thorough discussion of the possibilities here, so we shall simply refer to the author's book on pre-metric electromagnetism for more details [8]. However, we shall mention that the linear isometry of Λ 2 with so(1,3) also yields a linear isometry of Λ 2 with so(1,3) * , when one gives Λ 2 a scalar product by way of C 0,ab , which is the inverse of 0 ab C , or # ab , which is the inverse of # ab .
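To illustrate the signature statement, the following sketch assumes the isotropic, non-dispersive, linear case suggested by the discussion around (3.37)-(3.39), and takes the fundamental part of the constitutive matrix to be diag(ε, ε, ε, −1/μ, −1/μ, −1/μ) in a frame that diagonalizes it (an assumption made only for illustration, with the minus sign placed before 1/μ as described above). It checks that the signature is indeed (3, 3), and that the vacuum values give ε_0 μ_0 = 1/c²:

    import numpy as np

    eps0, mu0 = 8.854e-12, 4e-7 * np.pi        # vacuum permittivity and permeability (SI units)
    I3, Z3 = np.eye(3), np.zeros((3, 3))

    # Fundamental (symmetric) part of an isotropic, linear, non-dispersive
    # constitutive matrix, written in a frame that diagonalizes it.
    C0 = np.block([[eps0 * I3, Z3],
                   [Z3, -(1.0 / mu0) * I3]])

    signs = np.sign(np.linalg.eigvalsh(C0))
    print(sorted(signs.tolist()))              # [-1, -1, -1, 1, 1, 1]: signature (3, 3)
    print(1.0 / np.sqrt(eps0 * mu0))           # ~ 2.998e8, the vacuum speed of light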
As a result of the linear isomorphism ι : Λ 2 → so(1,3), one can associate the electromagnetic constitutive law that takes Λ 2 to Λ 2 with a more mechanical constitutive law that associates so(1,3) with so(1,3) * . Actually, the inverse of the mechanical constitutive law C : so(1,3) → so(1,3) * is easier to define directly: Actually, that topic is closely related to one of the oldest applications of line geometry, which is how lines in RP 3 can represent rigid motions of Euclidian R 3 . That study went back to the ground-breaking treatise of Julius Plücker in 1868 [9], which was developed further by his student Felix Klein [10] and Eduard Study [11], and it was Plücker who introduced the notion of a "Dyname." To the French, the corresponding word was "torseur," and to the Englishman Sir Robert Ball [12], the kinematical object was a "screw" and the dual dynamical object was a "wrench." These ideas are still being applied by modern mechanical engineers, especially in the study of robot manipulators [13].
Since all of that is clearly rooted in Newtonian mechanics, it is useful to verify rigorously that the Lie algebra iso(3) of infinitesimal rigid motions in three-dimensional Euclidian space is the Newtonian limit (c → ∞) of the Lie algebra so (1,3). Hence, we shall do that first, and then discuss how Lorentz transformations, as well as electromagnetic fields, can amount to screws and wrenches, at least, in the Newtonian limit, which should apply to the rest space of any measurer/observer. We shall conclude by discussing the relationship between the Cartan-Killing form on so(1, 3) and the Klein quadric on Λ 2 .
a. Rigid motions as Newtonian limits of Lorentz transformations.
-In order to show how Lorentz transformations of M 4 will become rigid motions of E 3 , it is entirely sufficient to show that not only are translations the Newtonian limits of boosts, but that situation also relates to the structure constants of the two algebras. Here, it is better to express a boost in terms of v and c explicitly, instead of cosh α and sinh α.
An elementary boost along the x-direction in R 2 = (t, x) to a frame with a relative velocity of v with respect to the first one takes the form: We can then put this into matrix form: One can already see that the Newtonian limit of an elementary boost is a translation at the level of finite transformations: It is important to see that, physically, this translation acts upon velocities, not points of space-time.
In order to get the infinitesimal generator of the boost B(v), we make v = v(s) a differentiable curve with v(0) = 0, which will then make B(v(0)) = I, and differentiate B at (4.4) in which, we have, of course defined a to be (0) v ɺ . It is already clear that since: the Newtonian limit of the elementary boost b(1) will be an elementary translation in R 2 .
Hence, we represent the three elementary boosts in the x, y, and z directions by the matrices b_i of (4.6) this time, instead of K_i, i = 1, 2, 3. (Here, we have replaced 1/c² with λ for the sake of brevity; the Newtonian limit will then be the limit as λ goes to zero.) By direct calculation, one sees that the commutation relations for the set of basis elements {J_i, b_i, i = 1, 2, 3} for the vector space so(1, 3) are given by (4.7). These are clearly the commutation relations (i.e., structure constants) for so(1, 3) when λ = 1, so the basis for the vector space so(1, 3) also generates its Lie algebra when one takes commutator brackets of matrices.
It is also clear that in the Newtonian limit, only the last set of relations will change.
In particular, the infinitesimal translations a i = b i (λ = 0) that the b i go to when λ = 0 will commute with each other in the appropriate manner, and in fact, the limiting Lie algebra as λ goes to 0 will be: [a i , a j ] = 0, (4.8) which is that of iso(3).
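Under one standard choice of generators (an assumption; the signs in the article's (4.6)-(4.7) may differ), the contraction from so(1, 3) to iso(3) can be checked symbolically. In the sketch below, J_i are the usual spatial rotation generators and b_i carries λ in its (0, i) slot and 1 in its (i, 0) slot; the brackets close on so(1, 3) for λ = 1, and the boosts become commuting translations as λ → 0:

    import sympy as sp

    lam = sp.symbols('lambda', nonnegative=True)

    def rotation(i):
        # Spatial rotation generator J_i, embedded in a 4x4 matrix (row/column 0 is time).
        J = sp.zeros(4, 4)
        j, k = [(2, 3), (3, 1), (1, 2)][i - 1]
        J[j, k], J[k, j] = -1, 1
        return J

    def boost(i):
        # lambda-scaled boost generator b_i; lambda stands for 1/c^2.
        b = sp.zeros(4, 4)
        b[0, i], b[i, 0] = lam, 1
        return b

    J = [rotation(i) for i in (1, 2, 3)]
    b = [boost(i) for i in (1, 2, 3)]
    comm = lambda x, y: x * y - y * x

    print(comm(J[0], J[1]) == J[2])                          # [J_1, J_2] = J_3
    print(comm(J[2], b[0]) == b[1])                          # [J_3, b_1] = b_2
    print(comm(b[0], b[1]) + lam * J[2] == sp.zeros(4, 4))   # [b_1, b_2] = -lambda J_3
    print(comm(b[0], b[1]).subs(lam, 0) == sp.zeros(4, 4))   # translations commute as lambda -> 0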
b. Electromagnetic fields as motors. -A general terminology for infinitesimal rigid motions and their dual forces and moments that evolved in engineering mechanics was that of referring to the infinitesimal rigid motion or the dual object as a motor (moment + vector) [14,15]. More precisely, the term "motor" usually referred to the combination of a force and a torque, although one can distinguish between infinitesimal rigid motions and their dual objects by referring to kinematical and dynamical motors accordingly. Hence, from our discussion above, we can also think of bivectors as kinematical motors and 2-forms as dynamical ones.
A particular type of motor that gets a lot of attention in engineering mechanics is the "screw," which is essentially a canonical form for an infinitesimal rigid motion. The dual object is then a "wrench," which is then a canonical form for a dynamical motor. The two theorems upon which the theory of screws (dynames, torsors) is based are Chasles's theorem, which is kinematical in character, and Poinsot's theorem, which has a dynamical character. (Historically, the latter preceded the former, though.) We shall begin by simply stating them and refer the interested reader to the literature (which is vast) for more details (cf., e.g., [16]):
Chasles's theorem:
When a rigid body moves freely in space, one can represent any rigid motion of that body as a translation along an axis and a rotation about that axis.
Poinsot's theorem:

When a finite set of discrete forces acts upon various points of a rigid body in space, that system of forces can be combined into a single force that acts along a line and a force-moment about that line.
In both cases, if we understand "space" to actually mean RP 3 then we are saying that there is always a "canonical form" for a rigid motion or a finite system of discrete forces that consists of a line in space, which is called the central axis of the motion or system of forces, and a plane that is perpendicular to that line. In the kinematical case, the line is the axis of both translation and rotation, while the plane is the plane of rotation; in the dynamical case, the line is the line of force, as well as the axis of the force-moment, and the plane is the plane of the force-moment (M = r ^ F). Notice that insofar as planes are dual to lines in the eyes of projective geometry, the canonical forms that we have described both involve a line in space and a dual object to a line.
The canonical form of a rigid motion is what is most commonly referred to as a screw nowadays, although Ball's terminology appeared somewhat later than the foundational work of Plücker, Klein, and Study. The use of the term "screw" is consistent with the fact that an initial point in space that is subjected to a simultaneous translation along an axis and a rotation about it will describe a helix. Dually, the canonical form is called (by Ball) a wrench, since a massive point that is subjected to a simultaneous force along a line of action and a torque about it will also describe a helix as a response. However, the term "wrench" is rarely used now, as opposed to "force screw" [15]. The ratio ρ of the magnitudes of the angular velocity ϖ and the velocity v is referred to as the parameter of the screw; the sign is positive iff v and ϖ point in the same direction and negative otherwise. Hence, ρ = 0 for a pure translation, and the parameter will become infinite for a pure rotation.

Typically, the first problem associated with screws is finding the central axis. Say one is given an infinitesimal rigid motion v + ω that consists of a velocity vector v at a point in space and an angular velocity ω. An axis of rotation will be associated with the eigenvector of the linear transformation ω that has the real eigenvalue (= 0). However, one is still free to parallel-translate that axis throughout space. The central axis will have the property that v lies in the plane that is perpendicular to the position vector r from any point on the axis to the point of application of v; one finds the axis from that property. Since v_t = ω × r is perpendicular to the plane of r and the central axis, the vector v_n = v − v_t will be parallel to the central axis. Hence, if one is given the canonical form v_n + ω for the infinitesimal rigid motion then one can reconstruct v from v = v_n + ω × r.
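A small numerical sketch of that construction (illustrative only; it uses the common formulas r_0 = (ω × v)/|ω|² for the point of the central axis closest to the origin and h = (ω · v)/|ω|² for the translational advance per unit rotation along the axis):

    import numpy as np

    def central_axis(v, w):
        # v : velocity of the body point currently at the origin
        # w : angular velocity
        # Returns a point r0 on the central axis and the translation per unit rotation h.
        w2 = np.dot(w, w)
        r0 = np.cross(w, v) / w2          # point on the central axis closest to the origin
        h = np.dot(w, v) / w2             # translational advance per unit rotation
        return r0, h

    v = np.array([1.0, 2.0, 0.5])
    w = np.array([0.0, 0.0, 3.0])
    r0, h = central_axis(v, w)
    # On the central axis, the velocity v + w x r0 is parallel to w:
    print(np.cross(v + np.cross(w, r0), w))   # ~ [0, 0, 0]
    print(r0, h)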
The way that all of this relates to electromagnetic fields becomes clearer when we appeal to the isomorphisms Λ 2 ≅ so(1, 3) and Λ 2 ≅ so(1, 3) * that we described above, in conjunction with the Newtonian limit that turns Lorentz transformations into to rigid motions. Hence, one can think of a general electromagnetic excitation bivector field H as an association of a kinematical motor to each point of space-time and a general electromagnetic field strength 2-form F as associating a dynamical one. At each point of space-time, one can then associate D with an infinitesimal translation, H with an infinitesimal rotation, E with a force, and B with a torque, as was suggested above. However, only a special class of electromagnetic fields will associate the same motor to every point, namely, spatially-constant ones. Hence, it is probably better to think of the electromagnetic fields as being associated with "local motors" that act upon the fibers of the bundles Λ 2 (M) and Λ 2 (M) of bivectors and 2-forms on the space-time manifold M. More generally, one would essentially have to drop the constraint of rigidity of the moving body in order to get a formal analogy between electromagnetic and mechanical notions.
Similarly, one sees that only special configurations of electric and magnetic fields can be associated with screws, namely, ones for which D is parallel to H or E is parallel to B. More generally, one would have to speak of a local screw that pertains to some particular point of space-time.

One can define a scalar product <.,.> on Λ 2 either by way of the volume element V:

A ^ B = <A, B> V,   (4.10)

or the fundamental part C 0 of any non-dispersive, linear constitutive law C : Λ 2 → Λ 2 , which takes the form:

C 0 (F, G) = F(C 0 (G)) = G(C 0 (F)).   (4.11)

One interesting aspect of that situation is that one can define the scalar product (4.10) without introducing any physically-empirical data, but in order to get to the Lorentz group in the pre-metric theory of electromagnetism, one must look at the dispersion law for electromagnetic waves that follows from the electromagnetic constitutive law after making numerous reductions in generality, such as making it non-dispersive, linear, and isotropic. Hence, it seems that the appearance of things that pertain to the Lorentz group in electromagnetism can be associated with a much more elementary level of geometric considerations than the ones that follow from the constitutive law. In particular, the appearance of the Cartan-Killing form for so(1, 3) requires only that one restrict oneself to a four-dimensional vector space and give it a volume element. Of course, one will still require the Minkowski scalar product if one is to define the linear isomorphism of Λ 2 with so(1, 3), when it is regarded as a vector space.
If we use the scalar product that is defined by V as an example then since the frame {b a , a = 1, …, 6} on Λ 2 that we have been using does not make the associated scalar product diagonal, but the basis {J i , K i , i = 1, 2, 3} that we are using on so(1, 3) does make the Cartan-Killing form diagonal, the first thing that we have to do is to find a basis for Λ 2 that diagonalizes the scalar product. That is quite straightforward, and we define: for which we will have: Hence, the frame {Z i , j Z } is orthonormal for the scalar product. The linear isomorphism Λ 2 → so(1, 3) that takes Z i to K i and i Z to J i will then also define an isometry of the scalar products that preserves the component matrices.
As discussed in an earlier article in this sequence [1.I], the quadric hypersurface in Λ 2 (or really its projective space PΛ 2 ) that is defined by all bivectors A such that: <A, A> = 0 (4.14) is called the "Klein quadric," and a bivector is decomposable (i.e., A = a ^ b) iff it belongs to that quadric.
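As a concrete check of the statement that decomposable bivectors lie on the Klein quadric, the sketch below (illustrative only; the overall normalization of <A, A> depends on the convention chosen for the scalar product) forms A = a ^ b from two random vectors and verifies the Plücker relation A^{01}A^{23} − A^{02}A^{13} + A^{03}A^{12} = 0, which is the component form of <A, A> = 0:

    import numpy as np

    rng = np.random.default_rng(1)
    a, b = rng.normal(size=4), rng.normal(size=4)

    A = np.outer(a, b) - np.outer(b, a)      # components A^{mu nu} of the decomposable bivector a ^ b

    quad = A[0, 1] * A[2, 3] - A[0, 2] * A[1, 3] + A[0, 3] * A[1, 2]
    print(np.isclose(quad, 0.0))             # True: a ^ b lies on the Klein quadric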
We then look for the elements ω ∈ so(1, 3) that correspond to the points of the Klein quadric in Λ 2 under the isomorphism in question, so:

<ω, ω> = (1/2) Tr ω² = 0.   (4.15)

We shall call such elements isotropic for the scalar product that is defined by the Cartan-Killing form. Note that the condition (4.15) is quadratic and homogeneous, as one would expect from a quadratic form, as is the condition for the Klein quadric. Hence, the isotropic elements of so(1, 3) will form a quadric hypersurface in so(1, 3), which will then be five-dimensional; we shall then call it the Cartan-Killing quadric.
If we perform essentially the same transformation of the basis {J i , K i , i = 1, 2, 3} that we did on {b a , a = 1, …, 6}, namely: then we will get: (4.17) Hence, this basis for so(1, 3) behaves like the original basis {b a , a = 1, …, 6} does for Λ 2 .
In particular, we see that we already know six distinct points on the Cartan-Killing quadric, namely, the six basis elements {z i , i z , i = 1, 2, 3}. They are then related by just the algebraic relations that take the form of (4.17). When we look at the geometric nature of the elements of that basis, we see that they each consist of an infinitesimal boost along and axis and an infinitesimal rotation about that same axis. In other words, the isotropic elements of so(1, 3) that are proportional to z i and i z are essentially "relativistic screws," while the corresponding elements of the dual quadric on so(1, 3) * will be "relativistic force screws." If one has, in general: the most general element of the Cartan-Killing quadric will be characterized by having v = ± ω. However, that does not imply that ω i = v i , or even that the two vectors are parallel, so there are other null elements than relativistic screws.
d. Almost-complex structure on Λ 2 . -If we return to the basis {b a , a = 1, …, 6} for Λ 2 and define a linear isomorphism * : Λ 2 → Λ 2 by way of: ) then we will see that: Hence, * defines an almost-complex structure on Λ 2 . The general bivector can then be written in the form: The almost complex structure then allows us to define its conjugate by: If one considers the basis that is defined by (4.12) then one will see that i Z is, in fact, the conjugate of Z i with this definition.
The scalar product of A and B = σ i b i + τ i *b i then becomes: (4.26) so: which is consistent with the real form. However, since the almost complex structure * is linear and self-adjoint, i.e.: <A, *B> = <*A, B>, (4.28) it also allows one to define another scalar product: (4.29) for which: This is the other field invariant that is commonly used in electromagnetism, along with (4.27).
Analogous considerations on Λ 2 will give analogous results. In particular, the almost-complex structure * on Λ 2 will define a corresponding almost-complex structure on Λ 2 by way of: As it turns out, the Hodge * operator defines an almost-complex structure for 2-forms on a four-dimensional Lorentzian manifold. However, that operator is the starting point for all pre-metric electromagnetism, since it is the only place where the metric structure of space-time is actually used in Maxwell's equations. Indeed, one replaces it with the composition of a (non-dispersive, linear) electromagnetic constitutive law and the Poincaré isomorphism #. However, as the author pointed out in [17], not all constitutive laws will actually yield something that is proportional to an almost-complex structure for that composition. One finds that although the scalar product (4.26) is general in scope, the scalar product (4.29) is closely tied to the Lorentzian structure that comes from the classical electromagnetic vacuum, and by way of the introduction of *.
e. Complex structure on Λ 2. − When an almost-complex structure * has been defined on Λ 2, in order to define a complex structure on Λ 2, all that one needs to do is to define complex scalar multiplication: That definition allows one to regard {b_i, i = 1, 2, 3} as a complex basis for Λ 2, since the other basis elements b_{i+3} will then become simply i b_i. Hence, the complex dimension of Λ 2 is three.
The general element of Λ 2 will then take the form: and a relativistic screw will take the form: for some bivector b that generates the central axis.
One can define the complex conjugate of a bivector (4.33) in the obvious way: (4.35) and the basis (4.12) will take the form: which is consistent with the definition of a complex conjugate, since b_i is a real bivector. Notice that if we wish to introduce a scalar product on the complex form of Λ 2 that corresponds to A ∧ B then we must first note that the exterior product is no longer defined on Λ 2 with its complex structure. Rather, if: then we define their complex scalar product by making b_i orthonormal for the three-dimensional Euclidian scalar product, so: (4.38) That includes both of the scalar products that were defined above on Λ 2 with an almost-complex structure, since: <A, B>_C = (A, B) + i <A, B>. (4.39) In particular: which includes both of the commonly-used scalar products on bivectors. Hence, the null elements of this scalar product must satisfy both of the conditions: and when one interprets that in terms of electromagnetic fields, that will imply that one is dealing with perpendicular D and H fields that have the same magnitude, such as one would encounter with electromagnetic wave fields, but not exclusively.
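A small numerical illustration may help here. The identification of the real and imaginary parts of the complex components with the D and H 3-vectors, and the overall normalization, are assumptions made for this sketch (they are not spelled out above); the sketch simply shows that the unconjugated complex square packages the two real invariants, and that it vanishes for perpendicular fields of equal magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)
d = rng.normal(size=3)   # stand-in for the D (electric excitation) components
h = rng.normal(size=3)   # stand-in for the H (magnetic excitation) components

# Complex components alpha^i = d^i + i h^i in the basis {b_i} (assumed identification).
f = d + 1j * h

# Unconjugated "complex Euclidian" square, cf. (4.39), up to normalization conventions.
inv_c = f @ f
print(np.isclose(inv_c.real, d @ d - h @ h))   # True: one real invariant
print(np.isclose(inv_c.imag, 2 * (d @ h)))     # True: the other real invariant (times 2 here)

# Perpendicular D and H of equal magnitude (a wave-like field) give a null element.
d_w = np.array([1.0, 0.0, 0.0])
h_w = np.array([0.0, 1.0, 0.0])
print(np.isclose((d_w + 1j * h_w) @ (d_w + 1j * h_w), 0.0))   # True
```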
Note that the quadric that <.,.> C defines by its vanishing is a subspace of the Klein quadric, namely, its intersection with the quadric that is defined by the vanishing of (.,.).
Once again, the almost-complex structure on Λ 2 that is induced by the one on Λ 2 will allow one to define a complex structure on Λ 2 in an analogous way, with analogous dual results. In particular, one can define a complex Euclidian scalar product and a dual quadric.
One should be careful to distinguish the complex structure on Λ 2 that one gets from an almost-complex structure * from the complexification of Λ 2 , which amounts to replacing the real components of bivectors with complex ones. In particular, the former complex vector space has a complex dimension of three, while the latter has a complex dimension of six. The author has pointed out in an article [18] that this implies a certain simplification of the complex formulation of relativity, which typically uses the complexification of Λ 2 and then restricts to a three-complex-dimensional subspace that is defined by the "self-dual" elements. Hence, it does not actually use all of the complexified bivectors and 2-forms.
f. The isomorphism of Λ 2 with so(3, C). – When Λ 2 is given an almost-complex structure, and therefore a complex structure, there will then be a C-linear isomorphism Λ 2 ≅ so(3, C) that associates the basis elements {b_i, i = 1, 2, 3} with the three elementary rotations in E³: However, the Lie algebra so(3, C) is isomorphic to the Lie algebra so(1, 3) by the association: Hence, one can simply regard an infinitesimal boost as an imaginary rotation. The general element of so(3, C) then takes the form: The Cartan-Killing form on so(3, C) will be isometric to the Euclidian scalar product on C³, whose components will be δ_ij for a complex-orthonormal basis. One can also get this elementary fact directly from the fact that the Cartan-Killing form on so(3, R) has that property in a real form by simply complexifying R³ to C³. The vector cross product on C³ will then define the Lie algebra of so(3, C). Dually, the corresponding (transposed) scalar product on so(3, C)* will also make it isometric to complex three-dimensional Euclidian space.
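The claim that an infinitesimal boost can be regarded as an imaginary rotation can be verified numerically. The sketch below is not from the article; the generator conventions are one standard choice, and it only checks that the assignment J_i → L_i, K_i → iL_i sends representative so(1, 3) brackets to the corresponding so(3, C) brackets.

```python
import numpy as np

# Levi-Civita symbol eps_{ijk}.
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

# Elementary rotations of so(3): (L_i)_{jk} = -eps_{ijk}, so that [L_i, L_j] = eps_{ijk} L_k.
L = np.array([-eps[i] for i in range(3)])

# so(1, 3) generators on (t, x, y, z): rotations J_i (spatial block L_i) and boosts K_i.
J = np.zeros((3, 4, 4))
K = np.zeros((3, 4, 4))
for i in range(3):
    J[i, 1:, 1:] = L[i]
    K[i, 0, i + 1] = K[i, i + 1, 0] = 1.0

def comm(a, b):
    return a @ b - b @ a

# Brackets in so(1, 3): [J_3, K_1] = K_2 and [K_1, K_2] = -J_3.
print(np.allclose(comm(J[2], K[0]), K[1]))
print(np.allclose(comm(K[0], K[1]), -J[2]))

# The same brackets in so(3, C) under the assignment J_i -> L_i, K_i -> i L_i.
Lc = L.astype(complex)
print(np.allclose(comm(Lc[2], 1j * Lc[0]), 1j * Lc[1]))
print(np.allclose(comm(1j * Lc[0], 1j * Lc[1]), -Lc[2]))
```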
A relativistic screw will take the form: for some real rotation π with unit norm relative to the Cartan-Killing form, and some real numbers v and ρ.
One can also associate null vectors and null covectors of the two complex Euclidian spaces so(3, C) and so(3, C) * with the null vectors in Λ 2 and Λ 2 when they are regarded as three-dimensional complex Euclidian spaces. However, only the electromagnetic fields for which both real invariants <.,.> and (.,.) vanish (such as electromagnetic waves) in Λ 2 will go to null elements of so(3, C), and only the analogous fields in Λ 2 will go to null elements of so(3, C) * .
Note that the null elements of so(3, C) and so(3, C)* include the elements of the form J_i ± iJ_i = (1 ± i) J_i, which amount to the combination of an infinitesimal rotation (torque, resp.) about a certain axis with an infinitesimal boost (force, resp.) along that same axis. Hence, it is proper to think of those elements as representing relativistic kinematical (dynamical, resp.) screws of a certain type, namely, the parameter of the screw would have to be ± 1. Once again, the null elements are not all relativistic screws, nor are the relativistic screws all null, since from (4.45): <Ω, Ω>_C = v²(ρ² − 1), (4.46) which vanishes iff ρ = ± 1.
The representation of electromagnetic fields by complex 3-vector fields goes back to the time of Riemann [19], and the technique was later developed by Ludwik Silberstein [20] and A. Conway [21]. It is closely related to the use of such fields to represent relativistic quantum wave functions by Majorana [22] and Oppenheimer [23]. The author has also discussed the role of complex structures in pre-metric electromagnetism [24]. Furthermore, the use of complex 3-vector fields is also established in the complex formulation of relativity [25], in which they are sometimes referred to as "3-spinors"; one might confer the author's own comments on that topic [18].
5. Summary. – In conclusion, we shall distill out the main points of the discussion above:
1. When bivectors and 2-forms are defined on four-dimensional Minkowski space, there are linear isomorphisms of Λ 2 with so(1, 3) and Λ 2 with so(1, 3)*, when the former is regarded as a vector space.
2. Under those isomorphisms, the electric excitation D gets associated with an infinitesimal boost, the magnetic excitation H goes to an infinitesimal rotation, the electric field strength E becomes a force, and the magnetic field strength B becomes a torque.
3. The bilinear pairing of 2-forms and bivectors that amounts to the evaluation of the 2-form on the bivector corresponds to the bilinear pairing of linear functionals in so(1, 3)* with elements of so(1, 3).
4. The scalar products that one commonly defines on 2-forms and bivectors by way of a volume element or a linear, non-dispersive electromagnetic constitutive law have the same signature type as the Cartan-Killing form on so(1, 3), and can make the linear isomorphisms in question into isometries.
5. One can then relate the electromagnetic constitutive law to a corresponding mechanical constitutive law that associates elements of so(1, 3)* with elements of so(1, 3).
6. The Lie algebra so(1, 3) goes to the Lie algebra iso(3) in the Newtonian limit as c becomes infinite.
7. When bivectors and 2-forms are associated with lines in RP³, one can also associate elements of so(1, 3) and so(1, 3)* with lines in a manner that reduces to the classical theory of motors in the Newtonian limit. However, only spatially-constant electromagnetic fields will correspond to a single infinitesimal rigid motion or dual object, so generally one must think of the motor as a local object that acts upon the fibers of the bundles of bivectors and 2-forms on space-time.
8. Relativistic screws are then elements of so(1, 3) that consist of the sum of a boost along a central axis and a rotation about that same axis. Analogous considerations can be applied to so(1, 3)*, which will then pertain to relativistic force screws.
9. The isotropic elements of so(1, 3) with respect to the Cartan-Killing form – i.e., the points of the Cartan-Killing quadric − correspond to points of the Klein quadric in Λ 2.
10. The points of the Cartan-Killing quadric on so(1, 3) include elements of the form J_i ± K_i, which are essentially relativistic screws whose parameters are equal to ± 1, but not all points of the quadric are relativistic screws, nor are all relativistic screws found on that quadric. Analogous dual statements apply to so(1, 3)* and relativistic force screws.
11. When one introduces an almost-complex structure on the space of bivectors, one can also define a complex structure on it, and the real linear isomorphisms of Λ 2 with so(1, 3) and Λ 2 with so(1, 3)* will become C-linear isomorphisms of Λ 2 with so(3; C) and Λ 2 with so(3; C)*, resp.
12. When Λ 2 has an almost-complex structure, one can define a complex Euclidian scalar product on it that combines both of the usual scalar products on Λ 2 that are defined in the Lorentzian formulation of electromagnetism. Therefore, the null vectors in so(3; C) or so(3; C)* will correspond to the null vectors in Λ 2 or Λ 2, resp.
13. The Cartan-Killing form on so(3; C) is the complex Euclidian metric, so the C-linear isomorphisms Λ 2 ≅ so(3; C), Λ 2 ≅ so(3; C)* are also isometries. Both of the scalar products on Λ 2 or Λ 2 must then vanish in order for the bivector or 2-form to be a complex null element. Electromagnetic fields that have that property include the fields of electromagnetic waves.
14. The vectors in so(3; C) of the form v(ρ + i)π, where π is some real rotation of unit norm, will represent special relativistic screws, while the corresponding null covectors in so(3; C)* will represent special relativistic force screws. | 2019-04-13T16:21:01.045Z | 2016-10-21T00:00:00.000 | {
"year": 2016,
"sha1": "8424d0517b91ee519554618eadf8bbe3eb1c647e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "bb54f489de57391b812119879e26754bec7fc281",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
263966087 | pes2o/s2orc | v3-fos-license | Relapse recovery in relapsing–remitting multiple sclerosis: An analysis of the CombiRx dataset
Background: Clinical relapses are the defining feature of relapsing forms of multiple sclerosis (MS), but relatively little is known about the time course of relapse recovery. Objective: The aim of this study was to investigate the time course of and patient factors associated with the speed and success of relapse recovery in people with relapsing–remitting MS (RRMS). Methods: Using data from CombiRx, a large RRMS trial (clinicaltrials.gov identifier NCT00211887), we measured the time to recovery from the first on-trial relapse. We used Kaplan–Meier survival analyses and Cox regression models to investigate the association of patient factors with the time to unconfirmed and confirmed relapse recovery. Results: CombiRx included 1008 participants. We investigated 240 relapses. Median time to relapse recovery was 111 days. Most recovery events took place within 1 year of relapse onset: 202 of 240 (84%) individuals recovered during follow-up, 161 of 202 (80%) by 180 days, and 189 of 202 (94%) by 365 days. Relapse severity was the only factor associated with relapse recovery. Conclusion: Recovery from relapses takes place up to approximately 1 year after the event. Relapse severity, but no other patient factors, was associated with the speed of relapse recovery. Our findings inform clinical practice and trial design in RRMS.
Introduction
Relapses are the defining clinical feature of relapsing forms of multiple sclerosis (MS). In MS, relapses correlate with newly forming demyelinating lesions in the brain and spinal cord. Most clinical trials in relapsing-remitting MS (RRMS) use the number or annualized rate of clinical relapses as their primary outcome measure. It has been known since the 1960s that corticosteroid treatment (initially with adrenocorticotropic hormone (ACTH))1 can hasten the recovery from a relapse, and the disease modifying treatments (DMTs) for RRMS introduced since the 1990s both reduce the number and severity of relapses. Despite these successes in the treatment and prevention of relapses, relatively little is known about the time course of the recovery from relapses, and which patient factors are associated with the speed and success of recovery.
Clinical trial datasets give us an opportunity to study the time course of relapse recovery. While relapses occur at random time points throughout a trial, trial participants who experience a relapse are usually evaluated at an unscheduled study visit close to the relapse event, and then continue their regular trial follow-up afterward. This means that there is a record of a pre-relapse, an at-relapse, and often several post-relapse study visits, which makes it possible to study the time course of relapse recovery longitudinally and investigate the factors associated with the time to relapse recovery. Furthermore, most RRMS patients today are treated with DMTs, so that participants in clinical trials using these DMTs are a good representation of the patient population seen in clinical practice.
In this study, we used patient-level data from CombiRx, a large phase 3 trial of people with RRMS treated with the DMTs glatiramer acetate (GA), interferon beta (IFNB), or both, to investigate relapse recovery in people with RRMS.
Standard protocol approvals, registrations, and patient approvals
The ethical approval for CombiRx is described in the original trial publication.2 Ethical approval for this analysis was sought and granted by the Conjoint Health Research Ethics Board at University of Calgary and the Institutional Review Board at the University of Alabama at Birmingham. All participants gave written informed consent to their participation in CombiRx.
CombiRx dataset
CombiRx was a three-arm, randomized, double-blind, placebo-controlled, multicenter, phase 3 trial of GA plus placebo (25%), or IFNB plus placebo (25%), or the combination of GA and IFNB (50%) in treatment-naïve people with early RRMS. Trial participants were followed until the last trial participant reached 3 years of follow-up. For these analyses, we included all study visits up to the 42-month visit. The inclusion criteria were age 18-60 years inclusive, a diagnosis of RRMS by Poser et al.3 or 2001 McDonald et al.4 criteria, and an Expanded Disability Status Scale (EDSS)5 of 0-5.5 inclusive. Trial participants needed to have at least two relapses in the 3 years before inclusion, where one relapse could be a magnetic resonance imaging (MRI) change meeting the 2001 McDonald MRI criteria for dissemination in time.4 Exclusion criteria were any prior use of IFNB or GA, an acute exacerbation within 30 days of screening, steroid use for acute exacerbations within 30 days of screening, chronic systemic steroid use, evidence of progressive MS, and any previous treatment with natalizumab, cladribine, alemtuzumab, daclizumab, rituximab, or total lymphoid irradiation.
Relapse recovery
In CombiRx, relapses were defined as new or worsening symptoms attributable to MS, preceded by 30 days of stability, lasting for more than 24 hours, not associated with fever, and leading to ⩾ 0.5 EDSS points increase compared to a prior visit or ⩾ 2 points increase in one EDSS functional system, or ⩾ 1 point increase in two EDSS functional systems (excepting bladder and cognitive changes) as assessed by a treatment-blinded observer. Relapses were defined as a "protocol-defined exacerbation" if an EDSS assessment took place within 7 days after relapse onset, and as a non-protocol-defined exacerbation if the EDSS assessment occurred more than 7 days after relapse onset.2 For this study, we combined these two categories into a single "confirmed relapse" category, which is consistent with relapse definitions in most clinical trials.
For our analyses, we selected the trial participants' first confirmed relapse. Included relapses had to have the at-relapse EDSS assessment within 30 days from relapse onset, a relapse severity (the difference between the at-relapse and pre-relapse EDSS) of at least 0.5 points, and at least one post-relapse EDSS assessment. We marked relapse recovery at the first instance at which a post-relapse EDSS was equal to or smaller than the pre-relapse EDSS. Trial participants were censored at the time of their last EDSS assessment, or at the time of a second confirmed relapse. In addition to unconfirmed relapse recovery, we investigated 12- and 24-week confirmed relapse recovery. For the two confirmation cohorts, we selected trial participants who had at least one additional EDSS assessment at least 12 or 24 weeks after the recovery event.
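As an illustration of this time-to-event construction, the following sketch (in Python, with hypothetical column names and toy values; it is not the actual CombiRx data handling) scans the post-relapse visits of each participant, takes the first visit at which the EDSS has returned to the pre-relapse value as the recovery event, and otherwise censors at the last available assessment.

```python
import pandas as pd

# Hypothetical post-relapse visit records for two participants (days from relapse onset).
visits = pd.DataFrame({
    "id":   [1, 1, 1, 2, 2],
    "day":  [90, 180, 270, 90, 180],
    "edss": [3.0, 2.5, 2.0, 3.5, 3.0],
})
pre_relapse_edss = {1: 2.0, 2: 2.5}   # hypothetical pre-relapse EDSS per participant

rows = []
for pid, grp in visits.sort_values("day").groupby("id"):
    recovered = grp[grp["edss"] <= pre_relapse_edss[pid]]
    if len(recovered):
        # Recovery event at the first qualifying post-relapse assessment.
        rows.append({"id": pid, "time": recovered["day"].iloc[0], "event": 1})
    else:
        # Censored at the last EDSS assessment (censoring at a second relapse is not shown).
        rows.append({"id": pid, "time": grp["day"].iloc[-1], "event": 0})

surv = pd.DataFrame(rows)
print(surv)   # participant 1 recovers at day 270; participant 2 is censored at day 180
```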
Additional analysis: illustration of short-term EDSS fluctuation
To illustrate the occurrence of short-term fluctuation in EDSS measurements in the absence of relapses, especially in its lower ranges, we compared screening and baseline EDSS measurements. We first selected all CombiRx participants with a screening EDSS score of 0.0-3.0. In CombiRx, the screening and baseline visits occurred at most 45 days apart and participants had to be relapse-free within 30 days of the screening visit. We then compared screening and baseline EDSS, and recorded the percentage of participants with identical screening and baseline scores, and the proportion of participants with higher and lower baseline than screening EDSS scores.
Statistical analyses
We used Kaplan-Meier survival analyses and Cox regression models to investigate the association of the factors sex, age at baseline, disease duration at baseline, treatment arm, pre-relapse EDSS (in the categories "0.0," "1.0-2.0," and "> 2.0"), number of relapses in the year before inclusion, contrast-enhancing lesions (CELs) on the baseline MRI scan (yes/no), burden of disease (BOD, in mL) on the baseline MRI scan, high-dose steroid treatment of the relapse (yes/no), and relapse severity (in the categories "0.5 EDSS points," "1.0 EDSS points," and "> 1.0 EDSS points") with the time to relapse recovery. To investigate the possible interaction between relapse severity and high-dose steroid treatment, we included an interaction term for these variables in all models. We used the R statistical software package for Windows, version 4.2.2, for all statistical analyses.6 Statistical significance was assumed to be at the two-sided 0.05 level.
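The analyses described above were carried out in R; purely as an illustration, a roughly equivalent workflow could be sketched in Python with the lifelines package as follows. The file name, column names, and the use of simplified binary indicators for relapse severity and steroid treatment are assumptions made for the sketch, not the authors' actual code.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical analysis file: one row per analyzed relapse, with columns
# time_to_recovery (days), recovered (1 = recovery observed, 0 = censored),
# severity_gt1, steroids (both 0/1), age, male, disease_duration, cel_baseline, bod_ml.
df = pd.read_csv("relapse_recovery.csv")

# Kaplan-Meier estimate of the time to (unconfirmed) relapse recovery.
kmf = KaplanMeierFitter()
kmf.fit(df["time_to_recovery"], event_observed=df["recovered"])
print(kmf.median_survival_time_)

# Cox model with an explicit severity x steroid interaction term.
df["severity_x_steroids"] = df["severity_gt1"] * df["steroids"]
covariates = ["severity_gt1", "steroids", "severity_x_steroids", "age", "male",
              "disease_duration", "cel_baseline", "bod_ml"]

cph = CoxPHFitter()
cph.fit(df[["time_to_recovery", "recovered"] + covariates],
        duration_col="time_to_recovery", event_col="recovered")
cph.print_summary()   # hazard ratios with 95% confidence intervals
```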
Data availability
Access to the CombiRx dataset can be requested from the Coordinating Center or MS Center at the University of Alabama at Birmingham (Birmingham, Alabama, USA) by completing a data use agreement that is reviewed by a committee overseeing the use of the data. Qualified researchers have or will obtain appropriate Institutional Review Board approval for the study request. Depending on the complexity of the request, researchers may need to cover the cost of producing the de-identified data.
CombiRx dataset
The CombiRx dataset contained individual patient-level data of 1008 participants. Table 1 shows their baseline characteristics. The treatment arms were well balanced, with a slightly older average age for GA (Table 1).
Relapse recovery
Table 2 shows the characteristics of the relapses included in the analysis on unconfirmed and 12- and 24-week confirmed relapse recovery. We identified 240 relapses matching the inclusion criteria. As expected, the confirmed cohorts included considerably fewer participants, 167 (69.6%) in the 12-week confirmed and 156 (65.0%) in the 24-week confirmed cohort. The pre-relapse EDSS was very similar to the baseline EDSS, both with a median of 2.0 (interquartile range, IQR, of 1.5-2.5) in all cohorts. The median times between the pre-relapse EDSS and relapse onset, the at-relapse EDSS, and relapse severity were similar in the three cohorts (Table 2). The median time between relapse onset and the at-relapse assessment was 6 days (IQR 4-13) in the unconfirmed, 7 days (IQR 4-12) in the 12-week confirmed, and 6.5 days (4-12) in the 24-week confirmed cohort. Table 2 and Figure 1 show the time course of relapse recovery. The median time to relapse recovery was 111 days (95% CI: 99-138). Most recovery events took place within 1 year of relapse onset: for example, in the unconfirmed relapse recovery cohort, 202 of 240 (84%) individuals recovered during follow-up and 189 of the 202 (94%) during the first 365 days after relapse onset (Table 2). Relapse recovery was not linear over time: 80% of those who recovered did so within the first 6 months (Table 2 and Figure 1).
In addition to unconfirmed relapse recovery, we investigated 12- and 24-week confirmed relapse recovery (Table 2). Since these cohorts required additional follow-up time points, the confirmed cohorts included fewer individuals. When 12- and 24-week confirmation was mandated, there were far fewer recovery events: only 52% of the 12-week confirmed cohort and only 55% of the 24-week confirmed cohort experienced relapse recovery, compared to 84% in the unconfirmed cohort. However, the time course of recovery was roughly similar in the unconfirmed and confirmed cohorts (Table 2).
Factors associated with relapse recovery
Table 3 shows the results of the Cox regression model for the unconfirmed cohort. We deem unconfirmed relapse recovery to be the most clinically relevant, since most clinicians would likely accept that a patient has recovered from a relapse if their EDSS score reached the pre-relapse level. Furthermore, since the unconfirmed cohort was far larger than the confirmed cohorts, it is the most attractive to analyze in the interest of the precision of the estimated hazard ratios in the Cox regression models. Only relapse severity was significantly associated with relapse recovery in our cohort: participants with a relapse severity of more than 1.0 EDSS point were significantly less likely to experience relapse recovery, with a hazard ratio for relapse recovery of 0.58 (95% CI: 0.34-0.98) compared to participants with 0.5 EDSS points relapse severity (Table 3). All other investigated factors, which included clinical, treatment, and MRI characteristics, were not associated with relapse recovery.
The Kaplan-Meier curves for the risk factors relapse severity and high-dose steroid treatment are shown in Figure 2. It appears that participants receiving a high-dose steroid course have a faster recovery in the first few months, with the Kaplan-Meier curves separating up to approximately Month 4. However, this difference disappears afterward and does not reach statistical significance in either the Kaplan-Meier analysis (log-rank p = 0.15, Figure 2) or in the multivariable Cox regression model (hazard ratio 1.43, 95% CI: 0.87-2.36, p = 0.16, Table 3).
Tables 4 and 5 show the results of the Cox regression models for the 12- and 24-week confirmed cohorts, which showed largely similar results. In the 24-week confirmed cohort, relapse severity did not reach statistical significance, while having a pre-relapse EDSS of between 1.0 and 2.0 was associated with a greater chance of relapse recovery (Tables 4 and 5). The interaction term between relapse severity and high-dose steroid treatment was not significant in any of the models.
Additional analysis: illustration of short-term EDSS fluctuation
In CombiRx, 849 participants had a screening EDSS score between 0 and 3.0. The median number of days between the screening and the baseline visit was 22 days (IQR 3-28 days). Of these 849 participants, 354 (41.7%) had identical screening and baseline scores, 231 (27.2%) had a higher baseline EDSS score compared to screening, and 264 (31.1%) had a lower baseline score compared to screening.
Discussion
Despite the importance of clinical relapses in relapsing forms of MS, relatively little is known about the time course of relapse recovery. This is likely in part because people with MS in typical clinical practice are not seen often enough to have a pre-relapse assessment that is close to the relapse event, and are often not followed up as closely afterwards as in a clinical trial. Clinical trial datasets are a valuable data source to address this question because participants are generally assessed every 3 months, assuring that a randomly occurring relapse during a trial is never farther removed from a scheduled assessment than these 3 months. Previous studies on relapse recovery were more focused on the residual disability from a relapse rather than on the time course of recovery. One such investigation used data from the placebo arms of several clinical trials in RRMS to determine the percentage of patients with, and the magnitude of, residual deficits following a relapse.7 This study had restricted access to trial data and focused on analyzing the difference between pre-, at-, and post-relapse EDSS scores after a varying time of follow-up: 224 people with RRMS were analyzed, and 42% had not fully recovered after an average of 64 days.7 This study is difficult to compare to our investigation because of differences in the data source, analyses, and purpose; but based on our analyses, it appears that further recovery occurs after longer follow-up, in our cohort for up to a year. Another study compared relapse recovery between placebo- and natalizumab-treated participants in the AFFIRM study.8 This investigation included 283 participants and found a substantial advantage in 12-week confirmed relapse recovery for natalizumab-treated patients. While this study is also difficult to compare to our investigation, it does contain survival curves showing a similar time course, with the majority of recovery occurring in the first 6 months of follow-up, noticeably fewer recovery events between 6 and 12 months, and even fewer afterwards.8 Relapse recovery in CombiRx appears to occur in two phases: there is a steady, almost linear increase in recovery events up to about 6 months, followed by noticeably fewer recovery events between 6 and 12 months and even fewer afterwards. The observed time course of relapse recovery raises the question of the appropriateness of the confirmation of disability worsening in RRMS trials. Clinical trials in RRMS often mandate that a disability worsening event be confirmed at a further assessment after 3 or 6 months. This practice is done in part with the intention of measuring "fixed" disability worsening, and to minimize the effect of relapses. However, our analyses suggest that relapses still influence these measures after 3 and 6 months, as 65% of people in the unconfirmed cohort recovered after the 3-month mark and 20% after the 6-month mark (Table 2). This would imply that some of the 3- or 6-month confirmed progression events in RRMS trials may in fact be recovering relapses.
Our analyses showed a marked difference in unconfirmed and confirmed relapse recovery: while 84% of trial participants experienced recovery to their previous baseline in the unconfirmed cohort, this percentage shrank to only 52% and 55% for the 12- and 24-week confirmed cohorts. We believe that two main factors are responsible for this difference. One important factor is trial participants who may not have had sufficient follow-up data to confirm the recovery: for example, almost 10% of subjects in the unconfirmed cohort (21 of 240) either completed the trial or had a relapse within 12 weeks of the recovery index date, and thus were not evaluable for confirmation. Another factor contributing to this difference is the short-term variability that characterizes the EDSS, especially in its lower ranges. Koch et al.9 and Liu and Blumhardt10 have previously commented on this characteristic of the EDSS, which, for example, is also in part responsible for the substantial difference between confirmed and sustained disability progression if measured with the EDSS. To illustrate this point, we added a description of the short-term score change between the screening and the baseline EDSS measurements in CombiRx in the absence of relapses. In 58.3% of the 849 participants with a screening EDSS of 3.0 or lower, there was a difference of at least 0.5 EDSS points between the screening and baseline EDSS scores, with the baseline score higher than the screening score in 27.2% of participants. If a similar amount of random variation of the EDSS is present throughout the trial, 27.2%, or close to a third, of participants would not be confirmed due to measurement error, which is non-trivial. We observed 12-week confirmation in 52% of participants, thus 48% were not confirmed. Given the substantial short-term variation in EDSS, 27.2% of these 48% unconfirmed relapse recoveries might simply be due to measurement error. This would move the 52% confirmation percentage to 65.1% (52% + 0.272 × 48%), which suggests that the striking difference between unconfirmed and confirmed relapse recovery is certainly much smaller than it appears. Given these considerations, it may be unnecessary to mandate confirmation for relapse recovery. While it is understandable that disability worsening, especially if it is used as the primary outcome measure in a trial, should be confirmed, most clinicians would likely agree that a patient has recovered from a relapse once they reach their pre-relapse disability level.
In our investigation of factors associated with relapse recovery, we found that greater relapse severity was associated with a lower chance of recovery. While this finding is expected, it is also relevant that many other factors were not associated with relapse recovery. We included factors associated with disease activity (relapse activity in the year before inclusion and MRI CELs at baseline), demographics (sex, age, disease duration), indicators of disease burden (pre-relapse EDSS and MRI BOD), and treatment (DMT treatment arm and high-dose steroid treatment) into our regression model; none impacted relapse recovery. As high-dose steroid treatment is very widely used in clinical practice, it may appear counter-intuitive that steroid treatment did not significantly affect relapse recovery. However, this finding is in keeping with the Optic Neuritis Treatment Trial, which showed that high-dose intravenous steroid treatment (with 1 g intravenous methylprednisolone for 3 days) hastened the recovery of visual acuity, but there was no statistically significant difference in visual acuity between the high-dose steroid and placebo arms at 6 months of follow-up.11 Similarly, one could have expected that at least age may have an effect on the success of relapse recovery, as an observational study in 132 pediatric and 632 adult patients with MS found that children recover significantly better from relapses than adults.12 However, as noted above, there is high variability in the assessment of the EDSS, and "false recovery" due to measurement error of the EDSS could also be a reason for the lack of statistical significance on some expected risk factors for recovery. Still, while one-third of the relapses were severe and were half as likely to recover even on first-generation DMT therapy, the linkage between disability worsening and relapses is evident.
Our study has several limitations. Since CombiRx was a trial including treatment-naïve patients with early MS with a mean disease duration of only 1.2 years at baseline, it is unclear whether our findings can be generalized to the entire spectrum of people with relapsing or progressive forms of MS who experience relapses. The longitudinal course of relapse recovery should be investigated in other clinical trial and real-world adult and pediatric MS datasets. Even though 1008 people participated in CombiRx, our cohort of patients with a first on-trial relapse included only 240 patients over an average follow-up period of over 3 years, which may have influenced the precision of the estimates.
In this study, we described the time course of, and investigated factors associated with, relapse recovery. Further studies into the effect of newer DMTs on relapse recovery, and on patient factors and biomarkers associated with relapse recovery, are warranted.
Figure 1 .
Figure 1. Overall unconfirmed recovery from first relapse in CombiRx participants up to 1 year after relapse onset.
Figure 2 .
Figure 2. Effect of relapse severity and high-dose steroid treatment on relapse recovery up to 1 year of follow-up. (a) A relapse severity of more than 1.0 EDSS point was significantly associated with slower relapse recovery. (b) People receiving high-dose steroid treatment had a faster relapse recovery, especially in the first 4 months post-relapse, but this difference was not statistically significant.
Table 1 .
Baseline characteristics of the CombiRx participants.
Table 2 .
Characteristics of the analyzed relapses.
Table 3 .
Cox regression model of factors associated with the time to unconfirmed relapse recovery.
a Per year increase. b Per unit increase.
Table 4 .
Cox regression model of factors associated with the time to 12-week confirmed relapse recovery.
a Per year increase. b Per unit increase. (Continued)
Table 5 .
Cox regression model of factors associated with the time to 24-week confirmed relapse recovery.
a Per year increase. b Per unit increase. | 2023-10-14T06:17:46.527Z | 2023-10-13T00:00:00.000 | {
"year": 2023,
"sha1": "f33d49655b9a1e8db40619bb0c99d7bec9df7b44",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/13524585231202320",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "03a93966b934c2beea0a53dc88a1baf3b0d414f5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250071407 | pes2o/s2orc | v3-fos-license | Atezolizumab for Pretreated Non-Small Cell Lung Cancer with Idiopathic Interstitial Pneumonia: Final Analysis of Phase II AMBITIOUS Study
Abstract Background Interstitial pneumonia (IP) is a poor prognostic comorbidity in patients with non-small cell lung cancer (NSCLC) and is also a risk factor for pneumonitis. The TORG1936/AMBITIOUS trial, the first known phase II study of atezolizumab in patients with NSCLC with comorbid IP, was terminated early because of the high incidence of severe pneumonitis. Methods This study included patients with idiopathic chronic fibrotic IP, with a predicted forced vital capacity (%FVC) of >70%, with or without honeycomb lung, who had previously been treated for NSCLC. The patients received atezolizumab every 3 weeks. The primary endpoint was the 1-year survival rate. Results A total of 17 patients were registered; the median %FVC was 85.4%, and 41.2% had honeycomb lungs. The 1-year survival rate was 53.3% (95% CI, 25.9-74.6). The median overall and progression-free survival times were 15.3 months (95% CI, 3.1-not reached) and 3.2 months (95% CI, 1.2-7.4), respectively. The incidence of pneumonitis was 29.4% for all grades, and 23.5% for grade ≥3. Tumor mutational burden and any of the detected somatic mutations were not associated with efficacy or risk of pneumonitis. Conclusion Atezolizumab may be one of the treatment options for patients with NSCLC with comorbid IP, despite the high risk of developing pneumonitis. This clinical trial was retrospectively registered in the Japan Registry of Clinical Trials on August 26, 2019, (registry number: jRCTs031190084, https://jrct.niph.go.jp/en-latest-detail/jRCTs031190084).
Discussion
This was the first phase II study conducted to evaluate the efficacy and safety of atezolizumab in patients with NSCLC and comorbid IP. Although the planned enrollment of 38 patients could not be completed and only 17 patients were enrolled, the primary endpoint of the 1-year survival rate was 53.3% (95% CI, 25.9-74.6), and the lower limit of the 95% CI exceeded the threshold of 15% (Table 1, Fig. 1). It is noteworthy that although IP is a distinctly poor prognostic comorbidity in patients with NSCLC and pneumonitis of grade ≥3 developed frequently in this study, the efficacy and survival benefits of atezolizumab were comparable between patients with comorbid IP in this study and those without IP treated in previous prospective trials. Evidence on the efficacy of cytotoxic agents as second-line or later therapy in patients with comorbid IP is limited, and long-term survival can hardly be expected. Therefore, for patients with NSCLC with comorbid IP who have a poor prognosis and few treatment options, immune checkpoint inhibitor (ICI) continues to hold promise as the only existing treatment option that can provide long-term survival.
However, even if the balance between safety and efficacy of atezolizumab is considered, the 23.5% rate of developing grade ≥3 severe pneumonitis may be too risky. The logistic regression analysis suggested that honeycomb lung on chest computed tomography (CT) may be a risk factor for the development of pneumonitis. This result, however, was not significant, and the risk factor analysis was done post hoc on only a small number of cases, so no definitive conclusion can be drawn from these results alone. For appropriate patient selection, large observational and retrospective studies that include data, such as CT and pulmonary function tests, are needed to identify the risk factors for ICI-induced pneumonitis.
Regarding biomarkers of efficacy, our post hoc analysis showed a tendency for longer overall survival and progression-free survival in patients with PD-L1 ≥50% than in those with PD-L1 <50%. In practice, the decision to administer atezolizumab would need careful consideration of the risks (especially pneumonitis) and benefits, with reference to PD-L1 expression. Author disclosures and references available online.
Sample Size
In this study, we set the threshold for the primary endpoint of the 1-year survival rate at 15%. Assuming a clinically meaningful increase of 25 percentage points, for an expected value of 40%, 36 patients were required according to the exact binomial test (two-sided α = 0.05, 1 − β = 0.9). Considering patient ineligibility, a sample size of 38 was set.
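The sample-size reasoning can be checked approximately with a short script. This is only a sketch of one common exact-binomial convention (critical value taken from the α/2 right tail under the null rate, power evaluated at the expected rate), not the protocol's actual calculation, so the computed power at n = 36 is indicative rather than definitive.

```python
from scipy.stats import binom

n, p0, p1, alpha = 36, 0.15, 0.40, 0.05

# Exact critical value: the smallest number of 1-year survivors c such that,
# under the null rate p0 = 15%, P(X >= c) <= alpha / 2.
c = next(k for k in range(n + 1) if binom.sf(k - 1, n, p0) <= alpha / 2)

# Power: probability of reaching the critical value under the hoped-for rate p1 = 40%.
power = binom.sf(c - 1, n, p1)
print(c, round(power, 3))   # power should come out at roughly 0.9 for n = 36
```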
Study Details
Registration began on September 2, 2019. At the time of enrollment of 15 patients, 3 patients (20%) developed grade 3 pneumonitis, so the new patient enrollment was interrupted on January 31, 2020. Two patients from whom consent had already been obtained were reintroduced, and a total of 17 patients were eventually registered (the last patient enrollment was on February 10, 2020). Subsequently, one patient with pneumonitis worsened from grade 3 to 5, and one new patient developed grade 3 pneumonitis. Therefore, the present study was terminated following the recommendation of the efficacy and safety evaluation committee. PD-L1 expression, which was measured using the Dako PD-L1 immunohistochemistry 22C3 pharmDx assay (Agilent Technologies, Santa Clara, California, USA), was ≥50% in 7 patients (41.2%), 1-49% in 3 patients (17.6%), <1% in 4 patients (23.5%), and unknown in 3 patients (17.6%). The median %FVC and % diffusing capacity for carbon monoxide were 85.4% and 54.4%, respectively. Regarding the radiological findings of preexisting IP as judged by the central review committee, 6 patients (35.3%) had UIP patterns, 3 patients (17.6%) had probable UIP patterns, and 8 patients (47.1%) had indeterminate UIP patterns. Seven patients (41.2%) had honeycomb lung on HRCT.
The median number of delivered cycles of atezolizumab as the study treatment was 3 [interquartile range: 2, 5]. Five of 6 patients who were on treatment at the time the trial was terminated agreed to continue receiving atezolizumab as usual clinical treatment outside of this trial, with a median number of additional cycles of 3 [interquartile range: 3, 8].
For translational research on the predictive biomarkers of atezolizumab efficacy, we extracted DNA from archival formalin-fixed paraffin-embedded tumor tissues, analyzed tumor mutational burden (TMB) and somatic variations in 409 cancer-related genes using the Oncomine Tumor Mutation Load Assay (Thermo Fisher Scientific, US), and analyzed microsatellite instability (MSI) on a panel of Bethesda markers (BAT25, BAT26, NR21, NR24, and MONO27). In all 17 enrolled patients, consent for the use of archival tumor samples was obtained. However, due to the insufficient amount of residual tumor samples, not all items could be measured in 4 patients, and one patient could be analyzed only for MSI. For TMB, 33.3% (4/12) had ≥10 mutations per megabase (mut/Mb), and 66.7% (8/12) had <10 mut/Mb. TP53 mutation was detected in 50.0% (6/12), KRAS mutation in 25.0% (3/12), abnormalities of the RAS/RAF/MAPK signaling pathway (including KRAS mutation) in 33.3% (4/12), and abnormalities of the PI3K-AKT signaling pathway in 25.0% (3/12). All cases were classified as microsatellite stable, and no cases were classified as MSI-high or MSI-low.
Completion
The study terminated prior to completion.
Investigator's assessment: Active, but too toxic as administered in this study.
Approximately 5%-10% of patients with advanced non-small cell lung cancer (NSCLC) have comorbid interstitial pneumonia (IP) at the time of diagnosis and are reported to have a poor prognosis.1 There is no significant difference in the proportion of patients with comorbid IP at diagnosis between Japan and the US. Among idiopathic IPs, the incidence of lung cancer complications varies, with Kreuter et al. reporting 15.8% for idiopathic pulmonary fibrosis, 6.3% for nonspecific interstitial pneumonia, and 5.6% for cryptogenic organizing pneumonia.2 Common risk factors for the development of IPs and lung cancer have been reported to include smoking, environmental and occupational exposure to toxic substances, bacterial and viral infections, and chronic tissue damage.3 In addition, microsatellite instability, loss of heterozygosity, p53 mutations, and fragile histidine triad mutations have been reported as common genetic alterations in the pathogenesis of lung cancer and IP.4,5 Pharmacotherapy for NSCLC can occasionally cause pneumonitis or acute exacerbation of preexisting IP (5%-20%), with a mortality rate of 30%-50%. Because there are only a few prospective studies on patients with NSCLC with comorbid IP, there is an urgent need to establish a safe and effective pharmacotherapy, especially for second-line or later lines. This was the first phase II study conducted to evaluate the efficacy and safety of the anti-programmed cell death-ligand 1 (PD-L1) antibody in patients with NSCLC with comorbid IP. In this study, we distinguish between the terms "IP" for pre-existing interstitial lung disease and "pneumonitis" for new interstitial shadows that appeared after immune checkpoint inhibitor (ICI) administration. The term "pneumonitis" is usually used to refer to noninfectious causes of lung inflammation, such as those induced by anti-cancer drugs. Meanwhile, interstitial lung disease of unknown cause, characterized by fibrosis and inflammation in the lung interstitium and progressing in a chronic course, is usually described as "idiopathic IP." In addition, when new interstitial shadows appear after ICI administration in patients with preexisting IP, it is difficult to distinguish between "pneumonitis as a pure immune-related adverse event" and "acute exacerbation of pre-existing IP triggered by ICI administration." Therefore, in this study, the appearance of new interstitial shadows after atezolizumab administration that was judged by the investigator not to be infection, heart failure, or an exacerbation of carcinomatous lymphangitis was collectively defined as "(ICI-induced) pneumonitis." Because of the high incidence of severe pneumonitis, the present study was terminated and the planned enrollment of 38 patients could not be completed; therefore, only 17 patients were enrolled.6 However, the primary endpoint of the 1-year survival rate was 53.3% (95% CI, 25.9-74.6), and the lower limit of the 95% CI exceeded the threshold of 15% (Table 1, Figs. 1 and 2). In the OAK study on pretreated NSCLC without IP, the 1-year survival rate of atezolizumab was 55%, which was comparable with the results shown in this study.7 Furthermore, this study had comparable survival rates with the OAK study, which reported a median overall survival (OS) of 13.8 months (95% CI, 11.8-15.7), a median progression-free survival (PFS) of 2.8 months (95% CI, 2.6-3.0), an objective response rate of 13.6%, and a disease control rate of 48.9%.
It is noteworthy that although IP is a distinctly poor prognostic comorbidity in patients with NSCLC, 8 the efficacy and survival benefits of atezolizumab were comparable between the patients with NSCLC with comorbid IP in this study and those without IP in previous prospective trials. However, it should be considered that this study included a small number of patients.
Although no standard treatment has been established for pretreated NSCLC with comorbid IP, S-1 or docetaxel has been considered by retrospective studies to be relatively safe and has often been administered in clinical practice in Japan.[9][10][11] However, all retrospective studies on cytotoxic agents as second-line or later therapy in patients with NSCLC and comorbid IP have shown a 1-year survival rate of at most 10%.9,12 These results were inferior to the data from the EAST-LC study on Asian patients with previously treated NSCLC without IP; that study reported a 1-year OS of 50% in both the S-1 and docetaxel groups, with a median OS of 12.8 months in the S-1 group and 12.5 months in the docetaxel group.13 Evidence on the efficacy of cytotoxic agents as second-line or later therapy in patients with NSCLC and comorbid IP is limited, and long-term survival can hardly be expected. Therefore, for patients with NSCLC with comorbid IP who have a poor prognosis and few treatment options, ICI holds great promise as the only existing treatment option that can provide long-term survival.
However, even if the balance between safety and efficacy of atezolizumab is considered, the 23.5% rate of developing grade ≥3 severe pneumonitis and the associated 17.6% mortality may be too risky (Table 2). Therefore, in order for atezolizumab to become a recommended treatment option for patients with NSCLC and comorbid IP, further investigation is required to clarify the following risk factors for the development of pneumonitis: severity, subtype, and specific radiologic findings of the preexisting IP; serum biomarkers; and the presence of specific genetic alterations. In the present report, to verify the risk factors for pneumonitis, we repeated the post hoc logistic regression analysis and included as new covariates the detected genetic alterations, such as TP53 mutation, which is a possible common etiology of lung cancer and IP. 14 However, no new risk factors for pneumonitis were identified (Table 3). Although the presence of honeycomb lung on HRCT was suggested as a candidate risk factor, the result was not significant and the risk factor analysis was done post hoc on only a small number of cases. Therefore, no definitive conclusion can be drawn from these results alone. For appropriate patient selection, large observational and retrospective studies that include data, such as CT and pulmonary function tests, are needed to identify the risk factors for ICI-induced pneumonitis.
Biomarkers of efficacy are also important to consider when deciding on the choice of a high-risk treatment. Compared with previous prospective trials on NSCLC without IP, 2 previously reported trials on nivolumab in NSCLC with mild IP showed higher efficacy.15,16 The relatively high efficacy of ICI in patients with IP has been speculated to be associated with high tumor mutational burden (TMB) and microsatellite instability (MSI), which may also be related to the etiology of IP. However, this study did not demonstrate superior efficacy or survival benefit, compared with the data from previous studies on NSCLC without IP. Moreover, MSI was not observed in any of the cases, and TMB was not associated with efficacy (Table 4). On the other hand, our post hoc analysis results showed a tendency for longer OS and PFS in patients with PD-L1 ≥50% than in those with PD-L1 <50% (Fig. 3). On post hoc analysis, compared with patients with PD-L1 <50%, those with PD-L1 ≥50% had a tendency to have a higher 1-year survival rate (log-rank test P = .184). In practice, the decision to administer ICI would need careful consideration of the risks (especially pneumonitis) and benefits, with reference to PD-L1 expression.
A limitation of this study was that it was a single-arm phase II trial with a small number of patients and was terminated prematurely. Therefore, definitive conclusions on both safety and efficacy cannot be drawn from the results of this study alone. In the future, further knowledge should be accumulated by conducting larger studies with observational and retrospective designs, rather than prospective studies alone.

A univariate logistic regression analysis was performed to verify the risk of pneumonitis. 1 Regarding the radiological findings of pre-existing interstitial pneumonia as judged by the central review committee, 8 patients had an indeterminate usual interstitial pneumonia pattern. 2 Seven patients had honeycomb lung on high-resolution computed tomography. 3 TP53 mutations were detected in 6 patients and 11 were negative or unmeasured.
A univariate Cox regression analysis was performed to verify whether tumor mutation burden, detected genetic mutations, and pathway abnormalities could be predictive biomarkers of efficacy. Hazard ratios were calculated using patients with a tumor mutation burden < 10 mutations per megabase and patients with no or unknown genetic mutations or pathway abnormalities as the control group, respectively. For tumor mutation burden, 4 patients had ≥10 mutations per megabase. TP53 mutation was detected in 6 patients, KRAS mutation in 3 patients, abnormalities of the RAS/RAF/MAPK signaling pathway (including KRAS mutation) in 4 patients, and abnormalities of the PI3K-AKT signaling pathway in 3 patients. All cases were classified as microsatellite stable, and no cases were classified as MSI-high or MSI-low. | 2022-06-28T06:18:06.093Z | 2022-06-27T00:00:00.000 | {
"year": 2022,
"sha1": "a73e4e87d2712c51925c86f7cf2306555f054faf",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/oncolo/advance-article-pdf/doi/10.1093/oncolo/oyac118/44266520/oyac118.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fdda01aba7140b95296b994a9e7a277ba94f9c57",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15695328 | pes2o/s2orc | v3-fos-license | Long tails on thermonuclear X-ray bursts from neutron stars: a signature of inward heating?
We report the discovery of one-hour long tails on the few-minutes long X-ray bursts from the `clocked burster' GS 1826-24. We propose that the tails are due to enduring thermal radiation from the neutron star envelope. The enduring emission can be explained by cooling of deeper NS layers which were heated up through inward conduction of heat produced in the thermonuclear shell flash responsible for the burst. Similar, though somewhat shorter, tails are seen in bursts from EXO 0748-676 and 4U 1728-34. Only a small amount of cooling is detected in all these tails. This is either due to compton up scattering of the tail photons or, more likely, to a NS that is already fairly hot due to other, stable, nuclear processes.
Introduction
Type-I X-ray bursts, or X-ray bursts in short, result from thermonuclear shell flashes of hydrogen and helium on neutron stars (NSs). The fuel is accreted from a Roche-lobe-filling companion star. As the accretion of this material progresses, the pressure at the bottom of the accreted layer rises to ignition conditions for thermonuclear fusion processes like the (hot, β-decay limited) CNO cycle, triple-α process, α-proton capture and the (β-decay limited) rapid-proton capture (rp) process. In general, the fuel is burnt within 1 s in the top few meters of the NS and temperatures momentarily reach values as high as a few GK. The layer is covered by a non-burning layer, on top of which is the photosphere. What one measures is the cooling flux passing through the photosphere, with temperatures peaking at about 30 MK. The burst duration is primarily determined by the time it takes to cool the shell. For H-rich flashes, the duration is further lengthened due to prolonged nuclear burning through the rp process. The burst duration may range from a few seconds to a few hundred seconds for the large majority of X-ray bursts. Exceptional durations come from very thick helium layers on relatively cold NSs in hydrogen-poor ultracompact X-ray binaries (i.e., up to thousands of seconds; e.g., in 't Zand et al. 2005; Cumming et al. 2006; in 't Zand et al. 2007) and flashes of very thick carbon shells ('superbursts'; e.g., Cornelisse et al. 2000; Cumming & Bildsten 2001; Strohmayer & Brown 2002). For reviews on X-ray bursts and further references, we refer to Lewin et al. (1993), Bildsten (1998) and Strohmayer & Bildsten (2006).
Sometimes very long tails are seen in X-ray bursts that are not related to the aforementioned long bursts. For instance, Chenevez et al. (2006) discuss a peculiar burst from GX 3+1.
Since the advent of X-ray astronomy, about 100 bursts have been detected from this source (e.g., den Hartog et al. 2003) and all are shorter than a few tens of seconds except for this peculiar burst. It has a prolonged tail that starts at about 25% of the ordinary burst peak flux and decays with an e-folding decay time of 1110 s (for photons between 3 and 6 keV). Chenevez et al. (2006) hypothesize that the tail is due to rp capture of a rich hydrogen mixture that became available after the accretion rate dropped below the threshold where hydrogen is burnt in a stable manner (GX 3+1 was in a 10-yr minimum at about the time of the burst). However, the time scales of the slowest β decays expected in X-ray bursts are at least one order of magnitude shorter than 1110 s (e.g., Fisker et al. 2008). A few similar cases (i.e., from systems that are clearly not ultracompact X-ray binaries) are described in the literature, most notably in Czerny et al. (1987) and Gotthelf & Kulkarni (1997). There is a clear duality in the time profile of these bursts: they start with an ordinary short-lived burst, followed by a 10^2-10^3 s long tail without an unambiguous cooling signature. The long tail starts off quite brightly in these bursts, at a few tens of percents of the burst peak flux. Since the nature of these long tails is not well established, it is worthwhile exploring whether there are long tails that start off at a lower fraction of the burst peak (i.e., percents instead of tens of percents).
Burst studies usually concentrate on the brightest parts, roughly above 1% of the peak flux (which is often close to the Eddington limit, see Galloway et al. 2008), and for a good reason. Most bursts come from prolific bursters with mass accretion rates above the same 1% level. Since accretion is notoriously variable, this makes disentangling burst radiation from accretion radiation difficult at fluxes below 1% of the Eddington limit.
Exceptions are bursts from NSs that accrete at rates below 1% of the Eddington limit. This pertains to most persistently accreting ultracompact X-ray binaries (UCXBs; a nice example of a burst that could be studied with Swift to very deep levels originates in A 1246-588; in 't Zand et al.) and bursts from transients whose accretion rate has dwindled down to low but nonzero values (e.g., in 't Zand et al.). These exceptions show tails that are natural extensions of the decays of the bright parts of the same X-ray bursts. In other words: there is no prompt/tail duality. Despite the fact that it is difficult to study bursts at sub-1% levels in fast-accreting bursters, the situation is sometimes not desperate. This paper presents a study of the unique burster GS 1826-24. The accretion is very stable in this source. The variability on a time scale of the burst recurrence time (few hours) is about 2% rms, so that the recurrence time from burst to burst is relatively stable, as are the burst peak flux and profile (Ubertini et al. 1999; Cocchi et al. 2001; Galloway et al. 2004; Heger et al. 2007). This behavior earned it nicknames such as 'the clocked burster' (Ubertini et al. 1999) and 'the textbook burster' (Bildsten 2000). GS 1826-24 is also notable for a high-energy component of the burst emission (i.e., above 30 keV where negligible amounts of black body emission are expected; in 't Zand et al. 1999). Because of its stable bursting behavior and accretion rate, GS 1826-24 is excellently suited for low-level burst flux studies. The most recent distance determination is 6.07±0.18 kpc (for isotropic burst radiation; Heger et al. 2007).
The existence of long tails in GS 1826-24 was already implied in in 't Zand et al. (1999) and Thompson et al. (2005). The latter work concentrated on the persistent spectrum as measured with Chandra and RXTE. This emission could be successfully modeled by a combination of two comptonization components (see also Thompson et al. 2008) due to the presence of hot plasmas in the immediate neighborhood of the NS. Bursts were studied as well, and their spectra could be modeled by a black body component plus the variation of one of the persistent comptonization components. This is in line with the high-energy burst emission seen by in 't Zand et al. (1999) and in contrast with the so-called 'standard' modeling of burst spectra where the burst emission is modeled solely by one black body component and the persistent spectrum is unaffected by the burst emission. A simple physical interpretation of the change of one comptonization component would be that a hot plasma up-scatters some of the burst thermal photons to higher energies and itself is cooled down by the soft photons. This model was applied to 1000 s of burst data in Thompson et al. (2005). The final 850 s of the data were modeled by comptonization only.
In this paper we make a study of the long tail in GS 1826-24, employing all available RXTE data and cross-checking with XMM-Newton data. Thus, we obtain a superior statistical quality and are able to probe the tail for longer than Thompson et al. (2005). We supplement this analysis with briefer investigations of two other prolific bursters, EXO 0748-676 and 4U 1728-34. We propose an explanation for the tail.
Observations
We use data from the Proportional Counter Array (PCA) on the Rossi X-ray Timing Explorer (RXTE) and the European Photon Imaging p-n junction camera (EPIC-pn) on the XMM-Newton observatory. These data sets were chosen because they are complementary in photon energy bandpass (2-60 and 0.1-12 keV, respectively) with similar sensitivities and because they detected more than 10 X-ray bursts each.
The PCA comprises 5 identical proportional counter units (PCUs) with a total net collecting detector area of about 6500 cm^2.
Table 1 (caption). Selection of X-ray bursts from GS 1826-24 studied here. For more details, see Galloway et al. (2008).
RXTE is particularly suited for the study of low-mass X-ray binaries, and since its 1995 launch has accumulated unprecedented exposure times on many X-ray bursters, one of them being GS 1826-24. Galloway et al. (2008) compiled a catalog of all bursts detected between Feb 8, 1996, and June 3, 2007. The total PCA exposure time on GS 1826-24 over 127 observations in this time frame is 929 ks. 65 bursts were detected between Nov 5, 1997, and March 10, 2007, and the mean burst rate was 0.25 hr^-1. None of the bursts exhibit evidence for photospheric radius expansion, so the peak luminosity must have been below the Eddington limit. The bolometric absorption-corrected peak flux ranged between (23.5 ± 0.8) and (29.6 ± 0.8) × 10^-9 erg cm^-2 s^-1 and the fluence between (0.923 ± 0.016) and (1.059 ± 0.004) × 10^-6 erg cm^-2. The decay can be modeled by two subsequent exponential decays. The e-folding decay times ranged between 12.9 and 23.3 s for the first decay and between 40.5 and 57.2 s for the second. GS 1826-24 was also observed twice with XMM-Newton; a complete account of these observations is provided in Kong et al. (2007). Nine X-ray bursts were detected in the first and seven in the second observation. The final bursts in each observation suffered from high background levels and were excluded from the analysis. All X-ray detectors were on, but we concentrate on the 0.1-12 keV EPIC-pn measurements because that instrument (Strüder et al. 2001) is by far the most sensitive for our analysis. It has an effective area that ranges between 1000 and 3000 cm^2 for 0.5-2 keV and is about 900 cm^2 at 2-6 keV. The spectral resolution is 2.5% (FWHM) at 6 keV. The instrument was switched to Fast Timing mode, implying that the central CCD (of the 12 available), encompassing 13.6 × 4.4 arcmin of the field of view, is read out one-dimensionally (along the 4.4 arcmin side) at a 1.5 ms resolution. Source photons were extracted between RAWX values of 30 and 45, with pixel patterns below 5 and grade 0; background photons were extracted between RAWX = 10 and 25. We refer to Kong et al. (2007) for further details, noting that SAS version 7.1.0 was employed for our data analysis.
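As a quick sanity check on these numbers, the mean burst rate follows directly from the burst count and the total exposure. A minimal Python sketch, using only the values quoted above:

# Mean burst rate of GS 1826-24 in the RXTE/PCA data set (values from the text).
n_bursts = 65                                 # bursts detected by the PCA
exposure_hr = 929e3 / 3600.0                  # 929 ks of exposure, in hours (~258 hr)
rate = n_bursts / exposure_hr                 # bursts per hour
print(f"mean burst rate = {rate:.2f} hr^-1")  # ~0.25 hr^-1, as quoted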
Data analysis
We selected 29 out of the 65 PCA-detected bursts that have data available between 4000 and 1000 s before the burst start and between 400 and 3000 s afterwards, see Table 1. The pre-burst interval was based on the general trend that the flux was lowest there and should provide the best estimate of the accretion flux. We processed the data of the 29 bursts to an average light curve as follows. Taking the 'standard-products' light curves as starting point (these are data from PCU2 in 5 energy bands, corrected for particle-induced background and with a resolution of 16 s), we determined the pre-burst flux level from data between 4000 and 1000 s prior to the burst (note that data gaps are common in this 3000 s time frame), subtracted that from all flux measurements, co-aligned the data at the start time of the burst (defined as the time when the photon flux exceeds 25% of the peak flux; Galloway et al. 2008) and averaged all bursts. Figure 1 shows the resulting profiles in 2 to 4 and 4 to 9 keV, normalized to the peak flux. Apart from the well-known burst profile with a duration of about 400 s, it shows the striking appearance of an additional burst component that lasts approximately ten times longer, at flux levels between 10^-3 and 10^-2 times the burst peak value. A comparison with higher time-resolution data shows that the averaging of the peak in 16 s time bins lowers the peak flux by approximately a factor of 2. While the initial decay of the profile shows the classical cooling of the bursts (the high-energy flux decaying faster than the low-energy flux), the slow decay does not show obvious cooling. The data can be described satisfactorily by an exponential decay function between 600 and 5500 s. For the 2-9 keV time profile the fit is shown in Fig. 2; the decay time is 1252 ± 25 s (χ^2/ν = 16.9/13). Resolved in the two bands, the e-folding decay times are 1261 ± 29 s in 2-4 keV (χ^2/ν = 16.3/10) and 1381 ± 29 s in 4-9 keV (as measured between 600 and 3500 s after burst onset for lack of statistics beyond 3500 s; χ^2/ν = 53.4/10). Since cooling by thermal/photon diffusion is expected to follow a power law (Eichler & Cheng 1989a), we also fitted such a law and find, between 300 and 2300 s, a power-law index of −0.93 ± 0.02.
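To make the tail-fitting step concrete, the sketch below shows how such e-folding decay times can be derived with scipy; it is an illustrative outline only, and the file name and column layout of the averaged, persistent-subtracted profile are placeholders rather than the actual data products.

import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, a, tau):
    # exponential tail model: flux = a * exp(-t/tau)
    return a * np.exp(-t / tau)

def power_law(t, a, index):
    # power-law alternative expected for cooling by thermal/photon diffusion
    return a * t**index

# placeholder file: time since burst onset [s] and peak-normalized net flux
t, flux = np.loadtxt("gs1826_avg_profile_2to9keV.txt", unpack=True)

tail = (t > 600.0) & (t < 5500.0)             # tail interval used in the text
p_exp, c_exp = curve_fit(exp_decay, t[tail], flux[tail], p0=(1e-2, 1000.0))
print("tau = %.0f +/- %.0f s" % (p_exp[1], np.sqrt(c_exp[1, 1])))

pl = (t > 300.0) & (t < 2300.0)               # interval used for the power-law fit
p_pl, c_pl = curve_fit(power_law, t[pl], flux[pl], p0=(1.0, -0.9))
print("power-law index = %.2f" % p_pl[1])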
We sought verification of the long tail in data from the XMM-Newton observations. Figure 3 shows the 2-9 keV light curve, averaged over 14 out of the 16 X-ray bursts (leaving out the last burst of each observation, which was compromised by increased background levels); the apparent peak in this figure is affected by the different time resolution at the peak and by the occurrence of saturation effects in the XMM-Newton data. The same long tail is seen as with RXTE. The e-folding decay time is τ = 1006 ± 26 s between 600 and 2400 s, which is similar to that for the RXTE curve; the smooth exponential function shown in Fig. 3, fitted between 600 and 3000 s, yields the same decay time, although its goodness-of-fit is formally unacceptable (χ^2/ν = 179/22), so this result is only indicative. A tail is also seen in data below 2 keV, but the statistics are not good enough to reveal it beyond 10^3 s. The e-folding decay time in this low-energy bandpass is 524 ± 172 s between 400 and 1000 s, which at least shows that it is not longer than for the 2-9 keV band.
Fig. 5 (caption). Modeling results for time-resolved 0.7-10 keV burst spectra of the 14 bursts detected in the XMM-Newton observations. Spectral channels were binned so that each bin contains at least 15 photons. The number of degrees of freedom ranges between roughly 500 and 2000 (for 2 spectra, one for each observation), which explains the small scatter in χ^2_ν. The anomalous values at 20 s are due to data dropouts caused by full data buffers at high photon rates (see Kong et al. 2007).
We performed time-resolved spectroscopy. Spectra were accumulated for 17 bursts that have small off-axis angles in the PCA and have identical detector-voltage settings (i.e., they are said to be in the same 'gain epoch'; see specification in Table 1). Only PCU2 data were employed to obtain a homogeneous data set. The time resolution of the spectral extraction was chosen to vary between 1 s early on in the burst and 200 s at the end. Each burst was divided into 72 time bins that are identical with respect to the burst start times. Spectra were extracted from event mode data with the software tool seextrct for these time intervals and corrected for the particle-induced background as determined with pcabackest in the same time interval. For each burst, a pre-burst spectrum was generated from data available between 4000 and 1000 s prior to the burst, which was also corrected for particle-induced background. Subsequently the spectra of all 17 bursts were averaged in their respective time frames, and the average pre-burst spectrum was subtracted from the 72 burst spectra. These spectra were, between 4 and 20 keV, modeled with a simple absorbed black body function, employing a fixed absorption column of N_H = 3.1 × 10^21 cm^-2 (see the XMM-Newton analysis below). The photo-absorption cross sections were taken from Bałucinska-Church & McCammon (1992) and the composition of the absorbing material from Wilms et al. (2000). About half of the spectra (in the brightest phase) turn out not to be consistent with this model, and an additional power-law component with a fixed index (equal to that of the pre-burst data) was included in the model. The fitted values for the various black body parameters are shown in Fig. 4. All time intervals are well fitted. During the first 100 s the black body parameters are as expected for an X-ray burst: a temperature peaking at an equivalent of 2.3 keV and gradually decreasing after that, and an emission area that remains approximately constant after the rise phase. However, after 100 s the picture changes: the inferred emission area decreases sharply, by at least an order of magnitude. The bolometric unabsorbed fluence in the 300-1500 s time frame is 3.4 × 10^-8 erg cm^-2. This is about 3% of the fluence in the prompt burst. A similar time-resolved spectral analysis was performed on the XMM-Newton data from the 14 low-background bursts, except that this involves an analysis of 0.7-10 keV photons and channels were binned to make sure that the number of photons per bin was in excess of 15 to ensure applicability of the χ^2 statistic (such a procedure was not necessary for the RXTE data). Prior to the burst spectral modeling we modeled the pre-burst data to find N_H and the photon index to apply to the burst data. These are N_H = 3.1 × 10^21 cm^-2 and Γ = 1.47 (χ^2_ν = 1.001 for ν = 2923 over two spectra, for a systematic uncertainty per bin of 2%). The results of the burst spectra modeling (Fig. 5) are generally consistent with the RXTE results, except at the start and at the end of the tail (beyond about 1000 s after burst onset), where a comparison becomes difficult because of statistical issues. This shows that the drop in radius at 100 s as seen with RXTE is not related to the lack of low-energy coverage of that instrument, since it is also seen with XMM-Newton, for which the bandpass is extended with the 0.7-4.0 keV photon energy range. We studied the timing properties of the tail in comparison to those of the pre-burst data.
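For readers who want to reproduce the flux and radius bookkeeping, the conversion from a fitted black body temperature and an XSPEC-style 'bbodyrad' normalization (norm = R_km^2/d_10kpc^2) to a bolometric flux and an apparent radius is sketched below; the numerical normalization values are hypothetical examples, and the 6.07 kpc distance is the value quoted earlier.

import numpy as np

def bb_flux_and_radius(kT_keV, norm, d_kpc=6.07):
    """Bolometric flux (erg/cm^2/s) and apparent radius (km) from a black body fit.

    Follows from the Stefan-Boltzmann law with the bbodyrad convention
    norm = (R_km / d_10kpc)**2, which gives F_bol ~ 1.08e-11 * norm * kT^4.
    """
    flux_bol = 1.08e-11 * norm * kT_keV**4
    radius_km = np.sqrt(norm) * d_kpc / 10.0
    return flux_bol, radius_km

# hypothetical example values: burst peak (~2.3 keV) and tail (~0.9 keV)
for kT, norm in [(2.3, 100.0), (0.9, 10.0)]:
    f, r = bb_flux_and_radius(kT, norm)
    print(f"kT = {kT} keV, norm = {norm}: F_bol = {f:.2e} erg/cm2/s, R = {r:.1f} km")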
Fourier power density spectra were generated of data in the pre-burst -4000/-1000 s time frame and in the tail +1000/+3000 s time frame. Event mode data were employed at a time resolution of 2^-11 s (roughly 0.5 ms), as long as they were available (see Table 1). Power spectra were made for 1968 8-s data stretches of pre-burst data and for 2574 8-s data stretches of tail data, and averaged in each case. The tool powspec from the FTOOLS package (v. 6.5) was employed for this purpose. The relative difference between these two average power spectra is presented in Fig. 6. There is no significant difference between the two spectra, indicating that nothing significantly changed in the accretion stream between these periods.
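A minimal Python sketch of this timing analysis is given below: it computes Leahy-normalized power spectra of consecutive 8-s stretches at 2^-11 s resolution and averages them, which is functionally what powspec does here; the event-time arrays are placeholders for the extracted pre-burst and tail event lists.

import numpy as np

DT = 2.0**-11                     # time resolution, s
SEG = 8.0                         # segment length, s
NBIN = int(round(SEG / DT))       # 16384 bins per 8-s segment

def averaged_power_spectrum(event_times):
    """Average Leahy-normalized power spectrum over all full 8-s segments."""
    powers = []
    start = event_times.min()
    while start + SEG <= event_times.max():
        counts, _ = np.histogram(event_times, bins=NBIN, range=(start, start + SEG))
        nph = counts.sum()
        if nph > 0:
            ft = np.fft.rfft(counts)
            powers.append(2.0 * np.abs(ft)**2 / nph)   # Leahy normalization
        start += SEG
    return np.mean(powers, axis=0)

# 'pre' and 'tail' would be arrays of photon arrival times for the -4000/-1000 s
# and +1000/+3000 s windows relative to burst onset (placeholders here):
# rel_diff = (averaged_power_spectrum(tail) - averaged_power_spectrum(pre)) \
#            / averaged_power_spectrum(pre)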
Other bursters
We checked the average burst profiles of other known prolific bursters in the PCA data archive, employing the RXTE burst catalog. Most of these have variable accretion fluxes and are, therefore, difficult to analyze for the reasons discussed in the introduction. Nevertheless, they sometimes indicate long tails, although never as long as in GS 1826-24. In this section we briefly report two cases.
4.1. 4U 1728-34 = GX 354-0
4U 1728-34 was observed 346 times for a total PCA exposure time of 1.94 Ms and a total of 106 bursts were detected; a large portion of these, 69, show photospheric radius expansion. All bursts are short and have time scales between 4.4 and 8.7 s. 40 bursts have good coverage and occur during sufficiently quiet accretion flux variations to allow a search for a tail. It turns out that the average profile of these bursts extends to about 800 s, which is 10^2 times longer than the initial burst phase. The exponential decay times are 477±80 s for 2-4 keV, 314±26 s for 4-9 keV (between 300 and 800 s) and 306±12 s for 2-9 keV. A power-law fit to the tail in the latter bandpass between 100 and 800 s yields a decay index of 1.17 ± 0.02.
We performed time-resolved spectroscopy in a similar manner as for GS 1826-24 (i.e., the persistent flux as determined from pre-burst data was subtracted prior to the spectral modeling). There was no need to include a power law. 36 bursts were selected for this procedure and only PCU2 data were employed. Since there is no majority within one RXTE PCA gain epoch, we employed bursts from epochs 3a, 3b, 4 and 5, averaged them per gain epoch, modeled them per gain epoch and averaged the fitted parameters over the epochs per time bin. A warning is appropriate here: the bursts in 4U 1728-34 vary more than in GS 1826-24; averaging them will smooth out short features. The results yield the profiles in Fig. 7 (χ^2_ν was averaged as well; although that number does not adhere to χ^2 statistical properties, it does give a sense of the overall quality of the fit per bin). The bolometric flux can be followed downward over 3 decades. It has a smoother evolution than for GS 1826-24, possibly because the Eddington limit is reached for most bursts so that the flux flattens against a ceiling representing that limit. Also noticeable is the drop in radius beyond 10 s, a similar effect to that seen in GS 1826-24 beyond 100 s. The fluence of the tail is 9% of that of the prompt emission, taking 100 s as the boundary between prompt burst and burst tail. We scale the boundary with the decay time τ_2 of the latter part of the prompt burst (see Table 2, as read from Galloway et al. 2008).
EXO 0748-676
EXO 0748-676 is well known for its nearly edge-on accretion-disk line of sight, which causes eclipses and dips every 3.8 hr orbital period (Parmar et al. 1986), and also for exhibiting very short burst recurrence times in certain accretion rate regimes. Boirin et al. (2007) discovered with XMM-Newton that there are times when this source exhibits 3 bursts in a row within only half an hour. The first burst in such a 'triplet' always shows a longer tail than the subsequent bursts. In the average burst profiles the e-folding decay time is about 2.5 times longer for the first bursts than for the subsequent bursts (50-55 s versus 14-19 s).
RXTE observed EXO 0748-676 ninety-four times for a total exposure of 1.39 Ms. Eighty-four bursts were detected. However, many of these observations were concentrated on catching eclipses and, thus, involve only small time stretches, rendering burst tail studies impossible. Also, the bursts are rather weak, so that multiple PCUs are needed to perform meaningful analyses. Rather, we employ XMM-Newton data to determine average burst profiles. EXO 0748-676 is the burster that was most intensively observed with XMM-Newton. In the 158 hr of exposure investigated by Boirin et al. (2007) there are 33 singlet bursts detected, next to 14 doublets and 5 triplets. We determined average time profiles of 31 singlets that are not affected by eclipses or dips within 2000 s, in the same bandpasses as for the RXTE data for GS 1826-24 and 4U 1728-34. These also show a long tail. The exponential decay times are 266 ± 122 s and 260 ± 120 s for 2-4 and 4-9 keV, respectively. Again no clear cooling is observed in this tail. The equivalent power-law decay index is 2.1 ± 0.8 for 2-4 keV and 1.6 ± 0.8 for 4-9 keV. The RXTE data do show at least one interesting burst. It is a burst triplet that occurred on September 12, 2006 (onset of first burst at 16:30:36 UT). The light curve, as extracted from standard-2 data in ObsID 92013-01-03-000, is presented in Fig. 8. Standard-2 data from PCUs 0, 2 and 3 were added within 2.0-7.3 keV and the time resolution of these data is 16 s. The results of time-resolved spectroscopy are shown in Fig. 9. The unabsorbed bolometric fluence ratio over the 3 bursts is 5 : 1 : 1/10 (for the first burst accumulating fluence over only the first 300 s, so excluding the long tail). The light curve clearly shows a long tail to the first burst. The e-folding decay time is 300 ± 42 s (measured between 400 and 1200 s after burst onset and excluding the prompt flux of the 2nd burst). This is similar to what is found in the average XMM-Newton burst profile. The fluence in the tail, excluding the three bursts, is 4.5 × 10^-9 erg cm^-2, which is 1.2 ± 0.1% of the prompt emission from the first burst (taking 300 s as the boundary between both). It is interesting to note that, in the bolometric flux, there is no clear boundary between the long tail in this triple burst and the prompt burst, in the sense that there is no excess of the early emission above the backward extrapolation of the tail. This is in contrast to the situation for GS 1826-24 (cf., Fig. 4, top panel). The same applies to 4U 1728-34 (Fig. 7).
What is most remarkable about this long tail is that it is seemingly unaffected by the occurrence of the second burst. The tail progresses undisturbed along the same decay curve. This suggests that the first burst and its tail emission have a different origin in the NS envelope than the second burst, either a different layer or a different locality on the surface. However, this is not testable: the fluence of the second burst is five times smaller than that of the first. If the tail were proportionally smaller, its statistical significance would have dropped below the detection threshold.
Fig. 9 (caption). Time-resolved spectroscopy of the triple burst from EXO 0748-676. A pre-burst spectrum was subtracted before fitting the spectra with an absorbed black body model with N_H = 8 × 10^21 cm^-2. For data between 60 and 1200 s, we fixed kT for the tail emission to 1.4 keV, equal to the average value if kT is left free for the relevant time intervals.
Discussion
Our measurements are summarized in Table 2. We find that X-ray bursts from GS 1826-24 show a long tail. In other words, they exhibit a dual time profile with a prompt burst phase lasting a few hundred seconds and a tail phase with an e-folding decay time of 10^3 s, a flux level of less than ≈1% of the peak flux and a fluence that is 3% of that of the prompt burst. Beyond the first 200 s the spectrum can be modeled by a black body with a very slowly decreasing temperature of ≈0.9 keV and a strongly decreasing emission area. We find similar tails in RXTE data of 4U 1728-34 and in RXTE and XMM-Newton data of EXO 0748-676, although the contrast between prompt and tail phase is less pronounced in those cases. Generally the detection of tails is difficult because of confusion with variable accretion radiation. Still, bright pronounced tails have been reported in the literature for a few individual bursts. The two most obvious questions about the long tails are: what is the physical cause, and why is hardly any cooling detected? We investigate the first question in Sect. 5.1, keeping in mind only GS 1826-24. In Sect. 5.2 we touch on the second question, considering 4U 1728-34 and EXO 0748-676 as well. In Sect. 5.3 we compare GS 1826-24 with the other cases listed in Table 2.
Fig. 10 (caption). Average observed (red histogram) and predicted (solid) time profiles of bolometric flux, after normalization at the respective peak values and alignment at the peak. The vertical dashed lines indicate the times for the depth profiles plotted in Fig. 11. The inset shows the model on a broader and linear time scale, exhibiting the slow luminosity increase between bursts from burning of increasing amounts of accreted hydrogen in the hot CNO cycle.
Origin of the long tail
What is the origin of the 10^3 s time scale of the long tail? One idea stems from the detection of a hot plasma surrounding the NS (Thompson et al. 2005), namely that prompt burst photons are trapped in the hot plasma through scattering and that the 10^3 s time scale is the time it takes to drain the plasma of those photons. The maximum size of the plasma in a 2.1 hr orbit (Homer et al. 1998) is of order 10^11 cm. For an optical depth of 6 (Thompson et al. 2005), this implies a drainage time of at most 10-10^2 s. This is one to two orders of magnitude shorter than observed. Therefore, if 2.1 hr is indeed the orbital period (this needs to be verified; e.g., Mescheryakov et al. 2004), the 10^3 s time scale cannot be explained by this idea.
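This order-of-magnitude argument can be written out explicitly with a simple random-walk escape-time estimate, t ≈ (1+τ) R/c; the prefactor depends on geometry, so the sketch below is only indicative and uses just the numbers quoted in the text.

C = 3.0e10        # speed of light, cm/s
R = 1.0e11        # maximum plasma size for a 2.1 hr orbit, cm
tau = 6.0         # optical depth of the hot plasma (Thompson et al. 2005)

t_drain = (1.0 + tau) * R / C               # photon drainage time, s
print(f"drainage time ~ {t_drain:.0f} s")   # ~20 s, i.e. within the quoted 10-10^2 s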
Another idea is that the tail is due to cooling of layers that are deeper than the flash layer. In a thermonuclear shell flash, heat is transported upward (to be radiated by the photosphere) as well as downward, into deeper layers. These deeper layers consist of the ashes of earlier nuclear H and He burning and are rich in elements up to a mass number between 60 and 100 (Schatz et al. 2001; Fisker et al. 2008). Radiative cooling of these deeper layers can explain the long tail. The time scale of the long tail points to a column depth that is 10 to 30 times larger than that of the burning layer. The amount of fluence in the tail is one to two orders of magnitude smaller than in the prompt burst, suggesting less heating downward than upward.
Notes to Table 2: (a) Galloway et al. (2008); (b) references: 1 - Chenevez et al. (2006), 2 - Gotthelf & Kulkarni (1997), 3 - Czerny et al. (1987), 4 - Swank et al. (1977); (c) the time scale is defined as the burst fluence divided by the peak flux; (d) τ_2 is the e-folding decay time of the final part of the prompt burst; (e) the time range in parentheses refers to the interval in which the fit was carried out; (f) the two numbers refer to division by the mean time scale and τ_2, respectively; (g) fluences for GS 1826-24 and 4U 1728-34 were determined from data averaged over multiple bursts, that for EXO 0748-676 from the RXTE triple burst; the time between parentheses refers to the interval for the prompt emission; (h) this value applies to the triple burst only and excludes times for the second burst; (i) based on counts of 0.8-12 keV photons in the ASCA-GIS (table 1 in Gotthelf & Kulkarni 1997).
The idea that the long tail results from cooling of deeper layers is corroborated by model calculations. Heger et al. (2007) calculated various sequences of flashes specifically for GS 1826-24 with different mass accretion rates and metallicities. Their model 'A3' is the one whose recurrence time of 3.85 hr best matches the majority of our bursts. This model assumes a mass accretion rate of 1.58 × 10^-9 M_⊙ yr^-1 (or 1 × 10^17 g s^-1) and a metallicity of Z = 0.02. The long-term time profile of the radiated luminosity is very similar to Fig. 27 of Woosley et al. (2004). Ignoring the period before the first burst and after the accretion turn-on, the interburst time profile is characterized by a gradual decline until about 3000 s after the burst onset followed by a gradual increase for about 10^4 s until the next burst. The decline is due to cooling of the deeper layers; the increase is due to hot CNO burning of newly accreted hydrogen in the burning zone. In Fig. 10 we plot the observed bolometric flux (see also Fig. 4) and the average luminosity profile of 29 bursts from the 'A3' model by Heger et al. (2007). The fluxes and luminosities were normalized to the peak value, and the light curves were aligned at the peak. Times in the model were corrected for general relativistic effects (through a multiplication with 1+z = 1.26; Woosley et al. 2004). The time profiles are an excellent match, all the way to the long tail. Figure 11 shows for this model the evolution of the depth profiles of temperature, net outward luminosity and nuclear energy generation rate per unit mass. Figure 12 presents a more detailed view of the latter panel, showing the dynamic depth profile of the specific nuclear energy generation rate, annotated with the various nuclear processes playing a dominant role at the various locations. Most of the nuclear burning occurs at column depths y < 2 × 10^8 g cm^-2, but layers are heated that are 10 times deeper (see the first panel of Fig. 11). The energetically less important nuclear burning below 2 × 10^8 g cm^-2 ('heated (α, γ)', or heat-induced α capture by heavier isotopes) is actually the result of conductive heating from the shallower layers.
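Two small bookkeeping steps in this comparison are easily verified: the conversion of the model accretion rate to g s^-1, and the stretching of model times by 1+z = 1.26. A short sketch (the solar mass and year length are standard constants, not values taken from the paper):

M_SUN = 1.989e33          # solar mass, g
YEAR = 3.156e7            # year, s

mdot = 1.58e-9 * M_SUN / YEAR               # model 'A3' accretion rate in g/s
print(f"accretion rate = {mdot:.2e} g/s")   # ~1.0e17 g/s, as quoted

one_plus_z = 1.26          # gravitational redshift factor (Woosley et al. 2004)
t_model = 3000.0           # example model time in s (hypothetical value)
print(f"{t_model:.0f} s in the model appears as {t_model * one_plus_z:.0f} s to a distant observer")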
The inward heating is due to conduction. To first order, the heat transport scales with A/Z^2 (Yakovlev & Urpin 1980; Bildsten & Cutler 1995; Cumming & Bildsten 2001), where Z is the nuclear charge and A the nuclear mass number. Therefore, one may expect less inward heating for a layer with heavier isotopes. The composition depends, in turn, on the composition of the donor atmosphere and on the accretion rate. Thus, one may expect more inward heating in, for instance, ultracompact X-ray binaries, or for accretion rates and burst regimes with high α in which most hydrogen is burnt through the hot CNO cycle instead of the rp process, because the rp process produces the heaviest elements. Unfortunately, the A/Z^2 proportionality of the heat transport is only a crude approximation, so that inferences are not straightforward, for instance by comparing different bursters or bursts from the same source at different accretion rates.
There are other dependencies of the conductivity as well, such as on ignition depth which may vary from source to source and burst to burst. This will not only have an effect on the duration of the tail, but also on the fluence ratio between the tail and the prompt burst. For a relatively shallow ignition, the inward heating will not go as deep and the tail will be short and less fluent. This may explain qualitatively why only the first burst in the triplet from EXO 0748-676 has a long tail that is unaffected by subsequent bursts. The subsequent bursts ignite at shallower depths.
Fig. 11 (caption, partial). Depth profiles for the 'A3' model (Heger et al. 2007). Seven profiles are shown. The black curve is for t = 0 (burst onset); the other curves are for the times indicated with vertical lines in Fig. 10 with the same color. The top panel shows the inward heating (compare with the location of the heat source in the bottom panel, down to only 9 × 10^8 g cm^-2) that is thought to be responsible for the long tails on the X-ray bursts of GS 1826-24. See also Fig. 12.
As mentioned above, the 'A3' model predicts a trend in the NS luminosity between bursts consisting of a gradual decline for 3000 s after a burst followed by a gradual increase for 10^4 s up to the next burst (see Fig. 10). In practice it is difficult to disentangle the NS luminosity from the flux measurements, but it is nevertheless interesting to show the straightforward observed photon rate, see Fig. 13. This is the same kind of measurement as shown in Fig. 2, except that it is shown linearly and between 12,000 s before and after the burst time. To obtain as much data as possible, particularly far away from the bursts when the data coverage is less than close to the bursts, we included 27 additional bursts for a total of 56. This plot does not show a clear increasing trend except for the final 2 × 10^3 s before the next burst. A similar behavior, but of worse statistical quality, is apparent from the XMM-Newton data. One would expect a linearly increasing trend for the hot CNO cycle according to L = 4 × 10^34 (Z_CNO/0.02) Ṁ_17 t_hr erg s^-1, with Z_CNO the CNO abundance, Ṁ_17 the mass accretion rate in units of 10^17 g s^-1 and t_hr the time in hours, because the amount of accumulated fuel grows linearly with time and the hot CNO burning rate is a constant that depends only on the CNO mass fraction (Hoyle & Fowler 1965). The maximum slope consistent with the data shown in Fig. 13 is, for Ṁ_17 = 1 (Heger et al. 2007) and assuming a black body temperature of 0.5 keV, equivalent to an upper limit of Z_CNO < 0.05.
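The expected rise and the abundance limit follow directly from the relation just quoted; the sketch below evaluates it for the model accretion rate and shows the maximum rise slope implied by the quoted limit Z_CNO < 0.05 (nothing beyond the numbers in the text is assumed).

def hot_cno_luminosity(t_hr, z_cno=0.02, mdot_17=1.0):
    """L = 4e34 * (Z_CNO/0.02) * Mdot_17 * t_hr erg/s (Hoyle & Fowler 1965)."""
    return 4.0e34 * (z_cno / 0.02) * mdot_17 * t_hr

# expected hot-CNO luminosity 2 hours after a burst for solar-like Z_CNO:
print(f"{hot_cno_luminosity(2.0):.1e} erg/s")                        # 8e34 erg/s

# the quoted upper limit Z_CNO < 0.05 corresponds to a maximum rise slope of
print(f"{hot_cno_luminosity(1.0, z_cno=0.05):.1e} erg/s per hour")   # 1e35 erg/s/hr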
A third idea for explaining the long tails is that the X-ray bursts influence the accretion disk in such a way that the accretion rate is temporarily increased. A change in accretion rate by X-ray bursts has been seen before. For instance, X-ray bursts from 4U 1820-303 (Strohmayer & Brown 2002) and 4U 1724-307 (Molkov et al. 2000) are so luminous that radiation pressure blows away the inner parts of the accretion disk, shutting off accretion for a few seconds. Secondly, there are suggestions in X-ray bursts from Cen X-4, XTE J1747-214 and 2S 1711-337 that X-ray bursts act as triggers for switching accretion disks from a cold neutral state to a hot ionized state a few days later (although that cannot be explained yet in a quantitative manner; Kuulkers et al. 2008). Lastly, it appears that immediately before and after superbursts the persistent flux from the accretion disk behaves differently, in the sense that the flux has a somewhat decreased level for half a day before and an increased level for approximately a day after superbursts (e.g., Cornelisse et al. 2000, 2002; Kuulkers et al. 2002b; Keek et al. 2008). Perhaps there is a 4th type of effect from X-ray bursts on disks resulting in long tails. However, 'our' long-tailed X-ray bursts are less energetic than the aforementioned bursts that are either super-Eddington (i.e., relatively high flux) or superbursts (high fluence). The X-ray bursts from GS 1826-24 are not particularly luminous (none show photospheric radius expansion) nor extraordinarily fluent (Galloway et al. 2006). The same applies to EXO 0748-676 (only one burst shows photospheric radius expansion; Wolff et al. 2005). The majority of the bursts from 4U 1728-34 do show radius expansion, but have very small fluence due to a very short duration (Galloway et al. 2003). Finally, the power density spectra before the burst and during the tail are not significantly different (cf., Fig. 6), suggesting no change in accretion stream or rate. We believe that an explanation for the long tails in terms of a changed accretion environment is less likely.
Fig. 12 (caption, partial). Dynamic depth profile of the specific nuclear energy generation rate for the 'A3' model (Heger et al. 2007). Each level of shading indicates a change in the rate by one order of magnitude. The bottom of this plot is taken to be the bottom of the reservoir of ashes in the model. The hatched regions indicate convective layers (green hatched for convection and red cross-hatched for semiconvection; see Woosley et al. 2004). The convective region between 10^2 and 10^3 s is located at the bottom of the hydrogen left over after the burst and is probably due to a Rayleigh-Taylor instability related to composition inversion resulting from the burning. It does not have a major influence on the structure and evolution of the model. 'βCNO' refers to a variety of the CNO cycle where the reaction rates are limited by β decays; this is the 'hot' CNO cycle. '(α, γ)' refers to α capture by heavier isotopes such as 12C and 16O. A series of bursts is shown; the next burst occurs at log(time/s) = 4.15.
Lack of strong cooling, decreasing black body area
The lack of cooling could be explained by Compton up-scattering of all photons by the hot plasma that surrounds the NS; the temperature inferred from the spectrum may in fact be representative of the plasma rather than of the NS. A problem with this explanation is that the inferred emission area drops so suddenly in both GS 1826-24 and EXO 0748-676, after 100 and 60 s respectively. The temperature of the burst at that time is still sufficiently high that one should detect large numbers of unscattered photons. A decline of the normalization, if present at all, is expected to be more gradual.
Another, in our opinion more likely, explanation is that the neutron star may already be fairly hot without the heating by thermonuclear flashes. The effective temperature can then never drop below the 'quiescent' NS value. There is sufficient persistent energy production to sustain a NS hot enough to explain our measurements, by stable hydrogen burning via the hot CNO cycle (the model depicted in Fig. 11 predicts about 1 × 10^35 erg s^-1), and by pycnonuclear reactions and electron capture processes in the crust (the same model assumes 1.6 × 10^34 erg s^-1, or 0.15 MeV/nucleon). Gravitational energy release by the settling of the accreted processed matter in the envelope is negligible for these accretion rates (Brown & Bildsten 1998). Gravitational energy release by accretion occurs just outside the NS, is radiated away from the NS (King 1995), and is decoupled from the burst emission as long as the burst flux is less than the Eddington limit. The total energy production rate depends on the mass accretion rate and the H and CNO abundance of the donor atmosphere. Also, it may partly heat up the core instead of the photosphere.
Luminosities are expected to reach up to at least a few times 10^35 erg s^-1. For a canonical 10 km NS radius and ignoring the gravitational redshift of a few tens of percent, the Stefan-Boltzmann law predicts, for a luminosity of 2 × 10^35 erg s^-1, an effective temperature of the non-bursting NS of order 0.4 keV. If a burst occurs, the photosphere temperature rises to a few keV and the emission is completely dominated by this extra heating, but if the temperature becomes comparable to the quiescent value, in the tail of the burst, the spectrum is strongly affected by the already hot NS. In a standard time-resolved spectroscopic analysis, where a pre-burst spectrum is subtracted and the net spectrum is modeled through a black body, one actually subtracts one Planck function from another. The resulting function is not a Planck function, and if it is nevertheless modeled as such, the physical meaning of the inferred emission areas is lost. This effect has been extensively studied by van Paradijs & Lewin (1986). They find 1) that it is particularly important in the tail of an X-ray burst; 2) that the fit results in constant temperatures and decreasing emission areas, as we find in our analyses; and 3) that the derived temperature has a tight relationship with the NS temperature: the measured temperature is about 30% higher than the NS temperature outside bursts. Taken at face value, our measurements imply a NS temperature of 0.7-0.8 keV for both GS 1826-24 and EXO 0748-676, again ignoring gravitational redshift and deviations from black body radiation that become more significant towards lower temperatures (e.g., Zavlin et al. 1996). This is encouragingly close to the simple prediction made above of 0.4 keV. A similar effect, in reversed time order, may be happening during the rise phase of bursts from GS 1826-24. Kuulkers et al. (2002a) investigate this effect in detail for bursts from the high-Ṁ system GX 17+2. van Paradijs & Lewin (1986) suggest studying burst data without subtracting the pre-burst spectrum and employing a model that includes components for the accretion disk emission and a black body for the thermal emission from the NS. Kuulkers et al. (2002a) follow this suggestion and model the accretion disk emission by a cutoff power law. They find unacceptable values for the goodness of fit, dismiss the explanation by an already hot NS, and put forward the possibility that the decreasing black body radius is connected to blanketing effects in the NS atmosphere and comptonization of burst photons in the NS atmosphere. Theoretical calculations (London et al. 1986; Ebisuzaki 1987; Pavlov et al. 1991; Zavlin et al. 1996; Majczyna et al. 2005) show that the color temperature is between 1.2 and 1.7 times the effective temperature. The fitted black body radius thus decreases by the square of that, to maintain the same bolometric flux. This cannot explain the drop in radius that we observe, which is at least a factor of 5.
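The 0.4 keV figure quoted above is simply the Stefan-Boltzmann law applied to a 10 km sphere; a short sketch with standard cgs constants reproduces it.

import numpy as np

SIGMA = 5.6704e-5         # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
K_KEV = 8.617e-8          # Boltzmann constant, keV per K

L = 2.0e35                # assumed quiescent luminosity, erg/s
R = 1.0e6                 # canonical 10 km NS radius, cm

T_eff = (L / (4.0 * np.pi * R**2 * SIGMA))**0.25    # effective temperature, K
print(f"kT_eff ~ {T_eff * K_KEV:.2f} keV")          # ~0.35 keV, i.e. of order 0.4 keV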
As Kuulkers et al. (2002b) point out, the reason for the unsuccessful modeling for GX 17+2 is that the accretion disk spectrum also contains a strong black body component due to the high Ṁ in this LMXB. The disk black body has an only slightly higher temperature than that expected of the NS and therefore the latter is difficult to distinguish. Also, in GX 17+2 the low-energy absorption is high, with N_H = 1.9 × 10^22 cm^-2 (Farinelli et al. 2007), so that it is tough to find evidence for a NS of temperature kT ≈ 0.5 keV. Finally, the flux is expected to be low, of order 0.1% of the burst peak, so that accumulating a statistically relevant spectrum is challenging. Our measurements of GS 1826-24 do not suffer from these difficulties. Perhaps a decreasing radius and a flattening temperature in a 'standard' burst analysis (i.e., with subtraction of the pre-burst spectrum) constitute the best possible evidence for a hot NS.
We note that many X-ray bursts in the RXTE catalog show similar behavior: temperatures remaining above ≈1 keV and decreasing fitted radii. Since an explanation by a NS that is already hot without flashes is more likely to be applicable to a major portion of the burster population than scattering in a hot circumstellar plasma, this provides additional support to that explanation. However, these data are vulnerable to the lack of low-energy coverage of the PCA, as well as to sometimes high absorption columns. As a result, it is quite difficult to accurately measure kT below 1 keV. GS 1826-24 is one of the few cases where this problem does not exist: we have the XMM-Newton data to corroborate the RXTE data and a low N_H.
We furthermore note that the behavior of burst tails in UCXBs is expected to be markedly different in the hot-NS scenario, for two reasons. Firstly, UCXBs have much lower H abundances, so that hot CNO burning provides much less heating outside flashes. Secondly, in many UCXBs the accretion rate is lower, so that again the energy production rate is lower outside bursts and the NS cooler. This expectation is in line with high-sensitivity spectroscopy of some X-ray bursts from UCXBs, for example in A 1246-588 (in 't Zand et al.), where the temperature is seen to decay to 0.5 keV. Figure 14 shows the time profiles of the bolometric flux for 4U 1728-34 and EXO 0748-676, together with that for GS 1826-24. These are the same data as shown in the top panels of Figs. 4, 7 and 9, except that one earlier time bin is shown. For convenience the fluxes have been normalized to the respective peak values. This figure shows that there is one property that distinguishes GS 1826-24 from the other two sources: there is a clear difference between the prompt and the tail phase. The tails of 4U 1728-34 and EXO 0748-676 are smooth extensions of the initial burst phase. This may be related to the energetic contribution of the rp-process being larger in GS 1826-24 (cf., Fig. 12). It is probably not related to a smaller amount of inward heating in 4U 1728-34 and EXO 0748-676. The alternative explanation, deeper ignition depths in 4U 1728-34 and EXO 0748-676, is not consistent with their small burst recurrence times, which are similar to those of GS 1826-24. The other documented cases (Swank et al. 1977; Czerny et al. 1987; Gotthelf & Kulkarni 1997; Chenevez et al. 2006; see Table 2) do seem to have dual time profiles. Their higher tail-to-peak flux ratio indicates that the heating of the deeper layers can be even more efficient than seen in GS 1826-24. Perhaps there is a smaller abundance of heavy isotopes. However, this is difficult to assess quantitatively because a quantitative comparison with theoretical models has yet to be carried out.
Long tails in other sources
An exceptional case is the long tail in GX 3+1 (Chenevez et al. 2006). It contains much more fluence than the prompt burst, perhaps 40 times as much! Aql X-1 (Czerny et al. 1987) and the source in M28 (Gotthelf & Kulkarni 1997) may also have large fluence ratios, but there are data gaps that preclude verification. If interpreted as direct NS emission, this cannot be explained through cooling of deep layers, particularly since the prompt fluence is similar to that of other bursts from GX 3+1 that do not show a long tail. One difference that distinguishes these other cases from the cases discussed in this paper is that they were most probably in a different burst regime when the long-tailed bursts occurred, namely in a regime without continuous hot-CNO hydrogen burning.
Several classes of long X-ray bursts, with e-folding decay times in excess of roughly 100 s, have been discovered in the past decade. This includes superbursts (flashes of 100 m thick carbon-rich layers; for a recent list, see Keek & in 't Zand 2008) and intermediately long bursts that may result from flashes of 10 m thick helium layers (in 't Zand et al. 2005; Cumming et al. 2006; in 't Zand et al. 2007) or otherwise (Chenevez et al. 2007; Linares et al. 2008). With the present paper, one realizes that even the 'classical' short X-ray bursts can be similarly long in some sense. The distinguishing factor is that these bursts initiate with a normal-sized X-ray burst and that, probably, the implied thick layer is not heated locally by nuclear reactions but by conduction from a hotter layer on top.
Conclusion
We have detected an hour-long tail to bursts in GS 1826-24 with fluxes and fluences that are two orders of magnitude smaller than those of the bursts themselves. We have found similar tails in bursts from 4U 1728-34 and EXO 0748-676, although they are less distinguished from the prompt burst emission. While detection in other bursters is hampered by varying accretion fluxes of similar magnitude, there are reports of individual cases of bursts with long tails, most notably in GX 3+1 (Chenevez et al. 2006). Model calculations show that the tail in GS 1826-24 can be explained by delayed cooling of layers that are up to ten times deeper in column density than where the flash occurs and that were heated up through inward conduction of the flash heat. Possibly tails in other sources can be similarly explained. Further model calculations are needed, where dependencies of donor composition, ignition depth and accretion rate are taken into account. Comparing such calculations to different kinds of bursts and bursters may yield constraints on the details of conduction in the neutron star envelope.
A characteristic of the tails that is at first sight unexpected in this scenario is the small amount of cooling. Rather than being due to up-scattering of the burst photons by a hotter optically-thick plasma, we believe that the most likely explanation is that the NS is already hot without flashes. The temperature that one measures in the burst tail is then representative of the non-bursting NS. This scenario also provides a more natural explanation for the decreasing black body normalization in the tails. As discussed by van Paradijs & Lewin (1986), this presents an interesting opportunity to study NS temperatures as a function of accretion rate. However, this may be cumbersome. Apart from good low-energy coverage, one would need to work on a single-burst basis, also for the accurate modeling of the persistent spectrum. Better data would be needed than presented here. | 2009-02-02T18:08:28.000Z | 2009-02-02T00:00:00.000 | {
"year": 2009,
"sha1": "477e42a5b2f8e52a2be2d097f209c010cf397f36",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2009/14/aa11432-08.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "189fa7b02e1fac4513413d362ebeb7f5b9efd5cb",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
225354692 | pes2o/s2orc | v3-fos-license | Dietary Camellia sinensis Influences the Broilers: A Review
This study was conducted in order to understand the impact of dietary Camellia sinensis in broilers. In this regard, several studies were reviewed and the findings obtained were found to be both interesting and useful. In summary, it has been reported by researchers that Camellia sinensis supports feed intake (4480 g/b), water intake (8960 ml/b), live body weight (2356.8 g/b), weekly weight gain (2322.8 g/b), carcass weight (1381.8 g/b) and feed conversion ratio (1.92). Further, it was stated that Camellia sinensis reduces the relative weight of the heart, liver, spleen, proventriculus, intestine and fat pad by 13.53, 61.1, 2.26, 58.13, 10.2 and 81.41%, respectively, compared to their normal weights. Camellia sinensis enhances the immunity of broilers, which results in lower infection and mortality rates. Concerning digestibility, researchers indicated that the digestibility of crude protein improves by 80.33%, ether extract by 76%, crude fiber by 33.83% and metabolizable energy by 79.66%. In conclusion, Camellia sinensis has proved to be an important dietary supplement for broilers. It supports birds' immunity, production and performance.
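The summary figures quoted above are internally consistent; for example, the feed conversion ratio follows from the reported feed intake and weight gain. A short Python check using only the numbers given in this review (the dressing percentage is simply derived here from the carcass and live weights):

feed_intake_g = 4480.0     # feed intake per bird, g
weight_gain_g = 2322.8     # weight gain per bird, g
live_weight_g = 2356.8     # live body weight per bird, g
carcass_g = 1381.8         # carcass weight per bird, g

fcr = feed_intake_g / weight_gain_g            # feed conversion ratio = feed/gain
dressing = 100.0 * carcass_g / live_weight_g   # carcass as % of live weight
print(f"FCR = {fcr:.2f}")                      # ~1.93, matching the quoted 1.92
print(f"dressing percentage = {dressing:.1f}%")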
as a natural product that is non-toxic (Krul et al., 2001). There have been several reports that Camellia sinensis provides several functional activities related to free radicals and to reductions in the incidence of cancer, blood cholesterol and blood pressure. Also, Camellia sinensis has antitumor and anti-diabetes effects in the human body (Mukhtar and Ahmad, 1999). Keeping these facts in view, the current study was planned in order to investigate whether Camellia sinensis has any influence on broiler chicks.
Dietary use of Camellia sinensis
Medicinal plants are frequently used in animal nutrition as possible natural alternatives to antibiotic growth promoters (Hazzit et al., 2006; Cross et al., 2007). In particular, the biological, physiological and pharmaceutical effects of Camellia sinensis have been widely studied in the past decade (Ishihara et al., 2001; Kondo et al., 2004; Zanchi et al., 2008). Camellia sinensis components have been found to maintain microflora balance and exert antimicrobial effects against pathogenic bacteria without affecting lactic acid bacteria (Hara-kudo et al., 2005). Administration of Camellia sinensis with probiotics was found to have no negative effect on weight gain, feed efficiency, carcass composition or reduction of the values of thiobarbituric acid-reactive substances (TBARS), but it influenced humoral and cell-mediated immunity of pigs (Ko and Yang, 2008). In addition, treatment with Camellia sinensis does not alter the blood components of beef cattle or calves (Lee, 2005). The ability of Camellia sinensis polyphenols to increase lactobacilli populations and decrease bacteroidaceae in chicken cecal contents (Terada et al., 1993), porcine faeces (Hara et al., 1995) and ruminants (Bureenok et al., 2007) has already been reported.
Amongst the bio-active compounds found in Camellia sinensis, the largest component present in Camellia sinensis leaves is carbohydrates (cellulosic fibre). The simplest compounds are catechins, a group of flavonoids called flavan-3-ols (Yilmaz, 2006). These catechins are synthesised in Camellia sinensis leaves through the malonic acid and shikimic acid metabolic pathways with gallic acid as an intermediate derivative (Naidu, 2000). Catechins are colourless, water-soluble compounds that impart bitterness and astringency to Camellia sinensis infusions (Wang et al., 2000). Catechins constitute 15% to 30% of the dry weight of Camellia sinensis leaves, as opposed to 8% to 20% of oolong and 3% to 10% of black tea (Biswas et al., 2000). Camellia sinensis extract contains six primary catechins, including epicatechin (EC), epicatechin gallate (ECG), epigallocatechin (EGC) and epigallocatechin-3-gallate (EGCG) (Kajiya et al., 2004). Epigallocatechin-3-gallate is the most important and well-studied tea catechin owing to its high content (50%) in tea. It also has the most potent physiological properties in comparison to other components (Taylor et al., 2005). The ester-type catechins ECG and EGCG are more bitter and astringent than EC and EGC, and these flavonoids have a greater synergistic action than individual tea components (Fujiki, 1999). Uuganbayar et al. (2006) reported that total catechin contents were 15.73% for Korean Camellia sinensis, 15.60% for Japanese Camellia sinensis and 14.04% for Chinese Camellia sinensis. Of the total catechin components, EGC was predominant, accounting for 67.50% (Korean Camellia sinensis), 67.90% (Japanese Camellia sinensis) and 63.80% (Chinese Camellia sinensis) of the total catechin contents. In another study, Camellia sinensis leaves contained active constituents of 1.01% total phenols, 105 mg/kg caffeine, 50 mg/kg catechin, 35 mg/kg EC, 185 mg/kg ECG and 17.5 mg/kg ascorbic acid (Abdo et al., 2010). The catechin content showed considerable variability in one study: the levels of EGCG ranged from 117 mg/L to 442 mg/L, EGC from 203 mg/L to 471 mg/L, ECG from 16.9 mg/L to 150.0 mg/L, EC from 25 mg/L to 81 mg/L and catechin from 9.03 mg/L to 115.00 mg/L. Moreover, caffeine contents in the Camellia sinensis infusions studied were between 141 mg/L and 338 mg/L (Reto et al., 2007).
Minerals comprise about 4% to 9% of the inorganic matter of tea (Chaturvedula and Prakash, 2011). Abdo et al. (2010) assayed the mineral content profile in Camellia sinensis leaves as 4.66% calcium, 1.62% total phosphorus, 865.1 mg/kg manganese, 146.3 mg/kg zinc and 858.1 mg/kg selenium. Similarly, Reto et al. (2007) evaluated some minerals in Camellia sinensis samples and reported that potassium was found in the largest amounts (92 mg/L to 151 mg/L), whereas the contents of sodium, calcium, fluoride, aluminium, manganese and iron were 35-69 mg/L, 1.9-3.5 mg/L, 0.8-2.0 mg/L, 1.0-2.2 mg/L, 0.52-1.90 mg/L and 0.020-0.128 mg/L, respectively. Costa et al. (2002) observed large variations in the mineral content (aluminium, calcium, magnesium and manganese) of Camellia sinensis from different origins. Shu et al. (2003) observed marked variations amongst different tea varieties in accumulating fluoride and aluminium. Xu et al. (2003) reported that the content of selenium in Camellia sinensis was greatly increased by foliar application of selenium-enriched fertilizers; moreover, the selenium-enriched Camellia sinensis exhibited significantly higher antioxidant activity than regular Camellia sinensis.
Influence of Camellia sinensis on broilers
Only limited information is available on the response of avian species, particularly broiler chickens, to supplemental dietary Camellia sinensis powder. In our previous long-term study on laying chickens, supplemental Camellia sinensis powder (0.6%) caused a decrease in body weight gain, and a significant reduction of total fat in egg yolk was also observed, expressed either as an absolute or a relative weight (Biswas et al., 2000). This is basically consistent with the present observation in broiler chickens that Camellia sinensis powder markedly reduced the absolute weight and percentage of abdominal fat and the cholesterol levels of liver and blood serum. The significant increase of thigh percentage with Camellia sinensis powder feeding is not clearly explainable, though this may enhance the behavioral activity of the broilers (Biswas and Wakita, 2001). Camellia sinensis containing a high catechin content may have an inhibitory effect on the intestinal absorption of lipid (Ikeda et al., 1992). This may prevent an excessive accumulation of lipid in the liver and other tissues. The reduction in tissue cholesterol may also be explained by a negative effect of tea catechin on the formation of micelles that mediate reabsorption of bile acid (Muramatsu et al., 1986). Such an increase of unabsorbable bile acids may also lead to a reduction in liver cholesterol and blood serum cholesterol of Camellia sinensis powder-fed broilers. Another possible explanation involves tea fiber: there is a great deal of evidence that dietary fiber can reduce cholesterol levels in animals (Evans et al., 1992) by adsorbing bile acids and various lipids. In addition, the phenolic compound in tannic acid plays an important role in the catabolism of liver cholesterol (Yugarani et al., 1992). Conversion of cholesterol to bile acids occurs exclusively in the liver and represents the major pathway for the elimination of cholesterol from the body. This may also explain the reduced cholesterol levels. Furthermore, it has been suggested that Camellia sinensis has thermogenic properties and promotes fat oxidation beyond that explained by its caffeine content (Dulloo et al., 2000).
In another study it was stated that Camellia sinensis extract may play a role in the control of body composition via sympathetic activation of thermogenesis, fat oxidation, or both in humans (Dulloo et al., 1999). This may also be one of the reasons. However, the reduction of carcass fat could have been caused by the suppressive effect of Camellia sinensis powder on feed intake, which in turn reduces hepatic lipogenesis (the liver being a major site of lipogenesis in poultry) and fat accumulation in adipose tissue and muscles (Saadoun and Leclercq, 1983). Therefore, to draw a clear conclusion on fat and cholesterol reduction, further experiments with restricted feeding should be carried out. However, dietary Camellia sinensis powder could be employed to reduce undesirable carcass fat without altering the general carcass performance of broiler chickens. There was a tendency for Camellia sinensis powder feeding to improve the feed conversion ratio. This has also been observed in other feeding experiments using layers (Biswas et al., 2000; Yamane et al., 1999). All the results point to decreased feed intake with Camellia sinensis powder supplementation without a change in body weight gain or egg production. Even though the mechanisms involved in these improvements are not precisely understood, Camellia sinensis powder could be a potent feed additive for broilers as well as layers. Further, it has been reported that Camellia sinensis inclusion in broiler diets had positive effects on the growth performance and lean meat production of the broilers. In one study, four levels of Camellia sinensis powder (0.5%, 0.75%, 1% and 1.5%) were added to broiler starter and finisher diets. Supplemental Camellia sinensis powder tended to decrease feed intake and body weight gain at the higher doses, but tended to improve FCR. Dressing percentage was not affected by Camellia sinensis, although proportions of some parts of the carcass were influenced. The proportion of thigh meat was increased by the 1.50% level feed, while that of wing meat was decreased in all treatment groups. The quantity and percentage of abdominal fat were decreased significantly with supplementation. Yang et al. (2003) and Guray et al. (2011) reported that when the Camellia sinensis by-product level was increased, the percentage of abdominal fat in broilers was reduced. Kaneko et al. (2001) reported that 1%, 2.50% and 5% of Camellia sinensis in broiler diets linearly reduced the body weight gain of the chicks. Similarly, Uuganbayar (2004) also reported that a 1% to 1.5% Camellia sinensis supplement in broiler diets had the effect of reducing the body weight gain of the chicks. Yang et al. (2003) determined the optimum level of Camellia sinensis by-product (0.5%, 1% and 2%) in diets without antibiotics and evaluated its effect on broiler performance. They observed non-significant differences in feed intake and feed efficiency amongst treatments. Cao et al. (2005) indicated that body weight gain, feed intake and feed efficiency from 28 days to 42 days of age were not improved; however, mortality was significantly reduced by supplementation with Camellia sinensis by-products. Recently, Shomali et al. (2012) investigated the effects of high levels of Camellia sinensis powder (1%, 2%, or 4%) on broiler growth performance for two weeks. Differences in body weight, feed intake and FCR were insignificant as well.
In contrast to the above studies, significantly increased weight gain (1210.61 g/bird) was observed in broilers during the finishing period at the 0.5% level of Camellia sinensis compared to the 1% level (1033.36 g/bird). Guray et al. (2011) supplemented a liquid hydroalcoholic extract of fresh Camellia sinensis (0.1 g/kg or 0.2 g/kg) in broiler diets. The dietary Camellia sinensis extract increased body weight, feed efficiency, carcass weight and dressing percentage. The broilers in the Camellia sinensis supplemented groups consumed more feed than the control birds throughout the entire experimental period. The relative gut length of broilers in the high-level Camellia sinensis group tended to be lower than that in the control group. The dietary Camellia sinensis extract increased the redness and yellowness values of the breast meat. Thus, the Camellia sinensis extract appeared to have a measurable impact on the CIE colour values of the breast meat in broilers. The authors concluded that the improved production results in the broilers with added Camellia sinensis extract are directly associated with physiological mechanisms such as the regulation of the caecal microflora. The production sources of Camellia sinensis used in all of the above studies were different, for example: Japanese Camellia sinensis powder (Biswas and Wakita, 2001; Kaneko et al., 2001); Japanese and Chinese Camellia sinensis or their polyphenols (Cao et al., 2005); Korean Camellia sinensis powder (Yang et al., 2003); and tea powder from the eastern Black Sea coast of Turkey. All of these Camellia sinensis sources had different compositions. The inconsistency amongst the studies may be explained by differences in the total catechin content and its major components, such as epicatechin, epigallocatechin, epicatechin gallate and epigallocatechin gallate, of the Camellia sinensis and Camellia sinensis extracts used in these studies (Guray et al., 2011).
CONCLUSION AND SUGGESTIONS
The present study concludes that Camellia sinensis significantly supports broiler production as well as overall performance. It has a negative effect on visceral organs, whereby the lowest weights of the heart, liver, spleen, proventriculus, intestine and fat pad are noticed among the birds having Camellia sinensis in the diet. However, feed intake, water intake, live body weight, weight gain, carcass weight, nutrient digestibility and the feed conversion ratio (FCR) are considerably favored with the use of dietary Camellia sinensis in broilers. Camellia sinensis also supports the immune system, which results in minimal mortality.
Moreover, it is recommended that Camellia sinensis should be incorporated in broilers' rations for obtaining maximum digestibility, production and overall bird performance. Further studies should be conducted on the effect of Camellia sinensis supplementation on the blood biochemistry and genotype of all poultry birds. Histopathological studies on the effect of Camellia sinensis supplementation should also be carried out in broilers as well as other poultry birds. | 2020-10-28T19:15:52.921Z | 2020-08-18T00:00:00.000 | {
"year": 2020,
"sha1": "9826dfef6d9e13dff5d47f030644df0812c59e98",
"oa_license": null,
"oa_url": "https://doi.org/10.18805/ag.r-150",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "bb22387488732bdcf39fa13961dfa308cc16f36e",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
266771082 | pes2o/s2orc | v3-fos-license | EVALUATING THE IMPACT OF DISPERSED PARTICLES IN THE WATER OF A POWER PLANT RECIRCULATING COOLING SYSTEM ON THE DISCHARGE OF SUSPENDED SOLIDS INTO A NATURAL WATER BODY
The object of this study is the processes of formation and changes of dispersed particles in fresh, make-up, cooling, and return water in open recirculating cooling systems (RCS), with an assessment of the influence of suspended substances in discharge waters on the aquatic ecosystem. The study was carried out on the example of the Rivne Nuclear Power Plant (RNPP) and the Styr River. Dispersed particles (DPs) pose technological obstacles in the RCS of power plants, and their content in discharge waters determines the ecological quality of water bodies. This paper describes the results of studying the formation and changes of DP in raw, make-up, cooling, and return waters of RNPP RCS with an assessment of the impact of suspended substances in discharge waters on the aquatic ecosystem of the Styr River. It was found that the dispersed particles formed after water treatment by liming consist of calcium carbonate and have a size of 10–30 μm. As a result of agglomeration of DP in RCS, they aggregate to 120–150 μm, and due to low sedimentation resistance (sedimentation time 0.97 h), they settle in RCS. As a result of the deposition of DP in RCS, a significant decrease in their content in return water (min–max=7.31–16.12 mg/dm 3 ) is observed, despite the increase in their content in make-up water after water treatment (min–max=10.22–49.46 mg/dm 3 ). According to the ecological classification based on the content of suspended substances, the water of the Styr River in the zone of influence of RNPP discharges belongs to class II, category 2, which characterizes the quality of the water as "very good" in terms of its state and "clean" in terms of its degree of purity. It was concluded that the content of suspended solids does not exceed the established maximum permissible concentration (25 mg/dm 3 ), the increase in the concentration of suspended solids does not exceed the established ecological standard of 0.25 mg/dm 3 , and the discharge does not have a negative impact on surface water. The results of the research could be used for other power plants equipped with an open RCS.
Introduction
Open recirculating cooling systems (RCS) of power plants are necessary for heat removal [1]; thermal loads from the cooling water are released into the atmosphere through cooling towers, and the return water is discharged into reservoirs [2]. The reliability of RCS from the chemical point of view is related to the probability that the system will not fail due to chemical processes [3]; the manifestations of operational failures are symptoms of the main influencing factors: chemical composition, temperature, and pressure [4]. In addition to equipment failures in the RCS, the chemical aspect determines the efficiency, productivity, and cost-effectiveness of operation [5]; in particular, the implementation of proper chemical monitoring of process waters can increase the reliability of equipment and prevent the shutdown of power plants [6].
Make-up water is constantly sent to the RCS and return water is discharged into the reservoir [7]. The chemical composition for most components is calculated as a function of evaporation, blowdown, and droplet removal [8], but this relationship does not hold for dispersed particles (DP) [9]. DPs enter the RCS with make-up water, and their formation also occurs in the RCS due to the formation of sediments of the main components of the water: calcium ions and carbonate, sulfate, and phosphate ions [10]. The costs of avoiding scaling and of anti-scaling treatment can be as high as USD 0.93 million per year, or 0.88 % of revenue, for a 550 MW base plant [11]. In order to effectively control the formation of sludge and scale in RCS, it is necessary to understand the behavior of DP in the RCS of power plants [12-14]. Common sources of DP in RCS are impurities in the make-up water, which is used to replenish losses in the system; corrosion and erosion of structural materials; and biological fouling. Therefore, to determine the sources of entry, it is necessary to establish the chemical composition of DP.
Keywords: discharge return water, granulometric and chemical composition, suspended substances
According to their origin, DPs in water bodies can be natural or man-made [15]. The composition of river runoff is dominated by particles with sizes of 1-10 microns and silty particles with sizes of 10-100 microns [16, 17]. In accordance with established environmental standards for insoluble substances, the indicator of suspended substances is determined [18-20]; it characterizes the amount of impurities retained on a paper filter during sample filtration. Suspended substances have a great impact on hydrobionts and their habitat: they can clog the gills of fish and prevent gas exchange [21], and they also envelop eggs and interfere with their development, so that the eggs seem to dry out and the embryo dissolves [22]. A further negative effect of suspended substances is the reduction of feed resources, caused by a decrease in the intensity of photosynthesis due to reduced water transparency [23]. Suspended substances, settling to the bottom, form sediments that prevent the development of benthos and the root systems of plants [24].
The current paper reports a study into the processes of formation and change of DP in RCS and the influence of their discharge with the return waters of RCS. The technological (water preparation and concentration processes in RCS) and ecological (ecological standards of discharge and reservoir quality) aspects of the study are relevant both from the point of view of optimizing electricity production processes and from the point of view of the impact on the environment.
Literature review and problem statement
Water quality assessment for water use is carried out on the basis of environmental safety standards and sanitary and hygienic standards. According to the existing concept of environmental regulation, the standard determines the ecological safety of water use by providing hygienic standards for the maximum permissible concentrations (MPC) of pollutants in natural objects.
Fouling is the accumulation of unwanted solid substances on the heat-transfer surface; its constant accumulation leads to an increase in thermal resistance and worsens the operating efficiency of power plant equipment [25]. The process of deposit formation in RCS proceeds with the crystallization of calcium carbonate (CaCO 3 ) and magnesium hydroxide (Mg(OH) 2 ). It is important to establish the chemical composition of the deposits at industrial facilities, as it can explain the process of their formation and, accordingly, allow measures to minimize it [26]. The importance of studying DP in RCS is emphasized by investigations into the formation of the heterogeneous CaCO 3 phase [27-29]. The processes of CaCO 3 crystallization and the inhibition of its formation by phosphonates were studied, and changes in the crystalline form of CaCO 3 were noted [27]. Heterogeneous nucleation and growth of CaCO 3 crystals were studied in supersaturated solutions [28]; it was shown that nucleation and further crystal growth proceed through the formation of seed crystals. It was shown [29] that the growth of DP crystals occurs due to their aggregation. However, the issues related to studying the formation of the heterogeneous CaCO 3 phase on real objects, in particular during scale formation in RCS, remained unresolved, since the available studies [27-29] related only to model solutions and bench studies. It is obvious that the lack of research on real objects is due to insufficiently effective cooperation between science and industry [30]. Known modern studies focus separately either on the morphology, granulometric composition, and chemical composition of DP of insoluble substances that arise under the influence of technological factors, or on the ecological impact of DP directly in a natural reservoir. The reason for the lack of an integrated approach may be the objective difficulties of implementing a holistic solution, since it requires taking into account environmental and technological aspects and, accordingly, the cooperation of specialists from various fields. Thus, the formation of DP in the RCS of power plants may have a detrimental effect on system efficiency, equipment life, and environmental compliance. An option to overcome these difficulties may be a comprehensive study of the formation of DP that takes into account ecological and technological aspects.
In [31], the results of studies of the effect of DP on increased scale formation in RCS are given; however, there are gaps in the study of the influence of dispersed particles on scale formation. To reduce scale formation in RCS, water treatment by liming is used, which is an inexpensive method of treating RCS make-up water [32]. At the same time, the wide application of liming for water treatment remains limited. According to paper [33], this is because liming creates an excessive amount of sediment and introduces additional components into the treated water, and it therefore requires research into the mechanisms of heterogeneous phase formation. A corresponding attempt was made in work [34], which shows the possibility of determining the size of DP directly from images and indirectly from their sedimentation, allowing microscopic and sedimentation methods to be used to study the characteristics of DP during liming in water treatment. Other studies of sedimentation kinetics during liming are also known, but they leave the process of DP formation during the industrial application of liming in clarifiers unresolved. For example, in works [33, 35] it was established that liming increases the concentration of suspended substances, thereby affecting the quality of make-up water, and can increase the amount of pollution at the sewage treatment plant.
Study [36] shows that as the mixing speed increases, it is possible to successively reach states of DP equilibrium in which all particles precipitate, or particles partially precipitate while the rest are kept in suspension, or all particles are suspended. According to the authors, the state of equilibrium of DP in RCS determines their content in the return waters that are discharged into the water body. This also emphasizes the importance of research in view of the need to study the ecological aspect of the discharge of suspended solids. The authors of papers [37, 38] noticed that the content of suspended substances in river water is determined by natural factors; in particular, it depends seasonally on water levels, with a lower concentration during high water levels and a greater one during periods of low water levels. However, these works contain no data on how natural factors affecting the content of suspended substances influence the quality indicators of make-up water that has undergone liming and is used at an industrial facility. It is known that when the concentration of suspended substances increases due to anthropogenic pollution, this can lead to changes in the physical, chemical, and biological properties of a water body and negatively affect its ecological state. Thus, according to the results of research [39], the physical changes of negative environmental impact may include a decrease in light penetration and changes in temperature and reservoir filling levels, which provokes biological changes: a decrease in abundance, a narrowing of the food spectrum, and a decrease in the growth rates of hydrobionts. Despite the presence of partially unsolved issues, it is obvious that in order to avoid the negative environmental impact of suspended substances of anthropogenic origin on water bodies, their content should be strictly controlled during water discharge from industrial facilities.
Thus, DPs contribute to an increase in total suspended solids in return water, and elevated levels can lead to regulatory compliance issues and may require more frequent water treatment. All this indicates that research is needed on reducing deposits in RCS in order to ensure the economy, efficiency, and reliability of operation of cooling-system consumers' equipment while meeting ecological discharge standards and avoiding impact on the water body. Therefore, given the unresolved issues of setting up and conducting research on a real industrial RCS of a power plant with the interrelationship of technological and environmental aspects, the research presented below has scientific and practical value.
The aim and objectives of the study
The purpose of our research was to determine the processes of formation and change of DP in the waters of the technological cycle of the power plant, with an assessment of the impact of the discharged return water on the water body. The practical value of the study is the possibility of applying the proposed approaches to power plants that have the same type of RCS in order to implement measures to minimize the discharge of DP with return waters.
To achieve the goal, the following tasks were set:
- to identify regularities and evaluate changes in the actual content and chemical composition of DP over the multi-year observation period (2013-2022);
- to determine the morphological characteristics, particle size composition, and settleability of dispersed particles, with identification of the mechanisms of formation and changes in the properties of DP;
- to carry out an ecological assessment of the impact of DP discharge with return water on a water body (according to the indicators of the maximum permissible concentration (MPC), the dynamics of the permissible increase in concentration, and the maximum permissible discharge (MPD) of suspended substances), and to establish the factors that shape the volume of DP inflow to reservoirs.
The study materials and methods
The water treatment system and RCS at the Rivne Nuclear Power Plant (RNPP) and the water of the Styr River, in the zone of influence of water discharges from the RNPP, were chosen as the object of this study. Water treatment of make-up water for the RNPP RCS is carried out by pre-purification with a liming agent in clarifiers under the bicarbonate mode and by corrective treatment with oxyethylidene diphosphonic acid (OEDF) and sulfuric acid. After water treatment, the water is filtered on high-speed mesh mechanical filters with a filter cell size of 50 microns. RCS return water is discharged into the Styr River through one outlet, without treatment.
The research hypothesis assumes that during water treatment by liming, the DPs formed are not completely deposited in the clarifiers, which is why they enter the RCS with the make-up water. It is assumed that the compliance of the DP content in return waters with ecological standards is the result of their deposition and accumulation in the RCS. The research was carried out as an experimental study of technological processes and did not require simplification, as it contains the results of actual measurements.
The morphology of dispersed particles (DP) was determined using a binocular microscope XS-5520 LED (China) and a scanning electron microscope Tescan Vega 3 LMU (Czech Republic). The granulometric composition of DPs was determined by obtaining their number distribution using a laser particle counter HIAC/ROYCO 8000A (USA).
The chemical composition of DPs was determined according to the recommendations from [40]. Sample preparation involved the separation of DP by filtering with the help of "blue" tape filters. The samples were calcined at 600±25 °C to determine the loss on calcination of organic substances (OMLH) and at 825±25 °C to determine the loss on calcination of carbonate substances (IMLH). The mass fractions of the components were converted to the content in oxides.
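As a minimal illustration of this step, the sketch below computes the two calcination losses and the oxide residue as mass fractions; the sample masses and the dry-sample reference basis are assumed values, not figures from the cited procedure [40].

```python
# Illustrative sketch only: the sample masses and the dry-mass basis are assumed,
# not taken from the cited procedure [40].

def loss_on_calcination(m_dry: float, m_600: float, m_825: float) -> dict:
    """Mass-loss fractions of the filtered DP residue after stepwise calcination.

    m_dry -- mass of the dried residue, mg (assumed reference basis)
    m_600 -- residue mass after calcination at 600 +/- 25 C, mg
    m_825 -- residue mass after calcination at 825 +/- 25 C, mg
    """
    omlh = (m_dry - m_600) / m_dry * 100.0    # organic loss on calcination, %
    imlh = (m_600 - m_825) / m_dry * 100.0    # carbonate (inorganic) loss, %
    residue = m_825 / m_dry * 100.0           # mineral residue reported as oxides, %
    return {"OMLH_%": omlh, "IMLH_%": imlh, "residue_%": residue}

# Hypothetical example: 50 mg dry residue, 40 mg after 600 C, 27 mg after 825 C
print(loss_on_calcination(50.0, 40.0, 27.0))
```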
Determination of the settling time of DPs (t, h) was carried out by the standard gravimetric method under the influence of gravitational forces [41]; settleability (W, %) was determined from the difference in the concentrations of suspended substances before and after settling (1):

W = ((С 0 − С 1 )/С 0 )·100 %, (1)

where С 0 is the initial concentration of suspended substances before settling, mg/dm 3 ; С 1 is the concentration of suspended substances after the end of the time of complete sedimentation, mg/dm 3 . The selection of water samples was carried out in accordance with DSTU ISO 5667-6:2009; the quantitative content of DPs was determined by the concentration of suspended substances, which was measured by the gravimetric method [42] when filtering water samples through a "blue" tape filter. All measurements were carried out by the certified measuring laboratory at RNPP.
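A minimal numerical sketch of equation (1) is given below; the two concentrations are hypothetical values used only to illustrate the calculation.

```python
# Sketch of equation (1) with hypothetical concentrations (mg/dm3).

def settleability(c0: float, c1: float) -> float:
    """Settleability W, %, from suspended-solids concentrations before (c0)
    and after (c1) complete sedimentation."""
    return (c0 - c1) / c0 * 100.0

print(f"W = {settleability(20.5, 15.4):.1f} %")  # 20.5 -> 15.4 mg/dm3 gives W of about 24.9 %
```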
The ecological assessment of Styr River water in the area affected by water discharges was carried out by the method of comparing actual values with MPC according to [19,20] and determining the ecological status of surface waters [43] (Tables 1, 2).
Statistical treatment of the research results involved determining the range of the data series (min-max), the arithmetic mean (M), and the standard deviation (SD) of the corresponding sample, as well as statistical analysis of the data using the Minitab software package (Version 21.4.1, Minitab, LLC, USA).
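For readers without access to Minitab, the same summary statistics can be reproduced with the Python standard library, as in the sketch below; the sample values are hypothetical.

```python
# Summary statistics (min-max, M, SD) equivalent to those reported; the sample is hypothetical.
import statistics

sample = [10.2, 18.7, 27.5, 33.1, 49.5]  # hypothetical DP concentrations, mg/dm3

summary = {
    "min": min(sample),
    "max": max(sample),
    "M": round(statistics.mean(sample), 2),
    "SD": round(statistics.stdev(sample), 2),  # sample standard deviation
}
print(summary)
```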
1. Characteristics of quantitative content and chemical composition of dispersed particles
The Styr River in the area of the RNPP water intake is characterized by low-water and high-water years. Low flow rates are observed in August-September, and high flow rates during spring floods in March-April. The actual flow rate of the Styr River for 2013-2022 varied in the range from 10 to 63 m 3 /s, with M=27 m 3 /s and SD=18 m 3 /s. The actual make-up flow drawn for the RNPP RCS from the Styr River depends on the season and is up to 2.63 m 3 /s in the warm and up to 1.56 m 3 /s in the cold periods of the year. For the studied period of 2013-2022, the make-up flow rates of the RNPP RCS from the Styr River are M=1.68 m 3 /s, SD=0.41 m 3 /s. The return water flow of the RNPP RCS amounts to 15-22 % of the make-up flow, up to 0.65 m 3 /s in the warm and up to 0.37 m 3 /s in the cold periods of the year, which for the studied period of 2013-2022 is characterized by the values M=0.31 m 3 /s, SD=0.22 m 3 /s.
The concentration of DP in the water of the Styr River before the water intake and after discharge in 2013-2022 varied in the range of min-max=6.44-14.35 mg/dm 3 , M=11.35 mg/dm 3 , SD=2.44 mg/dm 3 (Fig. 1).
The concentration of DP in make-up water that underwent pre-treatment by liming and corrective treatment (OEDF and H 2 SO 4 ) in 2013-2022 varied in the range of min-max=10.22-49.46 mg/dm 3 , M=27.46 mg/dm 3 , SD=13.85 mg/dm 3 , and showed no seasonal dynamics. The concentration of DP in the cooling water, in which concentration processes occur as a result of evaporation and aeration in the cooling towers of the RCS, varied in 2013-2022 in the range of min-max=17.31-27.85 mg/dm 3 , M=20.52 mg/dm 3 , SD=5.44 mg/dm 3 , and in the return water of the RCS in the range of min-max=7.31-16.12 mg/dm 3 , M=12.44 mg/dm 3 , SD=3.11 mg/dm 3 .
The chemical composition of DP in the Styr River water is determined by the content of organic matter up to 20 % (OMLH), inorganic mass loss during heating and calcium carbonate up to 51 % (IMLH+CaO), and silicon compounds (SiO 2 ) up to 22 %. The chemical composition of DP in the make-up water is determined by the content of IMLH and calcium carbonate up to 78 % (IMLH+CaO) (Fig. 2).
In the cooling water, the content of IMLH and calcium carbonate decreases to 68 % (IMLH+CaO); in the return water it further decreases to 50.4 % (IMLH+CaO), while the content of OMLH increases to 25 % and SiO 2 to 15 %. For water bodies containing up to 30 mg/dm 3 of natural mineral substances, an increase in the content of suspended substances within 5 % is allowed (MPC = 25 mg/dm 3 ).
Fig. 1. Concentration of dispersed particles in technological waters of the cooling system at the Rivne nuclear power plant and water in the Styr River
2. Characteristics of morphology, granulometric composition, and sedimentation of dispersed particles
Micrographs of DP in the process waters of the RNPP RCS and in the water of the Styr River were obtained using the Tescan Vega 3 LMU and XS-5520 LED microscopes, which made it possible to visually highlight the crystalline structure of the DPs and their higher content in the make-up water (Fig. 3, 4). The visual methods of research also made it possible to notice that the deposition of DP in the RNPP RCS occurs with the formation of two types of deposits: scale and soft sludge deposits (Fig. 5).
According to the data above, it can be concluded that during liming the granulometric composition changes, with the formation of new fractions of the heterogeneous phase of DP. The maximum particle size observed in the RCS water is 120-150 μm, the smallest sizes in the incoming water of the Styr River are 2-10 μm, and for clarified water the maximum content falls on the 10-30 μm fraction (Table 3).
The sedimentation properties of DP in the process waters of the cooling system at RNPP, determined by the settling time (t) and settleability (W), demonstrate the shortest settling time for the aggregated DP in the RCS water (Table 4). In the waters of the Styr River and the technical waters of the cooling system at RNPP, a noticeable change in the morphology of DPs occurs during the water treatment process (Fig. 3-5), together with a change in particle size (Table 3). The longest sedimentation time was measured for the make-up water (6.63 h), and the lowest settleability was measured for the return water (25 %) of the RNPP cooling system (Table 4).
3. Ecological evaluation of discharges of dispersed particles with return waters
In terms of the content of suspended solids, the return water discharged by the RNPP meets the hygienic requirements for the composition and properties of water in water bodies (Table 2). The increase in the concentration of suspended solids as a result of the water discharge of the RNPP RCS for 2013-2022 was in the range of min-max=0.058-0.206 mg/dm 3 , M=0.137 mg/dm 3 , SD=0.047 mg/dm 3 (Fig. 6) and did not exceed the rated value of 0.25 mg/dm 3 for the increase in the concentration of suspended substances during water discharge.
The content of suspended solids in the return waters of the RNPP RCS and in the water of the Styr River after the RNPP water discharge does not exceed the MPC and amounts to up to 0.4 MPC (Fig. 1, 6). According to the ecological classification for suspended substances, the water in the Styr River after the water discharge at RNPP belongs to class II, category 2, which characterizes the water quality in terms of its condition as "very good" and in terms of its degree of purity as "clean".
A correlation dependence was built (Fig. 7, a), which determines the content of suspended substances in the water of the Styr River after the discharge of return water from RNPP RCS, depending on the background concentration of suspended substances in the raw water of the Styr River before the water intake at RNPP.
The dependence that relates the content of suspended solids in the Styr River water before the water intake and after the discharge of the return water of the cooling system of the Rivne nuclear power plant reveals a statistically significant (p<0.001) direct correlation at a very strong level, with r-Pearson=1.00 and R-sq=99.82 % (Fig. 7, b), and is described by equation (2), where С b is the background concentration of suspended substances in the raw water of the Styr River, mg/dm 3 , and С с is the concentration of suspended substances in the Styr River water after the discharge of return water, mg/dm 3 . For RNPP, according to requirements [45], the maximum permissible discharge (MPD) is 6825 kg/year per power unit. The actual values of the discharge of suspended solids with the return water of the RNPP RCS for 2013-2022 were in the range of min-max=1105-1524 kg/year per power unit, M=1457 kg/year per power unit, SD=105 kg/year per power unit. The actual values of the discharge of suspended solids did not exceed the normalized MPD value; the mass share of the actual discharge of suspended solids is up to 22 % of the MPD.
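The sketch below illustrates, under assumed data, the two calculations referred to in this paragraph: a least-squares fit with the Pearson correlation for paired background and after-discharge concentrations (the published coefficients of equation (2) are not reproduced here), and the share of the actual discharge in the MPD.

```python
# Sketch only: the paired concentrations are hypothetical; equation (2) from the paper
# is not reproduced, only the type of fit (least squares) and the Pearson correlation.

def pearson_and_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    r = sxy / (sxx * syy) ** 0.5   # Pearson correlation coefficient
    slope = sxy / sxx              # least-squares slope
    intercept = my - slope * mx    # least-squares intercept
    return r, slope, intercept

cb = [6.4, 8.1, 10.3, 12.0, 14.3]  # hypothetical background concentrations Cb, mg/dm3
cc = [6.5, 8.2, 10.4, 12.1, 14.5]  # hypothetical after-discharge concentrations Cc, mg/dm3
r, b, a = pearson_and_fit(cb, cc)
print(f"r = {r:.3f}; Cc = {a:.3f} + {b:.3f}*Cb")

# Share of the mean actual discharge (1457 kg/year per unit) in the MPD (6825 kg/year)
print(f"MPD share = {1457 / 6825 * 100:.1f} %")  # about 21 %; the maximum year gives about 22 %
```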
The characteristics of the concentration of suspended substances for RNPP are comparable to those of other NPPs, which may indicate the similarity of the processes of change in the concentration of suspended substances in RCS (Fig. 8). Similar structural changes, with thickening of the particles and their subsequent sedimentation, can explain the rather low values of the concentration of suspended substances in the return water at other nuclear power plants (Fig. 8).
As a result, a decrease in the concentration of suspended solids due to the precipitation of DPs (larger than 50 μm) is observed in the return waters of the RCS, compared to their concentration in the make-up and cooling waters. The established sedimentation process makes it possible to ensure compliance with the MPC of suspended solids in the Styr River water (25 mg/dm 3 ) after the discharge of return water from the RNPP RCS and with the rated difference in the content of suspended solids in the discharged water (0.25 mg/dm 3 ).
Discussion of results of investigating dispersed particles in the water of an open recirculating cooling system at the power plant
The content of DPs in the Styr River water depends on the water levels in the river, with a lower concentration during high water levels and a higher concentration during periods of low water levels [37]. The detected fluctuations in the concentration of suspended solids in the Styr River (Fig. 1) confirm the studies reported in [37, 38], which explain such changes by the effect of natural factors on the content of suspended substances in surface waters. The results of our studies show that seasonal fluctuations of DPs in the Styr River water do not affect the content of DPs in the make-up water of the RNPP RCS, which depends instead on the processes of water treatment by liming (Fig. 1).
The technology of liming in clarifiers tends to increase the content of DP compared to the values for fresh water (Fig. 1), which is also noted in studies [34, 35]. The increase in the content of DPs compared to the values for fresh water is explained by the presence of heating, which reduces the dissociation of lime Ca(OH) 2 . In the cooling water of the RCS at RNPP, the content of DP decreases (Fig. 1), which reflects the processes of formation of the heterogeneous CaCO 3 phase according to [27-29]. As the mixing speed, turbulence, and residence time in the RCS increase, the precipitation of DPs intensifies, which corresponds to the described equilibrium of CaCO 3 precipitation in solution [36] and subsequently leads to a decrease in their concentration in the return water (Fig. 1).
For DPs in the make-up and cooling water, there is a change in chemical composition, namely a decrease in the content of organic substances and silicon compounds and an increase in the content of calcium compounds (Fig. 2). The detected change in chemical composition during water treatment confirms the limitations of liming, as this process introduces additional components into the make-up water [33]. The changes in the chemical composition of DPs during the water treatment process indicate the removal of the DPs of the Styr River water and the formation of a new heterogeneous phase in the make-up water during liming, consisting mainly of CaCO 3 . Subsequently, a further change in the chemical composition is observed in the RCS, which is due to the processes of scale formation in the RCS. The chemical compositions of the DPs of the return water of the RNPP RCS and of the water in the Styr River differ (by up to 5 %) in terms of OMLH and SiO 2 content; however, the water discharge volume of M=1.68 m 3 /s does not affect the chemical composition of DPs in the water of the Styr River, since the content of DP components before the water intake and after the discharge is comparable (Fig. 2).
The formation and change of DPs in the process waters determine the processes of their sedimentation in turbulent flows during heating in consumer heat exchangers, cooling and evaporation in the cooling towers, and concentration in the RCS. These processes determine the agglomeration of DPs (Fig. 3, 4), and taking into account the maximum DP fraction in the return water (Table 3), their agglomeration does occur. The formation of DPs during water treatment, obtained from the results of research at a real industrial facility, RNPP (Fig. 3, 4, Table 3), is confirmed by experimental studies of DP aggregation during liming [33] and sedimentation [27-29]. The determined short deposition time of DP in the RCS (Table 4) confirms their sedimentation and deposition in the RCS. The absence of agglomerated DPs larger than 50 μm in the return water (Table 3) indicates the predominant deposition of the large main fraction in the cooling water of the RCS (120-150 μm, 83.6 %), which confirms the research hypothesis.
Deposition of particles larger than 50 μm is observed in the form of sludge (Fig. 5, b). The formation of sludge deposits confirms the previous research reported in [45], since solid dense deposits in the form of scale are formed from the soluble components of the carbonate system. The drawback associated with the accumulation of DP in the form of sludge deposits in the RCS can be mitigated by their systematic mechanical removal during the planned preventive repairs of power units.
It should be noted that the discharge of suspended substances with return water is one of the important standardized discharge indicators and is included in all water discharge permits of nuclear power plants in the amount of 1000-10000 kg/year per power unit [46]. The obtained correlation dependence (Fig. 7) and the dependence equation (2) can be used to predict and limit the discharge of suspended substances with the return waters of RCS.
This research could be used to understand the processes of formation and behavior of DPs in the RCS of other power plants. The methodology used in the monitoring of suspended solids can be applied to the ecological assessment of discharges, with identification of the negative factors of the discharge of suspended solids from power plants into a water body. Monitoring and environmental assessment of discharges are important for the safe, reliable, and efficient operation of power plants [47, 48]. A limitation of the current research is the determination of the characteristics of nano-sized DPs, which is complicated with the existing instrumental methods, because the methods used in our study may not provide sufficient resolution or accuracy to measure nanoscale DPs. The behavior of DPs is accompanied by aggregation processes, which can affect their settleability, so another limitation of this study is that the results are applicable only to waters that do not contain dispersants.
The development of this research may involve further studies into the possibility of using dispersants as a reagent for the corrective treatment of RCS. Dispersants contribute to the splitting and dispersion of DPs and prevent their agglomeration, which could make it possible to abandon the systematic mechanical cleaning of RCS from sludge. However, the ecological assessment of the discharge in the case of dispersant application may yield different results, since dispersants will affect the settleability of DPs and their retention in the form of a suspension. That is, the further development of the research also requires a comprehensive approach with an assessment of technological and environmental factors.
Conclusions
1. During clarification by liming, the content of DPs increases by an average of 2.4 times, and their chemical composition changes, with an increase in calcium carbonate content by 27 % and a corresponding decrease in organic substances and silicon compounds. A change in the granulometric composition was also noted; in particular, the particle size increased from 2-10 μm in the raw water to 10-30 μm in the RCS make-up water. Filtration of make-up water that has undergone liming on high-speed mesh mechanical filters with a filter cell size of 50 μm does not make it possible to completely remove the formed dispersed phase, due to the high dispersion of the particles and the significant volumes of RCS make-up requirements.
2. The DPs formed after liming, as well as the seed crystals of calcium carbonate, aggregate to 120-150 μm and, due to their low sedimentation resistance, are deposited in the RCS. At the temperatures available in the RCS, the heterogeneous phase does not form a dense scale but settles in the form of sludge, which creates the need for systematic cleaning of the hydrotechnical structures of the RCS from sludge.
3. As a result of the precipitation of DPs in the RCS, a decrease in their content by 1.34 times is observed in the cooling water compared to the make-up water, and by 1.64 times in the return water compared to the cooling water of the RCS. According to the ecological classification for suspended substances, the RNPP discharge water belongs to class II, category 2, which characterizes the quality of the water as "very good" in terms of its condition and "clean" in terms of its degree of purity. That is, the content of suspended solids does not exceed the established MPC and does not exert a negative impact on the environment.
Fig. 2. The chemical composition of dispersed particles in the technological waters of the cooling system at the Rivne nuclear power plant and water in the Styr River: a - water to the water intake; b - make-up water; c - cooling water; d - return water; e - water after discharge
Fig. 3. Morphology of dispersed particles in process waters (a - water in the Styr River before intake; b - make-up water; c - cooling water; d - return water) of the cooling system at the Rivne nuclear power plant
Fig. 4. Photographs of dispersed particles in technological waters: a - water of the Styr River before intake; b - make-up water; c - cooling water; d - return water of the Rivne nuclear power plant cooling system (magnification 20x)
Fig. 5.
Fig. 6. Increase in the concentration of suspended substances due to the water discharge of the return water in the cooling system at the Rivne nuclear power plant for 2013-2022
Fig. 7.
Fig. 8. Comparative characteristics of suspended solids discharges based on average content values for daily maximums at nuclear power plants
Table 2
Hygienic requirements for the composition and properties of water in water bodies at points of economic-drinking and cultural-domestic water use for suspended substances
Table 3
Granulometric composition of suspended solids in process waters at RNPP RCS and in the water of the Styr River
Table 4
Results of studies on determining the settling time and settleability of DPs in the process waters at RNPP RCS
Note: * Measurement error according to the procedure for measuring the content of suspended solids | 2024-01-06T16:31:35.655Z | 2023-12-22T00:00:00.000 | {
"year": 2023,
"sha1": "1e03570c53b26c6401734e096f85f9a5b3aeba00",
"oa_license": "CCBY",
"oa_url": "https://journals.uran.ua/eejet/article/download/292879/287034",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a9ac2bfdba99480d5d1108c3ad5447efd6d9fbd5",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
1898682 | pes2o/s2orc | v3-fos-license | An unusual pre-ligamentous thenar motor branch of the median nerve
The pre-ligamentous variant of the thenar motor branch (TMB) of the median nerve is extremely rare. In all previously reported cases, the branch arose from the radial or antero-radial aspect of the median nerve in the distal forearm and then pierced the antebrachial fascia to reach the thenar muscles. We report on a case in which the pre-ligamentous TMB not only arose from the ulnar side of the median nerve but it also remained deep to both the antebrachial fascia and the transverse carpal ligament until it reached the thenar muscles. The course of this variant puts the TMB at significant risk of injury during both open and endoscopic carpal tunnel release. Level of Evidence: Level V, risk study.
Introduction
Knowledge on the anatomical variations of the thenar motor branch (TMB) of the median nerve is essential, in order to avoid its iatrogenic injury during carpal tunnel release [1][2][3]. Classification of these variations is generally based on the site of origin of the TMB from the median nerve. In the extra-ligamentous type (seen in about 75% of the population) [3], the TMB arises distal to the transverse carpal ligament and then takes a retrograde course to the thenar muscles. In both the sub-and trans-ligamentous types (seen in about 13 and 11% of the population, respectively), the TMB arises within the carpal tunnel (under the transverse carpal ligament). In the former sub-ligamentous type, the TMB remains deep to the transverse carpal ligament until it reaches the thenar muscles. In the latter trans-ligamentous type, the TMB pierces the ligament to reach the thenar muscles. Finally, the pre-ligamentous TMB (very rare) [3] arises from the median nerve proximal to the transverse carpal ligament and then pierces the antebrachial fascia of the distal forearm to reach the thenar muscles [4].
Anatomical variations of the TMB may also be classified according to the side of branching from the median nerve. In about 98% of the population, the TMB arises from the radial or antero-radial aspect of the median nerve [3]. In the remaining 2% of the population, the TMB arises from the ulnar side of the median nerve [3].
In this paper, the authors document a previously undescribed variant of pre-ligamentous TMB in which it arose from the ulnar side of the median nerve in the distal forearm and remained deep to the transverse carpal ligament without piercing the antebrachial fascia to reach the thenar muscles. This variant is not only extremely rare but also puts the nerve at significant risk of iatrogenic injury during carpal tunnel release.
Case report
A 30-year-old man sustained a glass injury to the distal forearm resulting in a 3-cm longitudinal laceration in the mid-forearm along the course of the median nerve. Severe arterial bleeding was noted, which was controlled by a tight bandage at the workplace. At the time of presentation to our hospital, there was still active bleeding, despite the tight bandage. The patient was immediately taken to the operating room.
Following endotracheal intubation, an arm tourniquet was applied and inflated before removing the tight bandage. Wound exploration along with carpal tunnel release was performed. The source of bleeding was from a partially lacerated persistent median artery, which ran on the anterior aspect of the median nerve. There was no injury to the median nerve or flexor tendons. A very unusual variant of the TMB was noted intraoperatively (Fig. 1a, b). The TMB arose from the ulnar side of the median nerve 2 cm proximal to the distal wrist crease. The TMB remained deep to both the antebrachial fascia and the transverse carpal ligament, crossing (from ulnar to radial) the persistent median artery to reach the thenar muscles. Nerve stimulation resulted in thenar muscle contraction. The injured persistent median artery was transected and ligated after the release of the tourniquet to ensure an adequate vascularity of all digits. Both radial and ulnar arteries had strong palpable pulses at the wrist. Prior to ligation of the persistent median artery, a vascular clamp was applied to the injured median artery and a sterile pulse oximetry was applied to the pulps of the digits. Oxygen saturation was 99-100% in all digits with a normal pulse wave, and hence, repair of the median artery was not necessary. The postoperative course was uneventful with return to work 3 weeks after injury.
Discussion
The current case has a combination of rare variations: a persistent median artery (seen in about 10% of the population) [5], an ulnar origin of the TMB (seen in 2% of the population) [3], and a pre-ligamentous TMB variant (very rare) [3]. In all previously reported cases of the pre-ligamentous variants, the TMB arose from the radial or antero-radial aspect of the median nerve in the distal forearm and then pierced the antebrachial fascia to run superficial to the transverse carpal ligament until it reached the thenar muscles [3,4,6]. Our case was unique because the pre-ligamentous TMB not only arose from the ulnar side of the median nerve but it also remained deep to both the antebrachial fascia and the transverse carpal ligament until it reached the thenar muscles. The course of this variant puts the TMB at significant risk of injury during both open and endoscopic carpal tunnel release.
The presence of a persistent median artery in our case is also interesting. Several authors noted that a persistent median artery is frequently associated with a high division of the median nerve (also known as the bifid median nerve) [1,3,7]. A bifid median nerve is only seen in 2.6% of the population; but it is associated with a 63% prevalence of a persistent median artery [3].
Lanz [1] noted that the two parts of bifid median nerves are usually equal in size. Other authors documented a larger radial division of the bifid median nerve [8]. Hence, the presence of a persistent median artery in our case may actually represent a concurrent high division of the median nerve: a small ulnar division representing a pre-ligamentous TMB and a large radial division containing the remaining trunk of the median nerve.
Finally, it is important to be aware that the persistent median artery may be the main blood supply to the radial two digits [7][8][9]. Therefore, hand vascularity should be checked before transection or excision of the persistent median artery [7], and this was done in our case. | 2017-08-02T18:34:59.332Z | 2017-01-13T00:00:00.000 | {
"year": 2017,
"sha1": "23c1ded3cf8e509551cfbdbf02971dfd725c893f",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00238-016-1271-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "23c1ded3cf8e509551cfbdbf02971dfd725c893f",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
194845691 | pes2o/s2orc | v3-fos-license | An Analysis of Mrs. Dalloway: The Phenomenon of Change and Disillusionment as Modernist Elements
Virginia Woolf’un Mrs. Dalloway (1925) isimli eseri modernist ogeler olan degisim ve gercege uyanis olgusunu iceren taninmis modernist bir romandir. Woolf, eser boyunca iluzyonel gercekligi ve gercege uyanisi vurgulayan karakter orneklerini okuyuculara sunmaktadir. Onceki calismalarda Clarissa Dalloway gercege uyanmis bir karakter olarak tasvir edilmemesine ragmen, hem Clarissa Dalloway hem de Septimus Warren Smith modernist bir toplumdaki gercege uyanmis bireyleri temsil etmektedirler. Bu nedenle, bu makalenin amaci benzerlikler tasiyan, sanrilardan arinmis Clarissa Dalloway ve Septimus Warren Smith karakterlerini inceleyerek Mrs. Dalloway adli eserdeki degisim ve gercege uyanis olgusunu analiz etmektir. Bu baglamda toplumsal onay, evlilik, savas ve medeniyetin urunleri olan kurumlara duyulan guven meseleleri bu makalenin kapsami dahilinde tartisilmaktadir.
INTRODUCTION
Mrs. Dalloway is marked by the essence of change and disillusionment, which is strongly felt throughout the novel and highlights the modernist spirit it revolves around. The sense of disillusionment towards the project of modernity, which is the harbinger of transformation and progress but ends up in destruction, is dominantly prevalent in Mrs. Dalloway. An understanding of the sense of disillusionment, thus, might require an understanding of the phases of modernity. In All That Is Solid Melts Into Air (1982), Marshall Berman describes modernity in three phases. According to Berman, the still unrecognized urge towards the modern observed from the end of the 16th century until the end of the 18th century constitutes the first phase of the modernity process. The term "modern" is not used deliberately by people during the first phase. As Berman states: "People are just beginning to experience modern life; they hardly know what has hit them. They grope, desperately but half blindly, for an adequate vocabulary; they have little or no sense of a modern public or community within which their trials and hopes can be shared" (Berman 1988: 16-17). The period of the French Revolution is regarded as the second phase of the modernity project, and it differs from the previous phase since it involves an intentional attempt at transformation. A revolutionary and rebellious tendency towards change is the main feature of this phase. The modern public structure emerges as a result of this transformation, enabling people to compare the past and the present; a comparison can be made between the conservative and the modern. As Berman suggests, the 20th century is regarded as the final phase of the modernity project. Owing to the Great War, people are no longer in deluded agreement with the false promises of the modern world and the modernity project. It is understood that change leads to clashes of powers and civilizations, and to innovations empowering malice and the means of destruction. Therefore, having confided in the falsified promises served through modernity, the modern individual is disillusioned. As a result, society experiences an awakening from its illusionary condition and is faced with the sense of disillusionment. While "modern" calls up the new and the desirable in the first place, experience and history show that modernity does not retain its value as an optimistic project at all. On the contrary, the modernity project ends up in the revelation of the terrible truth beneath the myth of progress and civilization.
The disillusionment experienced by Woolf as a survivor of the Great War causes in her, as in most modernist writers, a distrust of traditional means of expression. Unable to trust the ability of conventional narrative techniques to depict life in its real nature, she looks for new and experimental ways of expression that would make her readers question rather than simply accept what is provided to them through the narrative. Within a modernist perspective, readers are left to struggle to reach their subjective reality through their own attempts when Virginia Woolf puts aside the illusory, in other words traditionally imposed, reality.
Clarissa Dalloway and Social Approval
Completed in 1925, Mrs. Dalloway is categorized as the second modernist novel of the writer, in which she depicts a day of Clarissa Dalloway in London. The novel reflects the thoughts of various characters as Clarissa Dalloway runs daily errands and completes the arrangements for her party, which is going to take place in the evening of the same day. Mrs. Dalloway begins with the start of the day, continues with the depiction of the whole day, and ends after the course of the party. While the novel describes a single day in London, Virginia Woolf manages to present different characters and their inner thoughts, which are unrelated to each other. As readers are exposed to the inner monologues of the characters, they cannot necessarily link them together; however, Virginia Woolf manages to reflect the overall sense of frustration, change, and disillusionment that is prevalent at the center of the society in Mrs. Dalloway.
Considering Mrs. Dalloway as a modernist text, various critics have put forward diversified debates and analyses. In "Mrs. Dalloway and the Social System" (1977), Alex Zwerdling discusses the impact of the governing class on English society in the aftermath of the Great War. Zwerdling indicates that the novel reveals the superior position of the governing class (Zwerdling 1977: 120) (Whitworth 2009: 166). As a modernist novel, Mrs. Dalloway has been analyzed through various other perspectives; however, the sense of change and disillusionment in Mrs. Dalloway is an area that remains to be discussed critically.
Virginia Woolf embarks on Mrs. Dalloway by giving voice to Clarissa Dalloway, who is one of the main characters in the novel. From Virginia Woolf's depiction, it is understood that Clarissa Dalloway, an elderly woman in her fifties, constantly evaluates life and questions her profound choices about it. The portrayal of the different phases of Clarissa's life, from her maidenhood to her married status, allows the readers to form an idea about her development as an individual and to take a critical stance towards her thoughts. At the beginning of the novel, Woolf reflects a portrayal of Clarissa through the perspective of Scrope Purvis, a neighbor of the Dalloways, with the following statement: She stiffened a little on the kerb, waiting for Durtnall's van to pass. A charming woman, Scrope Purvis thought her (knowing her as one does know people who live next door to one in Westminster); a touch of the bird about her, of the jay, blue-green, light, vivacious, though she was over fifty, and grown very white since her illness. There she perched, never seeing him, waiting to cross, very upright. (Woolf 2003: 3) The external description of Clarissa Dalloway hints at her stern and nervous nature. As the novel progresses, the readers learn that this description is a result of the change and disillusionment that Clarissa experiences throughout her life. Woolf's use of "stiffen", "stern", and "upright" while describing Clarissa has an underlying meaning from the beginning of the novel. Clarissa carries a mixture of contrasts within herself. On the one hand she has a touch of the bird, vivaciousness and loveliness, but on the other hand she is a stern and upright woman. The contrast related to her demonstrates the sense of change or frustration that might be born out of the disillusionment she experiences in her life.
Michael H. Whitworth elaborates on the multifaceted, multilayered personality of Clarissa, arguing that "The body, to use an image which Woolf favored, is a hard shell covering a vulnerable, soft interior" (Whitworth 2009: 126), and he underlines that: "The distinction she draws between her private and public selves develops her earlier reflections about being both 'Clarissa' and 'Mrs. Richard Dalloway'; the idea of composing the disparate elements of the self into a unity is an artistic question" (Whitworth 2009: 125). Masking her inner personality with illusional features that are supposed to be socially desirable causes frictions in her real self. Therefore, Clarissa merges the disparate elements of the self in disillusionment. Her socially invented illusionary mask, the body, conceals her real self, her soft interior, until the protagonist reaches her self-enlightenment.
In Imagining Virginia Woolf: An Experiment in Critical Biography, Maria DiBattista also examines the impact of developing double identities, which she observes in both Clarissa and Bernard, a character in Virginia Woolf's The Waves, and claims that: "His conviction of the absolute difference between outer and inner self reiterates a common belief in Woolf's fiction first elaborated in Mrs. Dalloway, whose protagonist is struck and somewhat tormented by the difference between the private Clarissa and the public Mrs. Richard Dalloway" (DiBattista 2009: 33). Bernard shares a resemblance with Clarissa, since both characters carry multiple masks. With her character portrayals in different works, Woolf emphasizes the burden created as an outcome of developing split personalities for the sake of acquiring social approval.
Throughout the novel, Woolf hints that Clarissa rarely manages to observe and evaluate her life objectively. In "Women and Interruption in Between the Acts", Helen Southworth makes a related observation (Southworth 2007: 50). Clarissa shapes her choices and life according to "one room". Her understanding of her life might change when it is observed through another re-sitting. While her actions and decisions can be justified as reasonable from one point of view, they might be proven otherwise from another. Clarissa comes to the realization that there is more than one room; her life might be defined differently. Her realization of the variety of perspectives, in other words her disillusionment concerning possible alternatives, highlights relativism. The chaos in her thoughts might be resolved when Clarissa denies the validity of humanly constructed values and instead embraces the change and disillusionment that come with the elimination of human constructivism. The female protagonist's self-inflicted repressed condition and otherness are also relentlessly exposed through the language of the writer. Maria DiBattista examines Woolf's language and suggests that: Woolf's respect for the drastic discipline that is the woman writer's most conspicuous inheritance from her creative female forbears underlies her conviction that a woman's language, shaped in confinement, allows her "to say a great many things which would be inaudible if one marched straight up and spoke out." Yet we can also detect in her writing a growing restlessness with such prolonged imaginative confinement, a restlessness that surfaces, for example, in the exuberant opening of Mrs Dalloway, the novel in which Woolf's modernity and her heroine make exhilarating contact with the out-of-doors. (DiBattista 2009: 89) The hunger born of the sense of otherness concealed by the role of "the perfect hostess" and of Clarissa's oppressed condition displays itself through Virginia Woolf's bursting language. The prevalent confinement of the female protagonist is revealed by the use of intensified language. DiBattista advances her explanation of the hunger exposed in Woolf's language. Although Woolf seemingly describes London and its surroundings on a June day, along with Clarissa Dalloway running daily errands for her party, Woolf's intense use of punctuation underlines an extraordinary condition within the description of an ordinary day. The conflict between the meaning and the punctuation, a modernist feature of Woolf's writing, points out the confinement of Mrs. Dalloway. The sense of disillusionment unveils itself through the abundant use of linguistic elements. The exaggerated amount of language items within the passage indicates contradiction and a break from the traditional, which is peculiar to modernist texts.
Love and Marriage in an Illusional World
As the novel progresses, Virginia Woolf presents the reader with a Clarissa who was once torn between suitors. Before Clarissa is Mrs. Dalloway, she is in a relationship with Peter Walsh, who represents values opposite to those of her current husband, Richard Dalloway. While Clarissa is walking in Westminster in order to buy flowers for her party, she thinks "he could be intolerable; he could be impossible; but adorable to walk with on a morning like this" (Woolf 2003: 5). The narrator adds: "So she would still find herself arguing in St James's park, still making out that she had been right -and she had too -not to marry him" (Woolf 2003: 6). Peter Walsh's criticism of Clarissa for being the perfect hostess exasperates her. From Peter Walsh's point of view, fulfilling the necessities of a hostess and acquiring a respectable place in society by marrying someone of status indicate the banality of an individual. His irony in telling Clarissa to marry a Prime Minister underlines the insignificance of social status for him, because Peter Walsh is interested in the state of the world rather than in social standing. Although Clarissa's perspective is different from Peter Walsh's and she is in favor of acquiring a remarkable position in her society, she is still infuriated by Peter's speech during their breakup. In Imagining Virginia Woolf: An Experiment in Critical Biography, Maria DiBattista refers to the criticism of Peter Walsh and the reaction of Clarissa and claims that: "Neither sagacity nor innocence is suggested by Woolf's most searching and sardonic novelistic portrait of the 'perfect hostess,' as Peter Walsh sneeringly calls the worldly Clarissa Dalloway. Clarissa is at first dismayed by the epithet, but ultimately embraces it as a creditable even noble office" (DiBattista 2009: 60). In spite of their being parted for a "hundred years", Peter Walsh still has an impact on Clarissa, and the character finds herself questioning their breakup, although after all these years she does not clearly question her decision. Her implicit questioning indicates the disillusionment that she experiences with her life and the change that appears with her choices. While leading her life according to expectations symbolizes her acceptance of and surrender to the illusional reality, Clarissa's implicit questioning subsequently underlines the change and disillusionment of the character.
Clarissa's inner monologues demonstrate her satisfaction with the choices she makes about her marriage; however, the feeling of contentment is not justified, because Clarissa constantly compares Richard Dalloway and Peter Walsh (Woolf 2003: 6). Clarissa states her expectations of how a marriage should be, and her marriage with Richard symbolizes a rational marriage; however, her sincerity and contentment with her marriage are not clear. From the way of expression Virginia Woolf adopts whilst reflecting Clarissa's thoughts, it is seen that Clarissa is not against having distance between Richard and herself under the title of "independence", but she still cannot stand the idea of Peter's having a relationship with another woman, which signifies that her distant boundary with Richard is perhaps due to the fact that she does not care about him as she does for Peter. Clarissa might be yearning for the powerful bond she once had with Peter, who has also been a failure with his choices. In other words, Clarissa sounds resentful rather than content with her marriage.
Clarissa Dalloway is a sensitive woman as well. Virginia Woolf's portrayal of Clarissa displays her joy, excitement, and reaction towards the beauty of anything she comes across in her daily life, but Clarissa is also criticized by society. She maintains a wall between herself and other people, including her loved ones, which causes people to evaluate her as an insensitive and cold woman. Rejecting Peter Walsh prevents Clarissa from living an actual passionate relationship throughout her life. She has an average marriage in which she preserves her distance from her husband. The dialogue scene between the servant and Clarissa mirrors her disconnection from her environment: "… on the river-bed feels the shock of a passing oar and shivers; so she rocked: so she shivered" (Woolf 2003: 22). The fact that Richard Dalloway lunches out with Lady Bruton irritates Clarissa, but she controls her behavior and does not judge or criticize Richard openly. Although she is sensitive, Clarissa intentionally chooses to mute her thoughts. Clarissa Dalloway upholds her personal space by acting out the illusional role of a cold and sane woman in society.
Clarissa Dalloway keeps questioning the meaning of everything, such as her decisions, her marriage, and her social status, along with her existence, as she runs errands for her party in the morning: Did it matter then, she asked herself, walking towards Bond Street, did it matter that she must inevitably cease completely; all this must go on without her; did she resent it; or did it not become consoling to believe that death ended absolutely? But that somehow in the streets of London, on the ebb of things, here, there, she survived, Peter survived, lived in each other, she being the part, she was positive, of the trees at home; of the house there, ugly, rambling all to bits and pieces as it was; part of people she had never met; being laid out like a mist between the people she knew best, who lifted her on their branches as she had seen the trees lift the mist, but it spread ever so far, her life, herself. (Woolf 2003: 7) Clarissa judges and criticizes herself constantly. She comes to conceive of her existence as that of nobody of importance. While she weighs her decisions and faces the frustrations she has never voiced openly, she questions whether any of her choices matter in the end. Regardless of being the perfect hostess or the woman who would receive Peter Walsh's approval, the fact that her life comes to an end with death exposes the insignificance of her existence. As the novel progresses, it is understood that death itself is a dream that signifies an escape for Clarissa, because death can free one's soul or self from leading illusional lives and acting out illusional identities. The fact that Clarissa relies on the consolation provided by death underlines her disillusionment and the ensuing pessimistic attitude toward her choices and the unsatisfactory changes in herself and her life.
Woolf's depiction of Clarissa indicates Clarissa's urge to please others and her dedication to achieving her plans as "a good hostess": How much she wanted it -that people should look pleased as she came in, Clarissa thought and turned and walked back towards Bond Street, annoyed, because it was silly to have other reasons for doing things. Much rather would she have been one of those people like Richard who did things for themselves, whereas, she thought, waiting to cross, half the time she did things not simply, not for themselves; but to make people think this or that; the perfect idiocy she knew (and now the policeman held up his hand) for no one was ever for a second taken in. Oh if she could have had her life over again! She thought, stepping on to the pavement, could have looked even differently. (Woolf 2003: 8) Everything Clarissa does throughout her life serves to please others; a constant feeling drives her to act and live so as to make others content. Her obsession with putting the needs of others first and receiving the approval of society shapes her life. Clarissa notices other individuals, like Richard Dalloway, whose behavior is shaped according to what they actually want to do rather than by the pursuit of social approval, and she criticizes herself for wasting her life by minding what others would think. Considering Clarissa's self-criticism about her behavior throughout her life, it can be suggested that her choices and the type of attitude she performs consume her life in the pursuit of illusionary priorities. Her once-essential thoughts and choices are transformed as a result of the disillusionment that comes forward with the impact of change.
War and Disillusionment
Virginia Woolf presents the shades of change and disillusionment against the darkness of the Great War through the portrayal of Septimus Warren Smith, the other essential character in Mrs Dalloway. Septimus is depicted as a young man who serves in the Great War and fights for his country. His experience of war entails a terrible confrontation with trauma, and he begins to suffer from almost everything due to his fear of "feeling nothing". He marries Lucrezia, a nice Italian girl whom he meets in Italy at the end of the Great War, and they start a life together in London; but witnessing the loss of his friend Evans, as well as the other deadly events he encounters during the war, deranges his sanity, and Septimus begins to complain about not feeling anything anymore.
Woolf introduces Septimus Warren Smith to the readers in the same setting as Clarissa Dalloway, as both characters inhale the same atmosphere of the London streets on the very same day (Woolf 2003: 11-12). Woolf gradually inscribes the traumatic atmosphere that appeared in the aftermath of the Great War. The impact of the Great War and modernity, the search for the meaning of one's self, and the analysis of the essence of the world gain a darker tone as Woolf progressively moves from the introduction of Clarissa to that of Septimus. The impact of change and disillusionment is demonstrated by Septimus Smith as well. Septimus fears he does not feel anymore, but his questioning of himself and the world ironically indicates the opposite.
The fall of the modernity project is indicated as Woolf introduces Septimus Warren Smith. The stereotypical optimistic individual who relies on the government, the utility of war, and the advantages of modern innovations is refuted by Virginia Woolf in Mrs Dalloway. Following the war, it is not clear where the world's whip will descend. Therefore, the world is no longer considered sturdy and stable. "The throb of engines" hints at technological innovations and at the malice and further death that come along with them, whereas "the motor car", which possibly carries a royal member of the monarchy, renders Septimus and the rest of the people around him trivial and disposable. As the balance of one's mind, just like that of Septimus, is shattered by the fall of the modernity project and the darkness of the Great War, Septimus becomes derailed.
Woolf reflects the embedded pessimistic approach of modernist thinking toward institutional units, which evokes mistrust, through Septimus Warren Smith. The roots of his derailed condition lie in the Great War. Considering that the Great War is an unattained and failed goal in terms of bringing justice and a solution to humanity, Septimus is psychologically destroyed.
As the novel progresses, his wife Lucrezia attempts to help her husband; however, her understanding of Septimus is inadequate: "Look, look, Septimus!" she cried. For Dr Holmes had told her to make her husband (who had nothing whatever seriously the matter with him but was a little out of sorts) take an interest in things outside himself. (Woolf 2003: 16-17) Woolf underlines the insufficiency of their comprehension of the importance of one's psychological well-being. Lucrezia and Dr. Holmes represent the traditional approach in this respect. Ignoring the inner state of mind, as Virginia Woolf indicates, the reality they construct is based on the physical level, the appearance. Lucrezia experiences difficulty in categorizing the condition of Septimus because she cannot hold a bipartite judgement of him. Having regarded Septimus as a brave soldier prevents Lucrezia from seeing him otherwise, which causes her to reject him now, even as he is being "himself", Septimus. Lucrezia's sharp evaluation of Septimus is illusional within a modernist perspective, as she relies firmly on Dr. Holmes and on her narrow understanding of Septimus.
Virginia Woolf highlights the controversial issues of modern thought via Septimus. His inner arguments display the matter of importance to the mankind: He, Septimus, was alone, called forth in advance of the mass of men to hear the truth, to learn the meaning, which now at last, after the toils of civilization -Greeks, Romans, Shakespeare, Darwin, and now himself -was given whole to … "To whom?" he asked aloud. "To the Prime Minister," the voices which rustled above his head replied. The supreme secret must be told to the Cabinet; first, that trees are alive; next, there is no crime; next, love, universal love, he muttered, gasping, trembling, painfully drawing out these profound truths which needed, so deep were they, so difficult, an immense effort to speak out, but the world was entirely changed by them forever. (Woolf 2003: 51) Septimus feels alone as a man due to the fact that he has nobody that could understand him. The world around him regards him as a mad man and his war trauma is underestimated. Woolf gives voice to Septimus in Mrs. Dalloway, revealing his victimization. Following the destruction in the aftermath of the Great War and other entailing issues considered important politically, it is the importance of nature and love that the Cabinet and Prime Minister should realize in Septimus's reality. Political discussions concerning the war and suggestions for a solution turns out to be illusional and pointless. While the emerging change after the Great War reveals the disillusionment experienced with false focus, it also indicates where and how civilization should be looked into.
Virginia Woolf continues narrating the indescribable essence, the beauty that might be taken as the leading solution, through Septimus Warren Smith's flow of thoughts: So, thought Septimus, looking up, they are signaling to me. Not indeed in actual words; that is, he could not read language yet; but it was plain enough, this beauty, this exquisite beauty, and tears filled his eyes as he looked at the smoke words languishing and melting in the sky and bestowing upon him in their inexhaustible charity and laughing goodness one shape after another of unimaginable beauty and signaling their intention to provide him, for nothing, for ever, for looking merely, with beauty, more beauty! Tears ran down his cheeks. (Woolf 2003: 16) Septimus, the only character who is acknowledged as "insane", is perhaps the only one who is truly disillusioned and actually aware of the changing reality. The meaning of life, the exquisite beauty, does not originate in the murderous civilization that ended up in the Great War. Rather, it is something simple and pure.
Referring to Septimus Warren Smith, the narrator in Mrs. Dalloway questions "Was there, after all, anything to make a passer-by suspect here is a young man who carries in him the greatest message in the world, and is, moreover, the happiest man in the world, and the most miserable?" (Woolf 2003: 62). Christine Froula comments on this passage as well (Froula 2005: 110). The world Septimus physically lives in is damaged, and it renders him "alone". The fact that Septimus is branded "mad" in a society whose members are mostly illusioned bears a critical role in the text. Septimus, the sole mad character in Mrs. Dalloway, represents and triggers the change and disillusionment.
An understanding of "Septimus" might be acquired through DiBattista's text which discusses Virginia Woolf and Woolf's views on Montaigne. In Imagining Virginia Woolf an Experiment in Critical Biography, DiBattista suggests that: To Montaigne alone belongs the art "of talking of oneself, following one's own vagaries, giving the whole map, weight, colour, and circumference of the soul in its confusion, its variety, its imperfection." Montaigne, reading the book of himself, counsels us that "Communication is health, communication is happiness," a message the disordered mind of Septimus Smith makes the burden of his prophecy in Mrs. Dalloway even as he enacts the dark fate of modernity -the death of the soul. (DiBattista 2009: 112-113) The world Septimus lives in is shattered under the false premises of the modernity project. Although he has a message to deliver and he is not the mad one, there is not another entity with whom he can communicate. His sole means of company that could understand Septimus is merely himself. Ending his life signifies conveying the message to the rest. As a soldier, former participant of the dark fate of modernity, Septimus keeps his sanity by isolating himself from the world that might further pollute his mind and soul.
Trust Towards Institutions as Products of Civilization
Lucrezia and Septimus follow the advice of Dr. Holmes, who is a general practitioner, and consult Sir William Bradshaw, who owns private nursing homes in the countryside. Woolf's creation of Dr. Holmes and Sir William Bradshaw represents the mistrust towards, and the ensuing rejection of, human nature through institutionalization. Dr. Holmes ignores the traumatic experiences of Septimus, blames him, on account of his mental condition, for representing British men poorly to his foreign wife, Lucrezia, and constantly suggests indulging Septimus in various hobbies, such as golf, to cure his mind.
Like Dr. Holmes, the character of Sir William Bradshaw does not display admirable values, either. Sir William Bradshaw, also an acquaintance of the Dalloways, builds the success of his treatment technique on the importance of proportion and conversion. Refraining from using the word "insane" or "mad", Sir William Bradshaw claims that a man can be out of adequate proportion and can be treated with rest and isolation from his surroundings in his private asylum (Bonikowski 2013: 133). As narrated in Mrs. Dalloway, "Death is defiance. Death was an attempt to communicate; people feeling the impossibility of reaching the centre which, mystically, evaded them; closeness drew apart; rapture faded, one was alone. There was an embrace in death" (Woolf 2003: 205). Having no one to communicate with, Septimus realizes that he cannot disseminate the message. In an insane world where the only sane one is labelled "insane", Septimus concludes that ending his life is the ultimate means of conveying his message.
Virginia Woolf emphasizes the fall of the modernity project through the mental collapse of Septimus. In his so-called insane condition, Septimus expresses his complaints about human nature: "Once you fall, Septimus repeated to himself, human nature is on you. Holmes and Bradshaw are on you. They scour the desert. They fly screaming into the wilderness. The rack and the thumbscrew are applied. Human nature is remorseless" (Woolf 2003: 72). Woolf implies that the selfishness of human nature is the core of evil. Septimus, as an embodiment of the failure of the modernity project, experiences the fall and cannot be cured or regain his strength, because the priority of mankind is power and influence, which is subsequently confirmed by the Great War. Septimus's end comes abruptly: "'I'll give it you!' he cried, and flung himself vigorously, violently down on to Mrs. Filmer's area railing" (Woolf 2003: 108). Lucrezia finds out about the unreliability of the doctors and is almost content with her marriage as she begins to form the connection with Septimus that she has long yearned for; however, the representatives of the institutional units do not leave them on their own, and Dr. Holmes visits Septimus. Although Lucrezia tries to prevent a confrontation between Septimus and Dr. Holmes, Holmes' patriarchal power as a man hampers her efforts. As Woolf underlines, the death he is planning to reach is not an end for him, because the oppression and the ignorant attitude towards the human are a means of personal invasion against Septimus; yet, from a modernist perspective, Septimus has managed to realize his own self, and he is well aware of the fact that death is a means of self-preservation. Woolf's account of Septimus's deliberate delay in throwing himself out of the window before Dr. Holmes steps in emphasizes his confidence in his self-acknowledgement. Septimus stands among the few people who fulfil themselves, and he confronts Dr. Holmes by stating that he will give it to them; what Septimus surrenders is not his whole self or identity but only his body. In other words, in their insane world, where Septimus is misunderstood as an insane man, he manages to preserve his soul and uses death as a solution that leads to the endless exquisite beauty, universal love.
Clarissa and Septimus as Disillusioned Characters
The life within "sane" society of destruction, patriarchal oppression and the violation of the less fortunate in the name of obtaining civilization is an illusional sanity and goodwill. Woolf presents readers someone who preserves his sanity via quitting this illusional life through the characterization of Septimus Warren Smith; however, Clarissa Dalloway is not much different from Septimus Warren Smith. Referring to the resemblance between Clarissa and Septimus in Shell Shock and the Modernist Imagination the Death Drive in Post-World War I British Fiction, Wyatt Bonikowski states that: Over the course of the novel, Woolf creates a number of structural parallels between Septimus and Clarissa, using motifs to link them in contingent ways. By bringing their stories together at the novel's conclusion, Woolf motivates the arbitrary relation between them, making their connection seem necessary. This narrative structure had long been in Woolf's mind. In 1902, 20 years before she wrote the first stories that were to become beginning of Mrs. Dalloway, Woolf wrote in a letter to Violet Dickinson of a play she was planning to write: "I am going to have a man and woman -show them growing up -never meeting -not knowing each other-but all the time you'll feel them come nearer and nearer. This will be the real exciting part" (qtd in Briggs 130).
Even though Septimus and Clarissa "meet" at the end of the novel, the "exciting part"-the sense of two people coming ever nearer but never actually meeting-is preserved through Septimus's death. (Bonikowski 2013: 134) Evidently, creating the characters of Clarissa and Septimus was a long-planned task for Virginia Woolf, as she mentions in her letter years earlier. Clarissa and Septimus share similarities in their seemingly different worlds. Both protagonists are criticized by society; Septimus is criticized for being "insane" because he is considered to have lost his connection with his senses and is found irrelevant by Dr. Holmes, Sir William Bradshaw, and his wife; however, he is in fact highly sensitive, and he maintains his privacy and prevents the invasion of his personal space by ending his "life". Both Clarissa and Septimus are aware of the illusional lives and fake identities that are staged in society. Both protagonists come to the realization of the sense of disillusionment and enlightenment; however, Septimus could be asserted to be the brave one, since he rejects obeying the illusional reality, whereas Clarissa surrenders to society with her illusional mask, although she becomes aware of the disillusionment.
Contemplating the subliminal understanding between Clarissa Dalloway and Septimus Warren Smith, Wyatt Bonikowski states that: "It is this madness, I would like to suggest, that Woolf attempts to evoke in Mrs. Dalloway. Through Septimus's story, she makes a narrative out of the seductive allure of death and the dangerous leap of poetic intensity, both in Septimus's mad desire for communication and in Clarissa Dalloway's attempt to grasp the meaning of his suicide" (Bonikowski 2013: 145). Throughout the novel, Septimus searches for a means of conveying his message about the world, whereas Clarissa Dalloway confronts herself objectively, step by step. Both characters experience the sense of disillusionment gradually. The mutual connection between the two protagonists might be described as an intensity or restlessness as excessive as madness itself. On the mutual link between Septimus and Clarissa, Wyatt Bonikowski claims that: without fully knowing what she is doing, she approaches the "centre" she vaguely apprehends, that "thing" that matters, at the core of Septimus's story, which recedes as she approaches and which she never grasps, but whose proximity creates an intensity of feeling beyond proportion, a jouissance more than mere aesthetic pleasure. (Bonikowski 2013: 173) To Clarissa, the excessive lust towards death, or the intense enjoyment in imagining the death of Septimus, is not due merely to aesthetic pleasure. Among the other attendants of the party, Clarissa is different, for she manages to purify herself of her illusionary thoughts. Learning the story of Septimus does not intimidate or distance Clarissa, unlike the others. Though "Mrs. Dalloway" carries her socially invented mask on her face, Clarissa yearns for what Septimus Warren Smith accomplishes: to figure out the full meaning of life and to unload the extra weight freely.
At the end of the novel, the party of Clarissa Dalloway pleases all the guests. With the success of the party, Virginia Woolf underlines Clarissa's achievement in actualizing the illusionary roles of perfect hostess, wife, and friend. Ironically, though, Clarissa is resentful despite her successful party. Considering the fact that Clarissa criticizes herself and her lifetime choices during the day, Woolf pairs the female protagonist's resentment with her realization of the disillusionment.
CONCLUSION
Although Mrs. Dalloway is composed as a depiction of an ordinary day in the life of an ordinary housewife, as a modernist text it implicitly discusses the disillusionment experienced by modern individuals. Vagueness and incompleteness are concepts that contribute to the discussion of the disillusionment experienced by the characters in the text, in the sense that they echo fluidity and relativity rather than traditional rigidity and certainty. By eliminating the imposition of a universal reality, Woolf also reveals a disillusionment concerning the positivistic universalism of modernity. Woolf chooses a unique way of bringing this disillusionment to light through the depiction of a section of the lives of ordinary characters, a housewife and a former soldier, who experience friction between the illusion imposed by the values of modernity and the reality as they themselves experience it. | 2018-12-15T04:27:06.714Z | 2017-12-18T00:00:00.000 | {
"year": 2017,
"sha1": "512997a8c01f254191e78d0d2dbd6ec8f568aa6e",
"oa_license": "CCBYNC",
"oa_url": "https://dergipark.org.tr/en/download/article-file/401775",
"oa_status": "GOLD",
"pdf_src": "Unpaywall",
"pdf_hash": "512997a8c01f254191e78d0d2dbd6ec8f568aa6e",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Art"
]
} |
247797268 | pes2o/s2orc | v3-fos-license | Generation of CNN Architectures Using the Harmonic Search Algorithm and Its Application to Classification of Damaged Sewer
When used as an image processing method, convolutional neural networks (CNNs) offer no way to verify in advance which configurations will achieve high performance; nevertheless, they perform well enough to be used in many fields. The architecture of a CNN and its various parameters determine its performance, and it is impossible to test every combination that could determine that performance. Therefore, well-known CNN models are generally used. Recently, various methods for adjusting the parameters of CNNs or for generating CNN architectures have been studied. Methods using metaheuristic algorithms often focus on parameter tuning or on simple hierarchical architectures. This paper proposes a method that uses the harmony search (HS) algorithm, one of the metaheuristic algorithms, to create a CNN model with a complex architecture that can be applied to different datasets. This study aimed to generate a CNN architecture using few computing resources and to verify the results. To build the CNN model in units of cells, the internal and hierarchical architecture of the cell was created by training on the CIFAR image dataset through HS, and the performance was confirmed by applying the resulting model to the classification of a damaged sewer pipe image dataset.
I. INTRODUCTION
In image classification, the task that made machine learning popular, CNNs [1] have become a frequently used method. CNNs, which perform well in image processing, are used in various fields, and models using different layers and connection methods are released every year. Since the emergence of AlexNet [2], which increased the popularity of CNNs, other CNN models such as GoogleNet [3], VGGNet [4], ResNet [5], and DenseNet [6] have been created. Each newly created CNN model has shown that the performance of CNNs can be improved by its distinctive features. However, these are models produced by a person directly determining the layer types, the connection architecture of each layer, and the parameters, so the performance of a CNN model depends on the type of problem to which it is applied. It is difficult for humans to design a different CNN architecture for each type of problem; when creating a CNN model, it is impossible to find the best model by manually adjusting the layer types, the connection architecture, and the parameters and checking the performance of every candidate. Therefore, various studies are underway to generate better CNN models. This paper proposes a method for generating CNN architectures using the harmony search (HS) [7] algorithm. The search is executed on the CIFAR image dataset, and the result is applied to a damaged sewer pipe image dataset.
One of the methods to improve the performance of a CNN model is to tune the parameters of the layers used in the CNN.
For parameter tuning, there are methods that use genetic algorithms [8-10], particle swarm optimizers [11-14], or HS [15, 16]. The parameter tuning method differs slightly in each study, but the common point is that the type and order of the layers are fixed, while the kernel size, stride, padding, and, in some cases, the number of output channels of each layer are changed to obtain better results. Parameter tuning has the advantage that it takes less time because the CNN layer structure itself is fixed; however, it has the disadvantage that the performance of the original CNN model cannot be greatly improved, since the layer types and order are fixed.
There have also been studies proposing a method for generating the CNN architecture itself. Studies on CNN architecture generation have been published using various methods such as neural architecture search (NAS) [19] that use reinforcement learning [17], and genetic CNNs [20] that use genetic algorithms (GAs) [18]. The following is a list of some of the CNN architecture generation methods that have been published: Meta-QNN [21], hierarchical evolution [22], largescale evolution [23], genetic programming CNN (CGP-CNN) [24], efficient architecture search (EAS) [25], evolving deep CNN [26], advanced neural architecture search (NASNet) [27], and CNN-genetic algorithms [28]. Each study uses a slightly different method to construct the CNN architecture. The CNN architecture generation consists of several different methods; such as fixing the overall architecture of the CNN and changing only the connection relationship between layers; adjusting both the CNN architecture and the parameters of the layers; and a method to apply the CNN architecture obtained from one dataset to another dataset. Evidently, this topic has been studied from a variety of perspectives.
Generating CNN architectures and comparing their performances is difficult. Because CNN architecture generation requires a lot of computational resources, the performance of the CNN model depends on the amount of time for which it is used--even if the same method is used. In the case of NAS, 800 graphic processing units (GPUs) were used for 28 days to generate a CNN architecture with 93.99% accuracy with the CIFAR10 dataset. Therefore, if one GPU was used, approximately 20,000 days would be required. Contrastingly, in the case of the genetic CNN, 17 GPUs were used for one day to build a model with 92.90% accuracy on the CIFAR10 dataset. Accuracy is an important factor in a CNN classification model, but considering the time and computational resources used to obtain each of them, it is difficult to conclude whether NAS or genetic CNN is the better method. In addition, because each study has differences in the learning method used, data preprocessing method, and generation purpose, etc., a given CNN architecture generation method cannot be appointed as the best method--even if it is performed on the same dataset. Each has its own advantages and disadvantages.
Using the metaheuristic algorithm in the existing CNN architecture generation methods ensures that the algorithm's population values match the layer type connection relationship, and adjusts the layer parameters. The method using the metaheuristic algorithm can check the result in less time and is easier to manufacture than the method using reinforcement learning. This is because the adjusted value is large in one model result confirmation. However, using a metaheuristic algorithm has the disadvantage that the size of the CNN architecture produced is determined according to the number of variables in the algorithm. Among these metaheuristic algorithms, the size of the CNN architecture to be generated while using HS is not fixed; in this paper, we propose using a CNN model produced in a dataset other than that used to produce the CNN architecture.
The main features of the CNN generation method using HS proposed in this study are as follows. First, CNN architectures generated using existing metaheuristic algorithms mainly work by changing the connection architecture of the layers, or by changing the layer types in a CNN model with a fixed connection architecture. The existing methods have the advantage that they can be easily applied to various fields, but the disadvantage that the number of variables in the algorithm is fixed, because the number of layers in the CNN architecture does not change. This study demonstrates that, among the different types of metaheuristic algorithms, HS can be used to generate a complex model in CNN architecture generation as well; the various variables that determine the cell structure are set using the unit (cell) employed in NASNet. The internal structure of the cell is divided into as many as three levels, each composed of three layers, and a cell-unit model is created through the add parts between the layers.
Second, in existing CNN architecture generation methods, the number of epochs used for comparing CNN model performance differs in each study, and performance is compared using the accuracy metric. In this study, we compare the performance of the CNN models over 20 epochs. As a feature of the proposed method, in addition to the accuracy at 20 epochs, the accuracy before 20 epochs and the number of CNN model parameters are used to compare CNN model performance. With this method, a model whose accuracy has not yet converged at 20 epochs can still be assessed, and a CNN model with as few parameters as possible is selected from among architectures that use multiple layers.
Third, to generate a CNN architecture, it is necessary to check the training results of various CNN models; thus the larger the CNN architecture to be created, the greater the computational resources and time required to obtain an appropriate CNN model. Therefore, as it takes a long time to learn the CNN architecture of a large image, the cell structure of NASNet is adopted. This study showed that a model trained on a small sized CIFAR image dataset using a cell structure could be applied to a large sized sewer pipe image dataset to produce results.
The remainder of this paper is as follows. Related work is discussed in Section II. Section III describes the HS algorithm used to generate the CNN architecture and the method for setting the main parameters of the HS algorithm. In Section IV, we describe the generation of a CNN architecture using HS. In Section V, we describe the conditions used to generate the CNN architecture. In Section VI, we compare the CNN architecture generated using HS with that of other CNN models. Section VI presents the conclusions of this study.
A. CONVOLUTIONAL NEURAL NETWORKS
CNNs are a type of neural network that produces good results with fewer computations than a network of fully connected nodes. A CNN mainly uses two kinds of layers: convolution layers and pooling layers. The convolution layer uses the concept of a filter to perform convolution operations on the input data. A filter is treated as a matrix whose size is called the kernel size. Convolution repeatedly multiplies the filter, which has the same number of channels as the input, with a region of the input image, adds the products across channels, and stores the result. The filter moves along the given stride in the horizontal direction in the input image, moves to the end, then moves along the given stride in the vertical direction, and this process repeats. After all the work is completed, the resulting matrix of stored values becomes an output called a feature map. The number of feature maps is specified as a parameter of the corresponding layer. The convolution process can also treat the edge of the input image as if it were additionally padded with values of 0, which is referred to as padding. The main parameters used in the convolution layer are the number of output channels, kernel size, stride, and padding. The pooling layer operates in the same sliding-window manner as the convolution layer, but the filter itself has no values, because the highest or average value inside the window is used instead of the weighted sum of the convolution operation. Because the pooling layer has no filter with values, the numbers of input and output channels are the same. The main parameters used in the pooling layer are kernel size, stride, padding, and the type of pooling operation. Among the parameters of the convolution and pooling layers, the kernel size, stride, and padding determine the size of the feature map; thus the size of the output feature map can be increased or decreased relative to the input according to the values of these three parameters. The convolution layer also has several names depending on how it is used. One of them, depth-wise separable convolution, is composed of two layers: a depth-wise convolution and a point-wise convolution [29]. Unlike a general convolution layer, depth-wise convolution generates a feature map for each channel rather than multiplying the input channels by a filter and adding all the values. As a result, for the same number of output channels, depth-wise convolution requires less computation than a standard convolution layer. However, as the number of output channels increases, the amount of computation in the next operation increases significantly. Therefore, point-wise convolution is used to reduce the number of channels increased by the depth-wise convolution. Point-wise convolution is a layer whose purpose is to adjust the number of output channels using a convolution layer with a 1 × 1 kernel. Because the kernel is small, the amount of computation is small; it is therefore used to reduce the computation in the next layer by reducing the number of output channels, although some information is lost as the number of channels is reduced. In this study, the number of channels is multiplied by 8 in the depth-wise convolution.
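To make the relationship between the depth-wise and point-wise stages concrete, the following minimal PyTorch sketch builds a depth-wise separable convolution with the channel multiplier of 8 mentioned above; the class name, the kernel size of 3, and the example tensor shape are illustrative assumptions rather than the exact layer used in the generated cells.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    # Depth-wise convolution with a channel multiplier of 8, followed by a
    # 1 x 1 point-wise convolution that sets the final number of output channels.
    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        # groups=in_channels makes each filter act on a single input channel
        self.depthwise = nn.Conv2d(in_channels, in_channels * 8, kernel_size,
                                   padding=kernel_size // 2, groups=in_channels)
        # 1 x 1 convolution reduces the expanded channels to out_channels
        self.pointwise = nn.Conv2d(in_channels * 8, out_channels, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 16, 16)
print(DepthwiseSeparableConv(32, 64)(x).shape)  # torch.Size([1, 64, 16, 16])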
Among the methods used in CNNs is one called the skip connection, which was first introduced in [30]. Backpropagation, the primary learning method of neural networks, suffers from the vanishing gradient problem in deep networks, which degrades their performance. The vanishing gradient refers to cases where values less than one are repeatedly multiplied together during backpropagation; as the gradient values used in training become too small, training is not performed properly. To avoid this problem, a bypass that does not pass through certain layers, called a skip connection, is created. ResNet [31] is a representative CNN model using skip connections, which are an effective method.
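A minimal sketch of a skip connection in PyTorch is shown below; this is the generic residual block popularized by ResNet, not the specific bypass used inside the cells of this study, and the layer sizes are assumptions for illustration.

import torch.nn as nn

class ResidualBlock(nn.Module):
    # Two 3 x 3 convolutions whose output is added to the unmodified input,
    # so gradients can bypass the convolutions during backpropagation.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.act = nn.ReLU()

    def forward(self, x):
        out = self.act(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.act(out + x)  # skip connection: add the unmodified input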
B. RELATED WORK
There have been many studies on improving the performance of CNNs without human involvement.
In the early days, methods [8][9][10][11][12][13][14][15][16] were used to adjust parameters in the layers of CNNs. These methods fixed the type of layer and the connection architecture and changed only the layer parameters to change the performance of the CNN model. However, the disadvantage was the need to use an already created CNN model.
Subsequently, methods for generating the CNN architecture itself were developed; methods such as genetic CNNs [20], which use a metaheuristic algorithm, determine the connection architecture between layers. In such methods the number of variables in the algorithm is fixed: a small number of variables fixes the layer types together with the major parameters, and the order in which the layers are connected is determined from those variables. The method then changes the connection order of the layers or adds skip connections to generate a CNN architecture. Generating a CNN architecture by searching over layer connections has the advantage that the architecture itself is not complicated, so it can be used easily and its performance does not take long to evaluate. However, it is difficult to create a large CNN architecture this way, and the layer types are fixed. In contrast, NAS [19], which uses a recurrent neural network (RNN) [32] to determine almost the entire architecture of a CNN, is a representative method for determining both the layer types and the layer connection architecture. The disadvantage of NAS is that it spends a significant amount of time generating a CNN architecture that achieves relatively low accuracy. NASNet [27], however, abandoned several parameters and focused on the architecture of the CNN, showing that it can produce better results, and has influenced various studies on CNN architecture generation since then.
III. PSF-HARMONY SEARCH ALGORITHM
The HS algorithm [7] is a metaheuristic algorithm that uses harmony memory (HM) to store the solutions with high fitness values found during execution, and gradually obtains better fitness values. The characteristic of HS is that parts of the solutions in HM are reused in the individual solution to be evaluated in the next iteration. HS has three main parameters: the harmony memory consideration rate (HMCR), the pitch adjustment rate (PAR), and the bandwidth. When a new individual is created in HS, for each part of the solution, one of the values stored for that part in the HM is used with the probability given by the HMCR. If a part of the new solution takes its value from the HM, a random value within the bandwidth is added to it with the probability given by the PAR. If a part of the new solution does not refer to the HM, a random value is used. Therefore, in HS, the HMCR, PAR, and bandwidth are important parameters that determine the exploration and exploitation performance of the algorithm. A parameter-setting-free (PSF) method [33] that automatically changes the HMCR, PAR, and bandwidth of HS as the algorithm executes was devised; in this study, the advanced PSF-HS [34] is used. Advanced PSF-HS induces exploitation by increasing, as the HS run progresses, the values of two parameters: the HMCR, which is the probability of taking a value stored in the HM, and the PAR, which is the probability of adding a value within the bandwidth. The advanced PSF-HS used in this study computes the HMCR, PAR, and bandwidth of the i-th iteration according to equations (1)-(3).
where fit_obj is the target fitness, fit_mean is the average fitness of the solutions stored in the HM at the time of execution, fit_start is the average fitness of the solutions entered into the initial HM, fit_i is the fitness obtained at the i-th iteration, and n(val) is the number of HS variables.
IV. PROPOSED METHOD
Herein, we propose a new method for determining the fitness and creating a cell composed of multiple layers to generate a CNN architecture with few parameters.
A. FITNESS
In most studies on CNN architecture generation, accuracy on a given classification problem is used to compare the performance of the candidate CNN models. In some studies, accuracy after only a small number of epochs was used to evaluate a CNN model [19, 21, 24-26]. This study compares the performance of the generated CNN models at 20 epochs and performs the final performance evaluation at 300 epochs. The fitness of the HS used in the performance comparison of the CNN models was determined by adding other factors rather than using only the accuracy at a specific epoch. Because the performance of the CNN models must be compared after only 20 epochs, there is an inevitable difference from the results of training for 300 epochs. To predict future performance from the result of a small number of epochs, the accuracy at an intermediate epoch is also used in the fitness. After training the CNN model for 20 epochs, the difference between the accuracy at 10 epochs and the accuracy at 20 epochs is added to the fitness to estimate how closely the accuracy is converging toward the accuracy that would be obtained after 300 epochs; this difference at the beginning of training indicates how much further learning is possible after 20 epochs. We also used the number of parameters of the CNN model in the fitness. One of the goals of this study is to operate with minimal computing resources, and a model of a suitable size may be preferable to the highest possible accuracy. Therefore, during the HS run, a term based on the number of parameters is added to the fitness so that CNN models with fewer parameters are preferred. We use the fitness in Equation (4), constructed from these conditions.
where acc_i is the classification accuracy at the i-th epoch, and param is the number of parameters of the CNN model.
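Since Equation (4) itself is not reproduced here, the following Python sketch only illustrates the ingredients described above: the accuracy at 20 epochs, the improvement between epochs 10 and 20, and a penalty on the parameter count. The way the terms are combined and the penalty weight are assumptions, not the exact formula of Equation (4).

def cnn_fitness(acc_10, acc_20, param_count, param_weight=1e-7):
    # Reward final (20-epoch) accuracy, reward accuracy that is still improving
    # between epochs 10 and 20, and penalize models with many parameters.
    improvement = acc_20 - acc_10
    size_penalty = param_weight * param_count
    return acc_20 + improvement - size_penalty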
B. CNN ARCHITECTURE
This paper is inspired by NASNet [27] and uses a cell structure for CNN architecture generation. The cell structure uses both normal and reduction cells, and the internal structure of each cell is based on Figure 1. For the input h_i, a layer-add configuration consisting of a minimum of one and a maximum of three levels is used; in the case of Figure 1, three levels are used. The HS determines the number of output channels of each layer, and layers at the same level have the same number of output channels. When a convolution layer is used, batch normalization and a leaky rectified linear unit (ReLU) are attached to it. Each layer feeds one of the three add parts according to the value set by the HS algorithm. Each add part is then used as the input to the layers of the next level, and this repeats. At the last level, the add parts are combined through a concatenation operation to generate the output h_{i+1}. An add part that is not connected to any layer instead receives the input h_i passed through a 1 × 1 convolution, so that its number of feature-map channels matches the add parts at the same level. This scheme makes it difficult to create a true skip layer by adjusting the number of channels, but an unused add part achieves a similar effect. A normal cell uses this structure as it is; in a reduction cell, a pooling layer that halves the feature map size is added immediately after the concatenation. As shown in Figure 2, normal and reduction cells are connected to form the CNN architecture. Normal cells are repeated as many times as set by the HS. A sequence of normal cells followed by a reduction cell forms one group, and the group is repeated the number of times set by the HS, connecting the input image to the output softmax.
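The routing of layer outputs to add parts and the 1 x 1 bypass for unused add parts can be sketched as follows. This is a deliberately simplified, single-level version of the cell in Figure 1: the three layer types are fixed to plain convolutions, the assignment of layers to add parts is passed in as a tuple instead of being chosen by HS, and the reduction-cell pooling is omitted, so it should be read as an illustration of the wiring rather than the exact cell implementation.

import torch
import torch.nn as nn

class SimplifiedNormalCell(nn.Module):
    def __init__(self, in_channels, channels, assignments=(0, 0, 2)):
        super().__init__()
        self.assignments = assignments  # which add part each layer feeds
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_channels, channels, 3, padding=1),
                          nn.BatchNorm2d(channels), nn.LeakyReLU())
            for _ in range(3)
        ])
        # 1 x 1 convolution used for any add part that receives no layer output
        self.bypass = nn.Conv2d(in_channels, channels, kernel_size=1)

    def forward(self, x):
        parts = [None, None, None]
        for layer, dest in zip(self.layers, self.assignments):
            out = layer(x)
            parts[dest] = out if parts[dest] is None else parts[dest] + out
        # unused add parts take the cell input through the 1 x 1 bypass
        parts = [self.bypass(x) if p is None else p for p in parts]
        return torch.cat(parts, dim=1)  # concatenation forms the cell output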
C. HARMONY SEARCH
Herein, HS determines the number of repetitions of the cell structure in the CNN architecture, the type and connection of each layer inside the cell, and the number of output channels per layer. The difference from the normal HS operation is that a value left unused (depending on the number of levels chosen when generating the CNN architecture) is not stored in the HM. If the HS algorithm attempts to retrieve a value with the probability of the HMCR in the next iteration, but no value is stored in the HM for that variable, a random value is used instead. The pseudocode of the HS used in this study is outlined in Algorithm 1.
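Algorithm 1 is not reproduced here, but the core improvisation step of a generic HS, which the algorithm builds on, can be sketched in Python as follows. Integer-valued variables and simple clipping of pitch-adjusted values are assumptions, and the special handling of unused variables described above (falling back to a random value when nothing is stored in the HM) is omitted for brevity.

import random

def improvise(harmony_memory, ranges, hmcr, par, bandwidth):
    # harmony_memory: list of stored solutions; ranges: (low, high) per variable
    new_harmony = []
    for i, (low, high) in enumerate(ranges):
        if random.random() < hmcr:
            value = random.choice(harmony_memory)[i]            # memory consideration
            if random.random() < par:
                value += random.randint(-bandwidth, bandwidth)  # pitch adjustment
            value = max(low, min(high, value))                  # keep inside the range
        else:
            value = random.randint(low, high)                   # random exploration
        new_harmony.append(value)
    return new_harmony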
V. EXPERIMENTAL DESIGN
In this study, one GPU card, an Nvidia GeForce GTX 1060, was used to generate the CNN architecture. Stochastic gradient descent (SGD) was used as the optimizer for training. The initial learning rate was 0.01 and was multiplied by 0.1 at epochs 101 and 201, for a total of 300 epochs. Generating the CNN architecture from the CIFAR image dataset took 10 days. Cross-entropy was used as the training loss function, and the training loss was also examined for the results on the sewer pipe image dataset.
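A minimal PyTorch sketch of this training setup is given below; the milestone indices assume zero-based epoch counting (so the learning rate changes from the 101st and 201st epochs), the model and data loader are assumed to be defined elsewhere, and no momentum or weight decay is added because none is specified above.

import torch.nn as nn
import torch.optim as optim
from torch.optim.lr_scheduler import MultiStepLR

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)          # model assumed to exist
scheduler = MultiStepLR(optimizer, milestones=[100, 200], gamma=0.1)

for epoch in range(300):
    for images, labels in train_loader:                      # train_loader assumed to exist
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()  # multiply the learning rate by 0.1 at the milestone epochs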
A. HARMONY SEARCH SETTING
The main parameters of HS are calculated by PSF-HS using Equations (1)-(3), so there is no need to set them separately. However, when performing HS, a person must still directly determine the input range of each variable. Each variable and its input range were configured as shown in Figure 3. The normal cell repetition ranges from 1 to 2, and the Normal-Reduction group repetition ranges from 1 to 3. The level number determines how many levels each cell will have, from 1 to 3. Each level's layers can have 32, 64, 128, 256, or 512 output channels, expressed as values 1-5. The reduction cell halves the size of the feature map by adding a pooling layer with a stride of 2 at the end of the cell structure, and the type of this pooling layer is expressed as a value of 1-4. Each layer's type is set by a value of 1-13, as shown in Table 1. The add part to which each layer is connected is determined by a value of 1-3.
B. CIFAR IMAGE DATASET
In this study, the CIFAR image dataset [35] was used to generate the cell structure. The dataset comprises a total of 60,000 32 × 32 RGB images; 50,000 images were used for training and 10,000 for testing. The CIFAR image dataset is divided into CIFAR10, with 10 classes, and CIFAR100, with 100 classes. CIFAR10 has 5,000 training images and 1,000 test images per class; CIFAR100 has 500 training images and 100 test images per class. The 32 × 32 CIFAR images have been used as a benchmark dataset in various studies, and many studies in the field of CNN architecture generation have used them. CNN architecture generation studies differ in the learning method, the number of epochs, the GPU model, the GPU days spent, and the model generation method, so even when classification performance is compared on the same dataset, no single method can be declared the best for generating CNN architectures.
Herein, the results of other studies using the CIFAR image dataset are tabulated, but no comparison of each study's results is given in detail.
In this study, fitness was determined only using the test dataset without using the validation dataset. In each direction of the dataset image, four pixels with zero values were padded and randomly cropped to a size of 32 × 32, and a horizontal flip was performed with a probability of 0.5. As transforming the image dataset affects the performance results, only representative image transformation methods performed in the CNN architecture generation paper were applied.
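The image transformations just described map directly onto standard torchvision operations; a sketch for the CIFAR training set is shown below (the same pattern, with padding of 16 and a crop size of 128, applies to the sewer pipe images discussed later). The dataset root path is an assumption.

import torchvision.transforms as T
from torchvision.datasets import CIFAR10

# Pad 4 zero-valued pixels on each side, randomly crop back to 32 x 32,
# and flip horizontally with probability 0.5.
train_transform = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(p=0.5),
    T.ToTensor(),
])

train_set = CIFAR10(root="./data", train=True, download=True,
                    transform=train_transform)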
C. SEWER PIPE IMAGE DATASET
The goal of this study was to create a model that classifies images of damaged sewer pipes. The sewer pipe image dataset consists of 12 classes. Three classes cover undamaged sewer pipes: pipe joints, the inside of the pipe, and inverts. Nine classes cover types of damage: longitudinal cracks, circumferential cracks, surface damage, broken pipes, lateral protrusions, faulty joints, displaced joints, silt deposits, and ETC (types of damage not mentioned above). The dataset contains 2,000 images per class; because caption (subtitle) text was embedded during recording, RGB noise was added to those regions to prevent overfitting. The sewer pipe images were resized to 128 × 128; 19,200 images (80%) were used for training, and the remaining 4,800 images (20%) were used for testing. Example images are shown in Figure 4 and Figure 5. For CNN training with the sewer pipe images, each image was padded with 16 zero-valued pixels in each direction, randomly cropped back to 128 × 128, and then horizontally flipped with a probability of 0.5. A total of 100 epochs were used for training on the transformed sewer pipe images.
VI. EXPERIMENTAL RESULT
The results of this study are compared with those of other CNN models from other studies. The results of VGGNet [4], ResNet [5], and DenseNet [6] were obtained under the same conditions as HS-CNN, using the models provided by PyTorch; for VGGNet, the variant with batch normalization was used. For VGGNet, ResNet, and DenseNet, two cases were evaluated: with transfer learning from a model pre-trained on the ImageNet dataset, and without transfer learning. An exact comparison with results cited in other papers is not possible because the epochs, learning methods, image preprocessing methods, and GPU models differ between papers. The significance of this study lies in achieving comparable CNN architecture generation results with relatively few GPU days.
A. CIFAR10 IMAGE DATASET
The cell and the cell connection method used in the CNN architecture obtained from the CIFAR10 image dataset are shown in Figure 6. A batch size of 4 was used when training candidate models during CNN architecture generation. A fit_obj of 95 was used in the advanced PSF-HS algorithm. The CNN architecture using the cell in Figure 6 repeats the normal cell twice, and the Normal-Reduction cell group also repeats twice; this corresponds to a = 2 and b = 2 in Figure 2. The results of 300 epochs of training with a batch size of 32 on this architecture are reported as HS-CNN in Table 2. Compared with VGGNet, which has the highest accuracy among the given CNN models, HS-CNN uses fewer parameters and achieves 0.15% higher accuracy. As a result, HS-CNN took only 10 GPU days and showed better results than the pre-trained CNN models.
B. CIFAR100 IMAGE DATASET
The cell and cell connection method used in the CNN architecture obtained using the CIFAR100 image dataset is shown in Figure 7. A batch size of 4 was used in training for CNN architecture generation. A fitobj of 75 was used in the advanced PSF-HS algorithm. The CNN architecture using the cell in Figure 7 repeats the normal cell twice and the Normal-Reduction cell group twice, shown by a = 2 and b = 2 in Figure 2.
The results of 300 epochs of training with a batch size of 32 on the CNN architecture constructed with the structure in Figure 7 are reported as HS-CNN in Table 3. Compared with VGGNet, which has the highest accuracy among the given CNN models, HS-CNN achieves 2% higher accuracy.
C. SEWER PIPE IMAGE DATASET
Training on the sewer pipe image dataset was performed for 100 epochs, with learning rates of 0.01, 0.001, and 0.0001 from the start, from epoch 50, and from epoch 75, respectively. A batch size of 4 was used. In Table 4, the results of GoogleNet [3] and WideResNet [37] without transfer learning are added to the results of VGGNet, ResNet, and DenseNet. In addition, results for NASNet-A Mobile and NASNet-A Large, using the CNN models created in the NASNet [27] study, were added. In Table 4, the results for VGGNet and ResNet show that, as with the CIFAR image dataset, the accuracy may decrease when transfer learning is used rather than increase significantly.
The sewer pipe images have low RGB values, as seen in Figures 4-5, with many regions close to 0 in the center of the image; their appearance therefore differs from ImageNet images, which have relatively high RGB values. As a result, for NASNet-A Large. When performing sewer pipe classification with NASNet-A, as with the other CNN models, the number of output channels of the final linear layer was changed to 12, the number of sewer pipe classes, and the remaining learning conditions were kept the same. In Table 4, the sewer pipe classification accuracy of the NASNet-A models without transfer learning is very low. NASNet-A, whose accuracy is very low relative to its training loss, appears to suffer from overfitting, like ResNet and DenseNet. The sewer pipe classification accuracy of the NASNet-A models pre-trained on the ImageNet dataset and fine-tuned with transfer learning is higher than without transfer learning. The NASNet-A pre-trained models, which show a much larger accuracy difference depending on the use of transfer learning than the other CNN models, evidently have parameters better suited to classifying sewer pipe images. However, since the sewer pipe classification accuracy of all NASNet-A models is lower than that of the VGGNet model, the NASNet-A models are not well suited to sewer pipe image classification. Therefore, the HS-CNN-based model proposed in this paper performs better in the classification of sewer pipe images.
VII. CONCLUSION
This paper proposes a method to generate a CNN architecture with HS from a dataset of small images, and to create a classification model for large images using the generated architecture. We generated the CNN architecture through HS using the CIFAR image datasets as the small-image datasets and compared it against transfer learning results from other papers and against known CNN models; it showed better accuracy than the known CNN models. Unlike other CNN architecture generation methods, the HS-based CNN architecture should be regarded as a model built with modest computing resources, requiring only 10 GPU days. Based on the HS-CNN produced from the CIFAR image datasets, we created a classification CNN model for the sewer pipe image dataset, which consists of large images, and confirmed the results. Because the sewer pipe image dataset is an original dataset that has not been used in other studies, its results cannot be compared with those of other studies; the sewer pipe image classification was therefore compared with existing CNN models. On the sewer pipe image dataset, the performance of each existing CNN model could be evaluated on its own, as the accuracy decreased when transfer learning was performed from CNN models pre-trained on ImageNet. The CNN architecture produced using HS showed a classification accuracy that was at least 1.96% and up to 5.67% higher than that of VGGNet, which has the highest accuracy among the existing CNN models.
In this study, the two CNN models created with HS and the CIFAR image datasets for classifying the sewer pipe image dataset showed high accuracy compared with other CNN models, confirming that the retrieved CNN architecture is transferable to a new dataset. This paper shows that classification models for different image datasets can be created through CNN architecture generation using HS with few computing resources. | 2022-03-31T13:09:31.427Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "d774486c4d9905fa9b24060652da1e4438ab8422",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/9668973/09738637.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "d774486c4d9905fa9b24060652da1e4438ab8422",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
221589671 | pes2o/s2orc | v3-fos-license | Effects of Embryonic Inflammation and Adolescent Psychosocial Environment on Cognition and Hippocampal Staufen in Middle-Aged Mice
Accumulating evidence has indicated that embryonic inflammation could accelerate age-associated cognitive impairment, which can be attributed to dysregulation of synaptic plasticity-associated proteins, such as RNA-binding proteins (RBPs). Staufen is a double-stranded RBP that plays a critical role in the modulation of synaptic plasticity and memory. However, relatively few studies have investigated how embryonic inflammation affects cognition and neurobiology during aging, or how the adolescent psychosocial environment affects inflammation-induced remote cognitive impairment. Consequently, the aim of this study was to investigate whether these adverse factors can induce changes in Staufen expression, and whether these changes are correlated with cognitive impairment. In our study, CD-1 mice were administered lipopolysaccharides (LPS, 50 μg/kg) or an equal amount of saline (control) intraperitoneally during days 15–17 of gestation. At 2 months of age, male offspring were randomly exposed to stress (S), an enriched environment (E), or no treatment (CON) and then assigned to five groups: LPS, LPS+S, LPS+E, CON, and CON+S. Mice were evaluated at 3 months of age (young) and 15 months of age (middle-aged). Cognitive function was assessed using the Morris water maze test, while Staufen expression was examined at both the protein and mRNA level using immunohistochemistry/western blotting and RNAscope technology, respectively. The results showed that the middle-aged mice had worse cognitive performance and higher Staufen expression than the young mice. Embryonic inflammation induced cognitive impairment and increased Staufen expression in the middle-aged mice, whereas adolescent stress accelerated, and an enriched environment mitigated, these effects. Meanwhile, Staufen expression was closely correlated with cognitive performance. Our findings suggest that embryonic inflammation can accelerate age-associated learning and memory impairments, and that these effects may be related to Staufen expression.
INTRODUCTION
Population aging constitutes a significant public health challenge globally. Aging is associated with cognitive decline, such as spatial learning and memory impairments, which are among the earliest and most striking effects of aging (Belblidia et al., 2018). Such age-associated learning and memory impairments can have a strong and negative impact on the quality of life of the affected individuals. Consequently, it is essential to understand the normal aging process as well as the mechanisms underlying cognitive decline. However, although interest in this field has grown, the mechanisms that trigger and maintain age-related diseases remain poorly understood, and approaches to alleviate brain aging remain ineffective.
The hippocampus is an organ that is susceptible to the effects of aging, and plays an important role in spatial learning and memory (Moorthi et al., 2015;Tsai et al., 2018). There is growing evidence to suggest that hippocampal synaptic function and plasticity play a key role in age-associated learning and memory impairments (Mendelsohn and Larrick, 2012). The localization of mRNAs to synapses and the subsequent regulation of local translation has been proposed as one mechanism underlying the regulation of synaptic plasticity and establishment of hippocampus-dependent learning and memory (Ule and Darnell, 2006). However, little is known about the in vivo role of RNA-binding proteins (RBPs) in RNA transport to the synapse and subsequent local protein synthesis.
Staufen, a double-stranded RBP composed of five double-stranded RNA (dsRNA)-binding domains, was initially identified in a genetic screen for maternal-effect mutants in Drosophila (Bonnet-Magnaval et al., 2016). This protein is present in neurons, where it localizes to ribonucleoprotein particles in the cell body and dendrites, and has been shown to play crucial roles in the localization, translation, and stabilization of dendritic mRNA (Goetze et al., 2006). Staufen typically interacts with specific regulatory elements on the 3′ untranslated region (UTR) of mRNAs to enable their localization or regulation; these complexes then assemble into larger RNA granules, which are transported along cytoskeletal tracks by motor proteins (Sudhakaran and Ramaswami, 2017). Data reported for Drosophila, Aplysia, and the mouse all indicate that Staufen makes a crucial contribution to dendrite development, synaptic plasticity, learning, and memory (Dubnau et al., 2003; Heraud-Farlow and Kiebler, 2014; Berger et al., 2017). A recent study also found that Staufen overabundance can contribute to aberrant translation, ribostasis, and proteostasis (Gandelman et al., 2020). Collectively, these observations highlight the importance of Staufen-mediated posttranscriptional regulation in cognition.
Recent evidence has demonstrated that early exposure to adverse factors, whether environmental, genetic, or a combination of both, can exacerbate age-associated cognitive impairment. This view is encompassed in the "fetal origin of adult disease" hypothesis (Benarroch, 2013). Studies have indicated that exposure to inflammation in the embryonic period (Qin et al., 2007; Abareshi et al., 2016) or to stress in adolescence (Speisman et al., 2013; Shields et al., 2017) may be involved in the occurrence of age-associated learning and memory impairments.
Inflammation is a commonly occurring adverse factor in early life. Several studies have reported that a close relationship exists between neuroinflammation and neuropsychiatric disorders such as memory impairment, depression, and anxiety. Administration of lipopolysaccharide (LPS), a toxic component found in the cell walls of Gram-negative bacteria, is a well-characterized and widely used model of inflammation (Boksa, 2010). Mimicking intrauterine infection and inflammation through maternal LPS exposure during pregnancy can lead not only to fetal death, growth restriction, skeletal development retardation, and preterm labor, but also to a significant increase in the synthesis and release of specific proinflammatory cytokines, such as tumor necrosis factor, interleukin-1 beta, and interleukin-6, into maternal serum, as well as markedly impair the cognitive abilities and social-behavioral performance of offspring (Badshah et al., 2016; Zhan, 2017). We previously also found that embryonic exposure to LPS-induced inflammation led to age-related spatial learning and memory impairment and corresponding neurobiochemical changes (Wu Z. X. et al., 2019).
Whether exposure to stress during adolescence can influence inflammation-induced cognitive impairment is not known. Studies have revealed that exposure to adolescent stress can increase the risk of disease, such as that associated with cardiovascular and metabolic disorders (Nagaraja et al., 2016). Recent research has also shown that exposure to adolescent stress results in structural and functional alterations in the developing hippocampus, including reduced long-term potentiation (LTP), and these alterations are thought to be associated with impaired spatial learning and memory (Fujioka et al., 2006). In contrast, exposure to an enriched environment (EE) in adolescence can help prevent these effects. EEs are known to provide multisensory stimulation, which induces brain plasticity following exposure to different types of objects such as toys, tunnels, ladders, and running wheels, among others, in a spacious environment (Bhagya et al., 2017). Increasing evidence suggests that an EE can exert significant ameliorative effects on neurogenesis, synaptogenesis, and learning and memory abilities (Ferioli et al., 2019; Wu C. et al., 2019). However, it remains unclear whether an EE can counteract, at least partially, the harmful effects of embryonic inflammation on age-associated learning and memory impairments.
In brief, growing evidence has suggested that exposure to embryonic inflammation can impair spatial learning and memory in the later life; however, whether the stress/an EE in adolescence can accelerate/mitigate the age-associated cognitive impairment resulting from the embryonic inflammation remains unknown, as do the potential associated mechanisms. We speculated that the Staufen protein may be involved in these impairments caused by embryonic inflammation. In this study, we first explored whether embryonic inflammation could accelerate ageassociated cognitive impairment. Subsequently, we investigated whether Staufen expression changed with age and under different treatments. Finally, we examined whether changes in Staufen expression are correlated with deficits in spatial learning and memory.
Animals and Drugs
CD-1 mice (8 weeks old, 10 males and 20 females) were obtained from Hunan SJA Laboratory Animal Co., Ltd. (No. 43004700010146; Hunan, China). The animals were maintained at a temperature of 22-25 °C with 55 ± 5% humidity on a 12-h light-dark cycle (lights on at 07:00). Food and water were available ad libitum. After 2 weeks of acclimatization, female mice were paired with males at a 2:1 ratio. The presence of a vaginal plug was designated as gestational day (GD) 0. Based on our previous study (Wu Z. X. et al., 2019), during GDs 15-17, the mice received a daily intraperitoneal injection of lipopolysaccharides (LPS, 50 µg/kg) or the same volume of normal saline. To avoid stress, the offspring were only separated from their mothers at postnatal day 21, following which they were housed in polypropylene cages, 4-5 mice per cage. At 2 months of age, the offspring were exposed to stress (S), an enriched environment (E), or an unchanged environment (CON), and were then assigned to five groups (LPS+S, LPS+E, LPS, CON+S, and CON, respectively). Three-month-old (3M; young) and 15-month-old (15M; aged) CD-1 mice (except those with movement disorders, hair loss, or visible tumors) were used to complete the tests described in sections "Morris Water Maze," "Tissue Preparation," "Immunohistochemistry," and "Western Blotting." The timeline of the experiment is shown in Figure 1. All animal experiments were performed in compliance with the guidelines established by the National Institutes of Health (NIH) Guide for the Care and Use of Laboratory Animals. The protocol was approved by the Center for Laboratory Animal Sciences at Anhui Medical University.
Stress
Young (2-month-old) mice in the LPS+S and CON+S groups were exposed to a variable sequence of chronic, mild, unpredictable stressors (Sun et al., 2018). One of the following four stressors was randomly applied per day over a 4-day cycle (4 days constituted one cycle, and the experiment lasted for seven cycles): (1) Restraint: mice were placed in a net pocket made of soft wire (35 × 40 cm²) to restrict their movement. The restraining time lasted 30 min on the first day and was extended by 10 min each day, all the while allowing normal respiration. (2) Suspension: the mice were suspended by the tail from a crossbar (1.2 m high) for 30 min on the first day, and the duration was extended by 10 min each time, thereby sequentially increasing the stress intensity.
(3) Illumination: within the normal light-dark cycle, repeated cycles of 30 min lights on/30 min lights off were applied from 19:00 to 7:00. (4) Fasting: food was removed between 19:00 and 7:00, while drinking water was provided ad libitum.
Enriched Environment
Mice in the LPS+E were reared in large cages to promote social and group activities. Different types of toys, such as pipes, plastic running wheels, and balls, were placed in the cage every week to increase the ability of the mice to avoid difficulties, their adaptation to the novel environment, and the amount of physical exercise until the behavioral examination was completed (Van Loo et al., 2004).
Morris Water Maze
The Morris water maze (MWM) was used to evaluate spatial learning and memory ability (Vorhees and Williams, 2006; Tong et al., 2015; Jafarian et al., 2019). The apparatus consisted of a circular black tank (100 cm in diameter, 30 cm in height) and an escape platform (10 cm in diameter, 24 cm in height); the tank was filled with water (20-22 °C) to a depth of 25 cm. To guide spatial navigation, the periphery of the tank was surrounded by a white curtain with three conspicuous black markers (a square, a circle, and a triangle). The test was performed four times daily, with 15-min intervals between trials, and lasted for 7 days. On day 1, the mice were placed on the escape platform for 30 s before the first trial began. Then, the mice were placed in the water facing the pool wall and allowed to swim freely for 60 s to find the escape platform (acquisition trials). If they failed to find the platform within 60 s, they were gently guided to the platform before being removed. At the end of each test, regardless of whether or not they found the platform, the mice were allowed to rest on the platform for 30 s and then put back in their home cage to keep warm. The position of the platform remained constant throughout training, whereas the starting points were randomly selected. The probe test (removal of the platform for 60 s) was performed 2 h after the last acquisition trial on the last day of the test. The time it takes for mice to reach the escape platform (escape latency) is the most commonly used measure of learning performance; however, escape latency can be affected by swimming speed, which usually declines with age (van der Staay and de Jonge, 1993). Therefore, in this study, the total swimming distance in the learning phase was used as the measure of spatial learning ability, and the percent distance swam in the target quadrant in the probe test was used as the measure of memory performance (Zhang et al., 2020). ANY-maze software (Stöelting, United States) was used to record the distance swam.
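As an illustration of the two measures defined above, the sketch below computes total path length and percent distance in the target quadrant from tracked (x, y) coordinates. In the study itself these values were produced by ANY-maze; the coordinate convention here (pool centre at the origin, quadrant identified by coordinate signs) is an assumption made only for the example.

```python
# Illustrative computation of the MWM measures from a tracked swim path.
import numpy as np

def swim_distance(xy: np.ndarray) -> float:
    """Total path length of an (N, 2) array of tracked positions."""
    return float(np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1)))

def percent_distance_in_quadrant(xy: np.ndarray, target=(1, 1)) -> float:
    """Percentage of the swim path spent in the target quadrant.

    target gives the sign of (x, y) defining the target quadrant, e.g. (1, 1)
    is the upper-right quadrant when the pool centre is the origin.
    """
    steps = np.diff(xy, axis=0)
    lengths = np.linalg.norm(steps, axis=1)
    midpoints = (xy[:-1] + xy[1:]) / 2
    in_target = (np.sign(midpoints[:, 0]) == target[0]) & (np.sign(midpoints[:, 1]) == target[1])
    total = lengths.sum()
    return float(100 * lengths[in_target].sum() / total) if total > 0 else 0.0
```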
Tissue Preparation
Six mice per group were anesthetized with chloral hydrate (360 mg/kg, i.p.), and sacrificed by cervical dislocation. The brains were then rapidly removed from the skull and cut along the midsagittal plane on ice. The left hippocampus was stored at −80 °C for western blot analysis. The right hemisphere was fixed in 4% paraformaldehyde at 4 °C for 3 days, and then paraffin-embedded into blocks. Continuous coronal sections (3 µm) were prepared using a Leica Microtome (Leica RM 2135, Germany) for subsequent immunohistochemistry and RNAscope assays.
Immunohistochemistry
The streptavidin-biotin-peroxidase complex (SABC) method was used for immunohistochemical staining, as previously described (Nagashima et al., 1992). After conventional dewaxing and hydration, the sections were treated with periodate for 1 min to deactivate endogenous enzymes.

FIGURE 1 | Timeline of the experiment. Pregnant mice received a daily intraperitoneal injection of LPS or normal saline during gestation days 15-17. At 2 months of age, the offspring were exposed either to a stressed or an enriched environment, following which they were randomly assigned to five groups (LPS+S, LPS, LPS+E, CON+S, and CON). The Morris water maze test was applied to 3-month-old and 15-month-old mice. The mice were then sacrificed for subsequent Staufen expression analysis. CON, untreated control group; LPS, lipopolysaccharide treatment group; S, group of mice exposed to stress; E, group of mice exposed to an enriched environment.
Western Blotting
Western blotting was performed as previously reported (Wu T. et al., 2019; Shen et al., 2020). Hippocampal tissue was lysed in RIPA lysis buffer, and protein concentrations were determined using the bicinchoninic acid method. Protein samples were separated by 15% SDS-PAGE and then transferred onto PVDF immunoblotting membranes. The membranes were blocked with 5% dry milk containing 0.1% Tween 20 for 2 h and incubated with a primary monoclonal antibody against Staufen (1:1000; ab73478, Abcam) at 4 °C overnight. The membranes were then incubated with horseradish peroxidase-conjugated secondary antibodies (1:10,000; ZB2301, ZsBio) for 2 h at room temperature followed by chemiluminescence detection. Immunoreactive bands at 63 kDa (Staufen) and 43 kDa (beta-actin, internal standard) denoted positive expression. Densitometric quantification of the band intensities was performed using ImageJ. The ratio of the optical density of the anti-Staufen antibody to that of the anti-beta-actin antibody in each sample was calculated as the relative Staufen protein level.
RNAscope in situ Hybridization Assay
RNAscope in situ hybridization was performed as previously described (Domi et al., 2019; Gavini et al., 2020). The RNAscope assay was performed on formalin-fixed, paraffin-embedded (FFPE) tissue. Briefly, tissue sections (3 µm) were deparaffinized in xylene, rehydrated in an ethanol series, and then incubated with 5-8 drops of H₂O₂ for 10 min. The sections were incubated in citrate buffer (10 nmol/L, pH 6.0) at 100 °C for 15 min. An ImmEdge pen was used to create a barrier around each section. The slides were incubated for 30 min at 40 °C with the RNAscope protease plus reagent, and then with the Staufen mRNA target probes (ACD, 322381) for 2 h at 40 °C in the HybEZ hybridization oven. The sections were then serially incubated with four amplifier probes (30 min each for steps 1 and 2, 15 min each for steps 3 and 4) at 40 °C. After removing excess liquid, the TSAplus fluorescent dye was added to the slide for 30 min at 40 °C, and then the RNAscope multichannel fluorescent second-generation HRP blocker was added to the slide for 15 min at 40 °C. Finally, the sections were counterstained with DAPI to visualize the nuclei. The slides were cover-slipped, air dried, and stored at 4 °C. Fluorescent signals from RNAscope probe hybridization were examined on a laser-scanning confocal microscope (×40 objective; Zeiss LSM 700). To visualize the entire brain section, tile scan images were obtained using an Olympus IX71 fluorescence microscope (Olympus, Tokyo, Japan) equipped with a PXL37 CCD camera (Photometrics, Tucson, AZ, United States). Fluorescence images were semi-quantitatively analyzed in ImageJ. The number of dots in the Staufen mRNA-positive cells relative to the negative control was calculated as the relative level of Staufen mRNA. The negative control served as an internal standard and was used to set the light source and exposure time of image acquisition to acceptable background levels.
Statistical Analysis
The sample size was calculated using G*Power software (ver. 3.1.7, Franz Faul, Universität Kiel, Germany). The α error was set at 0.05 and the power (1−β) at 0.8, and the essential total sample size for each group in the behavioral assessments and molecular experiments was calculated as 6-8. Parametric data were expressed as means ± standard error of the mean (SEM). For the learning performance in the MWM, repeated-measures analysis of variance (rm-ANOVA) was used to analyze the learning data, with day, age, and group as independent variables. The memory percentage of distance from the MWM test and data from the western blotting and RNAscope assays were evaluated by one-way ANOVA with age or treatment as independent variables. Fisher's least significant difference test was performed to compare the differences among the groups. The correlations between the MWM performance and the relative levels of Staufen protein and mRNA in the hippocampus were analyzed using Pearson's correlation coefficient. The Statistical Package for Social Sciences (SPSS, version 20.0) was used for analyses, and significance was assumed at P < 0.05.
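As an illustration only, the group comparisons and correlations described above map onto standard SciPy routines, as sketched below. The study itself used SPSS 20.0 and G*Power, and the repeated-measures ANOVA for the learning curves is not reproduced here; all variable names are placeholders.

```python
# Hedged sketch of the one-way ANOVA and Pearson correlation analyses.
from scipy import stats

def one_way_anova(*group_values):
    """Treatment effect on a single measure, e.g. relative Staufen protein level
    in the five groups (CON, CON+S, LPS, LPS+S, LPS+E)."""
    f_stat, p_value = stats.f_oneway(*group_values)
    return f_stat, p_value

def performance_correlation(staufen_levels, mwm_scores):
    """Pearson correlation between Staufen expression and an MWM measure
    (e.g. learning swimming distance or memory percentage of distance)."""
    r, p_value = stats.pearsonr(staufen_levels, mwm_scores)
    return r, p_value
```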
Learning Phase
The swimming velocity was significantly lower in the 15M mice than in the 3M mice in the CON [F (1,14) = 16.459, P < 0.01], suggesting an age-related decline in motor ability in middle-aged mice (Figure 2A). LPS, stress and EE did not significantly affect the swimming velocity of either the 3M or the 15M mice (Ps > 0.05; Figures 2B,C). Thus, the swimming distance was analyzed as an indicator of learning ability. The distance swam decreased progressively day by day for all the mice combined [F (6,84) = 101.993, P < 0.01]. Furthermore, the 15M mice swam greater distances than the 3M mice both in the CON [F (1,14) = 26.595, P < 0.01; Figure 2D] and in the other treatment groups (CON+S, LPS, LPS+E, and LPS+S) (Ps < 0.01; Supplementary Figures S1A-D). There were significant differences in swimming distance among the treatment groups in both the 3M mice [F (4,35) = 18.344, P < 0.01, Figure 2E] and 15M mice [F (4,35) = 46.755, P < 0.01; Figure 2F]. Irrespective of age, mice in the LPS+S swam significantly longer distances than those in the other four groups (Ps < 0.01). In addition, for the 15M mice, the CON exhibited the shortest swimming distances among the five groups (Ps < 0.01). Moreover, mice in the LPS swam significantly longer distances than those in the LPS+E and CON+S (Ps < 0.01); however, no significant difference in the distance was observed between the latter two groups (P > 0.05). The age × day interaction effect on swimming distance was not significant [F (6,84) = 0.459, P > 0.05].
Memory Phase
The percentage of distance traversed in the target quadrant was significantly lower in 15M mice than in 3M mice in both the CON (t = 3.057; P < 0.01; Figure 2G) and the other treatment groups (CON+S, LPS, LPS+E, and LPS+S) (Ps < 0.05; Supplementary Figures S2A-D). There were significant differences in the percentage of distance among the five groups for both 3M [F (4,39) = 11.522, P < 0.01; Figure 2H] and 15M mice [F (4,39) = 23.253, P < 0.01; Figure 2I]. Irrespective of age, mice in the LPS+S exhibited a significantly lower percentage of distance than those in the other four groups (Ps < 0.01). In addition, at 15 months of age, mice in the CON showed a significantly greater percentage of distance than those in the other four groups (Ps < 0.01); meanwhile the percentage of distance was significantly lower in the LPS than in the CON+S and LPS+E (Ps < 0.05); no significant difference was observed between the latter two groups (P > 0.05).
Protein Level
The level of Staufen protein in the hippocampus was measured by immunohistochemistry and western blotting. Staufen protein staining was mainly localized to the cellular layer of the hippocampus; moreover, the immunoreactivity was more evident in 15M mice than in 3M mice, and greater in the LPS+S than in the CON (Figures 3A,B and Supplementary Figures S3A,B). The 15M mice had significantly higher Staufen protein levels than the 3M mice in both the CON (t = 3.753; P < 0.01; Figure 3C) and the other treatment groups (CON+S, LPS, LPS+E, and LPS+S) (Ps < 0.01; Supplementary Figures S4A-D). There were significant differences in hippocampal Staufen protein levels among the five groups for both 3M [F (4,29) = 47.516, P < 0.01, Figure 3D] and 15M mice [F (4,29) = 127.422, P < 0.01, Figure 3E]. Regardless of age, the LPS+S had the highest, and the CON the lowest, Staufen protein levels (Ps < 0.05); moreover, Staufen protein levels were significantly higher in the LPS than in the CON+S and LPS+E (Ps < 0.05), but no significant difference was found between the latter two groups (Ps > 0.05).
mRNA Level
Staufen mRNA expression was mainly detected in the cell layer of each hippocampal subregion (CA1, CA3, and DG) (Figures 4A,B). The 15M mice had significantly higher Staufen mRNA levels than the 3M mice in the corresponding hippocampal subregions in the CON (CA1, t = 3.157, P = 0.010; CA3, t = 2.677, P = 0.023; DG, t = 3.835, P = 0.003; Figure 5A); a similar tendency was observed for mice in the other treatment groups (Ps < 0.05; Supplementary Figure S5A-D). A significant effect of treatment on Staufen mRNA levels was found among the five groups for both 3M and 15M mice in the CA1 [F (4,29) = 57.294, P < 0.01; F (4,29) = 132.288, P < 0.01], CA3 [F (4,29) = 323.219, P < 0.01; F (4,29) = 679.549, P < 0.01], and DG [F (4,29) = 78.072, P < 0.01; F (4,29) = 223.429, P < 0.01] subregions. Regardless of age, Staufen mRNA levels in the CA1, CA3, and DG subregions (Ps < 0.01) were highest in the LPS+S, followed by the LPS, with the CON showing the lowest levels. Furthermore, Staufen mRNA levels in the CA3 subregion were significantly higher in the LPS+E than in the CON+S (Ps > 0.05; Figures 5B,C).

FIGURE 2 | Morris water maze performance of CD-1 mice under different treatments. The swimming velocity in the learning phase is shown in (A-C); the swimming distances in the learning phase are shown in (D-F); and the memory percentage of distance in the target quadrant is shown in (G-I). The age effects are depicted in (A,D,G) and the treatment effects in (B,E,H) for 3-month-old (3M) mice and (C,F,I) for 15-month-old (15M) mice. All values are presented as means ± SEM (n = 8 male mice/group). **P < 0.01. Different lowercase letters (a/b/c) denote significant differences, and a > b > c > d. CON, untreated control group; LPS, lipopolysaccharide treatment group; S, group of mice exposed to stress; E, group of mice exposed to an enriched environment.

Correlations Between Cognitive Performance and Staufen Protein Levels
Table 1 depicts the correlations between the MWM performances and hippocampal Staufen protein levels. Staufen protein levels were positively correlated with the learning swimming distance (Ps < 0.01) and negatively correlated with the memory percentage of distance (Ps < 0.05) in all the mice combined. When the groups were separated, Staufen protein levels in the 3M LPS+S and the 15M LPS+S, LPS, and LPS+E were positively correlated with the learning swimming distance (Ps < 0.05). Furthermore, Staufen protein levels in the 3M LPS+S and the 15M LPS+S and LPS were negatively correlated with the memory percentage of distance (Ps < 0.05). Supplementary Figure S6A shows scatter plots of the groups that had significant correlation coefficients in Table 1.

Correlations Between Performance and Staufen mRNA Levels
Table 2 depicts the correlations between the MWM performances and hippocampal Staufen mRNA levels. For all the mice combined, Staufen mRNA levels in all the hippocampal subregions were positively correlated with the learning swimming distance and negatively correlated with the memory percentage of distance (Ps < 0.05). Significant positive correlations were found between learning swimming distance and Staufen mRNA levels in the CA1 and CA3 subregions of 3M LPS+S and 15M LPS+S, LPS, LPS+E, and CON+S mice (Ps < 0.05), and in the DG subregion of 3M LPS+S and 15M LPS+S and LPS mice (Ps < 0.05). However, negative correlations were found between Staufen mRNA levels and the memory percentage of distance in the CA1 subregion of 3M LPS+S and 15M LPS+S, LPS, and CON+S mice (Ps < 0.05); the CA3 subregion of 3M LPS+S and 15M LPS+S and LPS mice (Ps < 0.05); and the DG subregion of 15M LPS+S and LPS mice (Ps < 0.05). Supplementary Figures S6B,C show scatter plots of the groups that had significant correlation coefficients in Table 2.
DISCUSSION
Early life is a critical developmental period, and experiences in this stage, i.e., the embryonic stage and adolescence, have been shown to have a long-term influence on developmental and aging processes (Höltge et al., 2019). The embryonic period is associated with a greater vulnerability to bacterial and viral infections owing to an immature immune system (Khan et al., 2017; Alsaif et al., 2018). It is well documented that pregnant animals have increased sensitivity to LPS compared with non-pregnant ones (Kunnen et al., 2014). In this study we found that embryonic exposure to LPS could accelerate cognitive impairment in middle-aged mice, and that adolescent stress/an EE could exacerbate/relieve this effect. Therefore, avoiding embryonic infection is of great significance for guiding healthy birth and careful rearing, and avoiding stress and enriching the environment in adolescence could further mitigate cognitive impairment. Moreover, we also demonstrated that increased Staufen expression in the hippocampus was correlated with impaired cognition in the different treatment groups. To some extent, our results provide new insights into the mechanisms involved in cognitive decline resulting from embryonic inflammation.

FIGURE 4 | Representative photomicrographs of Staufen mRNA levels relative to the negative control in the different hippocampal subregions of CD-1 mice of different ages exposed to different treatments. The results for the different hippocampal subregions in 3-month and 15-month-old mice are shown in (A,B), respectively. The red staining represents positive Staufen mRNA staining. Scale bar = 50 µm. CA, cornu ammonis; DG, dentate gyrus; N-CON, the negative control group; CON, untreated control group; LPS, lipopolysaccharide treatment group; S, group of mice exposed to stress; E, group of mice exposed to an enriched environment.
The Effects of Embryonic Inflammation and Adolescent Psychosocial Environment (Stress or EE) on Cognition in Middle-Aged Mice
In the current study, the 15M CON showed a significantly lower swimming velocity than the 3M CON, suggesting a decline in motor ability in middle-aged mice, which is consistent with previous results (Zhang et al., 2020; Duan et al., 2020). van der Staay and de Jonge (1993) showed that if the learning ability of two mice is equal, the slower mouse may need more swimming time to reach the escape platform (escape latency) than the faster one, whereas the swimming distance shows no significant difference. Consequently, we assessed the swimming distance as an indicator of spatial learning and memory in our study.
Our results showed that spatial learning and memory were impaired in the middle-aged mice. This result is consistent with those of previous studies indicating that age-related cognitive decline begins at 12-13 months of age in this mouse strain (Wang et al., 2020). Several studies have suggested that embryonic inflammation and adolescent stress can accelerate age-associated cognitive impairment (Bhagya et al., 2017; Wang et al., 2020). In contrast, an adolescent EE can relieve the deficits in hippocampal synaptic plasticity, memory, and anxiety caused by chronic stress (Bhagya et al., 2017). Our results indicated that embryonic inflammation or adolescent stress alone does not affect spatial learning and memory in young adults; however, together, these two factors can have a synergistic, deteriorative effect on cognitive behavior. Additionally, the current results also suggested that embryonic inflammation can impair spatial learning and memory abilities in middle-aged mice to a greater extent than adolescent stress does. Furthermore, we did not observe any difference between the LPS+E and CON+S, suggesting that an EE in adolescence can mitigate the age-associated cognitive impairment resulting from embryonic inflammation.
The Effects of Embryonic Inflammation and Adolescent Psychosocial Environment (Stress or EE) on Staufen Expression in Middle-Aged Mice
Staufen, a dsRBP, was initially identified in a screen for anterior-posterior patterning mutants in Drosophila embryos. Staufen is recruited to stress granules during the stress response, and can play an important role in mRNA localization, translation, and/or stability (Lebeau et al., 2008; Zimyanin et al., 2008; Paul et al., 2018). To date, no study has investigated whether aging and embryonic inflammation can affect Staufen expression. In the current study, for the first time, we explored the changes in Staufen expression (protein and mRNA) that occur as a result of aging and exposure to embryonic inflammation, as well as the effect of the adolescent psychosocial environment (stress or an EE) on hippocampal Staufen expression following exposure to embryonic inflammation.

FIGURE 5 | The relative levels of Staufen mRNA in the different hippocampal subregions of CD-1 mice of differing ages exposed to different treatments. The age effect is shown in (A) and the treatment effects for 3-month-old (B) and 15-month-old (C) mice. All values are presented as means ± SEM (n = 6 male mice/group). *P < 0.05, **P < 0.01. Different lowercase letters (a/b/c) denote significant differences, and a > b > c > d. CON, untreated control group; LPS, lipopolysaccharide treatment group; S, group of mice exposed to stress; E, group of mice exposed to an enriched environment.
Studies have shown that Staufen protein levels are markedly upregulated in multiple cell and animal models of human neurodegenerative diseases, including those associated with mutations in presenilin 1, and microtubule-associated protein tau, as well as in stroke and myotonic dystrophy (Gandelman et al., 2020). Our results are consistent with those of studies showing that Staufen expression would increase following exposure to a variety of acute noxious stimuli (Bonnet-Magnaval et al., 2016). Here, our data indicated a significant effect of age, and this age-related increase in hippocampal Staufen expression (protein and mRNA) was in accordance with the behavioral change observed, i.e., impaired spatial learning and memory abilities. Moreover, treatment also exerted a significant effect on Staufen expression. Regardless of age, we found that the LPS+S showed the highest levels of hippocampal Staufen protein, followed by the LPS, whereas the CON had the lowest levels, however, no significant difference in the Staufen protein levels was found between the LPS+E and the CON+S. Collectively, these observations support that the more detrimental the factor, the higher the Staufen expression, indicating that the effect of embryonic inflammation was significantly stronger than the effect of adolescent stress, while both factors combined exerted the strongest effect. However, an adolescent EE could partially reverse the changes in Staufen expression, further illustrating that stress upregulated Staufen protein levels. Notably, the levels of Staufen mRNA was significantly higher in the CA3 subregion of the LPS+E than in the CON+S, indicating that an EE could partially reverse the change of Staufen mRNA expression resulting from the embryonic inflammation, and the effect was more obvious in the CA1 and DG subregions than in the CA3 subregion.
The Association Between Altered Staufen Expression and Cognition
Several studies have indicated that cognitive impairment is associated with impaired synaptic plasticity. For example, embryonic inflammation and adolescent stress have been shown to inhibit dendritic growth, induce neuronal remodeling, impair synaptic transmission and plasticity, and lead to cognitive impairment in aged mice (Lesuis et al., 2019; Tellez-Merlo et al., 2019; Wang et al., 2020). In contrast, an adolescent EE can partly reverse behavioral and synaptic abnormalities
resulting from embryonic inflammation (Andoh et al., 2019). Under normal circumstances, Staufen is implicated in the transport and regulation of dendritic mRNA, so downregulation of Staufen expression can lead to a significant reduction in the number of dendritic spines and miniature excitatory postsynaptic currents, which are generally assumed to contribute to impaired synaptic plasticity (Goetze et al., 2006;Popper et al., 2018). However, following exposure to acute stress, Staufen protein levels increase, thereby increasing cellular sensitivity to apoptosis. Analogously, Staufen overabundance contributes to aberrant translation, ribostasis, and proteostasis (Ravanidis et al., 2018;Gandelman et al., 2020). This indicates that both knockdown and overexpression of Staufen can impair synaptic plasticity, and suggests that the relationship between Staufen expression and synaptic plasticity is complex (Timmerman et al., 2013). To date, no studies have investigated the correlation between Staufen expression and memory. Our results suggest that the cognitive impairment induced by the embryonic inflammation may be related to changes in Staufen protein levels. Consistent with this hypothesis, our correlation analysis indicated that Staufen expression (protein and mRNA) was significantly correlated with cognitive ability in all the treatment groups. These findings provided the first evidence that Staufen expression is associated with impaired spatial learning and memory, as observed in the MWM test. Notably, this correlation was also age-dependent and treatment-related. A positive correlation was found between Staufen protein levels and the learning swimming distance, while a negative correlation was recorded between Staufen protein levels and the memory percentage of distance. These results suggested that the increase in hippocampal Staufen protein levels was associated with the observed age-associated learning and memory impairments following exposure to embryonic inflammation. Moreover, the pattern of correlation between cognitive performance and Staufen mRNA or protein levels was similar, further supporting that the changes in Staufen levels occured at the level of transcription. Specifically, a positive correlation was found between the learning swimming distance and Staufen mRNA levels in the CA1, CA3 subregions in all the treatment groups (LPS+S, LPS, LPS+E, and CON+S), and the DG subregion in the LPS+S and LPS. In contrast, a negative correlation was recorded between the memory percentage of distance and Staufen mRNA levels in the CA1, CA3, and DG subregions in the LPS+S and LPS; and in the CA1 subregion of the CON+S. These results further suggest that the impaired cognitive performance induced by exposure to embryonic inflammation may be attributable to increased Staufen transcription, which then leads to increased translation. This occurs preferentially in the CA1 subregion, followed by the CA3 subregion, and lastly in the DG subregion, an effect that was dependent on the intensity of the adverse stimulus. Meanwhile, Staufen mRNA levels might be more related to impaired learning ability.
In brief, our study is the first to report that exposure to embryonic inflammation and adolescent stress could each, and cumulatively, aggravate the age-associated learning and memory impairments, while an adolescent EE could ameliorate the changes resulting from embryonic inflammation. Secondly, to the best of our knowledge, this study is the first to report that age and embryonic inflammation can enhance hippocampal Staufen expression at both the protein and mRNA levels, while adolescent stress can partially increase, and an EE partially reverse, this effect. Thirdly, our results also indicated that the changes in hippocampal Staufen expression were closely correlated with impaired spatial learning and memory abilities, especially during "pathological" aging. This suggests that the learning and memory impairment resulting from embryonic inflammation may be related to changes in Staufen protein levels.
Our study also had several limitations. Firstly, it has recently been revealed that left-right anatomical and functional differences exist in the rodent hippocampus (Sakaguchi and Sakurai, 2020). In our study, a considerable amount of brain tissue was needed for the experiment, and because the unilateral hippocampal tissue available to us may not have been enough to meet the needs of the experiment, we did not account for these differences. Nonetheless, to ensure consistency in the experiment, all brain tissue was prepared in the same manner. Secondly, because we designed the experiment with a focus on the effects of embryonic inflammation, and the factors that would aggravate or alleviate these effects, we did not set up an LPS+S+E group to investigate the compound effect.
We will further enrich our groups in our subsequent study. Thirdly, we must admit that we have only described this phenomenon and did not investigate the mechanism underlying the age-associated learning and memory impairments; nevertheless, we have provided new insight into the mechanism of cognitive impairment caused by embryonic infection, which we still need to clarify further in future research.
DATA AVAILABILITY STATEMENT
All datasets presented in this study are included in the article/Supplementary Material.
ETHICS STATEMENT
The animal study was reviewed and approved by the Center for Laboratory Animal Sciences at Anhui Medical University.
AUTHOR CONTRIBUTIONS
Y-FW conceived and designed the study, performed the experiments, and drafted the manuscript. Y-MZ and H-HG performed the experiments. C-YR and Z-ZZ performed the behavioral test and collected the data. LC and FW designed the study and performed the statistical analysis. G-HC designed the study and revised the manuscript. All authors read and approved the final manuscript.
FUNDING
This work was financially supported by the National Natural Science Foundation of China (81370444 and 81671316), the Natural Science Foundation for the Youth of China (81301094), and the Natural Science Foundation for the Youth of Anhui Province (1708085QH182). This funding played important roles in the design of the study and collection, analysis, interpretation of data, and in writing the manuscript. | 2020-09-11T13:10:17.854Z | 2020-09-11T00:00:00.000 | {
"year": 2020,
"sha1": "45fbb102689f0fdec68ac2074f5e81079768c3b2",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnagi.2020.578719/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "45fbb102689f0fdec68ac2074f5e81079768c3b2",
"s2fieldsofstudy": [
"Psychology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
231985794 | pes2o/s2orc | v3-fos-license | Subword Pooling Makes a Difference
Contextual word-representations became a standard in modern natural language processing systems. These models use subword tokenization to handle large vocabularies and unknown words. Word-level usage of such systems requires a way of pooling multiple subwords that correspond to a single word. In this paper we investigate how the choice of subword pooling affects the downstream performance on three tasks: morphological probing, POS tagging and NER, in 9 typologically diverse languages. We compare these in two massively multilingual models, mBERT and XLM-RoBERTa. For morphological tasks, the widely used ‘choose the first subword’ is the worst strategy and the best results are obtained by using attention over the subwords. For POS tagging both of these strategies perform poorly and the best choice is to use a small LSTM over the subwords. The same strategy works best for NER and we show that mBERT is better than XLM-RoBERTa in all 9 languages. We publicly release all code, data and the full result tables at https://github.com/juditacs/subword-choice .
Introduction
Training of contextual language models on large training corpora generally begins with segmenting the input into subwords (Schuster and Nakajima, 2012) to reduce the vocabulary size. Since most tasks consume full words, practitioners have the freedom to decide whether to use the first, the last, or some combination of all subwords. The original paper introducing BERT, Devlin et al. (2019), suggests using the first subword for named entity recognition (NER), and did not explore different poolings. Kondratyuk and Straka (2019) also use the first subword, for dependency parsing, and remark in a footnote that they tried the first, last, average, and max pooling but the choice made no difference. Kitaev et al. (2019) report similar findings for constituency parsing, but nevertheless opt for reporting results only using the last subword. Hewitt and Manning (2019) take the average of the subword vectors for syntactic and word sense disambiguation tasks. Wu et al. (2020) use attentive pooling with a trainable norm for news topic classification and sentiment analysis in English. Shen et al. (2018) use hierarchical pooling for sequence classification tasks in English and Chinese.
Here we show that for word-level tasks (morphological, POS and NER tagging), particularly for languages where the proportion of multi-subword tokens (i.e. those word tokens that are split into more than one subword) is high, more care needs to be taken, as both the pooling strategy and the choice of language matter. We demonstrate this clearly for European languages with rich morphology, and for Chinese, Japanese and Korean (CJK). Similar to subword pooling, a choice has to be made between the lowest layer, the topmost one, or some combination of the activations in different layers. Here our main focus is subword pooling, but we do discuss layer pooling to the extent it sheds light on our main topic. We observe that the gap between using the first and the last subword unit is larger in lower layers than in higher ones.
We describe our data and tasks in Section 2, and the subword pooling strategies investigated in Section 3. Our results are presented in Section 4, and in Section 5 we offer our conclusions.
Our main contributions are:
• we show that subword pooling matters: the differences between choices are often significant and not always predictable;
• XLM-RoBERTa (Conneau et al., 2019) is slightly better than mBERT in the majority of morphological and POS tagging tasks, while mBERT is better at NER in all languages;
• the common choice of using the first subword is generally worse than using the last one for morphology and POS but the best for NER;
• the difference between using the first and the last subword is larger in lower layers than in higher layers, and it is more pronounced in languages with rich morphology than in English;
• the choice of subword pooling makes a large difference for morphological and POS tagging but it is less important for NER;
• we release the code, the data and the full result tables.
Tasks, languages, and architectures
We investigate pooling through three kinds of tasks.
In morphological tasks we attempt to predict morphological features such as gender, tense, or case.
In POS tasks we predict the lexical category associated with each word. In NER tasks we assign BIO tags (Ramshaw and Marcus, 1995) to named entities. We chose word-level, as opposed to syntactic, tasks because they can be tackled with fairly simple architectures and thus allow for a large number of experiments that highlight the differences between subword pooling strategies. Our experiments are limited only by the availability of standardized multilingual data. We use Universal Dependencies (UD) (Nivre et al., 2018) for morphological and POS tasks, and WikiAnn (Pan et al., 2017) for NER. We pick the largest treebank in each language from UD and sample 2000 train, 200 dev and 200 test sentences for the morphological probes, and up to 10,000 train, 2000 dev and 2000 test sentences (often limited by the size of the treebank) for POS. We chose languages with reasonably large treebanks in order to generate enough training data, making sure we have an example from each language family, as well as one from European subfamilies since their treebanks tend to be very large. We use 10,000 train, 2000 dev and 2000 test sentences for NER. Preprocessing steps are further explained in Appendix A. Our choice of languages is Arabic, Chinese, Czech, English, Finnish, French, German, Japanese, and Korean. UD's gold tokenization is kept and we run subword tokenization on individual tokens rather than on the full sentences.
Morphological tasks UD assigns zero or more tag-value pairs to each token, such as VerbForm=Ger for 'asking'. We define a probe as a triplet ⟨language, tag, POS⟩, i.e. we train a classifier to predict the value of a single tag in a sentence in a particular language. The task ⟨English, VerbForm, VERB⟩ would be trained to predict one of three labels for each English verb: finite, infinitive or gerund. We pick 4 tasks that are applicable to at least 3 of the 6 languages where the task makes sense (there are no morphological tags for Chinese and Japanese, and Korean uses a different tagging scheme). Table 1 lists the probing tasks.
Part-of-speech tagging assigns a syntactic category to each token in the sentence. It is usually treated as a crucial low-level task that provides useful features for higher-level linguistic analysis such as syntactic and semantic parsing. Universal POS tags (UPOS) are available in UD in all 9 languages.
Named entity recognition is a classic information extraction subtask that seeks to identify the spans of named entities mentioned in the sentence and classify them into pre-defined categories such as person names, organizations, locations, etc. NER was the only token-level task explored in the original BERT paper (Devlin et al., 2019).
Architectures BERT and other contextual models use subword tokenizers that generate one or more subwords for each token. In this study we compared mBERT and XLM-RoBERTa, two Transformer-based large-scale language models with support for over 100 languages. We pick these two since they are architecturally similar (both have 12 layers and the same hidden size), making our comparison easier. mBERT was trained on Wikipedia while XLM-RoBERTa was trained on CommonCrawl (Wenzek et al., 2020). Both models have been extensively applied to English and multilingual tasks, but generally at the sentence or sentence pair level, where subword issues do not come to the fore. mBERT uses a common wordpiece vocabulary with 118k subword units. When a word is split into multiple subword units, each token that is not the first one is prefixed with ##. XLM-RoBERTa's vocabulary was trained in a similar fashion but with 250k units and a special start symbol (Unicode lower one eighth block) instead of continuation symbols. Each word is prefixed with this start symbol before it is tokenized into one or more subword units. These start symbols are often then tokenized as single units, particularly before Chinese, Japanese and Korean characters, therefore artificially increasing the subword unit count. We indicate the proportion of words starting with a standalone start symbol along with other tokenization statistics in Table 2.
As Table 2 shows, the number of subword tokens is highly dependent on the language. English words are split only 14.3% (resp. 16.9%) of the time by the two models, while in many other languages more than half of the words are tokenized into two or more subword units. We hypothesize that this is due to the combination of the characteristics of the English language and its overrepresentation in the training data and the subword vocabulary.
We also observe that the two models' tokenizers work in very different ways. Out of the 2800 morphological test examples, only 58 are tokenized the same way, and 51 of these are not split into multiple subwords. Only 7 words that are in fact split are tokenized the same way. Although the full tokenization is rarely the same, the first and the last subwords are the same in 45.5% and 44.7% of the cases, respectively.
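The contrast between the two tokenizers can be inspected directly with the HuggingFace tokenizers; the sketch below is illustrative only (the example words and the notion of "split" used here are ours), and the paper's own preprocessing scripts are available in the released repository.

```python
# Compare how mBERT and XLM-RoBERTa tokenize individual words.
from transformers import AutoTokenizer

mbert_tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
xlmr_tok = AutoTokenizer.from_pretrained("xlm-roberta-base")

for word in ["resilience", "kaupungeissamme"]:  # an English and a Finnish word
    mbert_pieces = mbert_tok.tokenize(word)   # continuation pieces start with '##'
    xlmr_pieces = xlmr_tok.tokenize(word)     # word-initial pieces start with '▁'
    print(word, mbert_pieces, xlmr_pieces)
    # A word counts as "split" when it maps to more than one subword unit.
    print("split by mBERT:", len(mbert_pieces) > 1, "| split by XLM-R:", len(xlmr_pieces) > 1)
```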
Subword pooling
We test 9 subword pooling methods, listed in Table 3 and grouped into three broad types. The first group uses the first and last subword representations in some combination. In F+L pooling the mixing weight is the only learned parameter. The second group consists of parameter-free elementwise pooling operations.

Table 3: Subword unit pooling methods (columns: Method, Explanation, Params). u_first and u_last refer to the first and the last units respectively.
The last two methods rely on small neural networks that learn to combine the subword representations. Our subword ATTN has one hidden layer of 50 neurons with ReLU activation and a final softmax layer that generates a probability distribution over the subword units of the token. Similarly to self-attention, these probabilities are used to compute the weighted sum of subword representations to produce the final token vector. The LSTM uses a biLSTM (Hochreiter and Schmidhuber, 1997) that summarizes the 768-dimensional vectors (the hidden size of both models) into a 50-dimensional hidden vector in each direction, which are then concatenated and passed on to the classifier. These two are considerably more complicated and slower to train than the other methods, but ATTN works well for morphological tasks, and LSTM for POS tagging in CJK languages. Shen et al. (2018) found hierarchical pooling beneficial, but they investigated sentence-level tasks where the subword stream is much longer than in the word-level tasks we are considering (words are rarely split into more than 4 subwords), so hierarchical pooling has better traction there.
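A minimal PyTorch sketch of these pooling choices, applied to the subword vectors of a single word (shape: n_subwords × hidden), is given below. The layer sizes follow the text (hidden size 768, a 50-unit ReLU scorer for ATTN, a small biLSTM); everything else, including the exact F+L parameterisation and all names, is our reading of the description rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

HIDDEN = 768

# Parameter-free choices.
def pool_first(sub): return sub[0]
def pool_last(sub):  return sub[-1]
def pool_sum(sub):   return sub.sum(dim=0)
def pool_avg(sub):   return sub.mean(dim=0)
def pool_max(sub):   return sub.max(dim=0).values

class FirstPlusLast(nn.Module):
    """F+L pooling: a single learned mixing weight between first and last subword."""
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.tensor(0.5))
    def forward(self, sub):
        return self.w * sub[0] + (1 - self.w) * sub[-1]

class AttnPool(nn.Module):
    """Attention over subwords: 50-unit ReLU MLP scores each subword, then softmax."""
    def __init__(self, hidden=HIDDEN):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(hidden, 50), nn.ReLU(), nn.Linear(50, 1))
    def forward(self, sub):                        # sub: (n_subwords, hidden)
        weights = torch.softmax(self.score(sub).squeeze(-1), dim=0)
        return (weights.unsqueeze(-1) * sub).sum(dim=0)

class LSTMPool(nn.Module):
    """Small biLSTM over the subwords; the two final hidden states are concatenated."""
    def __init__(self, hidden=HIDDEN, lstm_size=50):
        super().__init__()
        self.lstm = nn.LSTM(hidden, lstm_size, bidirectional=True, batch_first=True)
    def forward(self, sub):                        # sub: (n_subwords, hidden)
        _, (h_n, _) = self.lstm(sub.unsqueeze(0))  # h_n: (2, 1, lstm_size)
        return torch.cat([h_n[0, 0], h_n[1, 0]], dim=-1)
```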
Layer pooling effects Both mBERT and XLM-RoBERTa have an embedding layer followed by 12 hidden layers. The only contextual information available in the embedding layer is the position of the token in the sentence. Hidden activations are computed with the self-attention layers, and therefore in theory have access to the full sentence. We ran our experiments for each layer separately as well as for the sum of all layers. For all tasks, as we move up the layers, results also move up or down in tandem. As exhaustive experiments considering different combinations of layers were computationally too expensive for our setup, and would significantly complicate the presentation of our results, we pick a single setting for all experiments by computing the best expected layer for each task as

E[layer] = Σ_{l_i ∈ L} i · A(l_i) / Σ_{l_i ∈ L} A(l_i),

where L is the set of all layers, l_i is the ith layer, and A(l_i) is the development accuracy at layer i.
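Assuming the accuracy-weighted average reading of the formula above, the tiny worked example below (with made-up development accuracies) shows how the statistic lands near the middle layers.

```python
# Hypothetical illustration of the "best expected layer" statistic.
def expected_layer(dev_accuracies):
    # dev_accuracies[i] is A(l_i) for layer i (0 = embedding layer).
    total = sum(dev_accuracies)
    return sum(i * acc for i, acc in enumerate(dev_accuracies)) / total

# The accuracies below are invented for illustration only.
print(expected_layer([0.70, 0.74, 0.78, 0.82, 0.85, 0.87, 0.88,
                      0.87, 0.86, 0.85, 0.84, 0.83, 0.82]))  # roughly 6
```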
As Figure 1 shows, the expected layers are almost always centered around the 6th layer. Therefore, with the exception of comparing FIRST and LAST, which we analyze in greater detail in 4.1, we chose the 6th layer to simplify the presentation. Probing setup Every experiment is trained separately, with no parameter sharing between the tasks and the experiments. We probe the morphology on fixed representations with a small MLP (multilayer perceptron) with a single hidden layer of 50 neurons and ReLU activation. We train the same model for POS tagging and NER on top of each token representation. We keep the number of parameters intentionally low, about 40k, to avoid overfitting on the probing data and to force the MLP to probe the representation instead of memorizing the data. We do note, however, that ATTN and LSTM increase the number of trained parameters to 77k and 330k respectively. We run each configuration 3 times with different random seeds. The standard deviation of results is always less than 0.06 for morphology and less than 0.005 for POS and NER. Further details are available in Appendix B.
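For concreteness, a minimal sketch of such a probe follows; the label count, the training loop, and any hyperparameters beyond those stated above (a single 50-unit ReLU hidden layer over a 768-dimensional frozen representation) are omitted or assumed.

```python
import torch.nn as nn

def make_probe(num_labels: int, hidden: int = 768) -> nn.Module:
    """Probing classifier on top of a frozen token representation."""
    return nn.Sequential(
        nn.Linear(hidden, 50),
        nn.ReLU(),
        nn.Linear(50, num_labels),
    )
```

With a 768-dimensional input this comes to roughly 40k trainable parameters, matching the figure quoted above.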
Choosing the size of the LSTM LSTM is our subword pooling method with the most parameters. The number of parameters scales quadratically with the hidden dimension of the LSTM. We pick this dimension with a binary parameter search on the morphology tasks. Our early experiments showed no significant improvement when increasing the size beyond 1000, and a binary search between 2 and 1024 led us to choose a biLSTM with 100 hidden units.
Results
Our analysis consisted of two steps. We first evaluated FIRST and LAST pooling at each layer (see Figure 2). Based on the results of this, we picked a single layer, the 6th, to test all 9 subword pooling choices. The full list of results on the 6th layer is listed in Appendix C.
Layer pooling
We find that although LAST is almost always better than FIRST, the gap is smaller in higher layers. We quantify this with the ratio of the accuracy of LAST and FIRST at the same layer. Figure 2 illustrates this ratio for a few selected morphological tasks and for POS and NER in all 9 languages. We split the morphological tasks into two groups: Finnish tasks and other tasks. Finnish, Case, NOUN shows the largest gap in the lower layers, where LAST is 8 times better than FIRST. We observe smaller gaps in other tasks. POS shows a fairly uniform picture with the exception of Korean, where FIRST is worse in all layers and both models. Lower layers in mBERT show a larger gap in Czech, and the same is true for Chinese and Japanese in XLM-RoBERTa. NER shows little difference between FIRST and LAST except for the first few layers, particularly in Chinese and Korean. To interpret these results, keep in mind that CJK tokenization is handled somewhat arbitrarily by XLM-RoBERTa, particularly in the first subword (cf. Table 2).
Morphology
We present the results of 14 morphological probing tasks (see Table 1) and 9 subword pooling strategies (see Table 3) using the 6th layer of each model. mBERT vs. XLM-RoBERTa Averaging over all tasks, XLM-RoBERTa achieves 85.7% macro accuracy while mBERT achieves 83.9%. On a per-language basis, XLM-RoBERTa is slightly better than mBERT except for French. Figure 3 shows our findings. The two models generally perform similarly with the exception of French and Finnish: mBERT is almost always better at French tasks, while XLM-RoBERTa is always better at Finnish tasks. Similar trends emerge when looking at the results by subword pooling method. XLM-RoBERTa is always better regardless of the pooling choice, but the difference is only significant (p < 0.05) for MAX and SUM. 2 These findings suggest that XLM-RoBERTa retains more about the orthographic representation of a token, and it uses tokenization that is closer to morpheme segmentation, hence performing better at inflectional morphology, which is most often derivable from the word form alone.
First or last subword? As Figure 4 shows, with the exception of the Arabic, Case, N task, LAST is always better than FIRST. We find the largest difference in favor of LAST in Finnish and Czech. Table 4 lists all tasks where the difference between FIRST and LAST is larger than 20% along with the only counterexample (where the difference is about 10% in the other direction). These findings are likely due to the fact that Finnish and Czech exhibit the richest inflectional morphology in our sample.
The exceptional behavior of Arabic case may relate to the fact that case often disappears in modern Arabic (Biadsy et al., 2009). When this occurs, the first token, being closest to the previous word, may provide a more reliable indicator, especially if that word was a preposition. Given the complex distribution of Arabic case endings, our sample is too small to ascertain this, and the results, about 75% on a 3-way classification task, are clearly too far from the optimum to draw any major conclusion (note that on Finnish case, a 12-way classification task, we get above 94% 3 ).
Other pooling choices While FIRST is clearly inferior in morphology, the picture is less clear for the other 8 pooling strategies. As Figure 5 illustrates, ATTN is better than all other choices for both models, but its advantage is only significant over a few other choices. We observe larger, and more often significant, differences in the case of mBERT than in XLM-RoBERTa. We plot Finnish morphological tasks separately since the effect is so pronounced that presenting them on the same plot would render the scaling uninformative for the other cases. (Figure note: S is the sum of all layers; note that we do not have a strongly prefixing language due to the lack of available probing data.) We also examined the weights that ATTN assigns for each token in the test data for morphology. Table 5 lists the proportion of tokens where ATTN assigns the highest weight to the first, last or a middle token, or the token is not split by the tokenizer. The last subword is weighted highest in more than 80% of the cases. The only task where the last subword is not the most frequent winner is Arabic, Case, N, where the first is weighted highest in 60% of the tokens by both models. These findings are in line with the behavior of FIRST and LAST.
POS tagging
We train POS tagging models for 9 languages with 9 subword pooling strategies. We evaluate the models using tag accuracy.
mBERT vs. XLM-RoBERTa As with morphological probing tasks, XLM-RoBERTa is slightly better than mBERT (95.4 vs. 94.6 macro average). We also observe that the choice of subword makes less difference than it does in morphological probing. (Figure note: where the difference is statistically significant; ATTN is better than all other choices, therefore its row is green; FIRST is omitted for clarity as it is much worse than the other choices.) Figure 6 shows that experiments in one language tend to cluster together regardless of the
subword pooling choice except for a few outliers: FIRST for Chinese and Korean is much worse in both models. The same result can be observed in Japanese, though to a lesser extent. Language-wise we find that XLM-RoBERTa is much better at Finnish and somewhat worse at Chinese, but the two models generally perform similarly. Choice of subword. As with morphology, FIRST is the worst choice, but the effect is not as marked for POS tasks. In Figure 6 we observe 3 outliers: FIRST for Chinese with XLM-RoBERTa, and FIRST for Korean with both models. The only consistent trend is that XLM-RoBERTa is clearly better for Finnish regardless of the choice of subword pooling. The picture is less clear for other languages.
We split the analysis into CJK and non-CJK languages. Figure 7 and Figure 8 show a comparison for non-CJK languages and CJK languages respectively. The difference between choices is generally much smaller than for morphology. FIRST is the worst choice both for CJK and non-CJK languages. Interestingly, one of the best choices for morphology, LAST, is the second worst choice for POS tagging, while one of the worst for morphology, LSTM, is one of the best for POS tagging. We hypothesize that this is due to overparametrization for morphology. POS tagging is a much more complex task that needs a larger number of trainable parameters (recall that LSTM parameters are shared across all tokens).
Named entity recognition
As Figure 6 shows, in NER the choice of subword pooling makes far less difference than in morphology. In terms of models, mBERT has a clear advantage over XLM-RoBERTa when it comes to NER. The difference between the two models is generally larger than the difference between two subword choices within the same language. The smallest difference between the two models appears to be in Czech, Finnish and German, which all have rich, partially agglutinative, morphology. This fits with our earlier findings that showed that XLM-RoBERTa might be better at handling rich morphology. Overall, FIRST and the related F+L as well as LSTM come out as winners, although the differences are rather small and often not statistically significant for CJK.
Discussion
Throughout our extensive experiments we observed that pooling strategies can have a significant impact on the conclusions drawn from probing experiments. When working with multiple typologically different languages, relying on a single pooling option can weaken the conclusions drawn from the experiments. Our recommendation for NLP practitioners is to try at least three subword pooling strategies, particularly for tasks in languages other than English. FIRST and LAST usually give a general picture; as a third control we recommend ATTN and LSTM. More complicated tasks such as POS or NER tagging may require LSTM with many parameters, while tasks that rely more on the orthographic representation, such as morphology, tend to benefit from ATTN.
One of the greatest attractions of the current generation of models is that they do away with labor-intensive feature engineering. Currently, subword pooling acts as the little finderscope mounted on the side of the main telescope to get it to point in the right region, but over the long haul we expect the systems to develop in a way that pooling also becomes part of the end-to-end process.
Our methodology is only limited by the availability of data. It would be interesting to extend this study to languages that also use prefixes, such as Indonesian or Swahili.
Conclusion
The key takeaway from our work is that performance on lower-level tasks depends on the way we pool over the multiple subword units that belong to a single word token. This is more of an issue in languages other than English, where a significantly larger proportion of words are represented by multiple subword units.
Morphological and POS tasks are both probing word-level attributes, but the results show huge disparity: for the morphological tasks FIRST pooling is the worst strategy and ATTN is the best, while for POS tagging ATTN is almost as bad as FIRST, the best being LSTM. The NER task is intermediary between word- and phrase-level, and subword pooling effects are less marked, but still statistically significant (see the full result tables in the Appendix).
A.1 Morphology dataset
The morphological probing tasks are sampled from UD's train set. We sample the sentences in a way that avoids overlaps in target words between train, dev and test splits; in other words, if a word is the target in the train set, we do not allow the same target word in the dev or test set. A target word is the word that needs to be classified according to some morphological tag. We also limit class imbalance to at most 3:1. This results in the removal of rare tags such as a few of the numerous Finnish noun cases. These restrictions and the size of the treebanks do not allow generating larger datasets.
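A simplified sketch of these two constraints, no target word shared across splits and class imbalance capped at 3:1, is given below; the data structures and the exact way rare labels are dropped are our assumptions, not the authors' pipeline.

```python
import random
from collections import Counter

def split_by_target(examples, ratios=(0.8, 0.1, 0.1), seed=0):
    """examples: dicts with 'target' (the word to classify) and 'label' keys.
    Whole target words are assigned to one split so no target leaks across splits."""
    targets = sorted({ex["target"] for ex in examples})
    random.Random(seed).shuffle(targets)
    cut1 = int(ratios[0] * len(targets))
    cut2 = int((ratios[0] + ratios[1]) * len(targets))
    split_of = {t: ("train" if i < cut1 else "dev" if i < cut2 else "test")
                for i, t in enumerate(targets)}
    splits = {"train": [], "dev": [], "test": []}
    for ex in examples:
        splits[split_of[ex["target"]]].append(ex)
    return splits

def enforce_max_imbalance(examples, max_ratio=3):
    """Drop labels that are too rare to keep the imbalance at most max_ratio:1
    (one plausible reading of how rare Finnish noun cases were removed)."""
    counts = Counter(ex["label"] for ex in examples)
    kept_labels = {lab for lab, c in counts.items() if max(counts.values()) / c <= max_ratio}
    return [ex for ex in examples if ex["label"] in kept_labels]
```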
A.2 POS dataset
We use the largest treebank in each language for POS. The only preprocessing we do is to filter out sentences longer than 40 tokens. Since this results in an uneven distribution of training sizes, we limit the number of training sentences to 2000. We note that experiments using 10,000 sentences are underway but, due to resource limitations, we were unable to include them in this version of the paper.
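In code, this preprocessing amounts to a length filter and a cap on the number of training sentences, roughly as follows (a sketch, not the authors' script):

```python
def prepare_pos_train(sentences, max_len=40, max_train=2000):
    """sentences: list of token lists from the largest UD treebank of a language."""
    filtered = [s for s in sentences if len(s) <= max_len]  # drop sentences longer than 40 tokens
    return filtered[:max_train]                             # cap the training size at 2000
```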
A.3 NER dataset
NER is sampled from WikiAnn. WikiAnn is a silver-standard large-scale NER corpus and the number of sentences is over 100,000 in each language. We deduplicated the dataset and discarded sentences longer than 40 tokens, or 200 characters in the case of Chinese and Japanese. WikiAnn annotates Chinese and Japanese at the character level. We aligned this with mBERT's tokenizer and retokenized it. Due to memory constraints, we had to cut off the training data size at 10,000.
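The WikiAnn preprocessing can be sketched the same way; whether the 40-token limit also applies to Chinese and Japanese is our reading of the text, so treat the branching below as an assumption.

```python
def prepare_ner_train(sentences, language, max_tokens=40, max_chars=200, max_train=10_000):
    """sentences: list of token lists from WikiAnn for one language."""
    seen, kept = set(), []
    for sent in sentences:
        key = tuple(sent)
        if key in seen:
            continue                                       # deduplicate
        seen.add(key)
        if language in {"zh", "ja"}:
            if sum(len(tok) for tok in sent) > max_chars:  # character limit for Chinese/Japanese
                continue
        elif len(sent) > max_tokens:                       # token limit for the other languages
            continue
        kept.append(sent)
    return kept[:max_train]                                # cut off the training size at 10,000
```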
B Training details
Each classifier is trained separately from randomly initialized weights with the Adam optimizer (Kingma and Ba, 2014), using lr = 0.001, β1 = 0.9 and β2 = 0.999, and early stopping on the development set. We report test accuracy scores averaged over 3 runs with different random seeds.
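For reference, these optimizer settings translate directly into PyTorch as follows; the early-stopping patience is not stated in the text, so the value below is a placeholder.

```python
import torch

def make_optimizer(model):
    # Adam with lr=0.001, beta1=0.9, beta2=0.999, as quoted above
    return torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))

def train_with_early_stopping(model, run_epoch, eval_dev, patience=5, max_epochs=100):
    """run_epoch(model, optimizer) trains one epoch; eval_dev(model) returns dev accuracy."""
    optimizer = make_optimizer(model)
    best, epochs_without_improvement = -1.0, 0
    for _ in range(max_epochs):
        run_epoch(model, optimizer)
        accuracy = eval_dev(model)
        if accuracy > best:
            best, epochs_without_improvement = accuracy, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # stop when the development set no longer improves
    return best
```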
We ran about 14,000 experiments on GeForce RTX 2080 GPUs which took 7 GPU days. We cache mBERT's and XLM-RoBERTa's output when possible. We used PyTorch and our own framework for experiment management. We release the framework along with the final submission. | 2021-02-23T02:15:35.808Z | 2021-02-22T00:00:00.000 | {
"year": 2021,
"sha1": "188cd686fb2200f237f688dbda7f64ffc75e67ac",
"oa_license": "CCBY",
"oa_url": "https://aclanthology.org/2021.eacl-main.194.pdf",
"oa_status": "HYBRID",
"pdf_src": "ACL",
"pdf_hash": "1cd94d2b84afbaf2d9c37af4300cfb6a9cb1aad2",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
253422759 | pes2o/s2orc | v3-fos-license | WARS1, TYMP and GBP1 display a distinctive microcirculation pattern by immunohistochemistry during antibody-mediated rejection in kidney transplantation
Antibody-mediated rejection (ABMR) is the leading cause of allograft failure in kidney transplantation. Defined by the Banff classification, its gold standard diagnosis remains a challenge, with limited inter-observer reproducibility of the histological scores and limited availability of efficient immunomarkers. We performed an immunohistochemical analysis of 3 interferon-related proteins, WARS1, TYMP and GBP1, in a cohort of kidney allograft biopsies including 17 ABMR cases and 37 other common graft injuries. Slides were interpreted, for an ABMR diagnosis, by four blinded nephropathologists and by a deep learning framework using convolutional neural networks. Pathologists identified a distinctive microcirculation staining pattern in ABMR with all three antibodies, displaying promising diagnostic performances and a substantial reproducibility. The deep learning analysis supported the microcirculation staining pattern and achieved similar diagnostic performance from internal validation, with a mean area under the receiver operating characteristic curve of 0.89 (± 0.02) for WARS1, 0.80 (± 0.04) for TYMP and 0.89 (± 0.04) for GBP1. The glomerulitis and peritubular capillaritis scores, the hallmarks of histological ABMR, were the Banff scores most highly correlated with the deep learning output, whatever the C4d status. These novel immunomarkers combined with a CNN framework could help mitigate current challenges in ABMR diagnosis and should be assessed in larger cohorts.
Short-term renal allograft survival has considerably increased over past decades, thanks to improvements in immunosuppressive strategies. In contrast, long-term allograft survival has not increased proportionately and is now a major issue 1 . The leading cause of kidney allograft failure is antibody-mediated rejection (ABMR), considered to be involved in about two-thirds of cases 2 . Antibody-mediated rejection is primarily an endothelial disease mediated by donor-specific antibodies (DSA), which target human leukocyte antigens (HLA) or non-HLA antigens. DSA binding to endothelial cells leads to recruitment of inflammatory cells and injuries (from activation to cell lysis), resulting in histological lesions of microvascular inflammation: glomerulitis and peritubular capillaritis. These two lesions are graded from 0 to 3 (g and ptc scores, respectively), according to the 2019 Banff classification 3 . Moreover, DSA can activate the classical complement pathway and lead to C4d deposits on peritubular capillaries, which can be revealed by immunohistochemistry in a kidney allograft biopsy. Thus, the 2019 Banff classification retains DSA detection in the serum, histological microvascular inflammation and C4d deposits as the hallmarks of ABMR diagnosis 3 . Not all three criteria are required, as proposed surrogate markers allow several combinations to be accepted (e.g. C4d negative ABMR may be diagnosed with a significant microvascular inflammation in addition to DSA detection, and ABMR without detectable DSA may be diagnosed with microvascular inflammation and C4d deposits). Nevertheless, the diagnosis of active ABMR remains complex, due to our limited understanding of the full dynamic range of ABMR and the known limitations of the current criteria 4 . Indeed, the morphological scores still lack inter-observer reproducibility, even between experienced nephropathologists [4][5][6] . A recent study only showed a mild to moderate reproducibility for the glomerulitis and peritubular capillaritis scores, with Cohen's Kappa of 0.39 and 0.38, respectively 7 . C4d deposits are highly specific to an active antibody-mediated mechanism, but are known to be negative in up to 50% of ABMR cases 5,8 . The DSA criterion has at least two limitations: (i) the heterogeneity among centers in the exhaustivity of their testing and (ii) the growing evidence of the involvement of antibodies targeting non-HLA antigens 9 , which are not easily routinely tested. In addition, a mechanism of microvascular inflammation has recently been described, which is not mediated by antibodies but by NK cells 10 . Finally, validated molecular classifiers have been added as a surrogate marker for an ABMR diagnosis since 2015 11 , although currently they are not widely available and are still struggling to be applied in current global practice.
Treatment of ABMR primarily aims at removing circulating DSA, blocking their effects and/or reducing their production. Glucocorticoids, plasma exchange and intravenous immunoglobulins remain the basis of current therapy 12 . Because this treatment is complex, burdensome and sometimes associated with complications, such as infection and thrombosis, optimizing the diagnostic performance of ABMR by pathologists is a major and primary need.
In a previous study, we analyzed the glomerular proteome modifications during active ABMR compared to stable grafts, using laser microdissection combined with tandem mass spectrometry 13 . We described 77 dysregulated proteins in glomerulitis and highlighted 3 interferon-related proteins, which displayed an overexpression by immunohistochemistry in glomerular endothelial cells during ABMR: WARS1, TYMP and GBP1. Proteomics results suggested their robustness with respect to chronicity and C4d status. Furthermore, through an exploratory approach, we noticed that WARS1, TYMP and GBP1 displayed a microcirculation staining pattern by immunohistochemistry in ABMR cases ( Fig. 1), highlighting not only inflammatory but also endothelial cells in both glomeruli and peritubular capillaries.
In the last decade, deep learning-based computer vision has emerged as one of the best opportunities for more quantitative and reproducible histopathologic evaluations, as well as for reordering pathologists' priorities by delegating time-consuming tasks to algorithms. In oncology, deep learning-based approaches have been described not only for diagnostic and prognostic applications [14][15][16] , but also for prediction of molecular alterations 15,17,18 . In the area of kidney transplantation, recent studies have highlighted the value of deep learning for the identification of abnormal (i.e. lesional) allograft biopsies from morphological slides 19 , the prediction of early and long-term graft survival based on baseline and 12-month post-transplant biopsies 20 , the assessment of C4d staining 21,22 and the quantitative evaluation of tubulo-interstitial inflammation 23 .
Herein we performed an immunohistochemical analysis of WARS1, TYMP and GBP1 in a selected cohort of kidney allograft biopsies including common graft injuries encountered in routine practice. The aims of the study were to (i) assess the potential value of the microcirculation pattern in the diagnosis of ABMR, as interpreted by four nephropathologists, (ii) evaluate their suitability for a deep learning-based interpretation and classification and (iii) describe the overall expression pattern of WARS1, TYMP and GBP1 in kidney transplantation by immunohistochemistry.
Materials and methods
Selection of the cohort. This is a single-center, retrospective, descriptive study analyzing selected kidney allograft biopsies. All cases consisted of renal allograft biopsies, formalin-fixed and paraffin embedded, already performed for diagnosis purposes from August 2011 to February 2016 at the Bordeaux University Hospital. Diagnoses were in accordance with the 2017 Banff classification. Chronic ABMR was defined by light microscopy (≥ cg1b). With the exception of recurrent glomerulopathies, immunofluorescence study with antibodies targeting IgA, IgG, IgM, C3, Kappa and Lambda was negative for all cases. C4d status was assessed by immunofluorescence. As required by the local institution's ethics board, patients for whom a renal biopsy was eligible were contacted and had the legal time of one month to express their opposition. The study was conducted

Immunohistochemical study and analysis. For immunohistochemistry, 2.5 μm thick sections were cut, dewaxed and rehydrated. Antigen retrieval was performed in a 1 mM Tris-EDTA pH = 9 solution. All staining procedures were performed in an automated autostainer (Dako-Agilent, Santa Clara, United States) using standard reagents provided by the manufacturer. Three commercial primary antibodies were used from the manufacturer Abcam, targeting thymidine phosphorylase (TYMP, mouse, clone P-GF.44C, dilution 1:200), tryptophan-tRNA ligase, cytoplasmic (WARS1, rabbit, clone EPR3423, 1:3000) and guanylate-binding protein 1 (GBP1, mouse, clone OTI1B2, 1:50). The sections were incubated with the corresponding antibody for 45 min at room temperature. EnVision Flex/horseradish peroxidase (Dako-Agilent) was used for signal amplification, revealed by 3,3'-diamino-benzidine (Dako-Agilent). The slides were counterstained with hematoxylin, dehydrated and mounted. Each immunohistochemical assay contained a negative (buffer, no primary antibody) and a positive control (transplant nephrectomy with chronic active ABMR lesions). Slides were interpreted by four nephropathologists. They were unaware of the diagnosis and were asked to assess each case as either positive or negative for an active ABMR diagnosis, based on the recognition of a microcirculation staining pattern (Fig. 1). Specifically, for the CD34/CORO1A double staining, the ImmPRESS Duet Double Staining Polymer Kit was used (MP-7714, Vector Laboratories, Burlingame, United States), with the CD34 antibody (Leica-Novocastra, mouse, QBEnd/10, dilution 1:100) and the coronin-1A antibody (CORO1A/TACO, Abcam, rabbit, EPR19467-36, 1:3000).
Deep learning analysis for classification from virtual slides. Deep learning for computer vision was used to train models for the binary classification ABMR/Other diagnosis for each antibody. The overall analytical strategy is illustrated in Fig. 2. All analyzed slides were anonymized and digitized into the ndpi format using a Hamamatsu NANOZOOMER 2.0HT at the ×20 objective (resolution 0.46 μm/pixel). Using the QuPath 0.2.3 software 24 , the renal parenchyma was manually annotated for each slide, defining the regions of interest. Each region of interest was then segmented into square tiles of 512 × 512 pixels. Tiles were numbered for each case and exported into the jpeg format, according to the Aachen protocol for Deep Learning Histopathology 25 .
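Although the tiling was done in QuPath, the same operation can be illustrated in Python with the OpenSlide library; the sketch below crops non-overlapping 512 × 512 tiles from a rectangular region of interest and saves them as JPEG. It is a generic illustration, not the study's actual QuPath/Aachen-protocol script, and the file names and region-of-interest coordinates are placeholders.

```python
import openslide  # pip install openslide-python

def export_tiles(slide_path, roi, out_prefix, tile=512):
    """roi: (x, y, width, height) in level-0 pixel coordinates of the annotated parenchyma."""
    slide = openslide.OpenSlide(slide_path)  # ndpi files are supported by OpenSlide
    x0, y0, w, h = roi
    n = 0
    for y in range(y0, y0 + h - tile + 1, tile):
        for x in range(x0, x0 + w - tile + 1, tile):
            region = slide.read_region((x, y), 0, (tile, tile)).convert("RGB")
            region.save(f"{out_prefix}_{n:05d}.jpg", quality=90)
            n += 1
    return n

# hypothetical usage
# export_tiles("case_001_WARS1.ndpi", roi=(10_000, 20_000, 8_192, 8_192), out_prefix="case_001_WARS1")
```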
Figure 2.
Overall deep learning-based analytical strategy of the study. A deep learning approach was used to build models for the sequential binary classification ABMR versus Other diagnosis for each antibody. Briefly, each whole slide image, one per patient and per antibody, was cropped in square tiles and, from these, two models were trained for a sequential binary classification. Firstly, a convolutional neural network, namely the pre-trained Resnet50V2 architecture, was trained at the tile level and secondly a random forest classifier was trained at the patient level (i.e. whole slide image), based on the output of model 1 for all tiles of a considered patient. Internal validation was performed for the evaluation of models' performance, using a threefold cross-validation and by maintaining the data split for both the training of models 1 and 2. Abbreviation: ABMR, antibody-mediated rejection. Created with BioRender.com.

Deep learning models were trained using the keras library with the TensorFlow backend 26,27 . As illustrated in Fig. 2, two models were trained for a sequential binary classification ABMR versus Other diagnosis. Firstly, a convolutional neural network (CNN, model 1) was trained at the tile level and secondly a random forest classifier (model 2) was trained at the patient level (i.e. whole slide image), based on the output of model 1 for all tiles of the considered patient. Internal validation was carried out to estimate the models' performance, using a threefold cross-validation. To ensure a balanced split of both tiles and patients throughout both models 1 and 2, stratified folds were created using the StratifiedGroupKFold function of the scikit-learn library. Five repeated cross-validation procedures were conducted to estimate the overall performance variability of the whole classification. Due to the small amount of available data and the lack of a holdout set, hyperparameter tuning was reduced to a minimum, to allow an honest estimate of overall performance. Hyperparameter tuning was empirically performed on the first fold of the first iteration of the cross-validation with the WARS1 antibody and hyperparameters were then identically set for all neural network models of the cross-validation and all antibodies without further tuning. The area under the receiver operating characteristic curve (AUC) was set as the performance metric.
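The fold construction described above can be reproduced with scikit-learn's StratifiedGroupKFold, stratifying on the slide-level label while keeping all tiles of one patient in the same fold; the toy data below merely illustrates the mechanics.

```python
import numpy as np
from sklearn.model_selection import StratifiedGroupKFold

# toy data: 12 patients with a variable number of tiles each, slide-level label per tile
rng = np.random.default_rng(0)
groups = np.repeat(np.arange(12), rng.integers(5, 15, size=12))  # patient id for every tile
labels = (groups % 4 == 0).astype(int)                           # a minority of "ABMR" patients

cv = StratifiedGroupKFold(n_splits=3, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(cv.split(np.zeros(len(labels)), labels, groups)):
    # every patient's tiles end up entirely in the training fold or entirely in the validation fold
    assert not set(groups[train_idx]) & set(groups[val_idx])
    print(fold, len(train_idx), len(val_idx), labels[val_idx].mean())
```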
Transfer learning was performed for the training of model 1, using the pre-trained network Resnet50V2, available in the keras library. The Resnet50V2 model was loaded with the trained weights based on the ImageNet dataset, excluding the final classification layer. Instead, a GlobalAveragePooling and a Dense layer of 1 neuron were added, the latter with a sigmoid activation function. To make the model more generalizable, i.e. more robust against image variations, we applied image data augmentation during training. As such, random image alterations such as horizontal and vertical flips, rotation and channel shifts were performed. To account for the imbalance of the dataset between ABMR and Other diagnosis cases, we used a weighted binary cross-entropy loss function. Thus, a misclassification in the underrepresented ABMR class gave a higher error than in the majority "Other diagnosis" class. Model 1 was first trained for one epoch with all convolutional layers kept frozen (weights non-trainable), with an Adam optimizer, a learning rate of 1e-04 and a batch size of 64. Secondly, the last two convolutional blocks (blocks 4 and 5) were unfrozen for fine-tuning and trained for 10 epochs, with a learning rate of 1e-05. The weights of the epoch achieving the best AUC in the validation set were restored.
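A condensed Keras sketch of this two-stage training is given below. The dataset objects, the class weights and the checkpoint path are placeholders, the augmentation pipeline is omitted, and selecting blocks 4 and 5 by layer-name prefix is our assumption about how the unfrozen layers were chosen.

```python
from tensorflow import keras

def build_tile_classifier(input_shape=(512, 512, 3)):
    base = keras.applications.ResNet50V2(include_top=False, weights="imagenet",
                                         input_shape=input_shape)
    inputs = keras.Input(shape=input_shape)
    x = keras.applications.resnet_v2.preprocess_input(inputs)
    x = base(x)
    x = keras.layers.GlobalAveragePooling2D()(x)
    outputs = keras.layers.Dense(1, activation="sigmoid")(x)  # single sigmoid unit: P(ABMR)
    return keras.Model(inputs, outputs), base

model, base = build_tile_classifier()

# stage 1: one epoch with all convolutional layers frozen (batching of 64 is set in train_ds)
base.trainable = False
model.compile(optimizer=keras.optimizers.Adam(1e-4), loss="binary_crossentropy",
              metrics=[keras.metrics.AUC(name="auc")])
# model.fit(train_ds, validation_data=val_ds, epochs=1,
#           class_weight={0: 1.0, 1: 3.0})  # placeholder weighting of the rarer ABMR class

# stage 2: unfreeze only the last two convolutional blocks (conv4, conv5) and fine-tune
base.trainable = True
for layer in base.layers:
    layer.trainable = layer.name.startswith(("conv4", "conv5"))
model.compile(optimizer=keras.optimizers.Adam(1e-5), loss="binary_crossentropy",
              metrics=[keras.metrics.AUC(name="auc")])
# model.fit(train_ds, validation_data=val_ds, epochs=10,
#           callbacks=[keras.callbacks.ModelCheckpoint("best_fold.h5", monitor="val_auc",
#                                                      mode="max", save_best_only=True)])
```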
Because each patient had a different number of tiles, 12 variables were created based on the output of model 1 for all tiles of a considered patient. Seven variables were defined by simple descriptive statistics: mean of all tiles' predictions, median, minimum and maximum values, standard deviation, and quantiles 25 and 75. In addition, to better assess the consistency of in situ expression, we added 5 variables reflecting the consistency of the local expression pattern. For this, we performed a one-dimensional average pooling from all ordered tiles of each patient, with a pool size of 10 and a stride of 5. The 5 variables were: minimum and maximum values, quantiles 25 and 75, and standard deviation. Overall, these 12 variables enabled an input for model 2 of the same shape for each patient, regardless of the number of tiles, compatible with most machine learning methods. Model 2 consisted of a random forest classifier, built with the scikit-learn library, with 100 trees, trained while maintaining the train/validation split for all three folds of the cross-validation. For each trained model 2, 50 iterations were performed and the mean of the validation AUC was retained. Similarly, the best threshold was computed (closest top-left method) and the corresponding sensitivity and specificity were averaged. Performance for each cross-validation iteration was then averaged, with weighted means calculated for sensitivity and specificity respectively, based on the proportion of ABMR and Other cases in each fold. Finally, performance from the 5 repeated cross-validation procedures was averaged.
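The 12 slide-level variables and the random forest can be sketched as follows; the random tile probabilities stand in for the real model-1 outputs, and the feature ordering is arbitrary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def slide_features(tile_probs, pool_size=10, stride=5):
    """Summarize the ordered tile-level ABMR probabilities of one slide into 12 numbers."""
    p = np.asarray(tile_probs, dtype=float)
    feats = [p.mean(), np.median(p), p.min(), p.max(), p.std(),
             np.quantile(p, 0.25), np.quantile(p, 0.75)]          # 7 descriptive statistics
    # 1-D average pooling over the ordered tiles to capture local consistency
    windows = np.array([p[i:i + pool_size].mean()
                        for i in range(0, max(len(p) - pool_size + 1, 1), stride)])
    feats += [windows.min(), windows.max(),
              np.quantile(windows, 0.25), np.quantile(windows, 0.75), windows.std()]
    return np.array(feats)

# toy example: two slides with different numbers of tiles, one labelled ABMR
rng = np.random.default_rng(0)
X = np.stack([slide_features(rng.random(180)), slide_features(rng.random(95))])
y = np.array([1, 0])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict_proba(X)[:, 1])
```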
To allow for a visual explanation of the CNN classification, we used the Gradient-weighted Class Activation Mapping (Grad-CAM) approach 28 . Briefly, this method uses the final convolutional layer of a trained model to produce a localization map (heatmap), highlighting important regions in the image for class consideration. The heatmap is then superimposed onto the original image. Because Resnet50V2 is not directly suited for this implementation, we used the Xception architecture instead, which we trained in a similar manner to Resnet50V2. The last convolutional layer was set to the "block14_sepconv2_act" layer.
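A compact version of this Grad-CAM computation might look as follows; the classifier is rebuilt here as a single functional graph so the named Xception layer is reachable, and the upsampling/overlay of the heatmap onto the original tile is left out.

```python
import tensorflow as tf
from tensorflow import keras

# Xception-based tile classifier built as one flat graph (inner layers stay addressable)
base = keras.applications.Xception(include_top=False, weights="imagenet",
                                   input_shape=(512, 512, 3))
x = keras.layers.GlobalAveragePooling2D()(base.output)
out = keras.layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(base.input, out)

def grad_cam(model, image, conv_layer_name="block14_sepconv2_act"):
    """Return a low-resolution heatmap of the regions pushing the output towards the ABMR class."""
    grad_model = keras.Model(model.inputs,
                             [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, pred = grad_model(image[None, ...])
        score = pred[:, 0]                              # single sigmoid unit
    grads = tape.gradient(score, conv_out)              # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))        # global-average-pooled gradients
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)[0]
    cam = tf.nn.relu(cam)                               # keep only positive contributions
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalized to [0, 1]

# heatmap = grad_cam(model, preprocessed_tile)  # preprocessed_tile: float array of shape (512, 512, 3)
```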
Software and statistical analysis. All deep learning models were trained using the keras library with the TensorFlow backend 26,27 , using either a Tesla T4 or a P100 as graphics processing units. Areas under the curve were calculated using the scikit-learn library. All other statistical analyses were performed using the R software, version 4.1.1 29 . Cohen's kappa was computed for the evaluation of inter-observer reproducibility between two pathologists, and Light's kappa for the overall inter-observer reproducibility between all four pathologists. Plots were produced using the ggplot2 package, version 3.3.5, and correlation analyses with the cor.test function. In order to easily compare pathologists' and deep learning interpretations of immunomarkers, a majority rule was applied to the pathological interpretations, where each case was classified according to the report of most pathologists. In case of ties, the interpretation of the pathologist B.C. was retained.
Results
Main clinical, biological and histological characteristics of included patients. Overall, 54 patients with a corresponding kidney allograft biopsy were retrospectively included in this study, including 17 with an active ABMR diagnosis and 37 differential diagnoses commonly encountered in kidney transplantation. Of the 17 active ABMR cases, five were C4d positive by immunofluorescence and seven displayed chronic antibody-mediated glomerular injuries (≥ cg1b). All ABMR cases had anti-HLA DSA in their serum, with a median [IQR] mean fluorescence intensity of the immunodominant DSA of 4715 [2500-7500]. Eleven of 17 patients had de novo DSA. The 37 cases of differential diagnoses consisted of: T cell-mediated rejections (n = 6), infections (3 polyomavirus nephropathies and 2 acute pyelonephritides), acute tubular injuries (n = 5), recurrent or de novo glomerular nephropathies (3 IgA nephropathies and 2 membranous nephropathies), non-humoral thrombotic microangiopathies (n = 5), isolated C4d positivity (n = 3), chronic ABMR without activity (g0 ptc0, n = 5) and stable graft cases in ABO-incompatible transplantation (one-year protocol biopsies without acute lesion, n = 3). Main clinical, biological and histological characteristics are displayed in Table 1.
Performance of pathologists for ABMR diagnosis with each antibody. All slides were interpreted by four nephropathologists (B.C., A.V., M.R. and JP.DVH.). They were unaware of the diagnosis and were asked to assess each case as positive or negative for an active ABMR diagnosis based on the recognition of a microcirculation staining pattern. The microcirculation staining was defined as a positive staining of one or both microcirculation compartments (i.e. glomerular and/or peritubular capillaries), with a diffuse pattern for WARS1 and TYMP, while a focal pattern was considered for GBP1 (Fig. 1). This definition was based on the initial study that revealed these proteins by mass spectrometry, including 21 ABMR and 8 stable graft cases, which was used as a training cohort for the pathologists 13 .
Performances of the pathologists are summarized in Tables 2 and 3 (see also Supplemental Tables S1-S4 for more details). Overall, TYMP had the best diagnostic performance in this cohort, with a mean sensitivity (Se) of 88% (± 9), a mean specificity (Sp) of 86% (± 5), and a substantial agreement (Light's κ = 0.73). WARS1 was slightly less sensitive and specific (mean Se = 80% ± 11, Sp = 81% ± 5) but also showed a substantial agreement (κ = 0.64). Finally, GBP1 had the lowest sensitivity (mean Se = 60% ± 6), with a mean specificity of 90% ± 3 and a substantial inter-observer reliability (κ = 0.68). While applying a majority rule for the pathologists' interpretation, 11 of 12 cases of C4d negative ABMR were properly identified with TYMP, 7 of 11 with WARS1 and 6 of 11 with GBP1 (Supplemental Table S5). There was no significant association between immunostain positivity and the C4d status: p = 0.24 for WARS1, p = 0.45 for TYMP and p = 0.57 for GBP1 (Fisher's exact test, Table 3). Of note, there was no obvious morphological difference of staining in active ABMR cases depending on the C4d status. As displayed in Table 3, false positives were mainly due to (i) some infection and T cell-mediated rejection (TCMR) cases, where a marked and diffuse interstitial inflammation led to a misleading endothelial positivity on peritubular capillaries, or (ii) chronic antibody-mediated rejection cases thought to be non-active according to the Banff classification, i.e. without microvascular inflammation, g0 ptc0 (see also Supplemental Figs. S1 and S2). False negatives were mainly due to staining judged as too focal and/or too weak.
Deep learning-based classification, visual interpretation and correlation with the Banff scores.
We assessed the suitability of deep learning for the diagnosis of ABMR with the immunomarkers WARS1, TYMP and GBP1. The overall analytical strategy is illustrated in Fig. 2. Briefly, we used a convolutional neural network (CNN)-based pipeline for the binary classification of the immunostains as ABMR or other diagnosis, analyzed after cropping whole slide images into multiple square tiles. Internal validation was performed to assess model performance using 5 iterations of a threefold cross-validation (see also Supplemental Table S6 for an exhaustive description of the process of model performance evaluation). Table 2 displays the performance of each antibody for an ABMR diagnosis. For WARS1, the mean (± standard deviation) area under the curve (AUC) in the validation sets was 0.89 (± 0.02), with a mean sensitivity of 84% (± 4) and specificity of 92% (± 2). For TYMP, the mean AUC was 0.80 (± 0.04), with mean Se = 77% (± 5) and Sp = 84% (± 3). As for GBP1, the mean AUC was 0.89 (± 0.04), with mean Se = 88% (± 6) and Sp = 86% (± 6). As with the pathologists, false positives mainly concerned some TCMR and non-active chronic ABMR cases (Table 3). Indeed, the comparison of the pathologists' interpretation (majority rule) and the deep learning approach showed a substantial agreement for WARS1 (κ = 0.73, p = 8.8E−08) and an almost perfect agreement for TYMP (κ = 0.83, p = 2.7E−09). However, only a fair agreement was seen for GBP1 (κ = 0.31, p = 0.01), where the deep learning strategy had a remarkably better sensitivity than the pathologists, but displayed less specificity (especially 3 false positives among TCMR cases).
We then used the Gradient-weighted Class Activation Mapping (Grad-CAM) approach 28 , to allow a visual interpretation of the deep learning approach, by exploring important regions used by the CNN for image classification. Figure 3 shows illustrative examples of native tiles and corresponding heatmaps in some of the most confident image classifications as ABMR cases for each antibody (see also Supplemental Fig. S3 for tiles associated with the "Other diagnosis" class). For each antibody, the CNN interpretation supported the microcirculation staining pattern of the ABMR classification, focusing on capillaries lined by moderately to strongly stained endothelial cells, sometimes associated with inflammatory cells. This interpretation was particularly manifest for peritubular capillaries, but rarer for glomerular capillaries.
Overall expression pattern of WARS1, TYMP and GBP1 in kidney allograft biopsies. In summary, the overall expression pattern of WARS1, TYMP and GBP1, as observed in this cohort of kidney allograft biopsies, is displayed in Table 4. As illustrated in Figs. 1 and 4, all three antibodies showed a cytoplasmic and, to a certain extent, nuclear positivity. The constitutive staining, as observed in stable graft cases, consisted of a weak and often segmental endothelial cell positivity for WARS1. With TYMP, a few inflammatory cells and atrophic tubules were constitutively positive in such cases, while no specific staining was observed with GBP1. All three antibodies stained inflammatory infiltrates, but with various intensity and pattern (diffuse or focal), whatever the renal compartment (glomeruli, tubules, interstitium or vessels). TYMP showed a diffuse and strong staining in inflammatory cells, while the staining was moderate and more focal with WARS1 and GBP1. Injured tubules (tubulitis and acute tubular injuries) were consistently stained with TYMP, more focally with WARS1 and GBP1. Endothelial staining of peritubular capillaries was observed with all three antibodies in cases of adjacent interstitial infiltrate, but glomerular endothelial cells were usually negative in this setting (Supplemental Fig. S1). As already mentioned, a diffuse endothelial staining, also called microcirculation staining pattern, was mostly found in ABMR. This microcirculation staining could be displayed on one or both microcirculation compartments (i.e. glomerular and/or peritubular capillaries) depending on the cases, and was diffuse for TYMP and WARS1 and more focal for GBP1.

Table 2. Overall performance of the pathologists and of a deep learning-based classification approach in the diagnosis of ABMR with the WARS1, TYMP and GBP1 antibodies by immunohistochemistry. Slides were interpreted by four nephropathologists (B.C., A.V., M.R. and JP.DVH.). They were unaware of the diagnosis and were asked to assess each case as positive or negative for an active ABMR diagnosis based on the recognition of a microcirculation pattern of staining. For each antibody, pathologists were trained using the immunostains obtained from the initial study that revealed these proteins by mass spectrometry 13 . Light's Kappa is provided for estimation of inter-observer reliability. A deep learning approach was used to build models for the binary classification ABMR versus Other diagnosis for each antibody. Two models were trained for a sequential binary classification. Firstly, a convolutional neural network (Resnet50V2) was trained at the tile level and secondly a random forest classifier was trained at the patient level (i.e. whole slide image), based on the output of model 1 for all tiles of a considered patient. Internal validation was performed for the evaluation of models' performance, using 5 iterations of a threefold cross-validation. Average results of the models' performance on the validation set are displayed. ABMR Antibody-mediated rejection, WARS1 Tryptophan-tRNA ligase, cytoplasmic, TYMP Thymidine phosphorylase, GBP1 Guanylate-binding protein 1, Se Sensitivity, Sp Specificity, AUC Area under the receiver operating characteristic curve, P Pathologist, SD Standard deviation.
Discussion
Antibody-mediated rejection is the leading cause of allograft failure in kidney transplantation and as such is one of the major causes of the lack of improvements in long-term allograft survival. The gold standard of ABMR diagnosis is the morphological examination of an allograft biopsy. Despite several revisions to the Banff classification, ABMR diagnosis remains challenging, with limited inter-observer reproducibility and limited availability of efficient immunomarkers. While overestimating rejection can lead to excessive treatment and unnecessary follow-up biopsies and can exacerbate the burden of patient anxiety, underestimation can lead to treatment delays and ultimately a worse graft outcome. In this context, highlighting immunomarkers of microvascular inflammation, combined with a deep learning framework, could help to mitigate these challenges. Herein we described for the first time, to our knowledge, the expression pattern of WARS1, TYMP and GBP1 by immunohistochemistry in kidney transplantation, and showed that they distinctively highlight the microcirculation during ABMR, as identified by both the pathologists and a deep learning framework, with promising diagnostic value. TYMP, WARS1 and GBP1 are IFNγ-induced proteins, which we found enriched by mass spectrometry at a protein level in glomeruli with antibody-mediated injuries, as compared to stable graft controls 13 . WARS1 and GBP1 are also among the most relevant rejection transcripts described by whole-biopsy microarray analysis in kidney transplantation. They are especially described as universal rejection-associated transcripts (both ABMR and T cell-mediated rejection, TCMR), enhanced in parenchymal, endothelial cells and macrophages 30 . Moreover, recent studies using single-cell RNA sequencing strategies in kidney rejection highlighted that the WARS1 transcript was particularly upregulated during ABMR in monocytes, especially the CD16a + subpopulation 31 , and in endothelial cells and cycling cells 32 . Wu et al. also showed, in a case of mixed rejection, an upregulation of GBP1 in endothelial cells, monocytes, cycling cells and some epithelial cells, especially from the proximal tubule. As for TYMP, an upregulation was seen in monocytes, B cells, cycling cells and to a lesser extent in the proximal tubule 32 .

Table 3. Interpretation results of WARS1, TYMP and GBP1 antibodies by immunohistochemistry for predicting active ABMR by pathologists and deep learning. To easily compare pathologists and deep learning interpretations of the immunomarkers, a majority rule was applied on pathological interpretations, where each case was classified according to the report of most pathologists. In case of ties, the interpretation of the pathologist B.C. was retained. As for the deep learning analysis, the three folds of one iteration of the cross-validation were used to classify samples. Please note that this iteration's performance is logically slightly different from the average performance displayed in Table 2. Variations in total number of cases are due to insufficient remaining material for interpretation. ABMR Antibody-mediated rejection, WARS1 Tryptophan-tRNA ligase, cytoplasmic, TYMP Thymidine phosphorylase, GBP1 Guanylate-binding protein 1, cABMR Chronic antibody-mediated rejection, SG Stable graft, TMA Thrombotic microangiopathy, TCMR T cell-mediated rejection, PVN Polyomavirus nephropathy, APN Acute pyelonephritis, ATI Acute tubular injuries, GN Glomerulonephritis.
Indeed, we found an overexpression of these 3 proteins by immunohistochemistry at the protein level in several cell types during ABMR: inflammatory cells, but also injured cells such as tubular cells, especially with TYMP during acute tubular injuries or tubulitis-related injuries (infections and TCMR). Endothelial cells of peritubular capillaries showed an overexpression of WARS1, TYMP and GBP1 in the context of a nearby interstitial inflammatory infiltrate, an overexpression which was rarely found in glomerular endothelial cells in this setting. More importantly, endothelial cells from one or both microcirculation compartments (glomeruli and/or peritubular capillaries) showed an overexpression of these 3 proteins in cases with ABMR features. We defined this as a microcirculation staining pattern. This finding appears relevant from a pathophysiological point of view, as ABMR is in essence a disease of endothelial cells targeted by circulating DSA. Moreover, our deep learning-based approach represented an unbiased approach to marker interpretation and further supported the microcirculation pattern in active ABMR. Indeed, the CNN focused on microvascular structures for its decision process, especially on peritubular capillaries. The relative rarity of glomerular sections (about 13% of the tiles contained a glomerulus) explained in part why the tiles most associated with the ABMR class did not frequently contain a glomerulus.
In this study, WARS1 and TYMP were the most suitable antibodies for pathologist interpretation in the diagnosis of ABMR, achieving reasonable performance with substantial inter-observer reliability. In addition, we have produced a "proof-of-concept" of the usefulness of a deep learning strategy with microcirculation immunomarkers in ABMR diagnosis, with performance of a similar magnitude to that of the pathologists. This finding is of importance, as deep learning can theoretically suppress inter-observer variability of interpretation, one of the greatest burdens on pathologists. A recent large-scale study of kidney transplant biopsies showed good performance of deep learning from morphological slides for the classification of normal versus disease biopsies (mean AUC of 0.83). However, lower performance was seen for the distinction between universal rejection and other transplant injuries (mean AUC of 0.61) 19 . The addition of the immunomarkers WARS1, TYMP and/or GBP1 could be of interest in this context.
Although all active ABMR cases had anti-HLA DSA in this study, WARS1 and TYMP showed a diffuse microcirculation pattern in most C4d negative cases, highlighting their potential interest in this context. As non-anti-HLA DSA are still not routinely tested, the diagnosis of such ABMR cases with no detectable anti-HLA DSA ultimately relies on C4d in daily practice, which is not known to perform well in this setting 33,34 , with up to 86% of C4d-negative cases in a recent transcriptomic study 35 . By highlighting a diffuse endothelial staining by immunohistochemistry, WARS1 and TYMP could be of great interest in these cases and this needs to be assessed in further studies. Moreover, WARS1 and TYMP negativity in ABO-incompatible subnormal biopsies is a promising result, in these patients where C4d deposits are constitutively present and thus nonspecific to an active antibody-mediated process.
In this study, the most frequent false positive cases were some chronic ABMR without morphological activity (g0 ptc0) and some cases with multifocal tubulo-interstitial inflammation such as infections and TCMR. At least in part, these latter cases could have been avoided by pathologists by considering peritubular capillary staining as nonspecific near these areas of tubulo-interstitial inflammation. Considering the deep learning strategy in these cases, their limited number did not allow us, at this stage, to train a specific model to distinguish ABMR from TCMR and/or infection biopsies, which would have been optimal for model performance. As for the chronic ABMR cases, they could either represent true false positives, or evolving endothelial injuries without morphological microvascular inflammation, which would require additional molecular studies to settle. False negative cases were mainly due to staining judged as either too focal and/or too weak by pathologists.
Table 4. Main observed expression patterns with the WARS1, TYMP and GBP1 antibodies in kidney allograft biopsies, with a focus on the ABMR condition. The first line refers to the cell type where the staining is analyzed, while the second line refers to a condition. Interstitial infiltrate can notably refer to T cell-mediated rejection processes as well as infections. Please note that other cell types are sometimes also stained but are not displayed here as such stains appeared less consistent. Indeed, glomerular epithelial cells were sometimes stained with TYMP in ABMR, and also mesangial cells during glomerulitis lesions. ABMR Antibody-mediated rejection, SG Stable graft, ATI Acute tubular injury, WARS1 Tryptophan-tRNA ligase, cytoplasmic, TYMP Thymidine phosphorylase, GBP1 Guanylate-binding protein 1.

Figure 4. Illustration of the immunostains WARS1, TYMP and GBP1 obtained in the main analyzed differential diagnoses of ABMR, including acute tubular injury (a-c), T cell-mediated rejection (d-f), polyomavirus nephropathy (g-i) and stable graft in ABO-incompatible transplantation (j-l). Injured tubular cells showed cytoplasmic positivity with variable intensity, with a mild to moderate staining with WARS1 (a) and GBP1 (c), and more strongly and consistently with TYMP (arrow, b). There was no diffuse endothelial staining in such cases of acute tubular injury (a-c). In T cell-mediated rejection (d-f) and polyomavirus nephropathy (g-i), the interstitial infiltrate is moderately to strongly stained, as well as nearby injured tubular cells (asterisks). Endothelial cells of peritubular capillaries were frequently stained in these areas, but only exceptionally glomerular endothelial cells. Away from these areas of tubulo-interstitial inflammation, note the absence of significant endothelial staining of the microcirculation, similar to the constitutive staining (arrows). (j-l): In this stable graft case, a mild positivity is observed with WARS1 and GBP1 in tubular cells. Some sparse inflammatory cells are strongly stained with TYMP, and a moderate staining is observed in a few tubular cells. No overt and diffuse endothelial staining is observed whatever the antibody. Abbreviations: WARS1, tryptophan-tRNA ligase, cytoplasmic; TYMP, thymidine phosphorylase; GBP1, guanylate-binding protein 1; SG, stable graft; ABOi, ABO-incompatible kidney transplantation; ATI, acute tubular injury; TCMR, T cell-mediated rejection; PVN, polyomavirus nephropathy.

Our study has some limitations. Firstly, although including relevant differential diagnoses encountered in kidney transplantation, this immunohistochemical study was performed on a small-scale and single-center cohort. This explains why performance displayed a quite high dispersion (standard deviation): for example, a difference in the classification of a single ABMR case could change the performance by about 6%. Further studies on large-scale unselected cohorts will be required to obtain a more accurate estimate of the performance of these antibodies in ABMR diagnosis. Secondly, kidney compartments other than endothelial cells were obviously stained during some superimposed polymorphic tissue injuries such as tubulo-interstitial inflammation and acute tubular injuries. Sometimes such stains restricted microcirculation analysis to the glomerular compartment, especially with TYMP, in cases with severe tubulo-interstitial inflammation and de facto "uninterpretable" peritubular capillaries. This finding could limit their potential as immunomarkers, although in this cohort such stains did not significantly lead to false negatives. Thirdly, the thrombotic microangiopathy cases showed more chronic than acute features, which could have favored markers' negativity. Fourthly, considering the deep learning analysis, we cannot conclude about the generalization of the displayed models' performance. Indeed, the small-scale status of the cohort did not allow us to assess performance in an independent holdout set, which would have been optimal to ensure a robust evaluation. Still, internal validation was performed using a threefold cross-validation to ensure at least an honest estimate. Moreover, as already mentioned, the aims of the deep learning analysis were to support pathological findings with an unbiased approach to interpretation and to assess the suitability of such strategies for future studies rather than to deploy a turnkey model. Finally, the histological scores of the Banff classification, with their own limitations, were used for inclusion rather than an external standard such as validated molecular classifiers. Future studies should assess unselected cohorts of kidney allograft biopsies, to better reflect the inter-individual heterogeneity of routine cases, allow a better estimate of the immunomarkers' performance and focus on molecular-classified cases where C4d is non-indicative of an ABMR process (C4d negative ABMR, isolated C4d).
To conclude, this study displays the immunohistochemical expression profile of three interferon-related proteins, WARS1, TYMP and GBP1 in kidney transplantation. We highlighted a singular expression pattern of microcirculation staining in antibody-mediated rejection, revealed by both nephropathologists and a deep learning-based strategy, and deemed to reflect interferon-related endothelial stress during ABMR. This pattern displayed promising diagnostic value in a selected cohort, especially in C4d negative ABMR cases, one of the blind spots of the current Banff classification when no DSA is detectable. Future studies should specifically assess these antibodies in this context. | 2022-11-10T15:02:00.075Z | 2022-11-09T00:00:00.000 | {
"year": 2022,
"sha1": "76cea4b79aa1e991cac6eb24e3b9e8302ef73051",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "76cea4b79aa1e991cac6eb24e3b9e8302ef73051",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4373430 | pes2o/s2orc | v3-fos-license | Asia Pacific Observatory on Health Systems and Policies
Drawing on published work from the Asia Pacific Observatory on Health Systems and Policies, this paper presents a framework for undertaking comparative studies on the health systems of countries. Organized under seven types of research approaches, such as national case-studies using a common format, this framework is illustrated using studies of low- and middle-income countries published by the Asia Pacific Observatory. Such studies are important contributions, since much of the health systems research literature comes from high-income countries. No one research approach, however, can adequately analyse a health system, let alone produce a nuanced comparison of different countries. Multiple comparative studies offer a better understanding, as a health system is a complex entity to describe and analyse. Appreciation of context and culture is crucial: what works in one country may not do so in another. Further, a single research method, such as performance indicators, or a study of a particular health system function or component, produces only a partial picture. Applying a comparative framework of several study approaches helps to inform and explain progress against health system targets, to identify differences among countries, and to assess policies and programmes. Multi-method comparative research produces policy-relevant learning that can assist countries to achieve Sustainable Development Goal 3: ensure healthy lives and promoting well-being for all at all ages by 2030.
The Pacific Community (SPC) joins the Asia Pacific Observatory on Health Systems and Policies
The Asia Pacific Observatory on Health Systems and Policies (APO) is pleased to welcome the Pacific Community (SPC) as a full member. The collaboration, which is co-sponsored by the Department of Foreign Affairs and Trade (DFAT) of the Government of Australia, will enable the SPC and the APO to achieve their objectives to generate evidence for policy and develop knowledge products to improve health systems and policies on health and health care in the Asia Pacific Region. As a member of the APO, SPC will work towards fulfilling its aim to improve policies on health and health care in the Asia Pacific Region. Having SPC as a board member will support the development of knowledge products and generate evidence on health policies and systems in the Pacific region. There will be greater opportunity to leverage the experience and knowledge of experts from the region, along with the experience and support of the APO in generating relevant products.
APO HAS A NEW CHAIRPERSON
The APO bids farewell and thanks the Philippines for their role as Chair of the APO Board from 2019 to 2021. The APO would also like to take this opportunity to welcome Australia as the new chair of the APO board in January 2022.
FUNDING UPDATE
The secretariat is working with the Global Fund to secure funding for work in Lao PDR. Discussions on this are ongoing.
PUBLICATION UPDATE 2020-21
COVID-19 Health System Response Monitor
In 2020, the APO launched the COVID-19 Health System Response Monitor. The series of publications presents a systematic approach to collect and collate country policy responses to COVID-19, to help with understanding the global COVID-19 response and allow easy comparison of activities at national and sub-national levels. The COVID-19 HSRM reports are updated to document the changes in strategies taken by countries to respond to COVID-19. The HSRM series is the result of a collaboration between the APO, the European Observatory on Health Systems and Policies (OBS) and WHO through its regional offices for Europe, Eastern Mediterranean, South East Asia and Western Pacific as well as Headquarters. COVID-19 HSRM publications launched between 2020 and 2021 are:

Health System in Transition Reviews (HiT)

Sri Lanka health system review
Sri Lanka achieved strong health outcomes over and above what is commensurate with its income level. The country has made significant gains in essential health indicators, witnessed a steady increase in life expectancy among its people, and eliminated malaria, filariasis, polio and neonatal tetanus. At the same time, Sri Lanka's health system faces challenges arising from a rapidly ageing population, and the need to address the burden of non-communicable diseases, which currently contributes to nearly 75% of deaths in the country.
Comparative Country Studies
Integrated care for chronic diseases in Asia Pacific countries: Ageing populations, the increasing burden of chronic diseases, and the need to manage chronic conditions are some of the factors which have triggered the need for, and implementation of, integrated care models in the Asia Pacific region. This CCS presents findings from a scoping review and case studies on integrated care programmes from six countries in the Asia Pacific region.
Moving towards culturally competent, migrant-inclusive health systems: a comparative study of Malaysia and Thailand: Malaysia and Thailand have taken different approaches to developing a migrant-inclusive health system. By featuring two countries at different stages of development of migrant-inclusive health systems, the case studies highlight that there is no "one size fits all" solution, and that different policy options can be considered.
Policy Brief
Use of e-health programmes to deliver urban primary health-care services for noncommunicable diseases in middle-income countries: This policy brief presents a synthesis of the insights gained from systematic reviews of the published scientific as well as grey literature and in-depth interviews in four MICs - China, Nepal, Philippines and Kenya. MICs have to deal with an increasing burden of NCDs but have different demographic structures, socioeconomic status and e-health development landscapes. The focus of this brief is on the use of e-health at the PHC level for NCD management in urban settings.
"year": 2018,
"sha1": "57e23471855d938d316c7cc163a59642ddf6d6b3",
"oa_license": "CCBYNCSA",
"oa_url": "https://openresearch-repository.anu.edu.au/bitstream/1885/250865/1/seajph2018v7n1_chapter.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e7096bf43127b87b79845f3e1d5cbe4da5b2d2b9",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Political Science",
"Medicine"
]
} |
Examining the Relationship Between Product Market Competition and Chinese Firms Performance: The Mediating Impact of Capital Structure and Moderating Influence of Firm Size
This study examined the relationship between Chinese companies' product market competition and organizational performance. It explored the mediating effect of capital structure and the moderating impact of firm size in achieving better performance of Chinese companies. The study employed a sample of 2,502 Chinese firm-year observations and found that market competition positively influenced firm performance. Additionally, capital structure partly mediated the relationship between product market competition and firm performance. The study also tested the moderating effect of firm size (both small and large) on the association between product market competition and firm performance. The results showed that the moderating role of large firms positively affects the nexus between product market competition and firm performance, whereas the moderating role of small firms has a substantial adverse impact on this association. These findings contribute to the literature on the complex implications of market competition for business firms' performance and provide insightful and practical implications for future research directions.
INTRODUCTION
Many organizations consider corporate competitive strategies to be a strategic imperative in an increasingly competitive global environment. Corporate competitive strategies are those corporate plans firms use to increase market share (Franko, 1989), competitive advantage (Hitt et al., 1996), and improve firm performance (Giroud and Mueller, 2011). Businesses incorporate competitive strategies as an essential tool to achieve their objectives. Many scholars have long been interested in investigating corporate strategy, and they focused on examining the relationship between product market competition and firm performance (Raith, 2003;Pant and Pattanayak, 2010;Sheikh, 2018;Javeed et al., 2020). Existing studies revealed mixed findings on this relationship. Previous studies (Raith, 2003;Pant and Pattanayak, 2010;Sheikh, 2018) reported that competition positively and significantly affects firm performance . However, another study showed that business competition forms a competitive setting for businesses, decreasing pricing power and thus leading to low profits . The consequences of the impact differ significantly due to the different data, time, and various performance measurements. Consequently, academics recommend that the intervening mechanism between product market competition and firm performance be studied to uncover whether and how product market competition affects firm performance (Sheikh, 2018). However, few studies have examined the intermediate link between product market competition and company performance, leading the authors to suggest that intervention variables should be explored in future studies. Another study (Michaelides et al., 2019) debated that the impact of competition may be limited or enhanced depending on the organizational environment. In reality, several aspects of the corporate business environment may moderate or mediate the relationship between product market competition and firm performance, such as organizational capital structure, firm size, growth orientation, and ask requirement (Blundell et al., 1999;Guney et al., 2011;Ammann et al., 2013;Dang et al., 2018). For example, researchers have called for more research on the contingencies-moderators and mediating mechanism affecting the product market competition and firm performance relationship. Therefore, we explore this question by examining how product market competition affects firm performance through the two mediating and moderating mechanisms: capital structure and firm size.
We focus on capital structure and firm size as intervening mechanisms in this study because prior literature indicates that these two variables are significant predictors of organizational values (Hillman et al., 2007;Gul et al., 2011). Research has theorized and empirically found that capital structure is a valuable resource that enables a business to generate higher firm value. Additionally, scholars considered firm size a critical underlying mechanism between product market competition and firm performance, constraining or facilitating business activities such as decision making and firm innovation process (Li and Chen, 2018).
Capital structure works as a valuable source for earning higher profits by producing high-quality or value products for the competitive markets (Boubaker et al., 2018). This study examines the direct impact of market competition on capital structure and investigates the mediating role of capital structure on the association of market competition to attain firm performance. Existing literature debated that capital structure cannot be neglected in a competitive organizational environment (Jiraporn et al., 2012). According to the authors' knowledge, no study is available that investigates the mediating impact of capital structure on the association of market competition and firm performance. Therefore, this is the first study highlighting the role of capital structure on the relationship between market competition and firm performance. However, the present study emphasizes the moderating effect of firm size based on the following arguments. First, firm size is important for organizational performance and management. Managerial productivity increase with firm size (Zona et al., 2013;Dang et al., 2018). For example, larger firms are more advanced and well-organized to respond to market changes for achieving the desired profit.
This study focused on examining product market competition and firm performance in the Chinese economy. Next, it investigated capital structure as a mediating factor in this relationship. Furthermore, it used firm size as a moderating factor to study the connection between product market competition and firm performance. Using the GMM model, the results revealed that product market competition is positively connected with firm performance, and capital structure partially mediates this relationship. Furthermore, small firm size negatively affects this relationship, while large firm size positively moderates the connection between product market competition and firm performance. Therefore, it is essential to consider this variable when evaluating the relationship between market competition and firm performance.
In brief, this study offers two significant contributions to the association between market competition and firm performance. First, we test whether capital structure mediates the market competition and firm performance association. Second, this study tests the moderating role of firm size on this relationship because, before this, most studies investigated only product market competition and firm performance (Yuan et al., 2019). This study promotes the role of market competition in China and other developing economies.
The research classification specifies the subsequent sections. Section "Literature Review" introduces the literature, theoretical framework, and hypothesis development. Section "Research Data, Sample, and Methods" provides detailed information on sample and variable selection. It describes applicable econometric techniques. Section "Results and Discussion" provides the results of this research and discussion. Finally, Section "Discussion" summarizes the research and policy implications. Figure 1 displays the conceptual framework of the study.
The Relationship Between Product Market Competition and Firm Performance
Several strands of theoretical literature argue that strong product market competition leads to better understanding and improved performance. As market competition builds a better reputation, it provides competitive benefits, and these advantages result in increased firm value. As the main goal of an enterprise is to achieve higher financial returns, attaining a sustainable competitive advantage plays a vital role (Dunk, 2007). Accordingly, many corporate strategies aim to gain a sustainable competitive advantage. The competitive scenario builds an excellent image for businesses in a competitive market and provides competitive advantages (Saeidi et al., 2015). Competitive advantage enables the firm to gain higher firm value by offering superior products and services. Additionally, the literature indicates that product market competition is a powerful force that solves agency conflicts between owners and managers and reduces managerial slack, leading to increased firm values (Hermalin, 1992;Mnasri and Ellouze, 2015;Abbas et al., 2019b;Javeed et al., 2020). Managers in competitive industries face more bankruptcy risk than those in concentrated industries do. Therefore, managers are prompted to make the best and worthiest decisions for fear of losing their jobs (Aman et al., 2021b;Fu and Abbas, 2021). Another study contended that such pressure is essential for enhancing firm performance (Ammann et al., 2013). Empirical studies have shown both positive and negative associations between product market competition and firm performance. For instance, Ammann et al. (2013) explained that competition works as a desired tool for shareholders because it influences top management to work hard; it therefore reduces agency costs, increasing profitability.
For instance, a study of 670 United Kingdom companies from 1972 to 1986 showed that competition was positively correlated with organizational performance. Additionally, various researchers have studied the relationship between corporate performance and competition. A previous study (Januszewski et al., 2002) examined the association between product market competition and corporate governance in Germany by selecting 500 firms from 1986 to 1994. Their findings showed that product market competition positively affects productivity, and strong competition compels businesses to convert resources into social profit maximization (Fernández-Kranz and Santaló, 2010). Moreover, previous research provides evidence that competition is positively related to firm performance (Okada, 2005), while other studies identify a non-linear link between competition and firm value (Tingvall and Poldahl, 2006;Inui et al., 2012). Consequently, some previous works concluded that there is an inverted U-shaped association between competition and firm profit.
When referring to some literature examined in emerging economies, we noticed that their results are conducive to the positive connection between product market competition and performance. For example, Javeed et al. (2020) investigated the association between product market competition and firm performance and selected 147 Pakistani firms' data over 2008-2017. Their findings showed that product market competition has positive effects on company performance. Other studies have tested the link between product market competition and firm value, and the results revealed the positive connection between product market competition and firm value concentration (Blundell et al., 1999). Sattar et al. (2020) also reported a positive relationship between firm value and product competition. Based on the literature review, we established the Hypothesis as follows: H1: Product market competition has a positive effect on firm performance.
The Relationship Between Product Market Competition and Firm Performance With Mediating Effects of Capital Structure
Numerous experimental studies have revealed that the effect of firm market competition on firm performance may change based on the strength of the corporate debt structure. These studies conclude that market competition has a complex interaction with the firm's debt structure. For instance, Brander and Lewis (1986) recommend that capital structure or debt allows organizations to compete in highly competitive environments. Furthermore, it is always a top priority for managers to increase the leverage, reducing agency problems between managers and shareholders, and leading to higher profits. Thus, capital structure is vital for business growth and to achieve the defined firm strategic objectives.
Capital structure and company performance have attracted much empirical debate, and the results of empirical studies are mixed. For example, advocates of agency theory believe that a company's capital structure negatively impacts financial performance (Margaritis and Psillaki, 2010;Chintrakarn et al., 2014). However, limited liability and disciplining effect proposed a positive impact of leverage on performance (Brander and Lewis, 1986;Fosu, 2013). Capital structure is considered outside funding that permits a firm to advance more products, thus positively impacting firm performance (Jiraporn et al., 2012). In contrast, leverage will enable organizations to participate more aggressively in the market due to limited liability. A previous study debated that different conditions impact the profit of such planned behavior on the type of competition and product features (Wanzenried, 2003). It recommends that leverage effects could fail to increase leveraged business effectiveness. Previous literature suggests that leveraged firms could suffer significant competitive disadvantage in product markets (Chevalier, 1995;Wanzenried, 2003).
Capital structure is an organizational plan that enables a company to gain a competitive benefit by adding other products to increase company performance (Desai et al., 2003). Therefore, we hypothesize that Capital structure is an essential mediating variable in understanding how market competition is related to firm performance. This is because the intermediary variable plays a vital role in organizational sciences. It is useful to examine the association between product market competition and firm performance by adding mediation and intermediator, and how and why one variable impacts another (Franko, 1989). Existing literature has empirically found that managers prefer high leverage, which raises profits and positively impacts firm performance (Fosberg, 2004). Other studies have also found that market competition is positively related to durable firm performance (Abor, 2007). Furthermore, existing literature demonstrates that leverage opens up opportunities for rivalry predation in concentrated product markets, increasing firm growth and profits (Chevalier, 1995;Dasgupta and Titman, 1998;Fosu, 2013). Based on the above-stated literature review, we propose the following hypothesis: H2: Capital structure mediates the relationship between product market competition and firm performance.
H2a: Product market competition is negatively associated with capital structure.
H2b: Capital structure positively mediates the relationship between market competition and firm performance.
The Relationship Between Product Market Competition and Firm Performance With Moderating Effects of Firm Size
The literature demonstrates that the association between product market competition and firm performance has produced mixed outcomes (Fosu, 2013;Sheikh, 2018;Javeed et al., 2020). However, minimal evidence shows why these results are so different in the literature. Sheikh (2018) explained that some organizational factors might not allow the firm to achieve the benefits of market competition. The literature recognized firm size as one of the moderating mechanisms, which may modify business activities to fulfill their objectives, such as managers' interests, firm improvement, and decision making (Januszewski et al., 2002;Li and Chen, 2018). This study investigates whether firm size plays a moderating role in improving or constraining the effect of product market competition and firm performance. However, no empirical research found how and why firm size might enhance or restrain the association of product market competition and firm performance. Therefore, we employ firm size as a moderating element to explain why observed conclusions on the product market competition and firm performance are seemingly contradictory.
Existing literature on market competition demonstrates that large firms have strong market reputations and more assets to produce new products. Although theoretical recommendations were made that market competition may change over firm size (Dang et al., 2018), these suggestions lead to possible pressure between firm size and market competition as they are linked with firm performance (McWilliams et al., 2006). For instance, larger firms have more resources and market reputation than smaller firms do. Additionally, they are more skilled in producing new products and achieving desired goals (Damanpour, 2010;Zona et al., 2013).
In general, larger firms are more advanced and better organized to respond to market changes (Rajan and Zingales, 1995). However, smaller firms have fewer resources, and their organizational structure is less well organized. Additionally, smaller firms with insufficient resources cannot produce according to market changes; they tend to utilize accessible resources to increase their performance (Baker and Hall, 2004). Based on this discussion, we hypothesize that firm size is an important variable for understanding how market competition increases firm performance (Yang and Zhao, 2014). Furthermore, it discloses how and why one variable affects another (Baron and Kenny, 1986). Thus, we establish the following hypothesis: H3: Firm size moderates the association between product market competition and firm performance.
H3 (a):
There is a positive relationship between product market competition and firm performance when firm size is large.
H3 (b):
There is a negative association between product market competition and firm performance when firm size is small.
RESEARCH DATA, SAMPLE, AND METHODS
We collected data from the China Stock Market and Accounting Research (CSMAR) database; the sample spans 2012-2017, a window chosen partly because of missing data outside it. The study considered a panel dataset from Chinese listed firms over 2012-2017. The reasons behind the data period (2012-2017) are as follows. First, the China Securities Regulatory Commission (CSRC) has, since 2006, focused on the improvement of organizational structure as a priority (Li and Chen, 2018); in response to deepening market development, Chinese firms have gradually been implementing corporate governance structures, with many measures adopted (Conyon and He, 2011). Second, we excluded the financial crisis period (2008-2009) as recommended (Kirkpatrick, 2009;Kahle and Stulz, 2013). For the analysis we focused on state-owned enterprise firms, as the CSMAR database includes only publicly listed firms rather than all Chinese firms (Liu et al., 2018). This study contains a sample of manufacturing firms' annual observations, and data filtering techniques have been employed; therefore, it excludes organizations with incomplete data. Additionally, we selected firms with at least 3 consecutive years of statistics for the GMM regression analysis. Finally, we used 417 firms covering 6 years of data, yielding 2,502 firm-year observations. We chose the Chinese economy because it has achieved immense success amongst emerging countries and has done detailed work on the role of corporate governance.
Product Market Competition
Product market competition is the main independent variable in our study. In terms of operations, the degree of product market competition means the monopoly, oligopoly, or competitiveness of the company. Existing studies have used different techniques to measure market competition, such as the Herfindahl Hirschman Index (HHI) and Boone Index (Fosu, 2013). Previous research has shown that HHI is the best measure of market competition among other available methods (Zou et al., 2015). Past research has also shown that companies usually compete based on their sales, indicating the industry's competition in terms of revenue (Zou et al., 2015). Additionally, many scholars have used the HHI to calculate industry competition (Jain et al., 2013;Michaelides et al., 2019). Following (Michaelides et al., 2019;Javeed et al., 2020), we use each company's total squared market share in the industry to calculate market competition based on the total sales of the industry.
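To make the HHI construction concrete, the following sketch computes a sales-based HHI for each industry-year and attaches it to every firm observation; the column names (firm, industry, year, sales) are illustrative placeholders rather than the actual CSMAR field names.

```python
import pandas as pd

# Illustrative firm-year panel; column names are assumed, not the CSMAR fields.
df = pd.DataFrame({
    "firm":     ["A", "B", "C", "A", "B", "C"],
    "industry": ["steel"] * 6,
    "year":     [2012, 2012, 2012, 2013, 2013, 2013],
    "sales":    [50.0, 30.0, 20.0, 60.0, 25.0, 15.0],
})

# Sales-based market share of each firm within its industry-year.
df["share"] = df["sales"] / df.groupby(["industry", "year"])["sales"].transform("sum")

# HHI for an industry-year: sum of squared market shares
# (higher HHI = more concentrated, i.e. less competitive industry).
hhi = (df.assign(share_sq=lambda d: d["share"] ** 2)
         .groupby(["industry", "year"], as_index=False)["share_sq"].sum()
         .rename(columns={"share_sq": "HHI"}))

# Attach the industry-year HHI back to every firm observation.
df = df.merge(hhi, on=["industry", "year"])
print(df[["firm", "year", "share", "HHI"]])
```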
Firm Performance
In this study, we use company performance as our dependent variable. The existing literature shows that various methods can be used to calculate company performance, such as return on assets (ROA), return on investment (ROI), return on equity (ROE), Tobin's Q, and dividends payable. However, we use ROA and ROE as our dependent variables. Some studies (Hutchinson and Gul, 2004;Javeed et al., 2020) reported that accounting-based measurement methods are most suitable for corporate governance research because they can easily track the company's ability to manage its value. Adding to this debate, Bhagat and Bolton (2008) indicated that a higher ROA reflects the organization's asset efficiency and shareholder value.
Furthermore, ROA reflects how productively management uses the firm's assets. Therefore, based on previous research, we calculated ROA as the ratio of the firm's net profit to total assets. ROE is calculated as the ratio of operating profit to shareholders' equity (Bhagat and Bolton, 2008) and is mainly used in corporate governance-related research. From the shareholders' perspective, return on equity is a better test of the company's business performance (Brown and Caylor, 2009).
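As a minimal illustration of the two performance measures defined above, the snippet below computes ROA as net profit over total assets and ROE as operating profit over shareholders' equity; the column names are assumed for the example and are not the database's actual fields.

```python
import pandas as pd

# Illustrative financial statement items; column names are assumptions.
fin = pd.DataFrame({
    "net_profit":           [12.0, 8.5],
    "operating_profit":     [15.0, 9.0],
    "total_assets":         [200.0, 150.0],
    "shareholders_equity":  [80.0, 60.0],
})

fin["ROA"] = fin["net_profit"] / fin["total_assets"]                # asset efficiency
fin["ROE"] = fin["operating_profit"] / fin["shareholders_equity"]   # return to shareholders
print(fin[["ROA", "ROE"]])
```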
Control Variables
This study used different control variables to obtain more relevant results: growth, current ratio, and innovation. Companies with high growth expectations will have realistic opportunities for future profits and flexibility in choosing future investments, so the rate of return may be positively correlated with growth. Sales growth (Growth), a proxy for growth opportunities, was measured as changes in the company's sales revenue (King and Santor, 2008). The current nature of assets (liquidity) can improve the company's solvency; therefore, the relationship between debt and current ratios can be positive or negative. The current ratio is calculated by dividing total existing assets by total current liabilities (Guney et al., 2011). We use R&D expenses (research and development expenses) for the innovation measurements as a proxy for innovation. Scholars believe that innovation reflects management decisions; allocating resources to produce more products, and previous literature has proven that the company's R&D intensity is an appropriate proxy for the firm's innovation (O'Brien, 2003;Miller and Del Carmen Triana, 2009). Consistent with this, innovation is measured by the intensity of R&D. We use this as the company's reported R&D expenditure divided by sales (Miller and Del Carmen Triana, 2009).
Moderating and Mediating Variables
Existing literature indicates intervening variables in the relationship between gender diversity and firm performance (Fosu, 2013). To address the concerns, we investigated the moderating and mediating role of firm size and capital structure on the association between product market competition and firm performance. This study used the capital structure as a mediating variable to investigate their mediating role in the association between product market competition and firm performance. For the mediation analysis, the capital structure may be defined in various ways. Previous work debates that the definition of the capital structure depends on the study objective (Rajan and Zingales, 1995). In this study, we define capital structure as the ratio of total debt to total assets.
In the corporate finance literature, firm size is a crucial variable. Existing studies used firm size as a control variable in all studies of corporate finance. However, this study used moderating variables based on the identified current literature gaps that firm size should be studied in the association between product market competition and firm performance. Firm performance is not the same at different firm sizes. Previous work demonstrates other methods to measure firm size, such as total sales, natural log of total assets, and market equity assets (Dang et al., 2018). Following a current study, we used firm size as a log of total sales (Dang et al., 2018).
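The control, mediating and moderating variables described in this and the preceding subsections can be constructed roughly as follows; this is only a sketch with assumed column names, not the authors' actual data-preparation code.

```python
import numpy as np
import pandas as pd

# Assumed firm-year columns; names are placeholders for illustration only.
panel = pd.DataFrame({
    "firm": ["A", "A", "B", "B"],
    "year": [2012, 2013, 2012, 2013],
    "sales": [100.0, 120.0, 80.0, 76.0],
    "current_assets": [40.0, 44.0, 30.0, 28.0],
    "current_liabilities": [20.0, 21.0, 25.0, 26.0],
    "rd_expense": [5.0, 6.0, 1.0, 1.2],
    "total_debt": [60.0, 62.0, 55.0, 58.0],
    "total_assets": [200.0, 210.0, 140.0, 138.0],
}).sort_values(["firm", "year"])

# Growth: change in sales revenue relative to the previous year, within each firm.
panel["growth"] = panel.groupby("firm")["sales"].pct_change()

# Current ratio: total current assets over total current liabilities.
panel["current_ratio"] = panel["current_assets"] / panel["current_liabilities"]

# Innovation: R&D intensity, i.e. reported R&D expenditure divided by sales.
panel["rd_intensity"] = panel["rd_expense"] / panel["sales"]

# Capital structure (mediator): total debt over total assets.
panel["leverage"] = panel["total_debt"] / panel["total_assets"]

# Firm size (moderator): natural log of total sales.
panel["size"] = np.log(panel["sales"])
print(panel)
```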
Empirical Examination
In the regression model, when there is a correlation between the error terms, the variables face endogeneity problems. Similarly, these problems may also result from automatic regression or missing variables, measurement errors, and autocorrelation errors (Singh et al., 2018). According to the rules of econometrics, if there is only one endogenous variable in the research model, appropriate techniques need to be applied to solve the endogenous problem . Endogeneity correlates explanatory variables with error terms (Cannella et al., 2008). The formation of the panel dataset places limitations using OLS (ordinary least square model) because it leads to biased estimation and unobserved heterogeneity . For example, dealing with historical company information, such as unobservable and observable company characteristics, leads to endogenous issues (Kang and Zardkoohi, 2005). Unnoticeable heterogeneity, dynamic, and simultaneity endogeneity are multiple causes of endogeneity. According to scholars, about 90% of the research published in reputable journals has not yet fully discussed the issue of endogeneity (Hamilton and Nickerson, 2003;Antonakis et al., 2010;Javeed et al., 2020). Therefore, literature suggested that it needs to tackle endogenous problems.
Existing studies argue that there are many techniques for addressing endogeneity problems in panel data. The literature reports that control variables can address third-factor effects (Li, 2016). Additionally, lagging the independent variable is a crucial method for overcoming simultaneity issues, and instrumental-variable techniques are considered a top priority for addressing reverse causality. Moreover, lagged dependent variable techniques can deal with persistent historical information such as unobservable and observable effects. After considering all of these approaches to the endogeneity issue, including controlling for firm fixed effects and third-factor effects, lagging the independent variables, and using GMM or dynamic models that control the upward and downward biases which can arise in OLS estimation (Li, 2016), most scholars propose the generalized method of moments (GMM). The GMM model is a superior technique for overcoming endogeneity problems (Wintoki et al., 2012). In this study, we used the GMM model to address heteroscedasticity, autocorrelation, and endogeneity issues. Arellano and Bond (1991) first proposed the GMM explicitly for panel data. For dynamic panel data, the causal relationships of interest usually change over time; in this case, the technique is suitable because the lag of the predictor variable can serve as an independent variable, and the lagged value of the predictor variable is used as an instrument to overcome endogeneity. Furthermore, the GMM model overcomes endogeneity by internally transforming the data: when a variable's previous value is subtracted from its current value, the differenced series satisfies the required statistical conditions (Wooldridge, 2016). Finally, GMM is a more suitable approach for controlling endogeneity than other methods and has a stronger effect on coefficient correction (Javeed et al., 2021).
We used some specific tests to check whether the data were appropriate for examination before applying the GMM model. First, this study used a variance inflation factor (VIF) test to check for multicollinearity issues in the data. The results of the VIF test confirmed that there are no multicollinearity issues in this study. Next, this study applied the Wald test to check for heteroscedasticity. The outcomes of the Wald test display no heteroscedasticity in the data. The study used the Sargan test for instrument validity and over-identifying restrictions. The Sargan test outcomes confirm the validity of the instrumental variables. This study tested the data for serial autocorrelation by applying AR (1) and AR (2) tests and concluded that there is no serial autocorrelation. Finally, we tested the data for endogeneity problems and found that our data have endogeneity issues. This study incorporated the GMM model to address endogeneity problems between variables and error terms. The study examined the consistency of the GMM model with previous research, which stated that GMM is the best model among statistical analytical techniques (Singh et al., 2018). Hence, the GMM model is a superior method with the maximum power to deal with endogeneity. All instrument tests show that weak instruments do not affect the study's specifications, and the instrumental variables perform well. Table 1 describes the descriptive statistics and VIF of all dependent and independent variables of this study (Abbas et al., 2019a,c). Panel A shows the descriptive statistics of the data, and this study applied ROA and ROE as dependent variables based on the total of 2,502 observations used over the sample years (Hussain et al., 2019). All details of the descriptive statistics of all variables are given in Panel A.
Descriptive Statistics
Panel B of Table 1 presents the VIF. Multicollinearity among the regressors may lead to higher standard errors and makes inference difficult and biased (Mamirkulova et al., 2020;Paulson et al., 2021;Zhou et al., 2021;Li et al., 2022). Therefore, to trace multicollinearity in this study, the VIF test was used to confirm the absence of multicollinearity (Aman et al., 2021a,b). The VIF values of all dependent and independent variables are lower than 10, confirming that our data are free from multicollinearity (Abbasi et al., 2021;Azadi et al., 2021;Local Burden of Disease HIV Collaborators, 2021;Wang et al., 2021). Previous studies stated that a VIF value higher than five might indicate that a specific variable suffers from multicollinearity (Hair et al., 2006). Panel B of Table 1 describes the details of the VIF analysis outlined below.
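A VIF screen of this kind can be reproduced with statsmodels, as sketched below on simulated regressors; variable names are illustrative, and the thresholds of 5 and 10 follow the rules of thumb cited above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "HHI": rng.normal(size=200),
    "growth": rng.normal(size=200),
    "current_ratio": rng.normal(size=200),
    "rd_intensity": rng.normal(size=200),
})
X = sm.add_constant(X)  # include an intercept so each VIF is computed against a constant

vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
# Rule of thumb: VIF > 10 (or > 5, more conservatively) flags multicollinearity.
print(vif.drop("const"))
```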
Analysis of Hypotheses 1 and 2
Hypothesis 1 proposed that market competition and firm performance have a positive relationship. Table 2 reports the coefficient values of product market competition and firm performance. Model 1 shows that the coefficient values of HHI are 0.619 and 0.300, respectively, at the 1% significance level for both performance measures, ROA and ROE. These results supported H1 of this study: there is a substantial and positive connection between product market competition and firm performance. Additionally, our outcomes are consistent with previous studies (Pant and Pattanayak, 2010;Ammann et al., 2013). Hypothesis 2 concerned the mediating impact of capital structure on the association between market competition and firm performance. Model 2 reports the outcomes for Hypothesis 2, which concerns the relationship between product market competition and capital structure. Model 2 shows that the GMM regression coefficient value is -0.0224 at the 1% significance level. This result shows a negative association between capital structure and market competition. These outcomes confirm our hypothesis and accord with other studies (Fosu, 2013). Therefore, leveraged businesses might suffer a substantial competitive difficulty in product markets because of high debt costs (Fosu, 2013). Additionally, Model 3 presents the results for the association between market competition and firm performance with the mediating effects of capital structure. Model 3 includes the interaction term of HHI with both performance measures, ROA and ROE, respectively, and shows that the coefficients of HHI are 0.0858 and 1.703 at the 1% significance level. These results supported H2. Table 3 displays the outcomes of Models 4 and 5, which describe the association between product market competition and firm performance with the moderating role of firm size (small and large firms). Model 4 includes the interaction term for small firms (S * HHI) with both performance measures. It shows that the significant and negative coefficient values of S * HHI are -0.0676 and -0.2204, respectively. This result shows a statistically significant and negative connection between small firm size, capital structure, and firm performance. These outcomes indicate that small firms negatively moderate the relationship between product market competition and firm performance. The study results are consistent with other outcomes (Porter and Kramer, 2006;Javeed et al., 2021), which stated that small firms have a low growth rate and profitability.
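The mediation logic behind Models 1-3 can be illustrated with a simplified, pooled-OLS version of the three regressions (competition on performance, competition on leverage, and performance on both); this sketch uses simulated data and statsmodels OLS purely for exposition and is not the dynamic-panel GMM estimator actually reported in Table 2.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data standing in for the firm-year panel (illustrative only).
rng = np.random.default_rng(1)
n = 500
hhi = rng.normal(size=n)
leverage = -0.02 * hhi + rng.normal(scale=0.1, size=n)             # competition -> capital structure
roa = 0.6 * hhi + 0.3 * leverage + rng.normal(scale=0.5, size=n)   # performance
data = pd.DataFrame({"ROA": roa, "HHI": hhi, "LEV": leverage})

m1 = smf.ols("ROA ~ HHI", data=data).fit()         # total effect of competition on performance
m2 = smf.ols("LEV ~ HHI", data=data).fit()         # competition -> mediator (capital structure)
m3 = smf.ols("ROA ~ HHI + LEV", data=data).fit()   # direct effect controlling for the mediator

# Partial mediation: HHI remains significant in m3 while its coefficient shifts
# relative to m1, and the mediator LEV itself is significant.
print(m1.params["HHI"], m2.params["HHI"], m3.params["HHI"], m3.params["LEV"])
```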
Analysis of Hypothesis 3
Additionally, small organizations have limited product lines, and their managers do not pursue innovative products, leading to low profits. Model 5 indicates that the coefficient values of L * HHI are 0.1507 and 0.0405, statistically significant at the 1% level. The positive coefficient values of L * HHI show the positive moderating role of large firms in the relationship between market competition and firm performance. Large firms have more innovative and differentiated products that lead to higher profits. The results of Models 4 and 5 provide support for the proposed H3. See Table 3 below.
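The size-moderation specification with interaction terms such as S * HHI and L * HHI can be written in a simplified OLS form as below; again, this is an illustrative sketch on simulated data, not the GMM estimation behind Table 3.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
hhi = rng.normal(size=n)
large = rng.integers(0, 2, size=n)   # 1 = large firm, 0 = small firm (illustrative split)
roa = 0.05 * hhi + 0.15 * large * hhi + rng.normal(scale=0.5, size=n)
data = pd.DataFrame({"ROA": roa, "HHI": hhi, "LARGE": large})

# LARGE:HHI is the interaction term; its coefficient captures how the
# competition-performance slope differs for large versus small firms.
model = smf.ols("ROA ~ HHI + LARGE + LARGE:HHI", data=data).fit()
print(model.params)
```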
DISCUSSION
H1 confirms the positive link between product market competition and firm performance, and previous literature supports our results. Additionally, participating more in market competition can improve a company's financial performance by establishing an excellent organizational image (Porter and Van der Linde, 1995). Companies in developing economies aim to improve their market reputation and be considered unique, thereby earning higher profits. Competition in developing economies forces companies to create innovative products and gain a first-mover advantage to maximize profits. Another study stated that high competition puts pressure on managers and makes them enthusiastic about making the company profitable when performing tasks (Raith, 2003). Thus, managers have limited opportunities to use firm resources for personal benefit in a highly competitive marketplace.
H2 indicates that capital structure mediates the association between product market competition and firm performance. Capital structure positively mediates this relationship, and outcomes are consistent (Fosberg, 2004). Existing literature demonstrated that leverage permits firms to compete in highly competitive environments, increasing shareholders' benefits and leading to higher profits. Therefore, the capital structure allows a firm to gain competitive advantages by adding more products to achieve strategic objectives and maximize profit (Desai et al., 2003). Additionally, existing literature demonstrates that competitive benefit uncovers opportunities for opposition predation in a concentrated product marketplace. Thus, it leads to an increase in firm growth and profits. Previous studies have found the same outcomes and support our study (Abor, 2007).
H3 reports that firm size moderates the association between product market competition and firm performance. Product market competition and firm performance have a positive association for large firms. Our results are consistent with other studies' outcomes (Porter and Kramer, 2006), which state that large firms participate in CSR (other stakeholder) activities and create entry barriers for small businesses. This delivers higher benefits for large organizations. Furthermore, large firms always compel top executives to form a differentiated strategy for profit maximization. However, product market competition and firm performance have a negative association for small firms. Our results are consistent with Porter and Van der Linde (1995), who highlighted that the growth of small businesses and their new product development are slow, leading to a decline in profitability. Small businesses have limited products with the same revenue margin. Additionally, small firms use poor quality raw materials for making products and do not show innovative behavior, leading to lower profitability (Dechezleprêtre and Sato, 2017). Furthermore, Raith (2003) added that small organizations keep managers more relaxed, decreasing profitability. The main benefit of competitive firms is that they have developed "global immunity" to crises by working in a highly turbulent environment for a long time, allowing them to remain resilient during the crisis (Iorember and Jelilov, 2018;Dabwor et al., 2020;Iorember et al., 2020;Maqsood et al., 2021;Mubeen et al., 2021;Liu et al., 2022). Government organizations and business firms encountered a competitive environment in energy consumption demands, innovative products, global trade, and economic growth (Iorember and Jelilov, 2018;Iorember et al., 2019, 2021;Usman et al., 2019, 2020). The pandemic has created challenges in meeting renewable energy usage, human capital and environmental quality targets (Jelilov et al., 2020;Iorember et al., 2021). Globally, the pandemic has brought household income inequalities, unemployment, and monetary policy shocks (Philip and Iorember, 2017). Business firms have faced challenges in protecting their employees as vaccine availability for everyone was not guaranteed (Su et al., 2020, 2021a;Islam et al., 2021b). Companies encountered various challenges in the pandemic crisis (Anser et al., 2020;Akhtar et al., 2021a,b;Islam et al., 2021a). Business firms have seen tough competition to survive in the crisis (Akhtar et al., 2019b;Siddiqi et al., 2019). Tourism and travel firms faced a turbulent business environment in maintaining their business growth under competitive product market conditions (Akhtar et al., 2019a;Ali et al., 2020;Ashraf et al., 2020). Additionally, companies benefited from their preparedness during the COVID-19 pandemic to safeguard their employees' health with protective measures (Mohammadi et al., 2021;Shoib et al., 2021;Soroush et al., 2021;Liu et al., 2022). This ensures the entire industry's sustainability, while all the advantages come from "competitive activities." Another advantage is that competitive organizations allow companies to meet modern customer and environmental requirements that arose before the COVID-19 crisis and intensified during this period (additional services and digital solutions) (Pouresmaeil et al., 2019;Fattahi et al., 2020;Ilinova et al., 2021;Lebni et al., 2021).
Further, competitive advantages allowed us to determine that they belong to "core competencies" and could be considered the basis for further growth in fertilizer companies. The key conclusion is that the competitive advantage could ensure the supply chain's resilience and contribute to further growth. However, during a crisis, it is necessary to create core competencies to ensure growth. Thus the impact of the COVID-19 pandemic crisis on competitive fertilizer businesses is not crucial compared to other industries. The main factors of such resilience were: first, competitive firms play an important role and have always been turbulent. Therefore, they have some "immunity" to disturbances. Second, competitive organizations are strong and mature (Xu et al., 2021). This leads to the situation when the strength of competitors provides resilience to the entire industry. Third, competitive companies aim to create value for customers, shareholders, and society. However, competitive businesses transform by developing innovative tools, solutions, and technologies for growth (Zhou et al., 2021;Ge et al., 2022;Rahmat et al., 2022).
CONCLUSION
Many scholars have explored the direct relationship between product market competition and firm performance. Some found positive relationships (Ruiz-Porras and Lopez-Mateo, 2011;Van Reenen, 2011;Javeed et al., 2020), while others found negative or U-shaped relationships (Januszewski et al., 2002;Bloom et al., 2010;Ko et al., 2016). Thus, the literature has not settled whether the relationship between product market competition and firm performance is negative, positive, or neutral. Furthermore, existing research has called for omitted mediators and moderators to be examined in order to study the real effects of product market competition on firm performance (Guney et al., 2011;Sheikh, 2018). Consequently, based on these claims, this study fills the literature gap by adding two associated variables, capital structure and firm size, to show why and how product market competition influences company performance. After employing the GMM model, we concluded that product market competition and firm performance are positively correlated.
The outcomes of our study show that competitive firms make innovative products and have limited probabilities to use firm resources for private benefits in a competitive market. This study examines the mediating impact of capital structure and finds that the capital structure positively mediates the relationship between product market competition and firm performance. The debt structure of firms uncovers opportunities for firms. Thus, it leads to an increase in firm growth and profits. Moreover, we investigated the moderating impact of firm size and found that large firms positively impact the association between product market competition and firm performance. As large organizations are well-reputed, they compel top firm executives to form a differentiated strategy for profit maximization. Additionally, the study tested small firms' effects on the connection between product market competition and firm performance. This study found that small firms negatively affect the association of product market competition and firm performance. Moreover, it is stated that small firms do not show innovative behavior, leading to lower profitability (Dechezleprêtre and Sato, 2017).
Based on this research, we provided suggestions for companies, decision-makers, and developed and developing economies to improve company performance. Our results will help the company attract owners, stakeholders, and investors to contribute to the competitive environment. The role of market competition and debt structure will be the focus of companies to increase profits. Our findings are helpful to decision-makers in formulating strategies in the industrial sector related to creating a competitive environment.
This study has several limitations. First, the answer to the question of how firm size moderates the product market competition and firm performance relationship is still unclear; it needs further theoretical development that would help us understand the mechanism of the moderating role in this relationship. Second, the study is based on a sample of Chinese companies. A significant limitation may be China's unique institutional environment: China is the second-largest economy globally, with a unique capital market and state intervention in the corporate sector. Therefore, the results of this study may not be generalizable to other economies. However, few studies have examined this topic from multiple aspects. On the contrary, these limitations provide potential for future research and may help in understanding the link between competitive pressures and company performance. Future research could investigate the role of governance structure, corporate model, corporate governance, and finance allocation by selecting more data and sectors to understand the relationship between product market competition and company performance.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding authors.
AUTHOR CONTRIBUTIONS
RM and JA conceptualized the idea, contributed to the study design, completed the entire article, including the introduction, literature, discussion, and conclusion, and edited the original manuscript before submission. DH reviewed and approved the final edited version and the submitted version. SR and WB provided major contributions in revising this manuscript and also contributed resources to make this manuscript possible. All authors reviewed and approved the final edited version and the submitted version.
FUNDING
National Natural Science Foundation of China (72172042) has funded this research article.
"year": 2021,
"sha1": "9beb11b4dac2fa29e22e16a2df58e229240d50cd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "9beb11b4dac2fa29e22e16a2df58e229240d50cd",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Impact of Patient to Nurse Ratio on Quality of Care and Patient Safety in the Medical and Surgical Wards in Malaysian Private Hospitals: A Cross-sectional Study
Background and objective: Nursing shortages and inadequate hospital nurse staffing jeopardize quality of care and patient safety. This study aims to predict the impact of the patient-to-nurse ratio on quality of care and patient safety in medical and surgical wards in Malaysian private hospitals. Methods: Cross-sectional data were collected by questionnaire from 652 nurses working in the medical and surgical wards of 12 private hospitals. Stratified simple random sampling was performed to invite nurses from small (less than 100 beds), medium (100-199 beds) and large (over 200 beds) hospitals, allowing nurses from all shifts in the participating hospitals to take part in the study. Results: Nurses with a higher ratio of patients showed a greater negative association with quality of care and patient safety. However, this negative association was significant for patient safety, whereas it was insignificant for quality of care. Conclusions: Staffing level was inconsistently associated with quality of care and patient safety, so there is at least one intervening process factor mitigating the negative impact of the nursing shortage on quality of care in Malaysian hospitals. However, nurses delivering care for 11-15 patients and nurses delivering care for more than 15 patients had a significant negative impact on both quality of care and patient safety at a p<0.05 significance level compared with those caring for fewer than 5 patients.
Introduction
The 'To Err Is Human' report by the Institute of Medicine (IOM) stated that 98,000 deaths occurred annually as a result of medical errors in the United States (IOM, 2000). Adverse events occurred for 17% of all admitted patients in Australia (Wilson et al., 1995). In European countries, hospital nurse staffing is inadequate to meet the rising demand in health care facilities, which in turn is negatively associated with quality of care and patient safety (Hinno, Partanen, & Vehviläinen-Julkunen, 2011). In Malaysia, increasing demand and cost of care together with a lack of resources threaten the performance of the Malaysian health system (Country health plan MOH, 2011), whereas the mismatch between patient flow and staffing level leads to increased workload, which in turn threatens the performance of care (Boyer et al., 2012). Furthermore, according to the Annual Report of the Ministry of Health (MOH) Malaysia (2011), the performance of nurses working in private hospitals is lower than that of nurses working in public hospitals (MOH, 2011). Thus, the main purpose of the study is to explore the effect of staffing adequacy on quality of care and patient safety in Malaysian private hospitals.
Adequate staffing is important to sustain the quality of patient care (Aiken, Clarke, & Sloane, 2002;Needleman, Buerhaus, Mattke, Stewart, & Zelevinsky, 2002;Newhouse, Himmelfarb, & Morlock, 2013). A study conducted in 12 European countries and the US found that a better working environment and a low patient-to-nurse ratio improve quality of care, patient safety and patient satisfaction (Aiken et al., 2012). Furthermore, a higher nurse-to-patient ratio in a department increases the time nurses spend with patients, which in turn affects the outcomes of patient care (Brooten, Youngblut, Kutcher, & Bobo, 2004). Thus, an adequate staffing level is required to improve the quality of patient care (Newhouse et al., 2013). However, one study conducted in China found that a high patient-to-nurse ratio was negatively associated with quality of care and job outcomes but not associated with patient care outcomes (You et al., 2013). This finding raises interest in investigating the impact of the patient-to-nurse ratio in one more East Asian country, such as Malaysia. However, it is difficult to identify the causal relationship between high workload and quality of patient care, and studies should consider patient- and provider-related factors which affect the treatment process and outcome (Hillner, Smith, & Desch, 2000). Thus, data were obtained from nurses in the medical and surgical wards of private hospitals in Malaysia to determine the impact of staffing adequacy on quality of patient care and patient safety, in order to control for patient-related factors affecting the outcomes. Furthermore, medical and surgical wards were chosen because they deliver a multidisciplinary level of care: medical cardiology, oncology, gastroenterology, nephrology, urology, orthopedics, and ear, nose and throat treatment (Coetzee, Klopper, Ellis, & Aiken, 2013). It can be concluded that the main purpose of the study is to investigate the impact of the patient-to-nurse ratio on quality of care and patient safety in medical and surgical wards in Malaysian private hospitals.
Design
A cross-sectional survey was conducted at the individual nurse level of analysis among nurses working in the medical and surgical wards of Malaysian private hospitals.
Sampling
Stratified simple random sampling was performed to collect data from nurses working in the medical and surgical wards. A total of 652 nurses working in the medical and surgical wards in 12 private hospitals participated in the study. Stratified random sampling offers more homogeneity within each stratum and higher heterogeneity among strata, which produces a "mirror image of the population" (Sekaran & Bougie, 2010). The simple random sampling of each stratum ensures that each hospital and nurse has an equal chance of being chosen randomly (Sekaran & Bougie, 2010). The inclusion criterion for hospitals was registration in the Association of Private Hospitals of Malaysia, while the inclusion criteria for nurses were being licensed and registered by the MOH Malaysia and delivering direct inpatient care in the medical and surgical wards. According to the current nursing literature, the number of beds is used to classify hospital size into small (less than 100 beds), medium (100-199 beds) and large (over 200 beds) hospitals (Gok & Sezen, 2013;Lee & Yang, 2009).
Operationalization and Measurement
Quality of care, according to the IOM, is ''the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge'' (IOM, 2000). Outcome quality reflects the end result of interventions and treatment in caring (Harvey, 2004). Outcome quality comprises the two dependent variables of the study: quality of care and patient safety. The items used for measuring these variables are internationally validated (Aiken et al., 2012;Coetzee et al., 2013;Van Bogaert, Clarke, Vermeyen, Meulemans, & Van de Heyning, 2009).
The quality of patient care was measured by asking the nurses to grade the overall quality of care in the last shift and in the last year (Van Bogaert, Meulemans, Clarke, Vermeyen, & Van de Heyning, 2009). Furthermore, nurses were asked whether they would recommend the hospital to their friends and family if they needed hospital care, or as a good place to work (Coetzee et al., 2013).
Patient safety in the study refers to preventing any potential harm or adverse events for hospitalized patients (Groene et al., 2010). Adverse events are unexpected patient harm or negative consequences related to patient hospitalization other than the disease process (Weingart et al., 2011), and these are also called hospital-acquired conditions. For instance, an adverse event could be a hospital-acquired infection, also called a nosocomial infection. In addition, pressure ulcers, patient falls, medication errors and readmission are also considered adverse events (Weingart et al., 2011;Welton, 2008). According to the current nursing literature, the common events in the medical and surgical wards are nosocomial infection, patient falls, medication errors, and patient and family complaints (Laschinger & Leiter, 2006;Van Bogaert et al., 2014). One item from the Agency for Healthcare Research and Quality survey of patient safety was used to rate overall patient safety in the unit using a five-point Likert scale (Aiken et al., 2012;Coetzee et al., 2013;You et al., 2013). The second measure of patient safety in this study is adverse events, which include nosocomial infection, pressure ulcer, patient fall, medication errors, readmission, and patient and family complaints (Laschinger & Leiter, 2006;Van Bogaert et al., 2014;Weingart et al., 2011). Thus, nurses were asked to report their degree of agreement with the overall rating of patient safety and their rating of the frequency of the adverse events.
The patient-nurse ratio was calculated by asking the nurses to indicate how many patients were directly assigned to their care on the last shift (Aiken et al., 2012;Coetzee et al., 2013;You et al., 2013), and the number of patients was chosen from four categories: less than 5 patients, 5-10 patients, 11-15 patients and more than 15 patients. A lower ratio indicates more favorable nurse staffing (Aiken et al., 2012).
Back-to-back translation was performed to make sure that the questionnaire was free of mistakes, wrong wording or changes in meaning. The questionnaire was translated into the local language (Bahasa Melayu) by a local expert and then back-translated by another expert to ensure the conceptual and vocabulary equivalence of the questionnaire items (Sekaran, 2003).
Data Analysis
Multiple regression analysis was performed using SPSS software version 21.0 to investigate the impact of the patient-to-nurse ratio on quality of care and patient safety at a significance level of p<0.05. A pilot study was conducted to test internal consistency and to check the adequacy and soundness of the questionnaire by measuring the Cronbach's alpha coefficient. The Cronbach's alpha coefficients for quality of care and patient safety were 0.75 and 0.85, respectively, both above the recommended level of 0.70 (Pallant, 2011; Sekaran & Bougie, 2010).
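To make the reliability check reproducible, the sketch below shows how Cronbach's alpha can be computed from raw item scores. It assumes a respondents-by-items array of Likert responses with hypothetical data; it is not the authors' SPSS workflow, only the same formula applied in Python.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    n_items = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot data: 30 nurses x 4 quality-of-care items on a 5-point scale
rng = np.random.default_rng(0)
pilot_scores = rng.integers(1, 6, size=(30, 4)).astype(float)
print(f"Cronbach's alpha = {cronbach_alpha(pilot_scores):.2f}")
```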
Results
A total of 1055 registered nurses working in the medical and surgical wards of the 12 participating hospitals were invited to participate in the study. A total of 807 questionnaires were returned, for an overall response rate of 76.4%. However, 155 respondents were excluded because they did not meet the inclusion criteria (nurses not working in the medical and surgical wards), gave the same fixed response to all questionnaire items, or left the questionnaire blank. Thus, 652 usable responses remained for data analysis, representing a 61.8% response rate.
The demographic characteristics indicate that 99.0% of the participating nurses were Malaysian and 97.6% were female. Malay nurses made up the largest proportion of participants at 60.0%, followed by 21.6% Chinese, 14.2% Indian, and 2.2% others (Thai and Filipino nurses). Most of the nurses (43.7%) were between 25 and 30 years of age. The majority of respondents held a diploma (84.6%), while 10.3% had a bachelor's degree in nursing and 5.1% had other qualifications (an associate degree in nursing or higher education). In terms of job title, the majority were staff nurses (90.4%), while 6.2% were assistant nurses and 3.4% held other titles (community nurses and in-charge nurses); all delivered direct inpatient care. Regarding employment status, 98.1% were full-time and 1.9% were part-time nurses. The majority of nurses (72.2%) worked in large hospitals, while 16.5% worked in medium-sized hospitals and 11.3% in small hospitals.
The patient-nurse ratio data show that 37.0% of nurses working in Malaysian private hospitals provided care for more than 15 patients in one shift, whereas 24.0% cared for 11-15 patients, 31.2% cared for 5-10 patients, and 7.7% cared for fewer than 5 patients on the last shift they worked. Furthermore, 23.3% of the participating nurses worked in the medical ward, 26.1% in the surgical ward, 7.4% in the general ward, 31.9% in the multidisciplinary ward, and 11.3% in other wards (endoscopy, oncology, cardiology, and cardiothoracic wards in which nurses provide direct inpatient care); all of these wards deliver a multidisciplinary level of care.
The patient-nurse ratio construct included four categories; thus, j − 1 dummy variables were created to capture all the information for each category compared with the reference group (Cohen, Cohen, West, & Aiken, 2003; Hardy, 1993; West, Aiken, & Krull, 1996). The reference group should be the group expected to score highest or lowest on the dependent variable (Cohen et al., 2003; Hardy, 1993; West et al., 1996). Moreover, the reference group should be well defined so that the regression results can be clearly interpreted, and it should not be an "others" category (Cohen et al., 2003; Hardy, 1993; West et al., 1996). Thus, the reference group in this study was nurses caring for fewer than 5 patients. A sketch of this dummy coding is given below.
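As an illustration of the j − 1 dummy coding described above, the following sketch codes the four ratio categories with "fewer than 5 patients" as the reference group. The column names and example responses are hypothetical, not taken from the authors' dataset.

```python
import pandas as pd

# Hypothetical responses: each nurse's patient-load category on the last shift
df = pd.DataFrame({"ratio": ["<5", "5-10", "11-15", ">15", "5-10", "<5"]})

# j - 1 = 3 dummy variables; "<5" (the reference group) is dropped
dummies = pd.get_dummies(df["ratio"]).drop(columns="<5")
dummies = dummies[["5-10", "11-15", ">15"]]  # keep a fixed column order
print(dummies.astype(int))
```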
The effect of the patient-to-nurse ratio on the outcomes of care was explored in two multiple regression models. The first model explored the effect of the patient-to-nurse ratio on quality of care, whereas the second explored its effect on patient safety.
Model 1: Patient to Nurse Ratio and Quality of Care
The patient-to-nurse ratio dimension included three dummy variables. Table 1 provides the result of the multiple regression analysis of its impact on quality of care, used to test the hypothesis: Alternative hypothesis H1: The patient-to-nurse ratio is associated with quality of care.
Null hypothesis H1₀: The patient-to-nurse ratio is not associated with quality of care. The regression result shown in Table 1 (F = 2.61, p = 0.05) indicates that the study failed to reject the null hypothesis H1₀; the relationship between the patient-to-nurse ratio and quality of care is not significant. The R² indicates that the patient-to-nurse ratio predicts only 0.01 of the variance in quality of care, which is not significant at p<0.05. However, the unstandardized coefficients of the three dummy variables indicate that an increasing patient-to-nurse ratio is negatively associated with quality of care, and this negative impact grows as the number of patients assigned to each nurse increases (-0.12, -0.21, and -0.22, respectively). Furthermore, nurses caring for 11-15 patients (B=-0.21, t=-2.13, p=0.03) and for more than 15 patients (B=-0.22, t=-2.38, p=0.02) show a significant negative impact on quality of care at the p<0.05 level compared with those caring for fewer than 5 patients.
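For completeness, the sketch below shows how a regression of this form could be fit with the three dummy variables as predictors. The data are simulated and the variable names hypothetical; the printed coefficients will not reproduce Table 1, which came from the authors' SPSS analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200

# Hypothetical categories and outcome; "<5" serves as the reference group
ratio = rng.choice(["<5", "5-10", "11-15", ">15"], size=n)
X = pd.get_dummies(pd.Series(ratio)).drop(columns="<5").astype(float)
effects = {"5-10": -0.1, "11-15": -0.2, ">15": -0.2}   # illustrative true effects
y = 3.5 + sum(effects[c] * X[c] for c in X.columns) + rng.normal(0, 0.5, n)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())  # F statistic, R-squared, and unstandardized B per dummy
```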
Model 2: Patient to Nurse Ratio and Patient Safety
Table 2 provides the result of the multiple regression analysis of the impact of the patient-to-nurse ratio on patient safety, used to test the hypothesis: Alternative hypothesis H2: The patient-to-nurse ratio is associated with patient safety.
Null hypothesis H2₀: The patient-to-nurse ratio is not associated with patient safety. The regression result shown in Table 2 (F = 2.73, p = 0.04) indicates that the study rejects the null hypothesis H2₀; the relationship between the patient-to-nurse ratio and patient safety is significant. The R² indicates that the patient-to-nurse ratio predicts 0.01 of the variance in patient safety, which is significant at p<0.05. Moreover, the unstandardized coefficients of the three dummy variables indicate that an increasing patient-to-nurse ratio is negatively associated with patient safety, and this negative impact grows as the number of patients assigned to each nurse increases (-0.18, -0.25, and -0.26, respectively). Furthermore, nurses caring for 11-15 patients (B=-0.25, t=-2.44, p=0.02) and for more than 15 patients (B=-0.26, t=-2.66, p=0.01) show a significant negative impact on patient safety at the p<0.05 level compared with those caring for fewer than 5 patients among nurses working in medical and surgical wards in Malaysian private hospitals. Thus, it can be concluded that hypothesis H2 is supported, while H1 is not supported.
Discussion
The regression analyses of the effect of the patient-to-nurse ratio, shown in Tables 1 and 2, indicate a non-significant impact on quality of care and a significant negative impact on patient safety at the p<0.05 level, respectively. These findings are inconsistent with previous studies (Aiken et al., 2012; Boyer et al., 2012; Brooten et al., 2004; Coetzee et al., 2013). Boyer et al. (2012) found that a mismatch between patient flow and staffing level increases workload, which in turn lowers the outcomes of care. In addition, a study conducted in 12 European countries and the US found that a low patient-to-nurse ratio enhances quality of care, patient safety, and patient satisfaction (Aiken et al., 2012). Thus, adequate staffing is required to improve the quality of patient care (Aiken et al., 2002; Needleman et al., 2002; Newhouse et al., 2013). However, a study conducted in China found that a high patient-to-nurse ratio was negatively associated with quality of care and job outcomes but not with patient care outcomes (You et al., 2013). Furthermore, other studies found an inconsistent relationship between the patient-to-nurse ratio and outcomes (Needleman et al., 2002). These inconsistent findings support the present results and highlight the importance of investigating an intervening variable with an opposite sign that suppresses the relationship between the patient-to-nurse ratio and quality of care.
A high patient-to-nurse ratio indicates unfavorable nurse staffing that affects outcomes (Aiken et al., 2012), and this is consistent with the present study. The unstandardized beta coefficients reported in Table 1 reveal that an increasing patient-to-nurse ratio is negatively associated with quality of care, with the negative impact growing as the number of patients assigned to each nurse increases (-0.12, -0.21, and -0.22, respectively). Similarly, it is negatively associated with patient safety, with the negative impact growing as the number of patients assigned to each nurse increases (-0.18, -0.25, and -0.26, respectively). Furthermore, nurses caring for 11-15 patients and nurses caring for more than 15 patients show a significant negative impact on both quality of care and patient safety at the p<0.05 level compared with those caring for fewer than 5 patients. Thus, assigning more than 10 patients to one nurse threatens quality of care and patient safety in the medical and surgical wards of Malaysian private hospitals. However, it is difficult to identify a causal relationship between high hospital volume and quality of patient care, and patient- and provider-related factors affecting the treatment process and outcome should be considered (Hillner et al., 2000). The present study controlled for the variance of patient-related factors affecting the outcomes. Future research should control for the process factors affecting the outcomes of care. For instance, task-oriented nursing and patient and family involvement in care mitigate the negative impact of nursing shortages (You et al., 2013). Moreover, a low patient-to-nurse ratio in the ward increases the time spent with patients, which in turn affects the outcomes of patient care (Brooten et al., 2004). Thus, a strong argument for future research is to explore the effect of patient-centeredness on the relationship between the nursing shortage and quality of care and patient safety in Malaysian private hospitals.
Conclusion
Inadequate staffing, nursing shortages, and a high patient-to-nurse ratio can jeopardize quality of care and patient safety in the medical and surgical wards of Malaysian private hospitals. However, there is at least one process factor with a positive sign that mitigates this negative impact of the nursing shortage on quality of care in Malaysian private hospitals. The R² of the regression results indicates that only 1% of the variance in quality and patient safety is explained by the patient-to-nurse ratio. Thus, the authors propose focusing on intervening process factors affecting the outcomes of care as a remedy for improving quality of care and patient safety.
Table 1. Regression result of patient to nurse ratio on quality of care
Table 2. Regression result of patient to nurse ratio on patient safety | 2018-12-14T18:50:16.404Z | 2015-04-02T00:00:00.000 | {
"year": 2015,
"sha1": "b42eb7a318138743ff61bff40e11b268ef5980f5",
"oa_license": "CCBY",
"oa_url": "https://ccsenet.org/journal/index.php/ass/article/download/47177/25514",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b42eb7a318138743ff61bff40e11b268ef5980f5",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253377039 | pes2o/s2orc | v3-fos-license | Estimations and Control of Julia Sets of the SIS Model Perturbed by Noise
The estimations and control of Julia sets of the SIS(susceptible-infectious-susceptible) model under noise perturbation are studied. At first, a discrete SIS model is introduced, and the effects of additive and multiplicative noises on the fractal characteristics of the SIS model are discussed. Then, estimations of the Julia sets of the SIS model under additive and multiplicative noise perturbations are given, respectively. At last, the feedback control method is used to set appropriate controllers to realize control of the Julia set, and the influence of noise on the Julia set of the SIS model is reduced. The reason why this method is effective is also explained.
Introduction
With the continuous development of human civilization, the medical and health care level has been greatly improved compared with the past. However, there are still new infectious diseases posing challenges to humanity. In 2020, the outbreak of COVID-19 took many innocent lives and brought unprecedented impacts on the global economy and trade. To better prevent and control infectious diseases, many researchers should analyze them from a mathematical point of view. In fact, infectious disease models have been studied since the 20th century [1][2][3]. New mathematical models are built based on classical models to study the transmission mechanism and coping strategies of infectious diseases, computer viruses, and rumors in groups. Fatmawati et al. [4] considered a novel fractional model to investigate the (tuberculosis) TB model dynamics with two age groups of humans. Liu et al. [5] proposed a mechanism considering the co-evolution between information states and network topology simultaneously. Alshammari and Khan [6] established a complex SIR epidemic dynamics model based on nonlinear incidence and nonlinear recovery considering the impact of available hospital beds and reduction interventions on the spread of infectious diseases. In recent years, many scholars have focused on the stability and existence of periodic solutions, the equilibrium position of equilibrium points, and the search for bifurcation points. Pastor and Vespignani [7] studied epidemic dynamics in bounded scale-free networks with soft and hard connectivity cut-offs. Amine et al. [8] proposed the global dynamics of a SIRI epidemic model with latency and a general nonlinear incidence function. Khan et al. [9] formulated a new mathematical model for the dynamics of COVID-19 with quarantine and isolation. On account of the effect of limited treatment resources on the control of epidemic disease, a saturated removal rate is incorporated into Hethcote's SIR epidemiological model by Zhang and Suo [10].
Nature is full of randomness, the human society is. In some economic and physical models, stochastic perturbation exists widely, leading to the complexity and uncertainty of some factors in the models. When the influence brought by these disturbances cannot be ignored, designing the system according to the deterministic theory will make the system behavior deviate from the original requirements, which requires people to analyze the model in an uncertain sense. Zhang and Chen [11] discussed the H ∞ control problem for a class of nonlinear stochastic systems with both state-and disturbance-dependent noise. Ugrinovskii and Petersen [12] studied existence and optimality properties of socalled guaranteed cost controllers for an uncertain system subject to structured uncertainty. They consider the effects of random disturbances on the system. In addition, noise also has a significant influence on chaos and fractals. Argyris and Andreadis [13] studied the influence of noise on a mathematical model which contains the coexistence of chaotic attractors. Inspired by the study of noise of the Mandelbrot map in two parameter deformation families, Negi and Rani [14] introduced a new noise criterion and analyzed its effect on the usual and superior Mandelbrot maps. Wang et al. [15] researched on the structural characteristic and the fission-evolution law of the generalized Mandelbrot set (generalized M set in short) perturbed by composing noise of additive and multiplicative, analyzed the effect of random perturbation to the generalized M set. Wang et al. [16] researched the structural characteristic and the fission-evolution law of four different kinds of generalized Julia set (generalized J set in short) with different parameter c, analyzed the effect of random perturbation to the generalized J set, and illuminated the stability of the generalized J set. Due to the widespread existence of random disturbance, filtering is vital to observe the true value of the model better. Fridman et al. [17] considered the problem of robust H 2 estimation of a combination of states of a stationary linear system with time delays. Nkwayep et al. [18] developed an integrated Kalman filter (EnKf) method to estimate immeasurable state variables and unknown parameters in COVID-19 models. Nguang and Shi [19] considered the problem of designing a delay-dependent robust H ∞ filter for time delay Takagi-Sugeno fuzzy models. Inspired by the denoising effect of filtering in the noise-affected system, estimations of the Julia sets of the noise-affected model are given to observe better the impact of noise on the overall shape of the Julia set of the SIS(susceptible-infectious-susceptible) model. After Mandelbrot published an epoch-making paper entitled "How Long Is the Coast of Britain?" in the 20th century [20], people realized that fractals could be used to explain some irregular or not smooth figures or sets. More and more scholars devoted themselves to the study of fractals. Gujar and Bhavsar [21] considered the generalized transformation function z → z α +C for generating fractal images. The symmetries of Julia sets of Newton's method is investigated by Yang [22]. Sun and Zhang [23] studied the forced Brusselator model from the fractal viewpoint. Zhang et al. [24] introduced a visualization of Julia sets of the complex Henon map system with two complex variables. 
In recent years, some scholars have tried to study the fractal characteristics of fractional models and to discuss the fractal dynamics of models from the perspective of fractional order. Sun and Liu [25] introduced the fractional Potts model on diamond-like hierarchical lattices. Wang et al. [26] investigated the structures and properties of the spatial Julia set generated by a fractional complex Lotka-Volterra system with noise. In nature and science, fractals are still a young field, and there is much more of significance to be studied. Therefore, the fractal characteristics of the SIS model under noise disturbance are considered here.
Because of the importance of the SIS model among infectious disease models, its fractal characteristics under noise disturbance are discussed. Firstly, the discrete form of the SIS model is given, and the Julia set of the SIS model is constructed from it. Secondly, the effects of additive and multiplicative noise on Julia sets of the discrete-form SIS model are considered respectively. Thirdly, to more clearly observe the impact of noise on the overall shape of the Julia set of the SIS model, estimations of the Julia sets of the SIS model under the influence of noise are given. Finally, according to matrix perturbation theory, two controllers are designed using the feedback control method to control the Julia set of the model. After control, the Julia set of the model has a larger attractive domain and becomes more stable under noise interference.
The SIS model in discrete form and the Julia set
In the SIS model, the birth rate and death rate are not considered, and the population is divided into infected and susceptible people. Susceptible people have a certain chance of becoming infected after exposure to the virus, and infected people become susceptible again after effective treatment. The two types of people transform into each other [27], thus

dx/dt = c x y - u x,
dy/dt = -c x y + u x,    (1)

where x is the number of infected persons, y is the number of susceptible persons, c is the transmission rate when infected persons contact susceptible persons, and u is the effective cure rate at which infected persons become susceptible again.
Discretizing model (1), an approximate difference form of the SIS model is obtained:

x(t + Δt) = Δt (c x(t) y(t) - u x(t)) + x(t),
y(t + Δt) = Δt (-c x(t) y(t) + u x(t)) + y(t).    (2)

Of course, the smaller Δt is, the closer (2) is to (1). After the transformation we have

x(t + Δt) = a x(t) y(t) - b x(t) + x(t),
y(t + Δt) = -a x(t) y(t) + b x(t) + y(t),    (3)

where a = cΔt and b = uΔt. Writing x_{n+1} for x(t + Δt), y_{n+1} for y(t + Δt), x_n for x(t) and y_n for y(t), (3) becomes

x_{n+1} = a x_n y_n - b x_n + x_n,
y_{n+1} = -a x_n y_n + b x_n + y_n.

Definition 2.1 [23,24]. The filled-in Julia set of a map F on the plane is the set of initial points whose orbits under iteration of F remain bounded; the Julia set is the boundary of the filled-in Julia set. The image of the Julia set of the SIS model is given in Fig. 1 when a = 0.008, b = 0.016. In all the images that follow in this article, we always set a = 0.008, b = 0.016. Initial points outside the filled-in Julia set escape from the bounded area; in this case, at least one of the number of infected persons and the number of susceptible persons tends to infinity [28].
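A minimal escape-time sketch of how a figure like Fig. 1 can be produced is given below: each initial point (x_1, y_1) on a grid is iterated under the discrete SIS map, and points whose orbits stay bounded approximate the filled-in Julia set. The grid range, iteration count, and escape radius are illustrative choices, not values taken from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

a, b = 0.008, 0.016          # parameters used throughout the paper
N_ITER, ESCAPE = 200, 1e4    # illustrative iteration count and escape radius

# Grid of initial points (illustrative plotting window)
xs = np.linspace(-500, 500, 600)
ys = np.linspace(-500, 500, 600)
X, Y = np.meshgrid(xs, ys)
bounded = np.ones_like(X, dtype=bool)

x, y = X.copy(), Y.copy()
for _ in range(N_ITER):
    x, y = a * x * y - b * x + x, -a * x * y + b * x + y   # one step of the SIS map
    bounded &= np.hypot(x, y) < ESCAPE
    x[~bounded], y[~bounded] = 0.0, 0.0                     # freeze escaped orbits

plt.imshow(bounded, extent=[xs[0], xs[-1], ys[0], ys[-1]], origin="lower", cmap="gray")
plt.xlabel("x (infected)")
plt.ylabel("y (susceptible)")
plt.title("Filled-in Julia set of the discrete SIS map (sketch)")
plt.show()
```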
Julia sets of the SIS model perturbed by noise
In real life, there will always be noise to disrupt the model due to the presence of random interference. In some cases, the effect of noise on the model cannot be ignored. In the following discussion, the impact of different additive and multiplicative noises on the fractal characteristics of the SIS model will be observed by adjusting parameters to change the noise.
The SIS model perturbed by the additive noise
Additive noise is independent of the designed signal but always interferes with it: the noise is present whether or not the signal is. Some random factors, such as seasonal changes and population movements, may lead to random changes in the number of infected and susceptible people. The SIS model with additive noise [15] is

x_{n+1} = a x_n y_n - b x_n + x_n + μ_n,
y_{n+1} = -a x_n y_n + b x_n + y_n + ω_n,    (4)

where a and b are as described above, μ_n is noise obeying N(θ_1, σ_1²) and ω_n is noise obeying N(θ_2, σ_2²), both independent of x_n and y_n. To better observe the influence of additive noise on Julia sets of the SIS model, normal additive noises with different means and variances are added to the model. Images of the Julia set of the SIS model with different additive noises are shown in Fig. 2. It can be seen from Fig. 2 that when the model is affected by normal additive noise with mean zero and small variance, the Julia set of the SIS model and the filled-in Julia set hardly change (Fig. 2(a)). With increasing noise variance, the Julia set of the SIS model under the influence of noise is no longer a series of curves but is composed of a large number of irregular points (Fig. 2(b) and 2(c)). However, when the mean value of the noise is not 0 and its value is large, the overall shape of the Julia set changes significantly (Fig. 2(d), 2(e) and 2(f)).
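Under these assumptions, a perturbed orbit as in Fig. 2 can be sampled as in the sketch below, which simply adds independent normal draws μ_n and ω_n at every step of the iteration used above. The mean and standard deviation shown are illustrative and correspond to one of the settings mentioned later in the text; the initial point is hypothetical.

```python
import numpy as np

a, b = 0.008, 0.016
rng = np.random.default_rng(42)

def iterate_additive(x0, y0, n_steps, mean=0.0, sd=15.0):
    """Iterate the SIS map with additive noise mu_n, omega_n ~ N(mean, sd^2)."""
    x, y = x0, y0
    orbit = [(x, y)]
    for _ in range(n_steps):
        mu, omega = rng.normal(mean, sd, size=2)
        x, y = a * x * y - b * x + x + mu, -a * x * y + b * x + y + omega
        orbit.append((x, y))
    return np.array(orbit)

orbit = iterate_additive(10.0, 20.0, 100)   # hypothetical initial point
print(orbit[:5])
```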
The SIS model perturbed by the multiplicative noise
The relationship between multiplicative noise and the signal is multiplicative: if the signal exists, the noise exists; if the signal does not exist, the noise does not exist. The randomness of multiplicative noise is considered to be caused by the time variation or nonlinearity of the system, and it has stronger time-varying and anti-filtering properties than additive noise. Assuming that the cure rate increases or decreases due to virus mutation or vaccine development, the multiplicative noise model [15] can be obtained as follows.
x_{n+1} = a x_n y_n - b x_n + x_n + α_n x_n,
y_{n+1} = -a x_n y_n + b x_n + y_n - α_n x_n,    (5)

where α_n is noise obeying N(θ_3, σ_3²), which is independent of x_n and y_n.
Normal multiplicative noise with zero mean and different variance is added to the model to better observe the effect of multiplicative noise on the Julia set of the SIS model. Images of each Julia set of the SIS model with different multiplicative noises are shown in Fig. 3.
It can be seen from the images that, under the influence of multiplicative noise, the Julia set of the SIS model changes to some extent, and the randomly distributed points show the unstable state of the Julia set. It is not difficult to see that the noise has a more significant impact on the part of the Julia set near the x-axis than on the rest of the image (Fig. 3(a) and 3(b)), because the multiplicative noise random term is related to x_n. When the variance of the multiplicative noise is gradually increased, similar to the influence of additive noise on the model, the Julia set of the SIS model becomes more and more unstable (Fig. 3(a), 3(b), 3(c) and 3(d)).
In general, with the increase of variance of both additive noise and multiplicative noise, the Julia set of the SIS model tends to expand outward and squeeze inward, indicating that the influence of noise will make the Julia set of the SIS model become "unstable". When the added noise variance is slight and the mean is zero, the overall shape of the Julia set will not change much. When the noise variance is further increased, the Julia set is composed of many points instead of smooth curves.
We will analyze the effect of the Julia set perturbation on epidemics from the fractal perspective. The filled-in Julia set can be thought of as stable domains within which epidemics do not suddenly become uncontrollable. The Julia set can be seen as the "threshold" at which the epidemic will not break out, but once this "threshold" is exceeded, the epidemic will get out of control. The Julia set after noise perturbation is in a "random" state, and the "threshold" becomes uncertain, which is very unfavorable for our prediction and control.
Estimations of Julia Sets of the SIS Model under Noise Perturbation
In the iterative process starting from the initial value points of the SIS model, the output is affected by both the input and the noise disturbance, so the output Julia set cannot be accurately observed. To better observe the Julia set, we suppress the noise signal and increase the smoothness of the Julia set of the noise-perturbed model, thereby obtaining estimations of the Julia sets of the SIS model disturbed by noise.
Estimations of Julia Sets of the SIS Model under Additive Noise Perturbation
Write (4) in vector form as ϕ_{n+1} = F(ϕ_n) + ξ_n, where ϕ_n = (x_n, y_n)^T, F(ϕ_n) = (a x_n y_n - b x_n + x_n, -a x_n y_n + b x_n + y_n)^T and ξ_n = (μ_n, ω_n)^T. In this subsection, let μ_n be the noise obeying N(0, σ_1²) and ω_n the noise obeying N(0, σ_2²). Referring to Hu's construction in [29], we design

ϕ_{k+1|k} = F(ϕ_{k|k}),
ϕ_{k+1|k+1} = g ϕ_{k+1|k} + (1 - g) (1/N) Σ_{i=1}^{N} ϕ^i_{k+1},

where ϕ_{k+1|k} is the one-step prediction of ϕ_{k+1} made at moment k, ϕ_{k|k} is the estimation of ϕ_k at moment k, N is the number of runs of the whole model, ϕ^i_{k+1} is the i-th run of the whole model at ϕ_{k+1}, and g is a constant with 0 < g < 1.
The explanation is given below. It is natural to use F(ϕ_{k|k}) to obtain the one-step prediction ϕ_{k+1|k} at moment k according to [29]. Next we explain how to obtain ϕ_{k+1|k+1}. Each Julia set is iterated from an initial point, namely (x_1, y_1)^T. In this case, to make sure the estimate is unbiased, we directly take ϕ_{1|1} = (x_1, y_1)^T. By adding noise, the observed values ϕ_2, ϕ_3, ..., ϕ_n with noise can be obtained. Running the whole model N times from ϕ_1 gives ϕ^k_2, ϕ^k_3, ..., ϕ^k_n, where k = 1, 2, ..., N. According to Gauss's least squares estimate, the most likely value of ϕ_{n|n} is the sample mean of the N runs. At moment k + 1, ϕ_{k+1|k} is used to construct ϕ_{k+1|k+1}; one way is to carry out the weighted processing given above. When μ_n ∼ N(0, 15²) and ω_n ∼ N(0, 15²), the Julia sets of the SIS model affected by additive noise before and after estimation are given in Fig. 4.
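A minimal sketch of this estimation step is given below. It assumes the weighted form ϕ_{k+1|k+1} = g·ϕ_{k+1|k} + (1 - g)·(sample mean of N noisy runs), which is how we read the partially garbled construction from [29]; the value of g, the noise level, and the initial point are illustrative.

```python
import numpy as np

a, b = 0.008, 0.016
rng = np.random.default_rng(0)

def F(phi):
    x, y = phi
    return np.array([a * x * y - b * x + x, -a * x * y + b * x + y])

def estimate_orbit(phi1, n_steps, N=50, g=0.5, sd=(15.0, 15.0)):
    """Estimate the noise-free orbit from N noisy runs started at phi1."""
    runs = np.tile(phi1, (N, 1)).astype(float)   # N parallel noisy runs
    est = np.array(phi1, dtype=float)            # phi_{1|1} = (x1, y1)
    estimates = [est]
    for _ in range(n_steps):
        noise = rng.normal(0.0, sd, size=(N, 2))
        runs = np.array([F(p) for p in runs]) + noise   # observed phi_{k+1}^i
        pred = F(est)                                   # phi_{k+1|k}
        est = g * pred + (1 - g) * runs.mean(axis=0)    # phi_{k+1|k+1}
        estimates.append(est)
    return np.array(estimates)

print(estimate_orbit(np.array([10.0, 20.0]), n_steps=5)[:3])
```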
Estimations of Julia Sets of the SIS Model under Multiplicative Noise Perturbation
Write (5) in vector form as ϕ_{n+1} = F(ϕ_n) + ξ_n, where ϕ_n = (x_n, y_n)^T, F(ϕ_n) = (a x_n y_n - b x_n + x_n, -a x_n y_n + b x_n + y_n)^T and ξ_n = (α_n x_n, -α_n x_n)^T. In this subsection, let α_n be the noise obeying N(0, σ_3²).
Referring to Hu's construction in [29], we design

ϕ_{k+1|k} = F(ϕ_{k|k}),
ϕ_{k+1|k+1} = g ϕ_{k+1|k} + (1 - g) (1/N) Σ_{i=1}^{N} ϕ^i_{k+1},

where ϕ_{k+1|k} is the one-step prediction of ϕ_{k+1} made at moment k, ϕ_{k|k} is the estimation of ϕ_k at moment k, N is the number of runs of the whole model, ϕ^i_{k+1} is the i-th run of the whole model at ϕ_{k+1}, and g is a constant with 0 < g < 1.
When α n ∼ N (0, 0.08 2 ), the Julia sets of SIS model affected by multiplicative noise before and after estimation are given in Fig. 5.
As can be seen from the image comparison ( Fig. 4(a) and 4(b), Fig. 5(a) and 5(b)), the number of some random points in the photo is significantly reduced after estimation, which indicates that the noise signal is suppressed. In addition, the smoothness of the Julia set of the model perturbed by noise is improved, and we can more clearly observe the effect of additive noise and multiplicative noise on the overall shape of the model, respectively.
Control of Julia set of the SIS model under noise disturbance
In a chaotic system, feedback control can effectively control the chaotic system to the unstable equilibrium point or periodic solution, and the control effect has strong robustness under weak noise interference [30].
In the following, we apply the feedback control method to the noise-affected system to better control the Julia set.
The original model of the system is

x_{n+1} = a x_n y_n - b x_n + x_n,
y_{n+1} = -a x_n y_n + b x_n + y_n.    (6)

Let f(x_n, y_n) = a x_n y_n - b x_n.    (7)

Substituting (7) into the original system (6) gives

x_{n+1} = f(x_n, y_n) + x_n,
y_{n+1} = -f(x_n, y_n) + y_n.    (8)

(x*, y*) is an equilibrium point of the system if and only if f(x*, y*) = 0, that is, a x* y* - b x* = x*(a y* - b) = 0. The equilibrium points of the system are therefore the points with x* = 0 or with y* = b/a. The following first considers the control of the original model, then adds noise, and uses images to show whether the Julia set of the controlled model has good robustness.
Linear feedback control
Here, we only consider the case a ≠ 0, b ≠ 0, and the equilibrium point (x*, y*) = (0, b/a) is considered.
In the system, the first coordinate of the equilibrium point is set to 0 in the hope that the number of infected persons will approach 0 as far as possible under control, and the second coordinate is set to b/a for the convenience of the subsequent eigenvalue calculation.
Controllers are added to the system (8). Using linear feedback controllers with gain k, the controlled mapping (12) is obtained, and its Jacobian matrix (13) is evaluated at the equilibrium point (x*, y*) = (0, b/a). It is well known that |λ_{1,2}| < 1 is one of the conditions that guarantee the stability of the equilibrium; applying it to the eigenvalues of (13) gives 0 < k < 2. Under linear feedback control, the Julia sets of the SIS model affected by noise are given in Fig. 6. Among them, Fig. 6(a) and 6(b) are the images of the controlled system perturbed by additive noise when μ_n ∼ N(0, 15²) and ω_n ∼ N(0, 15²), and Fig. 6(c) and 6(d) are the images of the controlled system perturbed by multiplicative noise when α_n ∼ N(0, 0.08²).
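The stability condition can be checked numerically as in the sketch below. Because the explicit controller equations were lost in extraction, the sketch assumes a proportional feedback of the form u_n = -k(ϕ_n - ϕ*), which reproduces the stated condition 0 < k < 2; if the paper's controllers differ, only the two controlled-map lines need to change.

```python
import numpy as np

a, b = 0.008, 0.016
x_star, y_star = 0.0, b / a          # equilibrium (0, b/a)

def controlled_map(x, y, k):
    # Assumed proportional feedback u = -k * (state - equilibrium)
    fx = a * x * y - b * x
    x_next = fx + x - k * (x - x_star)
    y_next = -fx + y - k * (y - y_star)
    return x_next, y_next

def jacobian_at_equilibrium(k, eps=1e-6):
    """Numerical Jacobian of the controlled map at (x*, y*)."""
    J = np.zeros((2, 2))
    base = np.array(controlled_map(x_star, y_star, k))
    for j, d in enumerate([(eps, 0.0), (0.0, eps)]):
        shifted = np.array(controlled_map(x_star + d[0], y_star + d[1], k))
        J[:, j] = (shifted - base) / eps
    return J

for k in (0.5, 1.0, 1.9, 2.5):
    eigvals = np.linalg.eigvals(jacobian_at_equilibrium(k))
    status = "stable" if np.all(np.abs(eigvals) < 1) else "unstable"
    print(f"k = {k}: |eigenvalues| = {np.abs(eigvals)} -> {status}")
```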
Nonlinear feedback control
Here, the equilibrium point (x*, y*) = (0, b/a) is considered again, with a ≠ 0 and b ≠ 0. Controllers are added to obtain the controlled system (14). Using nonlinear feedback control, the controllers (15) are designed; substituting (15) into (14) yields the mapping (17). The Jacobian matrix of the system (17) is evaluated at the equilibrium, giving the matrix (18), whose eigenvalues must satisfy |λ_{1,2}| < 1. This again requires 0 < k < 2 to guarantee that the equilibrium is stable.
Under nonlinear feedback control, the Julia sets of the SIS model affected by noise are given in Fig. 7. Among them, Fig. 7(a) and 7(b) are the images of the controlled system perturbed by additive noise when μ_n ∼ N(0, 15²) and ω_n ∼ N(0, 15²), and Fig. 7(c) and 7(d) are the images of the controlled system perturbed by multiplicative noise when α_n ∼ N(0, 0.08²). Fig. 6(b), 6(d) and Fig. 7(b), 7(d) show that the random points in the Julia set of the SIS model perturbed by noise are significantly reduced after feedback control, indicating that the Julia set of the system has better robustness after control. In addition, the filled-in Julia set becomes larger. The points in the filled-in Julia set do not tend to infinity under iteration, which means that after control there is a larger stable region for infectious diseases. In this region, the number of infected people is relatively stable and will not rapidly become uncontrollable. In conclusion, the controllers adopted achieve the desired control effect.
The effectiveness of the feedback control method used in this model is explained below. It is easy to calculate that the eigenvalues at the fixed point (0, b/a) are 1. Under noise disturbance, the modulus of the eigenvalues of the Jacobian matrix at (x*, y*) may be greater than or less than 1, so whether the fixed point is attractive may change during the iteration process. Since the Julia set is the boundary of the attractive domain of an attractive fixed point, this finally leads to the unstable state of the Julia set. With the feedback control method, the control coefficient k can be chosen so that the modulus of the eigenvalues of the controlled system is less than 1. According to matrix perturbation theory, the system's eigenvalues change only slightly when subjected to small perturbations, and the system's stability will not change when its equilibrium point is of a non-central type. In other words, in this system the feedback control method is used to make the modulus of the eigenvalues of the controlled system less than 1, so that (x*, y*) is an attractive equilibrium point, which reduces the interference of noise on the Julia set of the SIS model. From the comparison of images (Fig. 6(b) and 6(d), Fig. 7(b) and 7(d)), it can be seen that after feedback control, the control effect under additive noise is better than that under multiplicative noise. This is because the random terms of the additive noise are unrelated to x_n and y_n and thus have little effect on the Jacobian of the system. Unlike additive noise, the random term added as multiplicative noise is related to x_n, so the Jacobian matrix of the system is also strongly influenced by the random term. In this way, the modulus of the eigenvalues of the controlled system may become greater than 1, so the equilibrium point is no longer stable.
Conclusion
This paper mainly introduces the estimations and control of the Julia set of the SIS model under noise disturbance. It is very realistic and meaningful to present the SIS model in discrete form and discuss the perturbation of noise on its fractal characteristics. To observe the effect of noise on the overall shape of the Julia set of the SIS model, we present estimates for the Julia set of the SIS model perturbed by noise. In addition, two kinds of controllers are set up according to the feedback control method. The image results show that the controllers can effectively control the fractal characteristics of the model and increase the anti-interference of the model. Finally, the reasons why the feedback control method is effective for the model are explained. | 2022-11-07T16:14:23.287Z | 2022-11-05T00:00:00.000 | {
"year": 2022,
"sha1": "f072143690e50f32295c894519124a8983ac2a80",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11071-022-08048-4.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "4e4235fb5fd529ab42f5c1548f88589a2229cd75",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
262842199 | pes2o/s2orc | v3-fos-license | Increased LPS-Induced Fever and Sickness Behavior in Adult Male and Female Rats Perinatally Exposed to Morphine
As a result of the current opioid crisis, the rate of children born exposed to opioids has skyrocketed. Later in life, these children have an increased risk for hospitalization and infection, raising concerns about potential immunocompromise, as is common with chronic opioid use. Opioids can act directly on immune cells or indirectly via the central nervous system to decrease immune system activity, leading to increased susceptibility, morbidity, and mortality to infection. However, it is currently unknown how perinatal opioid exposure (POE) alters immune function. Using a clinically relevant and translatable model of POE, we have investigated how baseline immune function and the reaction to an immune stimulator, lipopolysaccharide, is influenced by in utero opioid exposure in adult male and female rats. We report here that POE potentiates the febrile and neuroinflammatory response to lipopolysaccharide, likely as a consequence of suppressed immune function at baseline (including reduced antibody production). This suggests that POE increases susceptibility to infection by manipulating immune system development, consistent with the clinical literature. Investigation of the mechanisms whereby POE increases susceptibility to pathogens is critical for the development of potential interventions for immunosuppressed children exposed to opioids in utero.
Introduction
The exponential increase in opioid use in the United States, particularly among women of reproductive age, has resulted in a surge of infants exposed to opioids in utero.Most of these infants will experience opioid withdrawal at birth (neonatal opioid withdrawal syndrome; NOWS), requiring an extended stay in the neonatal intensive care unit (Kocherlakota, 2014).
Gestation and the early postnatal period are critical periods of immune system development in humans and rodents (Georgountzou and Papadopoulos, 2017); however, little is known about the long-term consequences of perinatal opioid exposure on immune function.In adults, chronic opioid use is associated with suppression of the peripheral immune system, including decreased natural killer cell cytotoxicity (Beilin et al., 1996(Beilin et al., , 1992(Beilin et al., , 1989;;Fecho and Lysle, 1999;Nelson et al., 2000;Novick et al., 1989;Sacerdote et al., 1997;Yokota et al., 2004), reduced macrophage phagocytosis (Casellas et al., 1991;Lugo-Chinchilla et al., 2006;Tomassini et al., 2004;Tomei and Renaud, 1997), and altered proinflammatory cytokine production (Clark et al., 2007;Madera-Salcedo et al., 2011;Stoll-Keller et al., 1997;Wang et al., 2011).Clinical chart review suggests that infants exposed to opioids in utero are similarly at an increased risk of infection and rehospitalization (Arter et al., 2021;Uebel et al., 2015;Witt et al., 2017), suggesting parallel suppression of immune function.Chronic opioid-induced deficits in antibody production are of particular concern to opioid-exposed infants (Bussiere et al., 1992;Eisenstein et al., 1993;Taub et al., 1991), as this response is an essential component of the adaptive immune system and serves to form immunological memories of previous exposure.Indeed, deficits in antibody production increase susceptibility to infection, as the adaptive immune system is unable to recognize and eliminate pathogens (Barmettler et al., 2018).
The periaqueductal gray (PAG), a critical neural substrate in opioid signaling (Loyd et al., 2008), has been implicated in the central immunosuppressive effects of opioids (Gomez-Flores and Weber, 2000).Direct administration of morphine into the PAG suppresses a number of immune cell functions, including natural killer cell cytotoxic activity, lymphocyte proliferation, and cytokine production (Gomez-Flores et al., 1999;Weber and Pert, 1989).Furthermore, morphine has a stimulatory effect on PAG microglia to induce cytokine release in a Toll-like receptor 4 (TLR4)-dependent manner (Bokhari et al., 2009;Eidson and Murphy, 2013;Lee et al., 2018;Zhang et al., 2020Zhang et al., , 2011)).Morphine has also been shown to suppress peripheral immune cell activity in a TLR4-dependent manner (Zhang et al., 2020).Therefore, morphine is poised to impact the immune system at multiple levels, promoting neuroinflammation centrally, and immunosuppression and increased susceptibility to pathogens peripherally.Systemic administration of lipopolysaccharide (LPS), which mimics a Gram-negative bacterial infection, is one of the most common experimental immune models (Lasselin et al., 2020).LPS is a pathogen-associated molecular pattern (PAMP) that binds primarily to the innate immune receptor TLR4, leading to peripheral cytokine production (Zampronio et al., 2015).
These cytokines act primarily within the hypothalamic median preoptic area to induce prostaglandin E2 synthesis, promoting fever via increased brown adipose tissue metabolism and vasoconstriction (Hart, 1988;Machado et al., 2020;Saper and Breder, 1994;Zampronio et al., 2015).LPS-induced fever is accompanied by a characteristic set of behaviors associated with sickness, including anorexia, lethargy, and reduced grooming (Hart, 1988).Both fever and sickness serve to engage the immune system to restrict pathogen growth and reduce metabolic demand to facilitate fever (Hart, 1988); therefore, any alterations in the course of a typical fever and sickness response may increase the severity of infection.Counterintuitively, immunosuppression generally leads to elevated febrile response to LPS (Miñano et al., 2004;Tavares et al., 2006Tavares et al., , 2005) ) or infection (Oude Nijhuis et al., 2002) and is often the only sign of infection in immunosuppressed patients (Pizzo, 1999).Although the mechanism by which immunosuppression augments infection-induced fevers is currently unknown, it is thought that immunosuppression leads to an inability of the body to properly mount a cytokine response to infection, leading to an elevated, centrally-mediated fever response (Oude Nijhuis et al., 2002).
Limited preclinical evidence has associated perinatal opioid exposure with immune dysregulation after LPS exposure (Hamilton et al., 2007; Shavit et al., 1998). Previous studies examining the impact of perinatal opioid exposure on immune function utilized dosing paradigms that fail to mirror the clinical profile of women who use opioids while pregnant. The present studies were conducted to address this gap using a clinically relevant and translatable model of perinatal opioid exposure (POE). We hypothesize that perinatal exposure to opioids results in immune system dysregulation, both centrally and peripherally, leading to an increased immune response to LPS.
Methods
The dosing paradigm followed our previously described model of perinatal opioid exposure (Harder et al., 2023); see Figure 1 for a description of the dosing paradigm. Pumps were programmed to deliver morphine at 10 mg/kg three times a day. One week after morphine initiation, females were paired with sexually-experienced males for two weeks to induce pregnancy. Morphine exposure to the dams continued throughout gestation, with doses increasing weekly by 2 mg/kg until 16 mg/kg was reached. Dams continued to receive morphine after parturition, such that pups received morphine indirectly. Beginning at P5, morphine dosage was decreased by 2 mg/kg daily until P7, when the dose reached 0 mg/kg. Control rats were implanted with pumps filled with sterile saline. No differences were noted in maternal behavior of morphine vs. vehicle dams (Harder et al., 2023). Pups were weaned at P21 into treatment-matched cages, where they remained until adulthood.
Figure 1. Schematic of the perinatal opioid exposure dosing paradigm. Created with Biorender.
2.3 iButton implantation.At P75, male and female rats born to mothers exposed to morphine (MOR) or vehicle (VEH) were anesthetized with 5% isoflurane and maintained at 2-3%.A midline abdominal incision was made using sterile surgical techniques, and a wax-coated iButton temperature logger (Thermochron DS1922L) was placed into the abdominal cavity.All rats received carprofen (5 mg/mL/kg; i.p.) prior to and twenty-four hours post-surgery for pain relief.iButton loggers were programmed to record core body temperature in 10-minute intervals beginning twenty-four hours before and twenty-four post-LPS administration.
Fixed tissue was sectioned in a 1:6 series of 40-μm coronal sections with a Leica SM2010R microtome and stored in cryoprotectant at -20°C.Microglia were visualized using immunohistochemistry as previously described (Doyle and Murphy, 2017;Eidson and Murphy, 2013).Briefly, free-floating sections were rinsed thoroughly in potassium phosphate buffer solution (KPBS), incubated in 3% hydrogen peroxide at room temperature for 30 minutes, and then rinsed in KPBS.Sections were then incubated in 1:10,000 rabbit anti-Iba1 (Wako; 019-19741) diluted in KPBS with 1% Triton-X overnight at room temperature.Following rinses in KPBS, sections were incubated in 1:600 biotinylated donkey anti-rabbit (Jackson Immuno; 711-065-152) diluted in KPBS with 0.4% Triton-X for one hour at room temperature.Following KPBS rinses, sections were incubated in an Avidin/Biotin solution (PK-6100, Vector Labs) for one hour at room temperature, followed by KPBS and sodium acetate rinses.The sections were then incubated in a 3,3'-diaminobenzidine solution for 30 minutes, rinsed with sodium acetate and KPBS, and mounted onto slides.Slides were dehydrated using increasing concentrations of ethanol and cover-slipped.Microglial morphology was imaged in the ventrolateral PAG (level 4; 8.04 mm posterior to bregma; Figure 3A; blue box), given its central role in the regulation of opioid-induced immunosuppression.As an anatomical control, microglial morphology was also analyzed in the entorhinal cortex (8.04 mm posterior to bregma; Figure 3A; red box).Like the PAG, the entorhinal cortex has dense µ-opioid receptor expression, but does not have a defined role in immune signaling.One to three sections per rat were imaged unilaterally at 40x on the Keyence BZ-X700 using the Z-stacking feature (1 µm steps).Microglia were then reconstructed using Imaris 10.0.0.Images were first converted into Z-stack TIFF files in FIJI, then converted to .imsfiles and opened in Imaris for preprocessing (inversion and background subtraction).
Microglia morphology was then traced and analyzed using the filament creation wizard, followed by manual validation.A total of 1588 microglia were reconstructed and analyzed in the PAG (VEH M: 372 microglia from 6 rats; VEH F: 296 microglia from 4 rats; MOR M: 496 microglia from 8 rats; MOR F: 424 microglia from 7 rats), and a total of 266 microglia were reconstructed and analyzed in the entorhinal cortex (VEH M: 61 microglia, VEH F: 59 microglia, MOR M: 76 microglia, VEH F: 70 microglia).
3. Length: total length of all edges in the microglia (µm).
4. Segments: total number of segments in the microglia.
5. Edges: total number of connections between vertices.
6. Vertices: total number of points connecting edges.
See Figure 3B for an example of segments, edges, and vertices. 2.7 Gut permeability. Gut permeability was measured using oral administration of fluorescein-isothiocyanate-labeled dextran (FITC-dextran; molecular weight 4 kDa; Sigma-Aldrich 46944). Adult male and female rats (P60) were fasted for four hours (beginning at nine AM). Following the fast, rats were orally dosed with 600 mg/kg of a 125 mg/mL solution of FITC-dextran using a 16 G three-inch curved gavage needle with a 3 mm ball. Four hours post-FITC-dextran, blood was collected from the saphenous vein into EDTA tubes and centrifuged (4°C, 3000 g, 15 minutes) to separate plasma. Samples (100 µL) were transferred to a black-bottom 96-well plate, and relative fluorescent units (RFU) were read using a SpectraMax M2 plate reader (emission 530 nm, excitation 485 nm). Data were normalized within sexes to generate the fold change of MOR vs. VEH rats.
2.8 Measurement of bacterial contact via anti-LPS antibody levels.Anti-LPS antibody levels were determined in adult male and female rats (P60) using ELISA.Blood was collected from the saphenous vein into uncoated microcentrifuge tubes, allowed to clot for 30-60 minutes, then centrifuged (4°C, 3000g, 15 minutes) to generate serum.Plates were coated in-house using a 0.5% v/w LPS and carbonate-bicarbonate buffer solution (100 µL per well) and washed the following day using 0.05% goat serum and 0.01% TWEEN20 in PBS solution.The plate was then incubated at 37°C for one hour in the presence of 100 µL serum (per well; diluted 1:200).
Following a wash, 100 µL of 0.1% v/v HRP-conjugated anti-rat IgG antibody was added to each well, incubated at 37°C for one hour and then washed again.The reaction product was visualized by adding 100 µL of SureBlue TMB (SeraCare; 5120-0075) to each well.After a five minute incubation in the dark at room temperature, 100 µL of TMB stop solution (SeraCare; 5150-0020) was added.Optical density was read using a Bio-rad iMark microplate reader at 450 nm.
2.9 Measurement of antibody production.To investigate whether any potential differences in anti-LPS antibodies were generalized or specific, levels of the three major subtypes of antibodies (IgG, IgA, and IgM) were quantified in adult male and female rats (P60) using ELISA.
2.10 Experimental design and statistical analysis.Significant effects of sex, treatment, and time (where applicable) were assessed using two-or three-way mixed models or repeated measures mixed models; p<0.05 was considered significant.As repeated measures ANOVA cannot handle missing values, data were analyzed by fitting mixed models with Greenhouse-Geisser correction as implemented in GraphPad Prism 9.1.0(Motulsky, 2023).Tukey's or Sidak's post-hoc tests were conducted to determine significant mean differences between a priori specified groups.Due to the method of partitioning variance in linear mixed models, there is no universal method to calculate standardized effect sizes (e.g., η2 for ANOVA).Whenever possible, we report unstandardized effect sizes, which agrees with recommendations for effect size reporting (Pek and Flora, 2018), including the guidance of the American Psychological Association Task Force on Statistical Inference (Wilkinson, 1999).
As multiple microglia were reconstructed and analyzed from one rat, the assumption of independence was not met, and traditional statistical analyses could not be utilized.All unstandardized effect sizes are reported as differences between means (MOR-VEH) ± standard error of the mean.Female rats used to generate offspring were randomly assigned to the MOR or VEH condition.All experiments included both male and female offspring.No differences were observed between rats of different litters in the same drug exposure group (i.e., MOR and VEH); therefore, individual rats from the same litter across multiple litters served as a single cohort (4 VEH litters, 5 MOR litters).All analyses were completed blinded to the treatment group.
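Because microglia are nested within rats, the hierarchical bootstrap reported in the Results for the morphology data can be sketched as below: rats are resampled with replacement, then microglia within each resampled rat, and the group mean is recomputed on each iteration. The variable names, group sizes, and simulated values are hypothetical; the actual analysis pipeline is not described line by line in the text.

```python
import numpy as np

rng = np.random.default_rng(7)

def hierarchical_bootstrap_mean(per_rat_values, n_boot=2000):
    """Bootstrap the group mean, resampling rats first and then microglia within rats.

    per_rat_values: list of 1-D arrays, one array of microglia measurements per rat.
    """
    boot_means = np.empty(n_boot)
    n_rats = len(per_rat_values)
    for i in range(n_boot):
        rat_idx = rng.integers(0, n_rats, size=n_rats)           # resample rats
        resampled = [rng.choice(per_rat_values[j], size=len(per_rat_values[j]), replace=True)
                     for j in rat_idx]                            # resample cells within rats
        boot_means[i] = np.concatenate(resampled).mean()
    return boot_means

# Hypothetical microglial length data (um): two rats per group shown for brevity
veh = [rng.normal(180, 30, 60), rng.normal(175, 30, 55)]
mor = [rng.normal(165, 30, 70), rng.normal(160, 30, 65)]
diff = hierarchical_bootstrap_mean(mor) - hierarchical_bootstrap_mean(veh)
print("bootstrap mean difference (MOR - VEH):", diff.mean())
```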
Results
3.1 24-hour LPS-induced fever. We first investigated the impact of perinatal morphine exposure on the response to LPS, focusing initially on fever and sickness behavior. After the prototypical spike in temperature due to handling stress (Machado et al., 2020), systemic administration of LPS induced a febrile response in both MOR and VEH groups (Figure 4A). Body temperature began to rise in both treatment groups at hour 2 and continued to increase from hours 3-6, with MOR males displaying a slower rise in temperature. MOR males and females took longer to reach maximum fever and had a higher maximum fever, although both effects were significant only in females. Together, these data suggest that LPS-induced fever and sickness are increased in MOR male and female rats. We next examined potential mechanisms by which perinatal opioid exposure leads to an increased response to LPS.
Based on the fever arc for VEH and MOR rats, we chose the 8-hour timepoint for analyses of central and peripheral immune measures. Morphine promotes microglial reactivity and initiates cytokine release in a TLR4-dependent manner (Eidson et al., 2017; Wang et al., 2012); thus, we next examined if perinatal morphine alters the microglial response to LPS.
Microglia respond to LPS by transforming to a more "activated" and deramified morphology, reflected as smaller size and less complex structure.Thus, we predicted that microglia would show increased deramified morphology in MOR rats.Microglia morphology was reconstructed, and seven metrics were analyzed: length, area, and volume; segments, edges, and vertices; and Sholl intersections (see Figure 8 for representative microglia traces from all four groups).In all metrics, smaller measurements represent deramified and "activation-associated" morphology.We first analyzed microglia size (length, area, and volume) and observed a leftward population shift for all three measures in MOR rats independent of sex (Figure 9A-C).
Hierarchical bootstrapping followed by permutation testing identified a significant difference in mean microglial length of -17.04 µm (-9.6%) and -18.82 µm (-10%) for MOR male and female rats, respectively, in comparison to VEH rats. Similar results were observed for both area and volume: for area, mean differences of -49.64 µm² (-9.8%) and -25.4 µm² (-5.1%) were observed in MOR male and female rats; for volume, MOR male and female rats likewise showed negative mean differences relative to VEH rats.
We also analyzed additional measures of size and complexity, including the number of segments, edges, and vertices present per microglia (Figure 9D-F).Consistent with the results observed for size, all measures were reduced in MOR vs. VEH rats.Mean differences of -3.25 (-12.8%) and -4.8 segments (-18.5%) were observed in MOR male and female microglia.For edges, mean differences of -23.86 (-5.9%) and -19.02 (-4.5%) were noted for MOR vs. VEH rats, and for vertices mean differences of -23.86 (-5.9%) and -19.03 (-4.5%) were observed.Overall, MOR male and female rats again showed left-shifted populations of microglia size and complexity, consistent with increased activation.
Together, all seven metrics of activation-associated morphology in microglia were significantly lower for MOR male and female rats vs. VEH controls, suggesting that perinatal opioid exposure potentiated the microglial response to LPS.
To determine whether the increased microglial activation observed in the PAG of MOR rats was specific to the PAG, we next examined microglial morphology in the entorhinal cortex, a region with high µ-opioid receptor expression but no known role in immunity.Surprisingly, results for the entorhinal cortex were similar to what was observed in the PAG: in all seven metrics, MOR male and female rats display increased activation-associated microglial morphology (see Supplemental Table 1 for a summary of these results).This suggests that increased microglial activation in response to LPS may be a generalized response in brain regions with high µ-opioid receptor expression (PAG and entorhinal cortex) for male and female rats perinatally exposed to morphine.
Thus far, we have reported that perinatal opioid exposure potentiates the response to LPS, as indicated by increased fever and sickness, elevated levels of IL-1α and increased microglial activation.We next examined if these differences were related to basal differences in immune function, such that MOR rats are less able to launch an appropriate immune response following exposure to antigens or pathogens.We first investigated gut permeability using FITC-dextran dissemination into the bloodstream.Bacteria in the gut are a major source of immune system stimulation and training, and increased gut permeability would promote increased bacterial contact with the immune cells in the lamina propria, leading to differential immune system development (Kaczmarczyk et al., 2021).In addition, opioids are known to slow gut peristalsis, which is associated with increased gut permeability (Akbarali and Dewey, 2019).
3.5 Analysis of gut permeability using FITC-dextran.Due to differences in basal levels between experimental rounds, data were normalized to the mean of the VEH group to generate fold changes for each sex.Overall, no significant differences in gut permeability were noted (Figure 10A-B).However, MOR females had an average fold change of 1.47 relative to VEH females, representing a 47.1% increase in gut permeability [unpaired T test; t(16)=2.038,p=0.0585]; no differences were observed for males [fold change 0.79; unpaired T test; t(14)=1.505, p=0.1546].This suggests that exposure to morphine in utero leads to long-term increases in gut permeability for female rats, which may alter immune system development and impact the response to immune stimulators.
3.6 Measurement of bacterial contact via anti-LPS antibody levels. To confirm our observed, albeit non-significant, increases in gut permeability in MOR female rats, we next quantified levels of anti-LPS antibodies. Increased gut permeability would allow for increased bacterial dissemination into the lamina propria; as LPS is one of the major antigens utilized by the immune system to recognize bacteria, we predicted that levels of anti-LPS antibodies would be similarly elevated in MOR females. We observed no significant effect of sex, so males and females were combined to increase power. In contrast to our predicted increase, we observed a significant decrease in anti-LPS antibodies in MOR rats [t(34)=2.191, p=0.0354] (Figure 10C).
Anti-LPS antibody levels were, on average, 27.1% lower in MOR vs. VEH rats. As the increased gut permeability observed in MOR females is in conflict with the observed decrease in anti-LPS antibodies, we next measured levels of antibody classes IgG, IgA, and IgM to identify potential deficits in antibody production as an alternative explanation for decreased anti-LPS antibody levels. 3.7 Measurement of antibody production. We first analyzed IgG, the most common and abundant antibody subtype, and a primary mechanism to target microbes for phagocytosis. Our analysis identified a significant interaction of treatment and sex [F(1,25)=9.525, p=0.0049] (Figure 11A). Specifically, we noted a 56% (Klein and Flanagan, 2016).
We next analyzed IgA, which is involved in mucosal immunity and defense against oral or respiratory pathogens. There was no significant effect of sex, so males and females were collapsed to increase power. IgA levels were significantly higher in MOR rats [t(23)=2.288, p=0.0317] than VEH rats (VEH: 48669±12854 ng/mL; MOR: 153596±30551 ng/mL; 215% increase; Figure 11B).
Last, we analyzed IgM, a stimulator of the classical complement system. Two-way ANOVA identified a significant interaction of treatment and sex [F(1,28)=5.974, p=0.0211] (Figure 11C).
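The two-way (treatment x sex) analysis used for the IgG and IgM comparisons can be sketched as below with statsmodels; the data frame here is fabricated purely to show the model form and is not the study data.

```python
# Sketch of a treatment x sex two-way ANOVA for antibody levels.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "level":     [120, 110, 95, 45, 130, 105, 100, 50, 115, 118, 90, 48],
    "treatment": ["VEH", "VEH", "MOR", "MOR"] * 3,
    "sex":       ["M", "F", "M", "F"] * 3,
})

model = ols("level ~ C(treatment) * C(sex)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and the interaction term
```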
Together, these data suggest that perinatal opioid exposure results in generalized deficits in antibody production, observed both in anti-LPS antibodies and in IgG and IgM (females only) levels. Interestingly, a significant increase in IgA levels was noted for MOR rats, potentially related to alterations in enteric immunity and gut permeability.
Results from measurements of baseline immune functioning in MOR vs. VEH rats suggest that in utero exposure to morphine leads to long-term alterations in immune system activity, including increased gut permeability and decreased antibody production. These differences may explain the increased fever and sickness responses to LPS and altered cytokine levels and microglial activation post-LPS. See Table 2 for a summary of the results.
Discussion
Previous clinical studies indicate that infants exposed to opioids in utero are at a higher risk of infection and hospitalization (Arter et al., 2021; Uebel et al., 2015; Witt et al., 2017).
However, to date, the underlying mechanism by which perinatal opioid exposure leads to this increased risk is unknown. The present study was designed to address this gap, and, in particular, to characterize the physical and immunological response to LPS using a preclinical model of perinatal opioid exposure. We hypothesized that perinatal opioid exposure would compromise gut permeability and antibody production, consistent with the known immunosuppressive effects of chronic opioids in adult humans and rodents. Furthermore, we hypothesized that these effects would alter the response to an immune stimulator, LPS, observed as changes in fever, sickness, and markers (peripheral and central) of inflammation.
We first investigated the effects of perinatal opioid exposure on the response to an experimental model of Gram-negative bacterial infection, LPS. Our studies revealed that MOR rats responded with elevated fever and sickness following LPS administration vs. their VEH counterparts. Although increased fever and sickness may seem contradictory to the well-characterized immunosuppressive effects of opioids, fever is widespread among immunosuppressed patients and may even be the only symptom of infection (Pizzo, 1999).
Thus, while healthy individuals can regulate their immune system properly, immunosuppressed patients, particularly those who are neutropenic (i.e., with low levels of neutrophils), frequently show rapid rises in core body temperature and quickly proceed to sepsis. Indeed, previous studies investigating the response to LPS in rats made leukopenic (i.e., with a low number of circulating leukocytes) via chemotherapy have reported an increased fever response to LPS, along with alterations in cytokine production (Miñano et al., 2004; Tavares et al., 2005, 2006).
Interestingly, the time course and magnitude of fever in these studies are similar to what is observed in the present study for MOR rats.This, along with our observed deficits in antibody production, suggests that perinatal opioid exposure may produce neutropenia and/or leukopenia.
In addition to the febrile response, the majority of rats that received LPS also displayed physical characteristics of sickness. Specifically, male rats and MOR female rats looked sick, with ears flattened back, eyes tightened, nose flattened, and piloerect fur. Surprisingly, female VEH rats displayed very few physical attributes of sickness, despite a robust fever response. No difference in sickness score was noted for MOR vs. VEH males, perhaps due to a potential ceiling effect, given the high sickness scores observed in VEH males. Male rodents generally exhibit more severe sickness behavior following LPS, including anorexia (Kuo, 2016; Pitychoutis et al., 2009), huddling, piloerection, ptosis, lethargy (Cai et al., 2016), and reduced locomotion (Yee and Prendergast, 2010); however, see (Pitychoutis et al., 2009). For the present studies, we chose to use a low dose of LPS that would enable us to observe either an increase or decrease in sickness score. Use of even lower doses of LPS may elucidate whether our observed sex difference in VEH rats is due to increased sensitivity to the sickness-promoting effects of LPS in males.
Peripheral cytokine levels were also altered in response to LPS treatment. Elevation of IL-1α levels in MOR male and female rats at eight hours post-LPS likely contributes to the potentiated fever response seen in MOR rats, as IL-1α binds to IL-1R in the anterior and paraventricular nucleus of the hypothalamus and initiates a downstream cascade ending in synthesis of the pyrogen prostaglandin E2 (Cartmell et al., 1999), which increases body temperature via brown adipose tissue metabolism and vasoconstriction. Previous studies have indicated that acute morphine, given in conjunction with LPS, led to an increase in IL-1α levels in the brain (Roy et al., 1999), potentially through synergy with TLR4 (Eidson et al., 2017). Other models of perinatal opioid exposure have reported increases in adult levels of TLR4 and MyD88 (Jantzie et al., 2019; Smith et al., 2022). As IL-1α is one of the primary cytokines released after TLR4 stimulation via NF-κB signaling, this provides a potential mechanism by which perinatal morphine exposure increases IL-1α levels. We also noted a high degree of variability in IL-1α levels; surprisingly, this was not related to maximum temperature induced by LPS, but rather may be associated with the degree of immunosuppression in MOR rats, including elevations of TLR4 and/or MyD88.
The current study also identified increased microglial activation in the ventrolateral periaqueductal gray in male and female MOR rats following LPS treatment, suggesting that POE leads to long-term changes in microglial reactivity. While opioid exposure typically promotes peripheral immunosuppression, previous studies have shown that microglia are generally activated by morphine in a TLR4-dependent manner (Doyle et al., 2017). Increased microglial activation likely contributed to the increased fever and sickness behavior following LPS, as activated microglia release cytokines that act in the hypothalamus to promote fever and sickness behavior. In the present study, only peripheral cytokines were assessed. Here, we observed a significant increase in IL-1α; however, plasma cytokine levels are not always representative of local brain region concentrations, so future studies should investigate local cytokine levels in the hypothalamus and PAG. We also analyzed microglial phenotype in the entorhinal cortex to determine if the observed increase in microglial activation was specific to the PAG or a more widespread phenomenon. Surprisingly, microglia in the entorhinal cortex of both male and female rats showed increased activation-associated morphology. The entorhinal cortex is primarily associated with memory formation and learning (Maass et al., 2015) and to date has not been implicated in immune signaling. As both the PAG and entorhinal cortex have high levels of µ-opioid receptor expression, this suggests that perinatal morphine may act on µ-opioid receptors in multiple regions of the CNS to decrease the threshold for microglial reactivity, potentially through morphine's action as a developmental stressor (Carloni et al., 2021). Future studies should investigate microglial reactivity in additional brain regions, both with and without µ-opioid receptor expression.
Given the observed changes in LPS-induced fever and sickness in MOR rats, we next investigated if these changes were due to alterations in basal peripheral immune function or specifically a result of immune stimulation. We hypothesized that morphine exposure would promote gut permeability and increase the level of anti-LPS antibodies due to increased bacterial dissemination into circulation, and that these together would potentiate the response to LPS. Here, we report that MOR rats displayed increased gut permeability, but surprisingly produced fewer anti-LPS antibodies. The reduced level of antibodies was not specific to anti-LPS, as MOR rats had lower levels of the antibody subtypes IgG and IgM (females only), suggesting an overall deficit in antibody production that would predispose these rats to infection. Interestingly, we also observed significantly increased IgA antibody levels in MOR male and female rats. This may be related to morphine's effects on the gut, including gut permeability, as IgA is primarily involved in mucosal immunity and response to oral pathogens.
Together, our observed deficits in IgG and IgM are consistent with clinical data reporting increased hospitalization rates for infection in children born with in utero opioid exposure (Arter et al., 2021; Uebel et al., 2015; Witt et al., 2017). The cause of antibody production deficits, including a decrease in the number of antigen-presenting cells and/or B lymphocytes involved in antibody production or the ability of these cells to produce antibodies in response to antigen stimulation, warrants further investigation. As increased gut permeability is also associated with changes in gut microbiota composition, the impact of perinatal morphine exposure on gut microbiota composition and its relationship to other immune parameters should also be examined.
Overall, our data suggest that rats perinatally exposed to morphine have an immunosuppressed phenotype (specifically increased gut permeability and deficits in antibody production) that may increase the susceptibility of the immune system to a pathogen or immune challenge, consistent with our finding that MOR rats show a potentiated fever and sickness response to LPS (see Figure 12 for a summary of the results). These results provide further evidence that exposure to opioids in utero leads to long-term immune deficits. As the number of infants born to mothers using opioids during pregnancy continues to rise, determining the underlying mechanism whereby these infants are more vulnerable to potential pathogen exposure is critical.
as a consequence of suppressed immune function at baseline (including reduced antibody production). This suggests that POE increases susceptibility to infection by manipulating immune system development, consistent with the clinical literature. Investigation of the mechanisms whereby POE increases susceptibility to pathogens is critical for the development of potential interventions for immunosuppressed children exposed to opioids in utero.
Figure 3 also shows an example of two microglia: one with deramified morphology (C) and one with ramified morphology (D), along with values of all seven metrics collected for each microglia (E). Deramified morphology is associated with smaller values, while ramified morphology leads to greater values on all seven metrics. Deramification correlates with cytokine production, suggesting that deramified microglia are in a functionally "active" state (Althammer et al., 2020). Sholl intersections were analyzed using area under the curve (AUC) such that smaller AUC values are representative of fewer Sholl intersections.
Figure 3 .
Figure 3. Location of vlPAG (A, red box) and entorhinal cortex (A, blue box) for analysis of microglial morphology. dlPAG = dorsolateral periaqueductal gray, DR = dorsal raphe, Ent = entorhinal cortex, FMJ = forceps major, IC = inferior colliculus, LFP = longitudinal fasciculus of the pons, lPAG = lateral periaqueductal gray, MnR = median raphe, V1 = primary visual cortex, vlPAG = ventrolateral periaqueductal gray. B. Example of segments, edges, and vertices. The blue circle represents the soma, orange lines represent edges, green circles represent vertices.
period of time in comparison to VEH females (Figures 4B-C). We next analyzed two components of the fever response: time to maximum change in temperature and maximum change in temperature. For time to maximum change in temperature, although we observed no significant effect of treatment [Txt, F(1,37)=3.508, p=0.0690], MOR females took longer to reach maximum fever (Figure 4D). Specifically, MOR female rats took 0.95 hours longer (VEH F vs MOR F, p=0.1811), shifting from an average time of 6.5 hours to 7.45 hours. No differences were noted for males (VEH M vs MOR M, p=0.9722). A significant main effect of treatment was observed in maximum change in temperature [Txt, F(1,35)=6.537, p=0.0151] (Figure 4E); this effect was also driven by females (VEH M vs. MOR M, p=0.9604; VEH F vs. MOR F, p=0.0318). The maximum temperature change for MOR F was 1.84°C vs. 1.46°C for VEH F, a 0.38°C difference, equivalent to a 26.2% greater temperature change.
Figure 4 .
Figure 4. Perinatal morphine exposure leads to increased fever response to LPS. A. Fever arc from hours 0-16 post-LPS. B. MOR males showed a lower rise in body temperature from hours 3-6; temperature remained elevated vs. VEH males from hours 7-9. C. MOR females have elevated temperatures from hours 7-12. D. MOR males and females have delayed time to reach maximum fever. E. MOR males and females have increased maximum fever magnitude. N VEH M = 12, N VEH F = 7, N MOR M = 12, N MOR F = 10. * = significant at p<0.05.
Figure 5 .
Figure 5. Perinatal morphine exposure leads to elevated sickness behavior in female rats. A. Sickness arc from hours 0-16. B. No difference was observed in males. C. MOR females have elevated sickness behavior. D. MOR females have elevated maximum sickness behavior. E. Maximum temperature and maximum sickness scores positively correlate. N VEH M = 9, N VEH F = 7, N MOR M = 12, N MOR F = 10. * = significant at p<0.05.
Figure 6 .
Figure 6. Perinatal morphine exposure leads to increased fever and sickness scores 8 hours post-LPS. A. MOR males and females have elevated body temperature at 8 hours post-LPS. B. MOR females have elevated sickness behavior at 8 hours post-LPS. N VEH M = 9-12, N VEH F = 7, N MOR M = 12, N MOR F = 10. * = significant at p<0.05.
Figure 8 .
Figure 8. Representative traces of microglia from the PAG of VEH M, VEH F, MOR M, and MOR F.
We next analyzed Sholl intersections, a classic metric of microglial activation. Comparison of the number of intersections per concentric circle (1 µm apart) showed lower intersections across the entire range for MOR male and female microglia, consistent with deramification (Figure 9G). To analyze Sholl intersections via hierarchical bootstrapping, we utilized the area under the curve (AUC) of individual microglia. Overall, MOR male and female microglia had lower Sholl AUC values (Figure 9H), representing fewer intersections and suggesting increased activation.
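Summarizing a Sholl profile as a single AUC value, as done here per microglia, can be sketched as below; the radii and intersection counts are fabricated for illustration.

```python
# Sketch of summarizing a Sholl profile by its area under the curve (AUC).
import numpy as np

radii = np.arange(1, 31)                              # concentric circles 1 µm apart
intersections = np.exp(-(radii - 8) ** 2 / 40) * 12   # fake Sholl profile

auc = np.trapz(intersections, radii)                  # smaller AUC -> fewer intersections
print(f"Sholl AUC = {auc:.1f}")
```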
Figure 9 .
Figure 9. Microglia activation 8 hours post-LPS. A-C. Length, area, and volume are all decreased in MOR male and female rats. D-F. The number of segments, edges, and vertices is decreased in MOR male and female rats. G. MOR male and female rats have fewer intersections across the entire Sholl radius. H. Sholl AUC is decreased in MOR male and female rats. N VEH M = 372 microglia, N VEH F = 296 microglia, N MOR M = 496 microglia, N MOR F = 424 microglia.
Figure 10 .
Figure 10. MOR female rats have increased gut permeability; however, both male and female MOR rats have decreased anti-LPS antibody levels. A. No differences in gut permeability were found in males. B. MOR females have elevated gut permeability. C. MOR males and females have decreased anti-LPS antibody levels, despite the elevated gut permeability seen in females. N VEH M = 4-5, N VEH F = 5-6, N MOR M = 8-12, N MOR F = 10-12. * = significant at p<0.05.
Figure 11 .
Figure 11. Perinatal opioid exposure alters antibody levels. A. MOR females have decreased levels of IgG. B. MOR male and female rats have elevated IgA antibody levels. C. MOR female
2.1 Experimental subjects. All experiments utilized male and female Sprague Dawley rats (Charles River Laboratories, Boston, MA). Rats were housed in same-sex pairs or groups of three on a 12:12 hours light/dark cycle (lights on at 8:00 AM) in Optirat GenII individually ventilated cages (Animal Care Systems, Centennial, Colorado, USA) with corncob bedding. Food (Lab Diet 5001 or Lab Diet 5015 for breeding pairs, St. Louis, MO, USA) and water were provided ad libitum throughout the experiment, except during testing. These studies were approved by the Institutional Animal Care and Use Committee at Georgia State University and performed in compliance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals. Every effort was made to reduce the number of rats used and minimize pain and suffering.
2.2 Perinatal opioid exposure paradigm. Female Sprague Dawley rats (P60) were implanted with iPrecio® SMP-200 microinfusion minipumps under isoflurane anesthesia
Table 2 .
Summary of results. | 2023-09-27T13:11:20.005Z | 2023-09-22T00:00:00.000 | {
"year": 2023,
"sha1": "3fa6b0f7262edc0944ebcbe009352bfa5540a38f",
"oa_license": "CCBYNCND",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2023/09/22/2023.09.20.558690.full.pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "3e10d01ca5b8552d816ceaac20258a97c7c4e955",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
235726226 | pes2o/s2orc | v3-fos-license | The assessment of local site effects and dynamic behaviour in Nicosia, Cyprus
Single-station microtremor measurements were conducted to investigate earthquake and soil behaviour for the first time in Nicosia, Cyprus. Cyprus is located in a tectonically complex area in the Eastern Mediterranean where three plates meet. The study area was chosen to cover the areas to be opened for new development. Nicosia, the capital of Cyprus, is also the island's most important cultural, industrial, commercial, and transportation centre. The study creates base maps of the soil to assess the earthquake resistance that is crucial for construction. The microtremor method was applied at 100 stations and the Multi-Channel Analysis of Surface Waves (MASW) method was used at 52 stations. Also, Refraction Microtremor (Re-Mi) and L-Shaped Spatial Autocorrelation (L-SPAC) methods were carried out at 17 stations to substantiate the research. The results of the microtremor method indicate that the predominant soil period values have an average of 1 second, and predominant peak period values are generally found between 0.1 and 5 s in the study area. Peak amplitude values are observed between 1 and 2.4. The Vulnerability Index Parameter (Kg) exceeded 20 at the central and southern stations, and Kg values range between 7 and 54 units. The Kg values were found to be higher than 20 in soils where the shear wave velocity is lower than 760 m/s; at the same time, the predominant peak period values were greater than 1 second. Cyprus is located in the Alpine-Himalayan earthquake zone. The Cyprus Arc is known as the main seismic source of the island; it constitutes the tectonic boundary between the African and Eurasian lithospheric plates in the region. During an earthquake in Nicosia, seismic waves will be amplified by an average of 1.5 times and soil deformation will occur because elastic limits are exceeded. The results provided important insight into soil behaviour and indicated its reactions in a potential earthquake.
Introduction
The common behaviour that soil presents during a massive earthquake depends on the soil layers and their properties. It is known that earthquakes have an impact on the soil surface. The Gmax value, which controls the dynamic behaviour of the soil, shows sudden changes even at very small scales in the horizontal and vertical directions. These changes are defined by the seismic S-wave velocity values obtained at a small scale (Ansal et al., 2015). Therefore, when constructing earthquake-resistant engineering structures, soil properties should be investigated in detail. In such studies, soil-bedrock models are prepared in one (1D) and two (2D) dimensions by using in situ methods (Nortey et al., 2018). Nowadays, MASW, Seismic Refraction, Up Hole, Down Hole, Cross Hole Seismic Studies, Single Station Microtremor, Refraction Microtremor, and Array Microtremor methods are widely used as in situ methods for geophysical site investigations (Olona-Allué et al., 2008; Margaryan and Yokoi, 2008; Yalçınkaya, 2010; Claprood et al., 2011; Akgün et al., 2013a; Akgün et al., 2013b; Özdağ et al., 2015; Mohamed et al., 2016; Pamuk et al., 2017). Nicosia, which constitutes the study area, lies in a second-degree earthquake risk zone of the island of Cyprus. The Nicosia region has suffered great damage as a result of historical earthquakes. Nicosia and its surroundings contain varied geological units. For this reason, this research aimed to thoroughly investigate the earthquake-soil common behaviour parameters for the Nicosia district. Clay minerals were found to be abundant in the soils of the study area. Because of clay mineral alteration and swelling behaviour driven by changes in water content, previous studies have demonstrated that these soils show very sudden changes in the horizontal and vertical directions (Petrides et al., 2004; Atalar and Das, 2009). Previous studies, such as the United Nations Development Programme (UNDP) project conducted between 2000 and 2004, provide valuable borehole data and reveal that the marl gives, in general, a shear-wave velocity in the range of 350-825 m/s (DeCoster et al., 2004). Vs values of the Nicosia marl were calculated with a mean of 403 m/s. Mia Milea (Haspolat) village, located in the northeast of Nicosia, was a site of rather interesting shear-wave seismic records, resulting presumably from a high water table prevailing in that area (Eleftheriou et al., 2004). As a result of drilling works in Northern Cyprus, a low-permeability marl (hard clay) layer is encountered at a depth of 10-20 m. In most of the previous studies, it was observed that the groundwater level was close to the surface above this layer. The shrinkage and compressibility of marine-deposited clays may affect liquefaction in the presence of groundwater in the marl layer (Ekinci et al., 2019).
Cyprus is located in the Alpine-Himalayan earthquake zone, where approximately 15% of the world's earthquakes occur. The Cyprus Arc, which is thought to cause the earthquakes in Cyprus, constitutes the tectonic boundary between the African and Eurasian lithospheric plates in the region where the island is located. This arc is located offshore, to the south of Cyprus, and the majority of earthquakes are known to occur on this arc as a result of tectonic movements along it (Woodside, 1977; Jackson and McKenzie, 1988; Geiss, 1992; Robertson, 1994; Robertson, 2000; Harrison et al., 2004; Welford et al., 2015). According to historical earthquake information for the period between 26 BC and 1900 AD, there were 8 destructive earthquakes with an intensity of 8 on the Mercalli Intensity Scale (Ambraseys, 1965).
More than 400 earthquakes with an intensity greater than 5 on the Mercalli Intensity Scale have occurred around Cyprus in the last century, and these earthquakes were felt in neighboring countries as well as in certain regions of the island (Ambraseys, 1992; Ambraseys and Adams, 1993; Algermissen and Rogers, 2004). Moreover, 14 of these earthquakes caused damage and loss of life. The most destructive earthquakes were recorded in 1918, 1941, 1953, 1995, and 1996 (Ambraseys and Adams, 1993; Papadimitriou and Karakostas, 2006). Evaluations of earthquakes that have occurred in historical and contemporary periods indicate that the occurrence of earthquakes and their distribution over time are not regular: episodes of frequent, recurrent earthquakes are followed by quieter periods of seismic activity. The most active region of Cyprus in terms of earthquakes is the coastline of Paphos-Limassol-Larnaca-Famagusta covering the southern part of Cyprus (Ambraseys, 1965; Galanopoulos and Delibasis, 1965; Papazachos, 1973; Ambraseys, 1992; Ambraseys and Adams, 1993; Papazachos and Papaioannou, 1999; Papadimitriou and Karakostas, 2006; Çağnan and Tanırcan, 2009; Palamakumbura and Robertson, 2016).
In the Nicosia study area, the soil properties are likely to change even over small distances, as the area is under the combined depositional influence of many streams. These changes were found to extend to depths of 10-15 meters in the vertical direction at the sides of the stream beds. For these reasons, this study aimed to investigate parameters such as Vs velocity, dominant period, amplification, and vulnerability index via in situ methods.
To define the general soil characteristics of the study area, in the first stage single-station microtremor measurements were carried out at 100 points. The Quasi Transfer Spectrum (QTS(f)) was calculated for each measurement point. Peak period, peak amplitude, and vulnerability index contour maps were prepared from these spectra.
In the second stage, the changes in the peak period maps were examined and the Multi-Channel Analysis of Surface Waves (MASW) method was applied at 52 points. As a result of the MASW studies, 1D and 2D Vs velocity-depth profiles and a Vs30 map were obtained for the site, and soil classification maps were prepared according to the National Earthquake Hazards Reduction Programme (NEHRP) regulations (BSSC, 2004). In the last stage, Refraction-Microtremor (Re-Mi) and L-Shaped Spatial Autocorrelation (L-SPAC) methods were carried out at 17 selected points, depending on the changes in the Vs30 velocity maps, to increase the investigation depth. Through the joint evaluation of all results, the Kg vulnerability index coefficients, used for predicting the soil deformation that may occur under earthquake loading, were mapped.
Finally, the interpretation for the study area was made based on the shear-wave velocity values (Vs ≥ 760 m/s) of the engineering bedrock. According to the conservation of energy, changes in the amplitude-frequency spectrum of the earthquake motion occur above the engineering bedrock-soil interface as long as the behaviour remains elastic (Yalçınkaya, 2004; Kanli et al., 2006; Nath, 2007; Akgün et al., 2013a; Özdağ et al., 2015).
The study area was selected in the northern part of Nicosia. All measurements were made along 6 lines formed in this region. The geophysical applications conducted on these lines were microtremor measurements at 100 stations at 1 km intervals and MASW measurements at 52 stations at 2 km intervals, extending from Gonyeli in the west to Haspolat in the east and from Dikmen in the north to downtown Nicosia in the south (Fig. 1).
Geology of the study area
The geological zones of Cyprus suggested by the Geological Survey Department of Cyprus are simply four: Kyrenia, Troodos, Mamonia, and the Circum Troodos Sedimentary Succession (GSD, 2002). Another suggested division of the Cyprus geological zones is centred around six areas according to the geological evolution and emplacement of its geological units: the Kyrenia, Mamonia (Mamonia Complex), South Cyprus, Troodos (Troodos Ophiolite), and Mesaoria zones, and the Alluviums (Fig. 2) (Atalar, 2005, 2006). In the greater Nicosia area, the oldest unit is the Troodos Ophiolite zone, which contains mostly pillow lavas and plutonic rocks. The following litho-stratigraphic unit is the Kythrea Formation of sandstone, siltstone, and claystone, which is equivalent to the Pakhna Formation of south Cyprus. In the upper sequences, the Kalavasos (Mermertepe) formation contains gypsum, and the Nicosia formation mainly contains marls. Lastly, the Athalassa formation, which is equivalent to the Gürpınar formation in the north and consists mainly of calcarenite and sandstones, underlies the alluviums at the surface.
Applications
Four different methods were applied in the study area to estimate local site effects in Nicosia. The most important objective is to obtain the S-wave velocity as a function of depth. The dispersion curves obtained provide the variations in layer thickness and S-wave velocity (Roberts and Asten, 2004). The methods, their applications, and the data analysis are explained below.
Microtremor measurements
Microtremor measurements were performed while minimizing artificial and natural noise and in the absence of strong weather conditions (wind, rain, etc.). Additionally, measurements were made at less than 250 meters from the stations determined on the map. These measurements were generally carried out in soft soil or rock environments, and the microtremor device was covered with a bucket during the measurements.
Microtremor measurements were performed at 100 stations. Measurements were usually taken between 21:00 and 04:00 to minimize traffic noise. A Guralp CMG 6TD 3-component velocity meter was used. Data were recorded for 20-30 minutes and a 100 Hz sampling rate was used. During the recording process, the data quality was continuously observed via computer; the noise content was taken into consideration and the recording time was extended (45-60 min) where the noise was high. Geopsy (SESAME, 2004) software was used to evaluate the microtremor data. In the data processing stage, the linear trend was removed and bandpass filtering was applied in the range of 0.05-20 Hz. The records were divided into 81.92-second-wide windows; windows were selected and a 5% cosine taper was applied. For each window, the Fast Fourier Transform (FFT) was applied to obtain the amplitude spectra of each component.
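The preprocessing chain described above (detrending, 0.05-20 Hz band-pass filtering, 81.92 s windows with a 5% cosine taper, and per-window FFT amplitude spectra) can be sketched in Python as below. This is an assumed, illustrative workflow, not the Geopsy implementation, and the interpretation of the 5% taper fraction is an assumption.

```python
# Illustrative sketch of single-station microtremor preprocessing.
import numpy as np
from scipy import signal

fs = 100.0                       # 100 Hz sampling rate
win_len = int(81.92 * fs)        # 8192 samples per window

def amplitude_spectra(trace, fs=fs, win_len=win_len):
    trace = signal.detrend(trace)                          # remove linear trend
    sos = signal.butter(4, [0.05, 20.0], btype="bandpass", fs=fs, output="sos")
    trace = signal.sosfiltfilt(sos, trace)
    taper = signal.windows.tukey(win_len, alpha=0.05)      # 5% cosine taper (assumed)
    n_win = len(trace) // win_len
    spectra = []
    for i in range(n_win):
        seg = trace[i * win_len:(i + 1) * win_len] * taper
        spectra.append(np.abs(np.fft.rfft(seg)))           # amplitude spectrum per window
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    return freqs, np.array(spectra)
```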
The Teves-Costa approach can provide preliminary information about the thickness of the soil (Teves-Costa et al., 1996). In this approach, the predominant period is expressed as

T0 = 4H / Vs,   (1)

where, in Eq. (1), H represents the soil layer thickness and Vs is the shear wave velocity. The spectral ratio is computed as

H/V = sqrt((NS² + EW²) / 2) / Z,   (2)

where, in Eq. (2), H/V is the horizontal over vertical spectrum ratio, NS is the N-S component's amplitude spectrum, EW is the E-W component's amplitude spectrum, and Z is the vertical component's amplitude spectrum.
The H/V technique, originally proposed by Nogoshi and Igarashi (1971) and made widespread by Nakamura (1989), consists of estimating the ratio between the Fourier amplitude spectra of the horizontal (H) and vertical (Z) components of the ambient noise vibrations recorded at a single station. Nakamura (1997) examined the relationship between structural damage and the Kg (vulnerability index) value after an earthquake and determined that the damage rate increased in areas where the Kg value was greater than 20 units. Kg is given in Eq. (3) as

Kg = A0² / F0,   (3)

where A0 is the maximum amplitude of the H/V spectrum and F0 is the frequency corresponding to A0 (also called the predominant soil vibration frequency). The computation of the QTS follows six main steps: first, recording three-component data; second, selecting time windows (avoiding noise); third, estimating the Fourier amplitude spectra for each time window; fourth, calculating the quadratic mean of the two horizontal components; fifth, obtaining the H/V ratio for each window; and sixth, obtaining the QTS.
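A compact sketch of the H/V and Kg computation described here is given below, using the quadratic mean of the two horizontal spectra as in the QTS steps above and Kg = A0²/F0 following Nakamura (1997). The spectra in the usage example are fabricated.

```python
# Sketch of forming the H/V ratio from component amplitude spectra and
# deriving A0, f0, T0 and the vulnerability index Kg.
import numpy as np

def hv_ratio(ns, ew, z):
    horizontal = np.sqrt((ns ** 2 + ew ** 2) / 2.0)   # quadratic mean of horizontals
    return horizontal / z

def kg_index(freqs, hv):
    i0 = np.argmax(hv)            # peak of the H/V spectrum
    a0, f0 = hv[i0], freqs[i0]
    return a0, f0, a0 ** 2 / f0   # Nakamura's vulnerability index Kg

# usage sketch with fabricated spectra
freqs = np.linspace(0.1, 20, 500)
z = np.ones_like(freqs)
ns = 1 + 6.5 * np.exp(-(freqs - 7.5) ** 2)
ew = 1 + 6.0 * np.exp(-(freqs - 7.5) ** 2)
a0, f0, kg = kg_index(freqs, hv_ratio(ns, ew, z))
print(f"A0 = {a0:.2f}, f0 = {f0:.2f} Hz, T0 = {1/f0:.2f} s, Kg = {kg:.1f}")
```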
The interpretation of the H/V spectral ratio is intimately related to the composition of the seismic wavefield responsible for the ambient vibrations, which in turn depends both on the sources of these vibrations and on the underground structure. It is also related to the effects of the different kinds of seismic waves on the H/V ratio. As an example, the QTS(f) spectrum, which provides the H/V ratio for each window, is shown for the microtremor recording at station E3, with a calculated predominant amplitude of A0 = 7.75 and a predominant frequency of f0 = 7.5 Hz, corresponding to a predominant soil period of T0 = 0.13 s (Fig. 3).
The H/V spectral ratio method is an experimental technique to evaluate some characteristics of soft-sedimentary (the soil) deposits. This technique is the most effective in estimating the natural frequency of soft soil sites when there is a large impedance contrast with the underlying bedrock. The method is especially recommended in areas of low and moderate seismicity, due to the lack of significant earthquake recordings, as compared to high seismicity areas (SESAME, 2004).
The concept of the soil transfer function is used to define the earthquake force that creates dynamic load in the lateral direction in soils. The soil transfer function can be obtained theoretically or practically; in practice, the Nakamura (1989) method is used in this study. Finally, depending on the stress-strain relationship, the soil deformation regime is defined (elastic, elastoplastic, or plastic).
The soil transfer function can be calculated separately in three different ways, using the S-wave velocity, the P-wave velocity, or, most commonly, a theoretical formulation (Özdağ et al., 2015). Theoretical soil transfer functions are calculated in the frequency domain by using the viscoelasticity of the soil layers. During the calculation of the theoretical transfer spectrum, the P and S velocity values, thicknesses, densities, and damping factors down to the bedrock, obtained from in situ studies, are used as input parameters. Using these assumptions, an observational soil transfer function is obtained. The soil transfer function gives information about how the earthquake waves passing from the bedrock to the soil are changed (Akgün et al., 2013a).
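As a simplified illustration of the viscoelastic calculation described above, the sketch below evaluates the textbook 1D transfer function of a single damped soil layer over elastic bedrock; the layer properties are arbitrary assumptions and this is not the authors' computation.

```python
# Simplified sketch: amplitude of the 1D transfer function for one damped
# soil layer (thickness H, shear velocity Vs, density rho, damping xi)
# over elastic bedrock, using a complex (damped) shear velocity.
import numpy as np

def layer_transfer(freqs, H, vs_soil, rho_soil, vs_rock, rho_rock, xi=0.05):
    vs_star = vs_soil * (1 + 1j * xi)                      # complex, damped velocity
    k = 2 * np.pi * freqs * H / vs_star
    alpha = (rho_soil * vs_soil) / (rho_rock * vs_rock)    # impedance ratio
    return np.abs(1.0 / (np.cos(k) + 1j * alpha * np.sin(k)))

freqs = np.linspace(0.1, 20, 400)
amp = layer_transfer(freqs, H=30.0, vs_soil=300.0, rho_soil=1900.0,
                     vs_rock=800.0, rho_rock=2200.0)
print(f"peak amplification {amp.max():.1f} near {freqs[np.argmax(amp)]:.2f} Hz")
```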
The Quasi Transfer Spectrum (QTS), which reflects the seismic impedance differences between the layers and defines the effects on the amplification and frequency content of the seismic waves, is obtained by using the microtremor data (Dindar et al., 2015).
Seismic waves spend a significant part of their travel from the source to the surface in the hard bedrock that forms the earth's crust. The last stage of their travel takes place in the so-called loosely bonded surface layers, whose properties differ greatly from the bedrock, and the physical properties of these ground layers largely determine the characteristics of the vibration observed at the surface (Yalçınkaya, 2010).
In this context, it is assumed that the S wave velocity is less than 760 m/s in the surface layer called soil, and the places where the S wave velocity is greater than 760 m/s are called engineering bedrock, and the places where the S wave velocity is greater than 3,000 m/s are called seismic bedrock (Nath, 2007).
In the so-called seismic bedrock, it is assumed that there are no physical changes in the lateral direction from the depth level of the layer in question and that there is a more homogeneous structure compared to the upper depth levels and the S wave velocity is greater than 3,000 m/s in this layer.
Multichannel analysis of surface waves measurements
The method aims to reveal the S-wave velocity structure in 1D and 2D down to the required depth (Apostolidis et al., 2004; Roberts and Asten, 2004). Multichannel Analysis of Surface Waves (MASW) is a method in which various artificial sources (sledgehammer, weight drop, etc.) are used. Vs30 is widely used in velocity calculations, particularly because it is not affected by low-velocity zones (Park et al., 1999). MASW measurements were performed at 52 stations in the field studies. The MASW applications were carried out at average intervals of 2 kilometers, and measurements were performed using a sledgehammer with a profile length of 120 m and shot offsets of 5, 10, and 15 m. Hard, flat ground was chosen at the shot points to avoid repeated reflections. Additionally, care was taken to work at times without traffic noise while shooting. Measurements were performed on asphalt, flat ground, or rock environments with flat, slightly inclined surfaces. In the study area, a total of 6 linear lines were formed, 19 km long in the west-east direction and 5 km wide in the north-south direction, covering a total area of 95 km². A DoReMi device was used with 24 receivers (4.5 Hz vertical P geophones). The geophones were located at 2-5 meter intervals. The recording length was set to 2 seconds (2,000 ms) and the sampling interval to 0.125 milliseconds.
The first step of data processing in most surface-wave methods, called dispersion analysis, is estimating one or more dispersion curves; generally, the fundamental-mode (M0) curve is estimated. Theoretical M0 curves are then calculated for different earth models by using a forward-modeling scheme. Then, after inversion of the initial model against the calculated model, the final Vs-depth model was obtained for each station. The Vs30 values were plotted on the map and the soil classification was made according to the NEHRP directive (Fig. 7).
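Once a layered Vs model is available from inversion, the Vs30 value and the corresponding NEHRP site class can be computed as sketched below (time-averaged velocity over the top 30 m, with commonly used NEHRP/BSSC class boundaries); the example layer model is fabricated but lies in the velocity range reported for this area.

```python
# Sketch of Vs30 computation from a layered Vs model and NEHRP classification.
def vs30(thicknesses_m, vs_m_s):
    """Vs30 = 30 / sum(h_i / Vs_i) over the layers spanning the top 30 m."""
    travel_time, depth = 0.0, 0.0
    for h, vs in zip(thicknesses_m, vs_m_s):
        use = min(h, 30.0 - depth)
        travel_time += use / vs
        depth += use
        if depth >= 30.0:
            break
    if depth < 30.0:                        # extend the last layer to 30 m if needed
        travel_time += (30.0 - depth) / vs_m_s[-1]
    return 30.0 / travel_time

def nehrp_class(v):
    if v > 1500: return "A"
    if v > 760:  return "B"
    if v > 360:  return "C"
    if v > 180:  return "D"
    return "E"

v = vs30([8, 12, 15], [260, 450, 700])      # fabricated three-layer example
print(f"Vs30 = {v:.0f} m/s -> NEHRP class {nehrp_class(v)}")
```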
Re-Mi and L-SPAC measurements
These passive-source methods are preferred for deeper analysis of the soil than active-source methods. In the Refraction Microtremor (Re-Mi) method, data collection is performed by using a linear array. The method reveals the S-wave velocity structure of the soil to the required depth in one dimension (Apostolidis et al., 2004; Roberts and Asten, 2004). The Re-Mi method, which is also defined as the Array Microtremor Method, was revised by Asten (2006) as the Spatially Averaged Coherency Spectra (SPAC) method. The SPAC method is fundamentally based on the Rayleigh waves obtained from natural tremor (vibration) recordings of the earth. The Re-Mi and L-Shaped Spatially Averaged Coherency Spectra (L-SPAC) methods were carried out with the help of the DoReMi 24-channel seismic device. The data were collected using 24-channel vertical P geophones when applying the Re-Mi method. In the Re-Mi applications, the recording time was 30 seconds and 30 measurements were made at each station. The geophone interval was 7 m, and the measurements were improved with two multi-cable connections with 11 geophones; thus, in the system with 24 receivers, five geophones were used in two-way laying from the centre. For example, geophones were generally positioned at the existing channels 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, and 22. In the L-SPAC method, the geometry of the arrangement was adjusted to be 50 m on the x-axis and 50 m on the y-axis with geophone intervals of 10 m. Each profile was recorded for 20 to 30 seconds, and in each measurement three stacks were made. In the L-SPAC applications, the measurements were carried out by positioning two 50-m-long arms at an angle between 90 and 120 degrees. Accordingly, phase velocities at frequencies between 5 and 25 Hz were calculated from the MASW survey, and values between 0 and 12 Hz were determined by the L-SPAC and Re-Mi techniques.
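Conceptually, the SPAC relation links the azimuthally averaged coherency at inter-station distance r to a Bessel function J0(2*pi*f*r/c(f)), so the phase velocity c(f) can be estimated by matching J0 to the observed coherency. The sketch below illustrates this idea only; the numbers are fabricated and the real analysis involves averaging over azimuths, frequencies, and station pairs.

```python
# Conceptual sketch of inverting the SPAC coherency for phase velocity.
import numpy as np
from scipy.special import j0
from scipy.optimize import brentq

def phase_velocity(freq, r, coherency, c_min=100.0, c_max=3000.0):
    """Solve J0(2*pi*f*r/c) = coherency for c; J0 is oscillatory, so only the
    root on the first lobe is physically meaningful and the bracket may need
    narrowing in practice."""
    g = lambda c: j0(2 * np.pi * freq * r / c) - coherency
    return brentq(g, c_min, c_max)

print(phase_velocity(freq=5.0, r=10.0, coherency=0.6))
```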
Results
The evaluations of all measurements are explained with the following contour maps, from Fig. 4 to Fig. 7. The QTS graphs were obtained for each station. The peak period (T0), peak amplitude (A0), and Kg values obtained from the QTS spectra were calculated and presented separately in Figs. 4 to 7. Additionally, SW-NE directional A-A' and SE-NW directional B-B' profiles were prepared to investigate the effect of topographic changes along the profiles based on the A0 and T0 values. Figure 4 presents the predominant period values with a T0 map, where the values range between 0.1 and 4.9 seconds. Predominant soil period (peak period) values obtained from the Quasi Transfer Spectrum are usually lower than 1 second in the north of the study area. Here, the seismic impedance contrast with the engineering bedrock appears to decrease towards the north of the study area. The dominant period values are greater than 1 second in the centre of the area, and this indicates that the engineering bedrock is deeper than 30 meters. In the southeast of the area, T0 values were found to be 4 s and above. This region probably consists of a very thick solid soil layer. According to the measurements made at 52 points by the MASW method, shear-wave velocities in the soil profiles were generally observed between 200 and 700 m/s. The soil profiles show a generally three-layer environment within the upper 30 meters. Engineering bedrock velocities of nearly 760 m/s were reached at very few points. In the north of the study area, where T0 values are lower than 1 second, the shear-wave velocity values (Vs) are calculated to be higher than 760 m/s. This reveals that the Kythrea formation may represent the engineering bedrock in the north of the study area (Dindar et al., 2015; Akgün et al., 2016). However, the Vs values calculated to be lower than 760 m/s show comparable results with A0 and T0, and peak amplitude values are higher than 1 unit. Figure 5 illustrates the A0 peak amplitude distribution map obtained from the QTS calculated for the study area. Peak amplitude values are dominantly observed between 1 and 2.4. Peak amplitude values greater than 1 indicate that the displacement content of the earthquake wave spectrum will increase. Based on these findings, we can say that in the event of a destructive earthquake, the seismic waves will be amplified through the ground by an average of 1.5 times.
In Fig. 6, the calculated Kg values obtained from the quasi transfer spectra increase towards the south of the study area. In the centre of the study area, the values were between 10 and 20 units, and at some stations in the south they were 20 units and above. Where Kg values are greater than 20 units, the deformation of the soil under the dynamic load of an earthquake is expected to exceed the elastic limits.
Additionally, in Fig. 7, group C soils are observed in the western part of the study area, and B and C types in the north, based on the MASW, Re-Mi, and L-SPAC methods. Furthermore, the SW-NE-oriented A-A' line and the SE-NW directional B-B' profile are examined on the map to investigate the changes in the Vs30 values.
In Fig. 7, according to the Vs30 map, values were obtained between 260 and 820 m/s. According to the NEHRP regulations, these values are dominated by the B, C, and D group types. In the south of the study area, the soil types are C and D.
In the north of the study area, the dominant geological formation is the Kythrea group, which consists of highly allochthonous dolomites, limestones, and marbles. Predominant soil period (peak period) values obtained from the QTS are usually lower than 1 second in the north of the study area. Here, the seismic impedance contrast with the engineering bedrock decreases towards the north of the study area.
The dominant period values are greater than 1 second in the centre of the area, and this indicates that the bedrock depth is greater than 30 m. In the southeast of the area, T0 values were found to be 4 s and above. This region potentially consists of a very thick solid soil layer.
The distributions of elevation, amplitude, predominant peak period, and Vs30 values with distance are presented in Fig. 8. Amplitude values vary within the range of 1-6 units for the first line, A-A'. There are two main peaks, or sudden increases, along this line. The first peak is at approximately 1,500 m, while the second has a smooth maximum and is observed in the interval of 4 to 6 km. These two maxima are about 3 and 2 units, respectively, along the A-A' line. A similar trend is seen for the B-B' line; however, its first peak is at a distance of 4 km from the SE end of the study area. The first peak is 6 units and the second is on average 2.5 units.
Predominant peak period values in the study area generally range between 0 and 5 s according to Fig. 4. The A-A' and B-B' lines show no similar trends in Fig. 8. Along the SW-NE-oriented A-A' line, values are lower in the first and last kilometers of the study area, with a maximum of 3 seconds at 4,000 m from the SW end. In contrast, the B-B' line shows a decreasing trend from the beginning to the end of the line.
The Vs30 values in Fig. 8 show two different kinds of behaviour along the lines. The first line gives a concave-up curve and the other shows an increasing trend over the plotted area. The A-A' line has a minimum value of about 500 m/s at 4 km from its start; the values of this line fall in group C according to NEHRP, which represents very dense soil or soft rock. The A-A' line crosses a newly developing city area, while the B-B' line crosses the Mia Milia/Haspolat area. Generally, the data range between 500-800 m/s for A-A' and 400-600 m/s for B-B'.
Discussion
This research took over three years to complete and creates a foundation for a better understanding of soil behaviour through the use of a novel combination of multiple measurement methods.
In this study, local site effects that will control the behaviour of soils in Nicosia and its immediate surroundings under dynamic effects (earthquake motion) have been investigated and evaluated by geophysical methods.
When the QTS amplitude map (H/V) (Fig. 5) is examined, it is seen that the amplitudes vary between 1 and 5.8. This means that the amplitude-frequency content of the earthquake waves that reach the ground surface from the bedrock will be amplified approximately twofold. This phenomenon must be taken into consideration when designing engineering structures that are likely to be built in the study area.
Kg vulnerability index values are generally around the threshold value of 20 over the study area. The S-wave velocity in the same regions was found to be less than 760 m/s over the upper 30 m and, at the same time, the predominant soil vibration period was greater than 1 s. In these areas, in the event of a possible earthquake, the lateral deformation is likely to exceed the elastic limits. The highest velocity values in the study area were obtained in the northwest of the study area (Fig. 8).
When the S-wave velocities obtained for 30 m depth, the soil predominant period values, and the Kg vulnerability index parameter were examined comparatively, the Kg vulnerability index values decreased in areas where the S-wave velocity increased. The reason for this decrease is that the lateral deformation of the soil under a dynamic load is directly related to the S-wave velocity. At the same time, it was observed that the soil predominant period increases in these regions, i.e., the bedrock depth increases. According to these findings, the depth of the bedrock over most of the study area is more than 40 m. When this is taken into consideration, it is recommended to create in situ elastic design spectra for the structures planned to be constructed within the study area.
In the south of the study area and in the Haspolat settlement, the soil thickness exceeds 30 m and average period values of 2-2.5 seconds were calculated (Fig. 4). These values are consistent with the soil classifications specified in earthquake regulations and other classifications based on Vs30.
Peak amplitude values higher than 1 unit point to a layered medium at shallow depth on the peak amplitude map. When these values are checked against the Vs values, they show that the site effect will magnify a potential earthquake wave, which means that defining the engineering bedrock from the first 30 m alone is not sufficient for Nicosia soils.
The Vs30 designation can be used in areas with such soil thicknesses. According to the results of this research, it is recommended to prepare special design spectra based on the average Vs velocity values obtained for the parts of the study area where tall buildings are located.
Conclusions
In this study, geophysical methods were used to obtain QTS, T0, A0, Kg, and Vs30 values together with the NEHRP soil classification. For the NEHRP classification, Vs30 velocity values were calculated from the MASW applications at 52 stations. For the peak period, peak amplitude, and vulnerability values, single-station broadband microtremor measurements were performed at 100 stations. Vs30 values across the study area vary from 260 to 820 m/s. According to the NEHRP soil classification, the B, C, and D soil groups, which correspond to rock, very dense soil/soft rock, and stiff soil, respectively, are dominant in the study area. According to these findings, there is a rocky medium in the northernmost part of the region, very dense soil or soft rock in the centre, and stiffer soil groups in the south and southeast. Peak period values range from 0.1 seconds to 5 seconds. The dominant period values obtained from the Quasi Transfer Spectrum (QTS) are usually T0 < 1 second in the north of the field, where the seismic impedance changes at the engineering bedrock are reduced. Dominant period values of T0 > 1 s in the centre of the area indicate that the engineering bedrock at the centre is deeper than 30 m. Kg values range between 7 and 54 units. Where Kg is greater than 20, the deformation of the soil under dynamic load is expected to exceed the elastic limits. As a result of this analysis, the findings obtained regarding the common behaviour of soil-structure interaction demonstrate the overall effects that would occur during a destructive earthquake.
"year": 2021,
"sha1": "e9de55a3fcf5d419aa0bc4ce4299625d1de13c4f",
"oa_license": "CCBYNC",
"oa_url": "https://hrcak.srce.hr/ojs/index.php/geofizika/article/download/16927/volume38_4",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e9de55a3fcf5d419aa0bc4ce4299625d1de13c4f",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": []
} |
17841329 | pes2o/s2orc | v3-fos-license | Frequency of p190 and p210 BCR-ABL rearrangements and survival in Brazilian adult patients with acute lymphoblastic leukemia
Objective This study investigated the occurrence of the p190 and p210 breakpoint cluster region-Abelson (BCR-ABL) rearrangements in adults with acute lymphoblastic leukemia and possible associations with clinical and laboratory characteristics and survival. Methods Forty-one patients over 18 years of age with acute lymphoblastic leukemia, of both genders, followed up between January 2008 and May 2012, were included in this study. Clinical and laboratory data were obtained from the medical charts of the patients. Reverse transcription polymerase chain reaction (RT-PCR) using specific primers was employed to identify molecular rearrangements. Results At diagnosis, the median age was 33 years, and there was a predominance of males (61%). The most common immunophenotype was B lineage (76%). BCR-ABL rearrangement was detected in 14 (34%) patients with the following distribution: p190 (28%), p210 (50%), and double positive (22%). Overall survival, with a mean/median follow-up of 331/246 days, was 39%: 44% for BCR-ABL-negative and 28% for BCR-ABL-positive patients. Conclusion These results confirm the high frequency of BCR-ABL rearrangements and the low survival rate of adult Brazilian patients with acute lymphoblastic leukemia.
Introduction
Acute lymphoblastic leukemia (ALL) in adults comprises a group of diseases with biological, clinical, laboratory, and prognostic heterogeneity characterized by abnormal proliferation and accumulation of immature lymphoid cells in the bone marrow. At present, the diagnosis and classification of acute leukemia depend on cytomorphologic, immunophenotypic, cytogenetic, and molecular analyses. Molecular tests are part of the criteria for the risk classification system of the World Health Organization (WHO) and are used to evaluate the prognosis correctly and define therapeutic strategies. 4 Of the various genetic alterations observed in adult ALL, the breakpoint cluster region-Abelson (BCR-ABL) fusion gene is the most common and is associated with a particularly poor prognosis. 1,5,6 This gene rearrangement can present two distinct isoforms, p190 and p210, due to different breakpoints. 7 Recent studies indicate that these two isoforms may be associated with different clinical phenotypes in adult ALL patients. 8 The aim of this study was to investigate the occurrence of the p190 and p210 BCR-ABL rearrangements in adult ALL patients and to investigate possible associations with clinical and laboratory features and survival.
Methods
The study group comprised 41 over 18-year-old patients of both genders diagnosed with ALL at the Fundação de Hematologia e Hemoterapia de Pernambuco (Hemope) from January 2008 to May 2012. The diagnosis was established by clinical, cytomorphological and immunophenotypic criteria. The standard treatment protocol used was the hyperfractionated cyclophosphamide, vincristine, doxorubicin, and dexamethasone (HyperCVAD) regimen. 9 This project was approved by the Research Ethics Committee of the institution (#17/2010) and the study was conducted in accordance with the Declaration of Helsinki 2008.
Clinical and laboratory data were obtained from the patients' records. Samples of peripheral blood and bone marrow were collected after informed consent had been given. The identification of the p190 and p210 BCR-ABL gene rearrangements was performed by reverse transcription polymerase chain reaction (RT-PCR) according to the international BIOMED-1 protocol. 10 The following controls were used in the RT-PCR reactions: positive, negative, endogenous and contamination.
Statistical analysis was performed using the Bioestat 5.0 and Stata 9.1 programs. The t-test was used to compare the groups regarding age, leukocyte count, blasts, platelet count, and hemoglobin. The Fisher exact test was used for the categorical variables (gender and immunophenotype). Overall survival was estimated using the Kaplan-Meier method and compared with the log-rank test. p-values <0.05 were considered statistically significant.
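For illustration only, a survival comparison of this kind can be sketched in Python with the lifelines package, as below; the original analysis used Bioestat and Stata, and the data frame here is entirely fabricated (follow-up time in days, event = 1 for death).

```python
# Illustrative Kaplan-Meier curves and log-rank test for BCR-ABL groups.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "time":    [246, 120, 400, 90, 331, 500, 60, 700, 150, 365],
    "event":   [1, 1, 0, 1, 0, 0, 1, 0, 1, 1],
    "bcr_abl": ["pos", "pos", "neg", "pos", "neg", "neg", "pos", "neg", "neg", "pos"],
})

km = KaplanMeierFitter()
for group, sub in df.groupby("bcr_abl"):
    km.fit(sub["time"], sub["event"], label=group)
    print(group, km.survival_function_.iloc[-1].values)   # survival at last time point

pos, neg = df[df.bcr_abl == "pos"], df[df.bcr_abl == "neg"]
res = logrank_test(pos["time"], neg["time"], pos["event"], neg["event"])
print("log-rank p =", res.p_value)
```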
Results
Of the 41 patients analyzed, ALL was more prevalent in young adults and men and the most common immunophenotype was B lineage (Table 1).
No statistically significant differences were found between the groups of BCR-ABL positive and negative patients in respect of the clinical and laboratory variables. However, the p210 BCR-ABL patients had higher leukocyte counts and all p190 BCR-ABL patients had the B immunophenotype (Table 2). The overall survival was 39% with a mean follow-up of 331 days (median 246 days). Survival was lower for BCR-ABL positive (28%) than for BCR-ABL negative (44%) patients. The log-rank test, however, showed no statistically significant difference (p-value = 0.2297) between the survival curves of the two groups (Fig. 1). The mortality rate of BCR-ABL positive patients was 1.94 times greater [95% Confidence Interval (CI): 0.80-4.26] than that of BCR-ABL negative individuals, but again the difference was not statistically significant (p-value = 0.148).
Discussion
The median age of the patients was 33 years, which is similar to several published series. [11][12][13][14][15][16][17] Males predominated in the sample, which is in accordance with the main multicenter studies. [11][12][13][14][15][18][19][20][21] The results of several studies have shown similar numbers of leucocytes 11,12,20 at diagnosis, including the percentage of blasts in the peripheral blood 13 and platelet count. 17 The 76% frequency of B cell phenotype is in accordance with various published studies. 11,12,17,[21][22][23] The 34% frequency of the BCR-ABL rearrangement is similar to that found in several studies with values ranging from 17% to 37%, 1,[11][12][13][14][17][18][19][20][21][22] including in elderly patients, as reported by Larson. 24 No published Brazilian studies with data regarding the molecular analysis of BCR-ABL in adult ALL patients were found for comparison. A case series of 42 adult Brazilian patients showed 7% of Ph + samples. 25 The results presented in this study confirm the high frequency of BCR-ABL rearrangements in adult ALL patients, but differ from other studies regarding the type of isoforms found. Gleier et al. 21 showed a 37% positivity for the BCR-ABL fusion gene in 478 adult ALL patients including the p190 (77%) and p210 (20%) rearrangements and both isoforms (3%). Dombret et al. 22 found the following frequencies among 154 adult ALL patients: p190 (68%), p210 (28%) and both isoforms (4%). The explanation for our results is not clear, including the
BCR-ABL + p210 (n = 7)
BCR-ABL + p190/p210 (n = 3) 0 1 25 49 73 97 121 145 169 193 217 241 265 289 313 337 361 385 409 433 457 481 505 529 553 577 601 625 649 673 697 721 745 769 793 817 841 865 889 913 937 961 985 1009 1033 occurrence of BCR-ABL positivity in T-ALL cases 19 , but may be due to sample size or characteristics of the population studied, as well as patients with chronic myeloid leukemia in acute phase. 26 The analysis of the survival curves, in addition to confirming the low rate of overall survival for adult patients diagnosed with ALL, 9,14,20,21,23 also suggests an increased adverse prognosis conferred by the presence of BCR-ABL rearrangements, 2,18,[21][22][23]27 and therefore a need for other therapeutic modalities, including targeted therapies and bone marrow transplantation. 28 The range of the confidence interval of the mortality rate suggests that the sample size was too small to show a difference and that increasing it would make it more evident. Phenotypic differences between p190 and p210 BCR-ABL patients is controversial. 8,21 Further studies with a larger sample size, including elderly patients, are needed to better characterize the association between these rearrangements and different phenotypic expressions and survival. The detection of the BCR-ABL fusion gene is important for the classification of risk groups of ALL patients and the correct targeting of therapy. 2,3 Moreover, in addition to the BCR-ABL fusion gene, other rearrangements, such as E2A-PBX1, TEL-AML1, MLL-AF4, should be screened, because they also have prognostic significance. 29
Conclusion
Our results provide the first published evidence of the high frequency of BCR-ABL and poor survival in adult Brazilian ALL patients. The study confirms the importance of detecting BCR-ABL rearrangements for the treatment and prognosis of these patients. | 2016-05-12T22:15:10.714Z | 2014-07-18T00:00:00.000 | {
"year": 2014,
"sha1": "b6808389ed5fb59b75ba1fcdd83203f89728471c",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.bjhh.2014.07.016",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "596ee25767dd33488b6e3e1b3520c4cf71f07c3f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256346329 | pes2o/s2orc | v3-fos-license | High-dose Cefepime vs Carbapenems for Bacteremia Caused by Enterobacterales With Moderate to High Risk of Clinically Significant AmpC β-lactamase Production
Abstract Background Limited data suggest that serious infections caused by Enterobacterales with a moderate to high risk of clinically significant AmpC production can be successfully treated with cefepime if the cefepime minimum inhibitory concentration (MIC) is ≤2 µg/mL. However, isolates with a cefepime-susceptible dose-dependent (SDD) MIC of 4–8 µg/mL should receive a carbapenem due to target attainment and extended-spectrum β-lactamase (ESBL) concerns. Methods This was a retrospective cohort study of hospitalized patients with E. cloacae, K. aerogenes, or C. freundii bacteremia from January 2015 to March 2022 receiving high-dose cefepime or a carbapenem. Cox regression models were used with incorporation of inverse probability of treatment weighting and time-varying covariates. Results Of the 315 patients included, 169 received cefepime and 146 received a carbapenem (ertapenem n = 90, meropenem n = 56). Cefepime was not associated with an increased risk of 30-day mortality compared with carbapenem therapy (adjusted hazard ratio [aHR], 1.45; 95% CI, 0.79–2.14), which was consistent for patients with cefepime SDD isolates (aHR, 1.19; 95% CI, 0.52–1.77). Multivariable weighted Cox models identified Pitt bacteremia score >4 (aHR, 1.41; 95% CI, 1.04–1.92), deep infection (aHR, 2.27; 95% CI, 1.21–4.32), and ceftriaxone-resistant AmpC-E (aHR, 1.32; 95% CI, 1.03–1.59) to be independent predictors associated with increased mortality risk, while receipt of prolonged-infusion β-lactam was protective (aHR, 0.67; 95% CI, 0.40–0.89). Conclusions Among patients with bacteremia caused by Enterobacterales with moderate to high risk of clinically significant AmpC production, these data demonstrate similar risk of 30-day mortality for high-dose cefepime or a carbapenem as definitive β-lactam therapy.
Gram-negative infections pose serious therapeutic problems due to rising antimicrobial resistance, which caused >2.8 million infections and 35 000 deaths annually in the United States from 2012 to 2017 [1]. Several Enterobacterales spp. contain chromosomally encoded and inducible ampC genes, with E. cloacae, K. aerogenes, and C. freundii demonstrating a moderate to high risk for clinically significant inducible AmpC production (AmpC-E) [2]. Exposure of these bacteria to certain β-lactam antibiotics, even if they demonstrate initial in vitro susceptibility, can induce ampC gene expression, which may lead to clinical failure [3].
Due to growing concern regarding increased selection of carbapenem-resistant organisms, noncarbapenem treatment strategies have been explored [4]. Cefepime, a weak ampC inducer, withstands hydrolysis via formation of a stable acyl enzyme complex [5]. Retrospective studies have shown that cefepime has efficacy similar to that of carbapenems for the treatment of Enterobacter spp. bacteremia [6,7]. However, limited data highlighting treatment-emergent cefepime resistance and concerns about diminished cefepime efficacy for the treatment of cefepime-susceptible dose-dependent (SDD) AmpC-E isolates (cefepime minimum inhibitory concentration [MIC] 4-8 µg/mL) lend hesitancy to its use [8,9].
Concerns of diminished cefepime efficacy for the treatment of SDD AmpC-E arose, in part, from failure of cefepime to meet necessary pharmacodynamic targets secondary to inadequate dosing and/or interval schedules [10][11][12]. Reese and colleagues demonstrated this by showing that cefepime 2 g every 12 hours failed to achieve target attainment of 50% free drug concentration greater than the MIC (fT > MIC) with cefepime MICs of 8 and 16 µg/mL [13]. Based on similar pharmacodynamic/pharmacokinetic (PK/PD) data as well as limited clinical experience with Enterobacterales infections, the Clinical and Laboratory Standards Institute (CLSI) and European Committee on Antimicrobial Susceptibility Testing (EUCAST) revised cephalosporin susceptibility breakpoints, increased the recommended total daily dose of cefepime for Enterobacterales SDD isolates, and removed the requirement for ESBL phenotype detection [14][15][16]. Following guideline updates, an observational study demonstrated better outcomes with carbapenems compared with cefepime in patients with E. cloacae bacteremia; however, there were limited data on the use of high-dose cefepime [17].
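The %fT > MIC reasoning above can be made concrete with a simple one-compartment calculation. The sketch below is for illustration only: the half-life, volume of distribution and protein-binding values are assumed demonstration numbers, dosing is treated as an instantaneous bolus at steady state, and, unlike published target-attainment analyses, no between-patient variability (eg, Monte Carlo simulation) is modeled, so the absolute values it prints should not be read as clinical estimates.

```python
# Toy one-compartment model of %fT > MIC for different dosing intervals and MICs.
import numpy as np

def ft_above_mic(dose_mg, interval_h, mic_mg_per_l,
                 half_life_h=2.0, vd_l=18.0, free_fraction=0.8):  # assumed PK values
    """Fraction of the dosing interval with free concentration above the MIC."""
    ke = np.log(2) / half_life_h                       # elimination rate constant (1/h)
    t = np.linspace(0, interval_h, 1000)               # time grid over one interval
    peak_ss = (dose_mg / vd_l) / (1 - np.exp(-ke * interval_h))  # steady-state peak, mg/L
    free_conc = free_fraction * peak_ss * np.exp(-ke * t)
    return float(np.mean(free_conc > mic_mg_per_l))

for mic in (2, 8, 16):                                  # µg/mL is equivalent to mg/L
    print(f"MIC {mic}: q12h {ft_above_mic(2000, 12, mic):.2f}, "
          f"q8h {ft_above_mic(2000, 8, mic):.2f}")
```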
Considering the limited available data, the Infectious Diseases Society of America (IDSA) Guidance on the Treatment of Antimicrobial-Resistant Gram-negative Infections suggests that infections caused by AmpC-E can be successfully treated with cefepime, with the caveat that cefepime SDD AmpC-E isolates have a higher likelihood of being extended-spectrum β-lactamase (ESBL) producers and, thus, should preferentially be treated with a carbapenem as cefepime is considered suboptimal [18]. Still, due to sparse data from heterogeneous PK/PD and retrospective observational studies, in addition to the lack of routine ESBL screening for Enterobacterales spp. other than E. coli and K. pneumoniae, questions remain pertaining to the treatment of AmpC-E bacteremia [6,[19][20][21][22][23]. As such, the purpose of this study was to evaluate outcomes in patients with AmpC-E bacteremia receiving high-dose cefepime or a carbapenem as definitive β-lactam therapy.
METHODS
This retrospective cohort study evaluated hospitalized adult patients with E. cloacae, K. aerogenes, or C. freundii bacteremia between January 2015 and March 2022 at 2 urban academic medical centers in Detroit, Michigan, receiving either high-dose cefepime or a carbapenem (meropenem or ertapenem) as definitive therapy. High-dose cefepime was defined as 2 g every 8 hours, while meropenem and ertapenem were dosed 1-2 g every 8 hours and 1 g every 24 hours, respectively. Renally adjusted equivalents and prolonged infusions were administered as appropriate. An additional inclusion criterion was receipt of ≥48 hours of cefepime or carbapenem therapy within 48 hours of index blood culture collection. Patients were excluded if they transferred in from an outside facility with a positive AmpC-E blood culture, were prisoners, pregnant or breastfeeding, had cancer with an Eastern Cooperative Oncology Group (ECOG) score of 3 or 4 [24], had a concomitant infection with in vitro resistance to both cefepime and carbapenem therapy, or if they died or transferred to hospice or an outside facility within 72 hours of index blood culture collection. Although not recommended as definitive therapy by current treatment guidance [18], patients receiving ≤1 dose of ceftriaxone as active empiric therapy were included in the study as it is used in this manner at both study sites, as appropriate.
The primary outcome was mortality within 30 days of index blood culture collection. Microbiological failure (positive blood culture with index organism at ≥48 hours post-initiation of in vitro active agent with documented source control), microbiological relapse (growth of index organism in blood culture following negative blood culture), hospital and intensive care unit (ICU) length of stay (LOS), and 30-day infection-related readmission were also evaluated.
Patient demographics and baseline characteristics were extracted from the electronic health record (EHR) and entered into Research Electronic Data Capture (REDCap) [25]. Comorbidity burden was estimated by the Charlson comorbidity index, and measures of organ function and illness severity were assessed as described by the highest Acute Physiology and Chronic Health Evaluation II (APACHE II), Sequential Organ Failure Assessment (SOFA), and Pitt bacteremia score within 48 hours before or on the day of index culture collection [26]. A second Pitt bacteremia score was collected at 48 ± 24 hours of definitive antibiotic therapy initiation to assess clinical response to definitive therapy. All isolates were identified by clinical microbiology laboratories located within the 2 study centers. Susceptibility testing was performed via the Phoenix (BD) or Vitek-2 system (bioMerieux). Regarding hospital guidelines/protocols in place to limit or direct therapeutic selection for AmpC-E blood isolates, 1 of the 2 sites included in their EHR report of microbiological and antimicrobial susceptibility test results that third-generation cephalosporins are not preferred due to the risk of treatment-emergent resistance.
Based on limited subpopulation analyses evaluating high-dose cefepime and carbapenem therapy for infections caused by AmpC-E, conservative estimates of 15% and 60% mortality were anticipated for the cefepime cohort as a whole and for those receiving cefepime for cefepime SDD isolates, respectively. Therefore, a total sample size of at least 224 patients, with 60/224 (26.8%) of those patients having cefepime SDD isolates, was determined a priori to achieve 85% power at the 95% confidence level. Nominal variables were compared using the Pearson chi-square test or Fisher exact test. Ordinal and continuous variables were analyzed using the Mann-Whitney U test and Student t test for nonparametric and parametric data, respectively.
To address nonrandomized allocation of β-lactam therapy, propensity scores were calculated by multivariable logistic regression to estimate each patient's probability of receiving cefepime or a carbapenem as definitive therapy. The following covariates were included in the generation of the propensity score due to their between-group difference at P ≤ .1: admitted from home, referral from clinic, APACHE II score, first active empiric therapy (cefepime, ertapenem, or meropenem), and surgical source control procedure. Using the propensity scores, inverse probability of treatment weighting (IPTW) was applied to create a study pseudo population, balanced for potential covariate bias. Patients receiving cefepime were weighted by the inverse of the probability of being treated with cefepime, while patients receiving a carbapenem were weighted by the inverse of the probability of being treated with a carbapenem, which is equal to 1 minus the patient's propensity score [27,28]. Covariate balance by propensity score was assessed with the Kolmogorov-Smirnov (KS) goodness-of-fit statistic and standardized mean differences (SMDs), as appropriate. The prediction ability of the propensity score model was assessed with an area under the receiver operating characteristic (AU-ROC) curve. The primary end point, 30-day mortality, was analyzed in both the unadjusted population and the IPTW pseudo population using a Cox proportional hazards model with time-varying covariates. The time-varying Cox proportional hazards model accounts for immortal time bias and allows for an assessment of risk associated with variations in the time elapsed from index culture collection to initiation of active empiric and definitive β-lactam therapy between groups. All variables associated with 30-day mortality in univariate analysis with a P value ≤.1, present in ≥10% of all cases, and not already included in the propensity score model were considered for inclusion in the multivariable Cox regression. All tests were 2-tailed, with a P value of ≤.05 considered statistically significant. Analyses were performed using IBM SPSS Statistics for Windows, version 26 (IBM Corp., Armonk, NY, USA). This study was reviewed and approved by the Wayne State University and Henry Ford Health System institutional review boards and the Detroit Medical Center's research committee.
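As an orientation to the weighting and modeling steps described above, the sketch below strings together a propensity model, IPTW and a weighted Cox fit in Python. It is not the study's analysis code (the analyses were run in SPSS), the dataframe and column names are invented for illustration, and the time-varying covariate handling used in the study (for which lifelines provides CoxTimeVaryingFitter) is not reproduced here.

```python
# Schematic IPTW + weighted Cox regression workflow on an illustrative dataset.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 315
df = pd.DataFrame({
    "cefepime":        rng.integers(0, 2, n),               # 1 = cefepime, 0 = carbapenem
    "apache_ii":       rng.normal(15, 6, n).clip(0),
    "admit_from_home": rng.integers(0, 2, n),
    "source_control":  rng.integers(0, 2, n),
    "time_to_event":   rng.exponential(25, n).clip(1, 30),  # days, censored at 30
    "died_30d":        rng.integers(0, 2, n),
})

# 1) Propensity score: modeled probability of receiving cefepime given baseline covariates
covariates = ["apache_ii", "admit_from_home", "source_control"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["cefepime"])
ps = ps_model.predict_proba(df[covariates])[:, 1]

# 2) Inverse probability of treatment weights (1/PS for treated, 1/(1-PS) for comparators)
df["iptw"] = np.where(df["cefepime"] == 1, 1.0 / ps, 1.0 / (1.0 - ps))

# 3) Weighted Cox proportional hazards model for 30-day mortality
cph = CoxPHFitter()
cph.fit(df[["time_to_event", "died_30d", "cefepime", "iptw"]],
        duration_col="time_to_event", event_col="died_30d",
        weights_col="iptw", robust=True)
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```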
RESULTS
In total, 656 patients with positive AmpC-E blood cultures receiving antibiotic treatment were screened for study inclusion, with 315 fulfilling inclusion criteria (cefepime n = 169, carbapenem n = 146) (Figure 1). The median (interquartile range [IQR]) age was 63 (53-74.5) years, 55.2% were male, and 59% were admitted to the ICU at least once during the hospital admission (Table 1). Treatment characteristics are outlined in Table 2. In unadjusted and weighted Cox regression analysis with IPTW and time-varying covariates, cefepime was not associated with an increased risk of 30-day mortality compared with a carbapenem (18.9% vs 17.1%, respectively; adjusted hazard ratio [aHR], 1.45; 95% CI, 0.79-2.14). Multivariable weighted Cox models identified Pitt bacteremia score >4 (aHR, 1.41; 95% CI, 1.04-1.92), deep infection (aHR, 2.27; 95% CI, 1.21-4.32), and ceftriaxone-resistant AmpC-E (aHR, 1.32; 95% CI, 1.03-1.59) to be independent predictors associated with increased mortality risk, while receipt of prolonged-infusion β-lactam was protective (aHR, 0.67; 95% CI, 0.40-0.89) (Table 4). The weighted standardized differences were below the 0.1 threshold for all investigated covariates (Supplementary Figure 1). The area under the receiver operating characteristic curve for the propensity score model is shown in Figure 2.
DISCUSSION
This study reported clinical outcomes in hospitalized adult patients infected with AmpC-E blood isolates with moderate to high risk for clinically significant inducible AmpC production treated with high-dose cefepime or carbapenem therapy, as previously described. Upon assessment of the primary outcome, cefepime was not associated with an increased risk of 30-day mortality compared with a carbapenem in both unadjusted and propensity score-weighted Cox regression models with time-varying covariates, which was consistent when comparing patients with cefepime SDD vs cefepime-susceptible isolates. These findings differ from those reported in a 2015 observational study by Lee et al. comparing clinical outcomes in patients infected with ceftriaxone-nonsusceptible cefepime SDD E. cloacae isolates and treated with cefepime or carbapenem therapy, which identified increased 30-day mortality in patients treated definitively with cefepime [17]. The authors concluded that cefepime may be considered for infections caused by cefepime-susceptible E. cloacae isolates; however, cefepime should be used cautiously for cefepime SDD E. cloacae infections. Notably, only 38.9% of patients in that study received high-dose cefepime as definitive therapy compared with 100% of cefepime patients in the current study [17]. To our knowledge, only 1 observational study evaluating cefepime for the treatment of ESBL-producing Enterobacterales included patients receiving cefepime in 8-hour intervals. In that study, inferior outcomes were reported for cefepime compared with carbapenems; however, only 12 patients received cefepime 2 g every 8 hours, and the patients had infections caused by ESBL-producing E. coli, K. pneumoniae, or P. mirabilis [29]. Results of the weighted multivariable Cox regression analysis identified the following factors to be independently associated with 30-day mortality: Pitt bacteremia score >4, deep infections, and ceftriaxone-resistant AmpC-E isolates. The association of higher Pitt bacteremia scores with increased odds of mortality aligns with previous literature, including studies evaluating outcomes in patients with AmpC-E bacteremia [17,26,30]. The finding of increased odds of 30-day mortality when treating high-inoculum infections (ie, deep infections) with cefepime or a carbapenem has been heavily debated in recent decades [31]. Studies that evaluated the phenomenon of attenuated antibacterial activity at high bacterial inoculum, referred to as the inoculum effect, demonstrated that cephalosporins, and to a lesser extent carbapenems, were less active and had diminished efficacy at higher bacterial inoculums [32][33][34][35]. In a study by Burgess et al., cefepime and meropenem were evaluated against K. pneumoniae standard- and high-inoculum infections. The authors identified that although bactericidal activity for both antibiotics remained the same at standard inoculums, only meropenem sustained bactericidal activity at higher inoculums [36]. Notably, all K. pneumoniae isolates in that study were ESBL-producing, and literature evaluating the inoculum effect between cefepime and meropenem for non-ESBL and ESBL-producing AmpC-E is scant.
Another debated topic regarding AmpC-E is whether carbapenems are necessary for infections caused by all ceftriaxone-resistant Enterobacterales spp. In the current study, ceftriaxone-resistant isolates were independently associated with 30-day mortality in multivariable Cox regression analysis. According to the IDSA, carbapenems are the preferred drugs for moderate to severe infections caused by ESBL-producing E. coli, K. pneumoniae, K. oxytoca, or P. mirabilis, for which a ceftriaxone MIC of ≥2 µg/mL can be used as a proxy for ESBL production [18]. While most ESBL-producing E. coli, K. pneumoniae, K. oxytoca, or P. mirabilis isolates have ceftriaxone MICs ≥2 µg/mL, data evaluating ceftriaxone resistance and ESBL production in other AmpC-E are lacking.
Administration of prolonged β-lactam infusions for increased exposure and target attainment compared with 30-minute infusions has been discussed previously [37], with prolonged infusions demonstrating decreased mortality in critically ill patients with P. aeruginosa infections [38,39]. The current study demonstrated that receipt of prolonged-infusion β-lactam (eg, cefepime or meropenem) was associated with a protective effect in patients with AmpC-E bacteremia compared with those receiving an intermittent infusion. However, without serum β-lactam concentrations, target attainment between groups remains unknown. Additional prospective studies are warranted to examine this question, especially in a critically ill population that has previously demonstrated suboptimal β-lactam plasma concentrations even upon receipt of prolonged β-lactam infusions [40,41].
Table 1 footnotes: Immunocompromised: any chemo or radiation therapy within 30 days, HIV/AIDS with CD4 <200, or chronic steroids (equivalent to >40 mg prednisone). c Hospital-acquired infection: index positive blood culture collected 48 hours after hospital admission. d Follow-up Pitt bacteremia score: highest score collected 48 ± 24 hours after definitive antibiotic therapy initiation. e Septic shock at index culture collection: sepsis associated with a systolic blood pressure <90 mm Hg and the need for intravenous hydration and vasopressors for blood pressure resuscitation. f Deep infection: endocarditis, septic pulmonary emboli, osteomyelitis, and hepatic or muscular abscesses presumed to be caused by the AmpC-E blood isolate based on provider documentation and imaging results in the electronic medical record.
Table 2 footnotes: Active empiric antibiotic: antibiotic therapy with in vitro activity received before microbiological identification. c Patients receiving ≤1 dose of ceftriaxone as active empiric therapy were included. d Definitive antibiotic: cefepime or carbapenem therapy received within 48 hours of index culture collection and continued for ≥48 hours following microbiological identification. e Loading dose: receipt of a 30-minute cefepime or carbapenem infusion as the first β-lactam dose. f Prolonged infusion: denominator for carbapenem group includes only meropenem cases as all ertapenem doses were administered as 30-minute infusions. g Postdefinitive antibiotic: cefepime or carbapenem therapy received after ≥48 hours of definitive therapy and continued for at least 48 hours. h Lack of clinical improvement: any of the following after ≥48 hours of definitive therapy: persistent fever, leukocytosis, repeat positive blood culture, follow-up Pitt bacteremia score equal to or higher than the initial Pitt bacteremia score. i Discharge/dosing convenience: EHR documentation that the change from definitive to postdefinitive therapy was for regimen convenience purposes. j Practitioner preference: EHR documentation that the change from definitive to postdefinitive therapy was based on microbiological AmpC-E genus and species data, unrelated to MIC. k Surgical source control procedures included: intravenous catheter removal, valvular repair/replacement, invasive device removal, incision and drainage, drain placement, debridement, resection, excision, or amputation. l Microbiological failure: positive blood culture with index organism after ≥48 hours of definitive therapy with documented source control, if applicable. m Microbiological relapse: growth of index organism in blood culture following negative blood culture. n Patients were considered to have a 30-day infection-related readmission if they were readmitted to the hospital within 30 days of discharge with a positive culture from any source and receipt of in vitro active antimicrobial therapy.
The strengths of this study include the consideration of high-dose cefepime as a carbapenem-sparing option for bacteremia caused by AmpC-E with moderate to high risk of clinically significant AmpC β-lactamase production, while prior studies have focused primarily on standard cefepime dosing regimens. Additionally, the number of patients included in the cohort with cefepime SDD blood isolates is 4-fold greater than previously reported, augmenting the clinical validity of this study. Further, the inclusion of time-varying covariates in weighted statistical models accounts for variations in the time elapsed from culture collection to β-lactam initiation.
This study is not without limitations. First, while possible AmpC induction across AmpC-E was evaluated by identifying organisms initially susceptible to certain β-lactam agents that on subsequent isolation become resistant, AmpC-E isolate genotyping was not conducted to confirm that the same organism was recovered and that AmpC production had in fact significantly increased. Thus, one cannot eliminate the possible presence of ESBL-producing isolates harboring and expressing β-lactamase genes other than CTX-M. One such β-lactamase gene was SHV, which was previously identified in 33% of ESBL-producing E. cloacae isolates, and current multiplex polymerase chain reaction kits that identify SHV are for research use only and not for diagnostic procedures [23,35]. However, the usefulness of ESBL testing in clinical practice remains debatable as Enterobacterales isolates may have multiple existing mechanisms of resistance including Enterobacterales with chromosomally expressed AmpC, possibly limiting test accuracy and the ability to detect class A enzymes. Additionally, current breakpoints relied on PK/PD data with high-dose cefepime that, if used against ESBL-producing Enterobacterales, may provide a substantial PD cushion. Further, variations in practitioner preference related to the treatment of serious infections caused by AmpC-E may have resulted in treatment selection bias not remedied by methods used to mitigate bias including propensity score weighting and time-varying covariates.
In summary, our results suggest that high-dose cefepime may be a reasonable option for bacteremia caused by AmpC-E with moderate to high risk of clinically significant AmpC β-lactamase production. Additional microbiological and treatment factors may be considered in therapeutic guidance for AmpC-E with moderate to high risk of clinically significant AmpC β-lactamase production including ceftriaxone susceptibility data, β-lactam dose, and duration of infusion. Further large-scale studies are warranted.
Abbreviations: aHR, adjusted hazard ratio; AmpC-E, AmpC-producing Enterobacterales; IPTW, inverse probability of treatment weighting; MIC, minimum inhibitory concentration.
a Thirty-day mortality: mortality within 30 days of index positive blood culture collection.
b Propensity score and time-varying covariates: admitted from home, referral from clinic, APACHE II score, surgical source control procedure, active empiric cefepime, ertapenem, or meropenem, and differences in time elapsed from blood culture collection to initiation of active empirical, definitive, and postdefinitive antibiotic.
Supplementary Data
Supplementary materials are available at Open Forum Infectious Diseases online. Consisting of data provided by the authors to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the authors, so questions or comments should be addressed to the corresponding author.
Abbreviations: aHR, adjusted hazard ratio; AmpC-E, AmpC-producing Enterobacterales; HR, hazard ratio; IPTW, inverse probability of treatment weighting.
a Propensity score and time-varying covariates: admitted from home, referral from clinic, APACHE II score, surgical source control procedure, active empiric cefepime, ertapenem, or meropenem, and differences in time elapsed from blood culture collection to initiation of active empirical, definitive, and postdefinitive antibiotic.
b Classification and regression tree analysis used to predict the Pitt bacteremia scores associated with mortality.
c Deep infection: endocarditis, septic pulmonary emboli, osteomyelitis, and hepatic or muscular abscesses presumed to be caused by the AmpC-E blood isolate based on documentation in the electronic medical record.
d Ceftriaxone-resistant AmpC-E: ceftriaxone minimum inhibitory concentration of ≥2 µg/mL for AmpC-producing Enterobacterales in blood culture. | 2023-01-29T16:02:30.332Z | 2023-01-25T00:00:00.000 | {
"year": 2023,
"sha1": "dc3f40bf365784893b0ddbafc54f3a514894e508",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/ofid/advance-article-pdf/doi/10.1093/ofid/ofad034/48908828/ofad034.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1db6224ab2561f513bd4c96520babf45a773e8ea",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255805546 | pes2o/s2orc | v3-fos-license | Expression of ligands for activating natural killer cell receptors on cell lines commonly used to assess natural killer cell function
Natural killer cell responses to virally-infected or transformed cells depend on the integration of signals received through inhibitory and activating natural killer cell receptors. Human Leukocyte Antigen null cells are used in vitro to stimulate natural killer cell activation through missing-self mechanisms. On the other hand, CEM.NKr.CCR5 cells are used to stimulate natural killer cells in an antibody dependent manner since they are resistant to direct killing by natural killer cells. Both K562 and 721.221 cell lines lack surface major histocompatibility complex class Ia ligands for inhibitory natural killer cell receptors. Previous work comparing natural killer cell stimulation by K562 and 721.221 found that they stimulated different frequencies of natural killer cell functional subsets. We hypothesized that natural killer cell function following K562, 721.221 or CEM.NKr.CCR5 stimulation reflected differences in the expression of ligands for activating natural killer cell receptors. K562 expressed a higher intensity of ligands for Natural Killer G2D and the Natural Cytotoxicity Receptors, which are implicated in triggering natural killer cell cytotoxicity. 721.221 cells expressed a greater number of ligands for activating natural killer cell receptors. 721.221 expressed cluster of differentiation 48, 80 and 86 with a higher mean fluorescence intensity than did K562. The only ligands for activating receptors that were detected on CEM.NKr.CCR5 cells at a high intensity were cluster of differentiation 48 and intercellular adhesion molecule-2. The ligands expressed by K562 engage natural killer cell receptors that induce cytolysis. This is consistent with the elevated contribution that the cluster of differentiation 107a function makes to total K562-induced natural killer cell functionality compared to 721.221 cells. The ligands expressed on 721.221 cells can engage a larger number of activating natural killer cell receptors, which may explain their ability to activate a larger frequency of these cells to become functional and secrete cytokines. The few ligands for activating natural killer cell receptors expressed by CEM.NKr.CCR5 may reduce their ability to activate natural killer cells in an antibody independent manner, explaining their relative resistance to direct natural killer cell cytotoxicity.
Background
Natural Killer (NK) cells are a subset of lymphocytes that direct innate immune responses to kill stressed, virally-infected, and transformed cells [1]. NK cells belong to the group 1 innate lymphoid cells (ILCs) as do ILC1s [2][3][4]. NK cells can interact with target cells and lyse them directly via the release of cytotoxic granules containing perforin and granzyme [5,6]. Activated NK cells can also secrete a broad range of cytokines and chemokines that can activate adaptive immune cells to lyse target cells, bridging the innate and adaptive immune systems [7][8][9]. Regardless of the method used by NK cells to lyse target cells, they must first be activated to elicit a response. The activation state of an NK cell results from the integration of different signals transmitted through its activating and inhibitory natural killer cell receptors (aNKR and iNKR) [9,10]. Activation can result from the loss of inhibitory signaling, when there are no ligands for iNKR to engage, in conjunction with sustained aNKR signaling, or from the engagement of aNKR by their ligands that overwhelms inhibitory signaling through iNKR [8,10,11]. However, in vivo, NK cells interacting with target cells will receive a variety of signals through both receptor types and whether this results in activation or not depends on the number and strength of each type of signal transmitted.
A common method to activate NK cells is to co-culture them with human leukocyte antigen (HLA)-null cells. The most frequently used HLA-null cells include the myelogenous leukemia K562 and B-lymphoblastoid 721.221 (.221) cells lines, which do not express the major histocompatibility complex class I (MHC-I) HLA-A, -B, or -C antigens on their surface [12][13][14]. As these cells are incapable of engaging the inhibitory killer immunoglobulin-like receptors (KIR) on NK cells, which recognize subsets of HLA-A, -B, and -C, inhibitory signaling through these receptors is abrogated. As signals from these inhibitory receptors oppose those from aNKR, their removal allows the engagement of aNKR by their ligands to activate NK cells [15].
Previous work from our lab demonstrated that the K562 and .221 HLA-null cell lines stimulated NK cells differentially to secrete the cytokine interferon-γ (IFN-γ), the chemokine CCL4, and to express the degranulation marker CD107a [16]. Specifically, we observed that .221 activated a greater fraction of NK cells than did K562 and that stimulation with .221 preferentially induced IFN-γ and CCL4 secretion, while K562 more potently stimulated degranulation [16]. In the absence of the expression of the major ligands for iNKR on the surface of K562 and .221, it is likely that ligands to aNKR regulate NK cell activation and differences in their expression profiles might explain their different capacities to activate NK cells. Indeed, the expression of some ligands to aNKR was shown to differ on both cell lines, although these differences remain incompletely characterized [17][18][19][20][21][22].
CEM.NKr.CCR5 cells are commonly used to stimulate NK cells by antibody dependent cellular cytotoxicity (ADCC) [23,24]. They were derived from CEM.NKr cells that were selected from the parental CEM T cell line for their resistance to direct NK cell lysis [25]. CEM cells express HLA-A, B and C antigens and the aNKR CD16, whose engagement is important for ADCC activity [25,26]. The aNKR profile of this cell line is poorly defined.
In addition to the two major aNKR families, signaling through several other receptors can contribute to NK cell activation. These include cluster of differentiation (CD)244/2B4 and the NK-T-B cell antigen (NTB-A), which are CD2 family receptors that engage CD48 to trigger NK cell cytotoxicity [41]. Another receptor, expressed on virtually all NK cells, is the leukocyte adhesion molecule DNAX accessory molecule-1 (DNAM-1) that binds to CD112 (Nectin-2) and CD155 (poliovirus receptor) ligands [18,42,43]. Signaling through DNAM-1 can stimulate NK cells when it is co-expressed with the lymphocyte function-associated antigen 1 (LFA-1). LFA-1 can bind the integrins intercellular adhesion molecule (ICAM)-1 and ICAM-2 on target cells, bridging NK and target cells and forming the immunological synapse [44][45][46][47][48]. Additionally, target cells expressing the T cell co-stimulatory B7 molecules CD80 and CD86 can stimulate NK cells [49][50][51]. The nature of this signaling is still poorly understood, but it has been suggested that NK cell activation by CD80 and CD86 depends on a CD28 variant expressed on NK cells [50,52].
In contrast to HLA-A, -B, and -C, which signal through iNKRs, the non-classical HLA-E and -F can contribute to NK cell activation through the engagement of aNKRs. Much like its classical MHC-I counterparts, HLA-E interacts with the inhibitory C-type lectin-like receptor NKG2A, which heterodimerizes with CD94, and with the aNKRs NKG2E and -C [53][54][55]. HLA-F is a more recent addition to the family of ligands to aNKR and has been shown to stimulate degranulation and cytokine production through its binding to KIR3DS1 on NK cells [56][57][58].
To determine whether the differences observed in the frequency and function of NK cell subsets stimulated by K562, .221 and CEM.NKr.CCR5 cells are related to differences in their expression profiles of ligands to aNKRs, we analyzed the expression of a comprehensive panel of aNKR ligands on these cell lines by multi-parametric flow cytometry. As ligands to inhibitory KIR are not expressed on HLA-null cell lines, it is likely that expression of ligands to aNKRs plays a role in modulating differential NK cell responses to K562 and .221. Although others have proposed that the resistance of CEM.NKr.CCR5 cells to direct NK cell cytolysis is related to changes in cell surface marker expression that occurred when NK cell resistance to cytolysis was selected for, the ligands for aNKR have not been profiled on this cell line [25]. We report here a characterization of the aNKR ligand phenotype of three NK cell stimuli to contribute to a better understanding of the mechanisms governing their behavior as inducers of NK cell function and targets for NK cell cytolysis.
Methods
[58]. CEM.NKr.CCR5 cells were typed for HLA allotypes using sequence based HLA typing methods and found to express HLA allotypes that were consistent with those previously reported for their parental CEM and CEM.NKr cell lines [25,59]. Cell lines were thawed and cultured in RPMI 1640 medium supplemented with 10% FBS; 2 mM L-glutamine; 50 IU/mL penicillin; 50 mg/mL streptomycin (R10, all from Wisent) for at least one passage before staining. Cells were passaged three times a week and maintained in culture for a maximum of 1 month. Only healthy cells with a high viability were used in staining experiments.
Antibody staining and acquisition
Cell lines were stained in triplicate on 1 or 2 occasions or in replicates of six on 1 occasion with the UV Live/Dead® Fixable Blue cell stain kit, as per the manufacturer's directions (Thermofisher, Waltham, MA) and were stained with mAb clones or chimeric proteins specific for cell surface aNKR ligands organized into 4 panels for the purpose of analysis. Table 1 shows, for each mAb and chimeric protein used, its aNKR ligand specificity, the aNKR recognizing each ligand, the designation of the antibody clone used for staining, its commercial source and to which fluorochrome antibodies or secondary antibodies recognizing chimeric proteins or primary antibodies were conjugated. For staining with Fc chimeric proteins or the unconjugated mAbs 3D11 and 3D12, cells were prepared as previously described, with the exception that they were stained with chimeric proteins or primary mAb on ice for 40 min. After washing, binding of Fc chimeric proteins was detected using a polyclonal anti-human IgG (Fcγ-specific)-PE conjugated secondary antibody (BioSciences), while 3D11 and 3D12 binding was detected using a polyclonal F(ab')2 anti-mouse IgG-APC conjugated secondary antibody (eBioscience) for 20 min on ice. Following staining, all cells were washed and fixed with 2% paraformaldehyde (Santa Cruz, Dallas TX). Between 400,000 and 600,000 events were acquired on an LSRFortessa X-20 within 24 h (BD). Unstained and single stained controls (CompBead; BD), fluorescence minus one (FMO), and secondary antibody alone controls were used for multi-color compensation and gating purposes. An additional isotype control for the secondary antibodies used to detect KIR3DS1-Fc chimeric protein and unconjugated 3D11 mAb binding to HLA-F was also used. As staining with the antibody specific for ULBP-1 generated signals with a low mean fluorescence intensity (MFI), an isotype control for this mAb was also used in addition to the above-mentioned controls. Flow cytometric analysis was performed using FlowJo software version 10 (TreeStar, Ashland OR).
Results are presented as the MFI of cells stained with mAbs/Fc chimeric proteins to each aNKR ligand versus FMO/isotype control/secondary antibody alone staining. The MFI of staining for individual ligands was reported after background subtraction of the FMO, isotype control or secondary antibody alone MFI. The control used to background subtract staining for each condition is indicated in Table 2.
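The background-correction step amounts to a per-ligand subtraction of the matched control MFI, as in the minimal sketch below; the ligand names and MFI values shown are placeholders, not measurements from this study.

```python
# Minimal example of background-subtracting ligand MFI against matched controls.
import pandas as pd

staining = pd.DataFrame({
    "ligand":      ["ULBP-1", "CD48", "ICAM-2"],
    "mfi":         [650.0, 15200.0, 9800.0],
    "control_mfi": [430.0, 310.0, 280.0],   # matched FMO / isotype / secondary-only MFI
})
staining["corrected_mfi"] = staining["mfi"] - staining["control_mfi"]
print(staining)
```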
Statistical analysis
Statistical analysis was performed using GraphPad Prism version 6 (GraphPad Software, Inc., La Jolla, CA). Mann-Whitney tests were used to assess the significance of differences in the MFI of each aNKR ligand's cell surface expression level compared to their respective control staining conditions. Kruskal-Wallis tests with Dunn's post tests were used to determine the significance of differences in the background subtracted MFI generated by staining K562, .221 and CEM.NKr.CCR5 for each of the cell surface aNKR ligands tested. The distribution of the MFI of ligand expression is reported as median (range) of the replicate values. P-values less than 0.05 were considered significant.
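The same comparisons can be sketched in Python as below; the replicate MFI values are illustrative, the scikit-posthocs package is assumed to be available for Dunn's post test, and the Bonferroni adjustment shown is an assumption rather than a method stated in the text.

```python
# Nonparametric comparisons analogous to those described above, on placeholder replicates.
import numpy as np
from scipy import stats
import scikit_posthocs as sp

rng = np.random.default_rng(2)
k562 = rng.normal(1200, 150, 6)     # background-corrected MFI replicates (illustrative)
c221 = rng.normal(600, 120, 6)
cem  = rng.normal(150, 60, 6)
control = rng.normal(100, 40, 6)    # matched control staining for one ligand

# Ligand staining vs its control condition on the same cell line
u_stat, p_mw = stats.mannwhitneyu(k562, control, alternative="two-sided")

# Three-way comparison across cell lines, followed by Dunn's post test
h_stat, p_kw = stats.kruskal(k562, c221, cem)
dunn = sp.posthoc_dunn([k562, c221, cem], p_adjust="bonferroni")
print(p_mw, p_kw)
print(dunn)
```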
Results
The MFI of aNKR ligand expression on K562, .221 and CEM.NKr.CCR5 cells
We hypothesized that differences in aNKR ligand levels expressed on K562, .221 and CEM.NKr.CCR5 cells could explain their differential abilities to stimulate NK cells. To address this, we determined the MFI of aNKR ligand expression by staining these cell lines with mAbs and chimeric proteins specific for these ligands and analyzing results by flow cytometry. The MFI of ligand staining was compared to that generated by the controls for each of the ligand specific reagents. Additional file 1: Figure S1 shows the gating strategy used to detect live singlet K562, .221 and CEM.NKr.CCR5 cells. Figure 1 shows examples of the histograms generated by staining K562, .221 and CEM.NKr.CCR5 cells with mAbs and chimeric proteins specific for aNKR ligands versus their FMO/isotype/secondary antibody controls. Table 2 and Fig. 2 show the median MFI obtained by staining these cell lines with the mAbs/chimeric proteins specific for aNKR ligands in Panels 1, 2, 3, and HLA versus their FMO/isotype/secondary antibody alone controls. Table 2 also shows the uncorrected, background corrected and control condition MFI staining levels of aNKR ligands on these three cell lines.
All the mAbs and chimeric proteins in Panels 1, 2, 3 and HLA stained K562 at above background levels (Fig. 1). Dunn's post tests showed that K562 expressed ULBP-1, ULBP-2/5/6 and ULBP-3 with higher background corrected MFIs than did .221 cells (p < 0.01 for all, Table 2 and Fig. 3a). The MFI of background corrected MIC-A and MIC-B cell surface staining was not significantly different on K562 and .221 cells, while K562 expressed the MIC-A ligand at a higher MFI than did CEM.NKr.CCR5 cells (p < 0.05, Table 2 and Fig. 3a). Staining with anti-CD48, -CD80 and -CD86 mAbs was highest on .221 cells and lowest on CEM.NKr.CCR5 cells. The observation that achieved statistical significance was the lower expression of the CD48 ligand on K562 than on .221 cells (p < 0.01, Table 2 and Fig. 3b). The CD112 and CD155 ligands were expressed at higher levels on K562 than on .221 cells (p < 0.01 for both, Table 2 and Fig. 3b). The ICAM-1 ligand was expressed with a higher MFI on .221 than on CEM.NKr.CCR5 cells (p < 0.01), while the ICAM-2 ligand was expressed on all three cell lines at high levels that were not significantly different from each other (Table 2 and Fig. 3c). The ligands for NKp30 and NKp44 were expressed at a higher MFI on K562 than on .221 (p < 0.001 for the NKp30 ligand and p < 0.01 for the NKp44 ligand; Table 2 and Fig. 3c). KIR3DS1-Fc and mAb 3D11 bind HLA-F. HLA-F was present at a higher MFI on K562 than on CEM.NKr.CCR5 cells (p < 0.001 for both) and staining with KIR3DS1-Fc revealed a higher MFI of HLA-F expression on .221 than on CEM.NKr.CCR5 cells (p < 0.05). On the other hand, HLA-E was present at a higher corrected MFI on CEM.NKr.CCR5 cells than on .221 cells, while both K562 and .221 cells expressed HLA-E at background corrected MFIs that did not differ significantly from each other (Table 2 and Fig. 3d).
(Fig. 1 legend: Grey histograms represent staining with mAbs or chimeric proteins binding to aNKR. White histograms represent staining with fluorescence minus one, isotype control or secondary antibody alone controls. For information on which control was used for staining with each mAb or chimeric protein see Table 2.)
Together, these results show that K562 cells expressed higher levels of several of the ligands for the aNKRs NKG2D and NCRs than did .221 and CEM.NKr.CCR5 cells. K562 differed from .221 by having lower expression levels of the CD48 ligand. K562 also had lower levels of CD80 and CD86 ligand expression that did not achieve significance when Dunn's post tests were applied. However, the CD80 and CD86 ligand expression levels were significantly higher on .221 cells than on either K562 or CEM.NKr.CCR5 cells when results for .221 and K562 were compared using Mann-Whitney tests (p = 0.002 and 0.005, respectively). The median MFI of background corrected aNKR ligand expression generated by staining CEM.NKr.CCR5 cells with the mAbs/chimeric proteins in Panels 1, 2 and 3 was low for 4 of the ligands tested (MFI < 234). Only anti-CD48 and anti-ICAM-2 stained CEM.NKr.CCR5 cells with a high MFI that was readily distinguishable from background staining. This cell line also expressed higher levels of HLA-E than did K562 and .221 cells and expressed above background levels of HLA-F that were lower than those observed on K562 and .221 cells.
Discussion
In this report we screened three cell lines that are frequently used to test NK cell functionality for expression of a panel of aNKR ligands. K562 and .221 cells are both HLA-null cell lines that should have a similar ability to abrogate NK cell inhibitory signals mediated by HLA binding to their iNKR. iNKR-HLA interactions determine NK cell education status and functional potential [60,61]. Previous work showed that these HLA-null cell lines induced differential patterns of IFN-γ secretion, CCL4 secretion and CD107a expression by NK cells, suggesting that expression patterns for aNKR ligands may differ between these 2 cell lines and explain differential activation patterns. We found that K562 and .221 expression levels did not differ significantly for MIC-A, MIC-B, ICAM-2, HLA-E and NKp46 ligands. K562 expressed significantly higher levels of the NKG2D ligands ULBP-1, ULBP-2/5/6 and ULBP-3 and the ligands for the NCRs NKp30 and NKp44. K562 differed from .221 by having significantly lower expression levels of CD48 and lower levels of CD80 and CD86 that did not achieve statistical significance using Dunn's post tests. On the other hand, the NK-resistant cell line CEM.NKr.CCR5, often used as a target cell in ADCC assays, expressed most of the aNKR ligands tested at low expression levels. Although prior studies have partially characterized the expression profiles of ligands to aNKRs on K562 and .221 cells, to our knowledge this has not been done for the CEM.NKr.CCR5 cell line, which differs from its parent CEM cell line by being resistant to direct NK cell cytolysis [25,62]. In this study, we included a larger and more comprehensive panel of mAbs/chimeric proteins detecting aNKR ligands than has previously been investigated.
The receptors implicated in triggering NK cell mediated cytolysis include NKG2D and the NCRs [62][63][64][65]. The ligands for NKG2D include ULBP-1, ULBP-2/5/6, ULBP-3, MIC-A and MIC-B [65]. NKG2D is a C type lectin receptor that associates with signaling molecules such as DAP10/KAP10 to initiate the cascade of events leading to cytolysis [17,[66][67][68]. The NCRs include NKp30, NKp44 and NKp46, among others [65]. They associate with different tyrosine kinase activating motifs bearing signal transducing polypeptides to mediate activating signals [31,32]. The characterization of the ligands for these NCRs is incomplete, which is why chimeric proteins based on these receptors are used to probe for the presence of their ligands on target cells. Blocking the interaction of NCRs and NKG2D with these ligands also reduces cytolysis mediated by NK cells, highlighting the role of NKG2D engagement in NK cell killing [62].
In our previous studies comparing the functional profile induced in NK cells by K562 and .221 cell stimulation, we found that while both HLA-null cell lines were able to induce CD107a expression, the functional NK subsets that included CD107a expression contributed to a higher proportion of the total NK cell response when stimulated by K562 than by .221 cells [16]. The higher expression level of NKG2D and NCR ligands on K562 than on .221 cells reported here, and the lower levels of expression of other ligands for aNKR such as CD48, CD80 and CD86, would be consistent with an aNKR ligand profile that favors NK cell stimulation towards CD107a expression, which is a marker for degranulation, a step in the pathway toward cytolysis. NKG2D signaling has been shown to play a crucial role in NK cell cytotoxic granule polarization, degranulation, and cytotoxicity [69]. The ability of K562 to induce signaling through this pathway may explain why this cell line preferentially stimulates NK cell degranulation, rather than cytokine or chemokine secretion [16]. Unfortunately, no antibody currently exists that can discriminate between ULBP-2, -5, and -6 and we are incapable of determining which combinations of these ligands are expressed on K562. However, it is possible that K562 cells express all three of these ligands, in addition to ULBP-1 and ULBP-3. The consequence of this may be induction of more potent signaling through NKG2D by K562 than by .221 cells, which only express MIC-A and MIC-B with a modest MFI. Moreover, expression of NKp30 on NK cells has been correlated with both perforin expression and degranulation. Engagement of this aNKR by K562 may also contribute to their induction of degranulation [70]. CD112 and CD155 bind to DNAM-1 [18,28,29]. Thus, K562 has a ligand profile that stimulates NK cells not only through NKG2D, NKp30 and NKp44 but also through DNAM-1.
Our finding that .221 cells express the ULBP-1, ULBP-2/5/6 and ULBP-3 ligands for NKG2D at a low MFI that is not much above background levels is in line with a report by Pende et al. [62]. Our findings differ for MIC-A, which we found was expressed over background by .221 cells while Pende et al. found it not to be expressed over background. These discrepant results may be due to differences in the reagents used to detect the NKp30 ligand or to other technical issues relating to staining and analysis methods. These findings are consistent with previous reports that ULBP ligands are preferentially expressed on K562 and with the observation that K562 cells express the tumor ligand B7-H6, which is a ligand for NKp30 [21,62].
The higher expression levels of CD48, CD80, CD86 and ICAM-1 on .221 than K562 cells may explain why .221 cells stimulated a higher frequency of functional NK cells, particularly those secreting IFN-γ and CCL4, than did K562 [16]. When K562 expressed a higher MFI of aNKR ligands than .221 cells, as was the case for CD112, CD155 and HLA-F, the differences were more modest than were the between-cell differences in CD48, CD80, CD86 and ICAM-1 expression on .221 versus K562 cells. The elevated expression level of these ligands on .221 cells may provide these cells with a more potent activating signal. Engagement of the aNKR 2B4 by its ligand CD48 on .221 has been reported to induce low levels of IFN-γ production, and concurrent signaling through 2B4 and other aNKRs can drive high levels of cytokine and chemokine production [9,20]. Although the exact NKRs that recognize the co-stimulatory molecules CD80 and CD86 have not been identified, expression of these ligands on target cells can trigger NK cell-mediated cytolysis [50]. Together, the ligands that are expressed on .221 are capable of engaging receptors that are important for cytokine and chemokine production, which may explain why .221 cells stimulate larger numbers of IFN-γ and CCL4 secreting NK cells than do K562 cells [16].
Our findings confirm that HLA-E, the ligand to the aNKRs NKG2E and NKG2C and the iNKR, NKG2A, on NK cells, is expressed by K562 and .221 cells with a low MFI that was nevertheless above background staining levels [71]. On the other hand, CEM.NKr.CCR5 cells express cell surface MHC-I antigens and higher HLA-E levels than do K562 and .221. Stable cell-surface expression of HLA-E requires binding to one of a set of nonamer peptides derived from the leader sequences of MHC-Ia molecules or HLA-G, which are absent on the HLA-null .221 cell surface [72,73]. It is unlikely that HLA-F expression by K562 and .221 cell lines contributes to HLA-E expression as the signal sequence of HLA-F does not have a nonamer peptide able to bind HLA-E [55]. However, the HLA-E peptidome was recently shown to be less restricted than previously thought. HLA-E can also bind to an array of self-peptides in the absence of HLA class I signal peptides, permitting its stable expression and induction of NK cell cytotoxicity [74,75]. In addition, HLA-E is also capable of presenting EBV-derived peptides, such as BZLF1, which would be expected to be present in the EBV transformed .221 cell line [76,77]. HLA-E was recently shown to present a cytomegalovirus derived signal peptide important in driving the expansion of adaptive-like NKG2C+ NK cells [78]. HLA-E/BZLF1 complexes are poorly recognized by NK cells and it is likely that HLA-E molecules presenting non-canonical self, make greater contributions to .221-induced NK cell activation [76].
The non-classical MHC-I antigen, HLA-F, is a KIR3DS1 ligand, which is expressed by K562, .221 and CEM.NKr.CCR5 cells [56][57][58]. HLA-F has also been reported to interact with KIR3DL2 and KIR2DS4 [79]. Although the interaction of HLA-F with KIR3DL2 was confirmed by others, its interaction with KIR2DS4, which is structurally related to KIR3DL2 due to a gene conversion event has not been confirmed [56,80]. HLA-F is also expressed on HIV infected cells [56]. Other investigators did not observe HLA-F on K562 [56,57]. Here, we used both the mAb 3D11 and KIR3DS1-Fc to stain K562 cells for cell surface HLA-F. Both reagents generated concordant results for the presence of HLA-F on this cell line.
KIR3DS1 homozygotes were more frequent in a population of HIV exposed seronegative than in HIV susceptible individuals and KIR3DS1 homozygotes remained uninfected for longer time intervals despite HIV exposure than those with other KIR3DL1/S1 genotypes, suggesting that KIR3DS1 HLA-F interactions may provide protection from HIV infection [81,82]. The global distribution of KIR3DS1 varies from one population to another [83,84]. For example, it is rare in sub-Saharan African populations [83]. It is interesting to speculate on whether HLA-F/KIR3DS1 or /KIR3DL2 or possibly /KIR2DS4 combinations can influence HIV control mediated by NK cells and whether this could account for between-individual or -population differences in HIV susceptibility or the rate of HIV disease progression.
For the purpose of this study, the ligands analyzed were included on the basis of their ability to stimulate NK cell responses through the engagement of aNKRs.
However, it is important to consider that several of these ligands are capable of engaging both aNKRs and iNKRs. CD112 and CD155, which signal through the activating DNAM-1, can also bind to the iNKR, T cell immunoreceptor with immunoglobulin and ITIM motifs (TIGIT) [85,86]. While both DNAM-1 and TIGIT are widely expressed on NK cells, the affinity of CD155 for TIGIT is greater than for DNAM-1 and TIGIT expression can reduce DNAM-1/CD155 interactions in a dose-dependent manner [87][88][89]. TIGIT has also been shown to compete with DNAM-1 for the binding of CD112. Furthermore, when transfected into the NK cell line YTS, TIGIT greatly limits NK-mediated cytotoxicity by disrupting cytotoxic granule polarization [89,90]. Considering this, it is possible that CD112, which is exclusively expressed on K562, and CD155 which is expressed at higher levels on K562 than .221 cells contributes more to NK cell inhibition than activation and may be an additional reason why K562 activates a smaller fraction of NK cells, compared to .221 [16]. Another aNKR ligand, HLA-E, similarly contributes to both NK cell activation and inhibition. HLA-E binds to the CD94/NKG2 family of NK cell receptors, which includes the activating NKG2E and -C and the inhibitory NKG2A and -B NKRs [53,54]. Interactions between NKG2A, which is expressed on the majority of NK cells, and HLA-E have been shown to predominate over interactions with NKG2C and surface expression of HLA-E is sufficient to rescue those cells from lysis by NKG2A + NK cells [53,54,91]. Despite this, work assessing NK cell stimulation by autologous HIV-infected CD4 T cells, which express HLA-E, found that expression of NKG2A on NK cells was associated with improved activation [92]. It is plausible that, while interactions between HLA-E on .221 and NKG2A on NK cells can tune down NK cell activation, signaling through the other aNKR engaged by .221 ligands can compensate for this inhibitory input.
CEM.NKr.CCR5 cells expressed few aNKR ligands. The CD48 and ICAM-2 ligands were present on this cell line, as well as low levels of ligands for the NCRs. The interaction of ICAM-2 with its receptor may contribute to the formation of NK cell-CEM.NKr.CCR5 conjugates [65]. Less is known regarding the consequences of CD48 ligand expression. It is interesting to speculate that the presence of few other cell surface ligands for aNKR contributes to the resistance of this cell line to direct NK cell cytolysis in the absence of an antibody bridging target and effector cells [25]. Pende et al. tested the CEM.NKr.CCR5 parental cell line, CEM, for cell surface expression of ULBP-1, ULBP-2, ULBP-3 and MIC-A and found that CEM cells expressed ULBP-2 and ULBP-3 [62]. The absence of these aNKR ligands on CEM.NKr.CCR5 suggests that the process of selecting for CEM.NKr resistance to direct NK cell cytolysis led to loss of several ligands for aNKR [25].
Conclusion
The two HLA-null cell lines, K562 and .221, and the ADCC target cell line CEM.NKr.CCR5 differed in their expression of ligands for aNKRs. The data presented here provide a systematic assessment of the stimulatory potential of three cell types commonly used to study NK cell activation. The different aNKR ligand expression profiles of K562, .221 and CEM.NKr.CCR5 are associated with the induction of qualitative differences in the NK cell responses to these cell lines. CEM.NKr.CCR5 cells, which are resistant to direct NK cell killing, express few ligands for aNKRs. This work provides a basis for examining the specific contribution of each ligand-aNKR pair to different stages of NK cell activation.
Additional file
Additional file 1: Figure S1.
Availability of data and materials
All data is presented in the manuscript. All raw data are kept in experimental notebooks or as electronic files such as those acquired by flow cytometry. These are available on request.
"year": 2019,
"sha1": "272b9ddceef321a75e9ee57772fb19e127e46998",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12865-018-0272-x",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "272b9ddceef321a75e9ee57772fb19e127e46998",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
A review of current bioanalytical approaches in sample pretreatment techniques for the determination of antidepressants in biological specimens
Abstract: Antidepressants are a class of compounds widely used in clinical settings for the treatment of several diseases. In recent years there has been a considerable increase in their consumption, representing an important public health issue in several countries. Because they are substances with narrow therapeutic windows, and because they are capable of interacting with other classes of compounds, monitoring of these drugs is relevant to minimize the risk of drug interactions as well as side and toxic effects. In addition, understanding the extent of their use, detecting them through routine toxicology tests and developing new methods for their detection and monitoring are of extreme importance for public health and patient well-being, with implications in clinical and forensic situations. The main objective of this work is to perform a critical review of the biological samples used in the detection and quantification of antidepressants, with special focus on the techniques for sample preparation.
Brief introduction: Antidepressants - Classification and relevance from the toxicological point of view
Depression is a common chronic or recurrent disease that affects individuals regardless of social or economic status. According to the World Health Organization, this mental disorder was predicted to become, for all ages and both sexes, the second leading cause of global disease burden by the year 2020. Patients with these serious mental disorders usually die 10 to 20 years earlier than the rest of the population, owing to their poor physical health and to the difficulties most of them face in accessing comprehensive health services [1]. Antidepressants are used for the treatment of conditions such as depression, anxiety, and other mental health problems, and depressive disorders have been diagnosed in a growing number of patients attending health centres. In addition, health specialists are prescribing more and more antidepressant and antipsychotic medications, and the consumption of tranquilizers and drugs to control hyperactivity by children and adolescents is still too high, which has led to several expert alerts [2]. Several classes of antidepressants are available nowadays: monoamine oxidase inhibitors (MAOI) such as iproniazid, tricyclic antidepressants (TCA) such as imipramine, selective serotonin re-uptake inhibitors (SSRI) such as fluoxetine, serotonin-norepinephrine re-uptake inhibitors (SSNRI) such as venlafaxine, and noradrenergic and specific serotonergic antidepressants (NaSSA) such as mirtazapine. Bupropion is an "atypical" antidepressant and belongs to a single chemical class (aminoketone), acting mainly as a norepinephrine-dopamine re-uptake inhibitor (NDRI). Piperazines, such as vortioxetine, are a chemical class that has been marketed as "multi-modal" drugs with high binding affinity and complementary mechanisms of action towards several serotonin receptors and serotonin transporters [3][4][5][6].
These drugs can, on the one hand, reduce the symptoms of depression; on the other hand, several important drug interactions are likely to occur [2]. These drugs present large inter-individual differences, and their therapeutic windows are relatively small, which makes patient compliance monitoring extremely important. Although there are several antidepressant drug classes, there is still a population of patients who do not respond to these medications [3]. The multimodal mechanisms of action of new-generation antidepressants (vortioxetine, desvenlafaxine, vilazodone, and levomilnacipran) suggest that depression may not be caused solely by a simple serotonin deficiency but may rather be related to the distribution of 5-HT1A receptors in different areas and to their relationship with serotonin action, mediated or not by glutamate, norepinephrine, and histamine. Research on the use of these new antidepressants in human subjects is limited, and more studies are needed to reveal substantial differences concerning the mechanism of action and other pharmacological characteristics of these new drugs [7]. Another relevant aspect in the treatment with these drugs is the inter-individual differences in genes. In the case of antidepressants, the relevant genetic variation is related to phase I drug metabolism, in particular polymorphisms of CYP2C19 and CYP2D6, which assume particular importance for the effectiveness of each drug [8]. Future research is likely to focus on strategies for combining different antidepressants, expanding their indications or increasing their efficacy [9]. Future developments should consider new agents for the most refractory cases or even the use of new drugs such as esketamine [10] or psilocybin [11].
In addition, as a consequence of their excessive use, and also because of the concomitant use of other medicines, alcohol or drugs of abuse, this class of compounds is often involved in both clinical and forensic situations, namely drug abuse or suicide attempts, which can result in severe intoxications of a voluntary or accidental nature [12]. Therefore, developing analytical methods for their determination and quantification is of interest not only to clinical toxicology, but also to the field of forensic sciences [13]. Monitoring these compounds allows minimizing the risk of side effects, reducing the possibility of drug-drug interactions, adjusting the dose as needed and evaluating patient compliance as well [14,15].
The high consumption rate of these compounds can be considered a public health problem and therefore, it is important to compile the existing literature on the matter (including analytical methods for their determination) to assist health professionals and improve patients' quality of life.
Biological specimens
Efforts have been made to identify and quantify antidepressant drugs given the increased use of these compounds in recent years. The wide spectrum of substances, the interaction of these compounds with other drugs and the possibility of concomitant effects with other substances represent a constant challenge for rigorous determination by clinical and forensic toxicology laboratories. This requires the use of highly specific and sensitive analytical techniques to determine these compounds and their metabolites, and the choice of the biological sample to be analyzed is important as well.
The most commonly used samples are plasma [16,17], serum [18], and urine [19]. Hair [20] and oral fluid [21] are also commonly used, and whole blood [22,23] is sometimes preferred. Urine is often the preferred sample because of the ease of collection, but with the disadvantages that it can be adulterated and/or tampered with and does not always allow the detection and/or quantification of the compounds owing to their rapid biotransformation [19]. A good alternative is oral fluid because of the ease of collection and the difficulty of adulteration; this sample has been gaining more and more importance in the field of drug monitoring, not only for drugs of abuse but for medicinal drugs as well [24,25]. The collection procedure for this sample is painless and non-invasive, and the concentrations can possibly be correlated with those in plasma, which is advantageous particularly with regard to mentally unstable patients and children [24,26,27]. Another, less common and more complex biological sample is vitreous humor (VH). Filonzi dos Santos et al. [28] developed a methodology for the determination of the TCAs amitriptyline, nortriptyline, imipramine, and desipramine, which were extracted from 0.5 mL of VH samples by means of hollow fiber-liquid phase microextraction and detected by gas chromatography-mass spectrometry (GC-MS) with electron ionization. This method proved to be appropriate for the analysis of real post-mortem cases; based on the articles that measured these compounds in femoral whole blood (FWB) and VH in real cases, the VH/FWB ratio was about 0.1 [29].
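As a rough illustration of how such a ratio can be applied in post-mortem interpretation (the concentration below is purely hypothetical), a measured vitreous humor level can be back-calculated to an approximate femoral blood level using the reported average ratio:

$$C_{\mathrm{FWB}} \approx \frac{C_{\mathrm{VH}}}{0.1}, \qquad \text{e.g.}\ C_{\mathrm{VH}} = 15\ \mathrm{ng/mL} \Rightarrow C_{\mathrm{FWB}} \approx 150\ \mathrm{ng/mL}.$$

Such back-calculations are only indicative, since VH/blood ratios vary between compounds and between cases.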
Moreover, as the consumption of antidepressants has been increasing, it is important that methods are developed for the identification and determination of this class of compounds in a wide range of biological matrices.
Sample pretreatment
Since no review of the newest techniques for the preparation of biological samples used in the determination of antidepressants is available, and given that sample preparation is the most time-consuming step for laboratories, a comprehensive review focusing on the possible approaches, and especially on new trends in the treatment of specimens for the identification and determination of these compounds, is relevant. The reviewed articles were independently selected by the three authors in order to determine their relevance in the context of the current review, and only the articles selected by at least two of the authors were included in this paper.
As previously mentioned, antidepressants are normally detected in whole blood, serum, plasma, and urine samples, and each of these specimens requires a specific treatment according to the investigated compounds. This process can begin with a pretreatment step, which is important to remove matrix components that may interfere with the analytes, yielding better results and reduced noise; this is particularly relevant for more complex matrices. The most common pretreatment procedures applied to antidepressants are liquid-liquid extraction (LLE), solid-phase extraction (SPE), protein precipitation (PPT), and dilution. In general, sample treatment affects the precision, accuracy and robustness of analytical methods, and the preconcentration [30] of analytes increases sensitivity and allows lower limits of detection (LODs).
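To make the link between preconcentration and detection limits concrete, one common calibration-based convention (an ICH-style estimate, given here as an illustrative assumption rather than the procedure used in the reviewed methods) is:

$$\mathrm{LOD} \approx \frac{3.3\,\sigma}{S}, \qquad \mathrm{LOQ} \approx \frac{10\,\sigma}{S},$$

where $\sigma$ is the standard deviation of the blank response (or of the calibration intercept) and $S$ is the slope of the calibration curve. If a pretreatment step enriches the analyte by a factor $EF = C_{\mathrm{extract}}/C_{\mathrm{sample}}$, the limits expressed relative to the original specimen are lowered approximately by that same factor, i.e., $\mathrm{LOD}_{\mathrm{sample}} \approx \mathrm{LOD}_{\mathrm{instrumental}}/EF$.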
Liquid-liquid extraction
Classical LLE is one of the most widely used sample pretreatment techniques in the toxicology field. However, it is less commonly used for the extraction of antidepressants from biological samples, with only a few papers available.
The most recent work is that of Degreef et al. [31], who developed a method for the determination of 40 antidepressants and metabolites in 0.2 mL plasma samples using methyl tert-butyl ether as the extracting solvent. The quantification, based on liquid chromatography-triple quadrupole mass spectrometry (LC-MS/MS) with electrospray ionization (ESI), should be highlighted. Quantification limits (LOQs) between 0.5 and 25 ng/mL were obtained. The authors concluded that the method was simple, with a relatively short chromatographic run and a wide calibration range, and could be implemented in therapeutic drug monitoring and forensic or clinical research.
For LLE applied to the extraction of antidepressants, the most used organic solvent is ethyl acetate, which can be utilized either by itself or in a mixture. This solvent is volatile, poorly soluble in water and polar, and it has advantages such as low cost and low toxicity, allowing adequate extraction of several classes of compounds.
Despite its benefits, this technique features some limitations, such as the considerable volumes of sample and organic solvents required, low recovery and poor selectivity. In addition, it presents constraints when extracting compounds with distinct lipophilicities and produces high matrix effects when liquid chromatography-mass spectrometric methods are used [32]. Figure 1 represents the LLE technique.
Solid-phase extraction
For many years, SPE has been widely used in the field of toxicology for drug determination in several biological specimens. This technique allows the use of different types of cartridges depending on the cost, availability, and nature of the analytes to be determined. This procedure has been applied several times to sample pretreatment in the determination of antidepressants, mainly in water samples.
With regard to human biological samples, Kall et al. [33] developed a methodology for the determination of vortioxetine and its major human metabolite (Lu AA34443) in plasma samples using two different quantification methods: first, an isocratic strong cation exchange (SCX) high-performance liquid chromatography-tandem mass spectrometry (ESI) method using SPE (C8, 96-well plate) sample extracts and, secondly, a reversed-phase ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method (positive ionization mode) with gradient elution following protein precipitation with acetonitrile. For the first method, they obtained extraction recoveries between 102% and 104% and LOQs of 0.4 ng/mL for vortioxetine and 2.0 ng/mL for Lu AA34443; for the second method, with better results, they obtained LOQ values of 0.2 ng/mL for vortioxetine and 0.5 ng/mL for Lu AA34443. Rosado et al. [19] developed a method for the determination of a relevant number of antidepressants (fluoxetine, venlafaxine, amitriptyline, mianserin, trimipramine, nortriptyline, mirtazapine, sertraline, dothiepin, citalopram, and paroxetine) and four of their metabolites (desmethyltrimipramine, O-desmethylvenlafaxine, norfluoxetine, and desmethylmirtazapine) in urine and plasma samples by GC-MS analysis. SPE was used, and the extraction was performed with Strata™ X cartridges. LOQ values varied from 1 to 15 ng/mL, while recoveries ranged from 40% to 89% in urine and from 68% to 98% in plasma. This method proved suitable for application in the monitoring of those drugs. The most recent work is that of Shin et al. [25], who developed and validated a method to quantify 18 antidepressants (amitriptyline, bupropion, citalopram, clomipramine, cyclobenzaprine, desipramine, desvenlafaxine, doxepin, duloxetine, fluoxetine, imipramine, mirtazapine, nortriptyline, paroxetine, sertraline, trazodone, trimipramine, and venlafaxine) in oral fluid samples with extraction by SPE and analysis by LC-MS/MS (ESI). For sample collection, Quantisal devices were used, with which 1 mL of oral fluid was collected and 3 mL of buffer was added. For the SPE, with Cerex® Trace-B cartridges, 500 µL of the Quantisal sample was loaded. The authors obtained LOD and LOQ values of 10 ng/mL and reported recoveries between 91% and 129%. They considered this method perfectly applicable to routine laboratories, with advantages such as the rapid run time (5 min) and the low sample volume needed to determine the concentrations of antidepressants in this specimen.
For the extraction of antidepressants, there are several SPE sorbents available, but the most used are Oasis® HLB, Strata® X and mixed-mode reversed-phase/strong cation exchange cartridges. Despite being a clean technique, this extraction procedure presents several disadvantages, such as the extensive production of residues, the high cost per sample, and time-consuming and laborious sample preparation (including method development). Figure 2 represents the SPE technique.
Protein precipitation and specimen dilution approaches
For the pretreatment of complex matrices such as whole blood, plasma, or serum, it is common to apply a protein precipitation step at the beginning of the extraction process. This step is very useful, simple, and rapid, minimizing subsequent treatment steps. For the antidepressant class, the most commonly used solvent is acetonitrile, but methanol [34] and trichloroacetic acid [35] have also been used. For instance, Farajzadeh and Abbaspour [36] applied acetonitrile as the precipitant solvent to determine three antidepressants in plasma samples by gas chromatography-flame ionization detection (GC-FID), reporting recoveries from 79% to 98%. Nezhadali et al. [37] applied acetonitrile for plasma and serum sample pretreatment in the determination of fluoxetine by ultraviolet-visible spectrophotometry and reported recoveries around 100%. Another example is that of Hegstad et al. [38], who developed a method for the enantiomeric separation and quantification of R/S-citalopram using ultra-high performance supercritical fluid chromatography-tandem mass spectrometry (ESI) in 0.1 mL serum samples. For the preparation of these samples, they used protein precipitation with acidic acetonitrile and filtration through a phospholipid removal plate. They obtained LOD and LOQ values of 1.3 and 2.0 nM, respectively, and recoveries between 81% and 91%. Another form of sample pretreatment is the dilute-and-shoot approach, applied namely to urine and plasma specimens; dilution is most often performed with water, reducing interferences in the analysis. Nojavan et al. [39], Ríos-Gómez et al. [40], and Hamidi et al. [41] diluted urine or plasma samples with deionized water, while Mohebbi et al. [42] used an ammoniacal buffer.
New trends for sample preparation
In recent years, there has been increasing interest in using miniaturized or microextraction techniques in several areas for the analysis of numerous compounds, in order to avoid classical techniques such as SPE and LLE, which demand higher volumes of both organic solvents and biological samples. The use of this type of technique for sample preparation should therefore be highlighted, taking into account not only the aforementioned advantages, but also the possibility of reusing the extraction devices and the fact that these techniques are less expensive. The first article described is from 1997, by Lee et al. [43]; the authors used 0.5 mL of whole blood for the extraction of four TCAs by headspace solid-phase microextraction (HS-SPME) and analysis by GC-FID.
Ide and Nogueira [44], in 2018, developed a methodology with extraction by bar adsorptive microextraction with liquid desorption (BAμE-LD) using an SX phase, and analyte detection by high performance liquid chromatography with diode array detection and by LC-MS/MS (ESI). Four of these compounds (amitriptyline, bupropion, citalopram, and trazodone) were studied using deionized water samples. This new-generation device proved to be innovative and robust and, combined with the advantages of microextraction through the use of LD, particularly efficient in the analysis of antidepressants at trace amounts. With the implementation of the BAμE-LD technique, it becomes possible to select the best sorbent phase, reducing the associated cost and facilitating the preparation and handling of the device, while using only residual amounts of organic solvents, which makes it an environmentally friendly technique and a good alternative to other sorption-based microextraction approaches. The difficulty in keeping the bar under constant stirring in the vortex and the manipulation required during back-extraction are the most important disadvantages of the procedure [44].
Another example is the 2018 article by Moghadam et al. [45], who developed a methodology, also using deionized water, for the detection of desipramine, escitalopram, and imipramine with an extraction technique of air-agitated emulsification microextraction based on a low-density deep eutectic solvent (AA-EME-LD-DES) and analysis by high performance liquid chromatography with ultraviolet detection (HPLC-UV). After validating this method, they applied it to human plasma samples, achieving recoveries between 88.75% and 95.12% for desipramine, 90.27% and 93.46% for escitalopram, and 94.88% and 95.74% for imipramine. This new version of low-density solvent-based emulsification, first used in 2018, was introduced for the highly effective enrichment of three antidepressant drugs. The choline chloride-based solvent was easily prepared using a safe and simple procedure that does not need extra purification steps. Given this, the authors concluded that this version of the technique is simple, safer, fast, efficient, and low cost. In addition, it is viable for the analysis of compounds at concentrations between the therapeutic and potentially toxic ranges in the plasma matrix [45]. Furthermore, the LLE technique has gained interest in its miniaturized forms, and in recent years several papers have reported the use of a miniaturized version of the technique instead of the classical approach. Fernández et al. [46], in 2016, developed an ultrasound-assisted dispersive liquid-liquid microextraction (UA-DLLME) method for the simultaneous determination of six antidepressants (mirtazapine, venlafaxine, escitalopram, fluvoxamine, fluoxetine, and sertraline) by ultra-performance liquid chromatography with photodiode array detection using 0.5 mL of plasma samples. This extraction technique used acetonitrile as the disperser and chloroform as the extracting solvent. They obtained LOD values of 4 ng/mL for mirtazapine, venlafaxine, and sertraline and 5 ng/mL for escitalopram, fluvoxamine, and fluoxetine. They also obtained LOQ values of 12 ng/mL for mirtazapine, 13 ng/mL for venlafaxine and sertraline and 17 ng/mL for escitalopram, fluvoxamine, and fluoxetine, with recoveries in the range from 93% to 110%. UA-DLLME represents an advance in the field of eco-friendly analytical chemistry since it is sensitive, very fast, simple, low cost, and reliable, requiring considerably small volumes of sample and extraction solvent. On the other hand, this microextraction method is not adequate for unstable analytes, emulsions are easily formed, the extraction time is long and the equilibrium is incomplete, leading to poor repeatability [46,47]. Hamedi and Hadjmohammadi [30] developed, in 2016, a methodology based on alcohol-assisted dispersive liquid-liquid microextraction (AA-DLLME) for the preconcentration and determination of fluoxetine in human plasma and urine samples, followed by reversed-phase high performance liquid chromatography with ultraviolet detection. The conditions included 1-octanol as the extraction solvent and methanol as the disperser solvent. For plasma samples, the LOD and LOQ of fluoxetine were 3 and 10 ng/mL, respectively, with a recovery of 90.15%. In the case of urine samples, LOD and LOQ values were 4.2 and 10 ng/mL, respectively, with a recovery around 89%. This extraction technique proved to be better for the environment, having lower toxicity than dispersive liquid-liquid microextraction.
Moreover, it is also well known for its simpler extraction, the small solvent volume needed, and for being fast, easily operated, reliable and precise [30,48]. The prefix "AA" in this technique refers to the disperser agent (alcohol). Nevertheless, it shares the same advantages and disadvantages as the DLLME technique. The positive aspects are the short extraction period and the good cost-effectiveness, while the disadvantages are the restricted selection of extraction solvents, low extraction efficiency, difficulty of automation and the large consumption of disperser solvent [30,47].
As with LLE, since 2014 the SPE technique has also evolved towards application in miniaturized methods. For example, in 2015, Asgharinezhad et al. [49] developed a study using dispersive micro solid-phase extraction (D-µ-SPE) for the isolation and preconcentration of two antidepressants (citalopram and sertraline) onto the surface of an Fe3O4@polypyrrole nanocomposite (Fe3O4@PPy NPs) with NaClO4 sorbent in plasma and urine samples, using HPLC-UV analysis. They obtained LODs of 0.6 and 1.0 ng/mL for citalopram in plasma and urine samples, respectively, while for sertraline the LODs were 0.7 ng/mL in plasma and 0.6 ng/mL in urine. For both compounds and biological samples, they obtained LOQ values of 2.0 ng/mL. The recoveries were in the range from 93% to 99%. The Fe3O4@PPy NPs, with a core-shell structure featuring electrical and ferromagnetic characteristics, were synthesized by an oxidative polymerization method. The authors concluded that, by enhancing the stability of the NPs and their dispersibility in aqueous media, and because of new interactions, namely hydrogen bonding, hydrophobic and π-π interactions between sorbent and target analytes, the coating of the NPs with PPy can enhance the sorption of the target analytes. They also concluded that this method demonstrated good results and higher extraction efficiency than the Fe3O4 NPs they had previously developed. The authors claimed advantages such as the short extraction time, low sorbent and organic solvent consumption, high efficiency, low cost and ease of application when compared to SPE [49].
In 2014, Banitaba et al. [50] applied a fiber coating based on electrochemically reduced graphene oxide for the cold-fiber headspace solid-phase microextraction (HS-CF-SPME) of antidepressants (amitriptyline, trimipramine, and clomipramine) in diluted plasma samples, with GC-FID analysis. They obtained LOQs of 1.0 ng/mL for amitriptyline, 1.47 ng/mL for clomipramine and 1.77 ng/mL for trimipramine, with recoveries of 96%, 73%, and 80%, respectively. SPME presents advantages such as simplicity, speed, low cost of analysis, automation, selectivity and sensitivity, combined with the absence of organic solvents when gas chromatography is used. The reuse of fibers is also possible, which is advantageous when compared to SPE. On the other hand, SPME usually presents low recovery values. When this extraction method is performed using the headspace approach, it presents good selectivity and a longer fiber lifetime since the matrix is not in direct contact with the coating, providing cleaner extracts; however, the efficiency can be lower when compared to the direct immersion (DI) SPME mode. CF-SPME was introduced with the aim of enhancing the headspace concentration as well as the distribution coefficient. In this technique, the sample matrix is heated, enhancing the mass-transfer rate of the analytes, while the fiber coating is cooled, resulting in an improved distribution coefficient. Further results showed a considerable advance in extraction efficiency with cooling, even attaining exhaustive extraction in some cases [51,52].
Over the years, there have been continuous developments in this area, and a compilation of the studies carried out from 2017 to the present year was made for this review (Table 1). Only papers that relied on the validation and determination of antidepressants in biological samples were selected. De Boeck et al. [53] developed, in 2018, a method capable of identifying a large number of antidepressants using an innovative extraction technique based on ionic liquid (IL) dispersive liquid-liquid microextraction (IL-DLLME), in which whole blood samples (1 mL) were extracted and analyzed by LC-MS/MS (ESI). They obtained LOD values between 0.78 and 35.15 ng/mL and recoveries from 53% to 133%. This technique takes advantage of the characteristics of ILs, such as low vapour pressure at room temperature and lower toxicity when compared to conventional organic solvents. Nevertheless, the number of ILs and the number of possible variations of DLLME are high (e.g., involving the nature of the dispersive solvent, the absence of a dispersive solvent, the use or not of hydrophilic ILs, the use of surfactants, the way the droplet is removed, the necessity or not of a cooling step, and the stirring mode), making method development and optimization very complicated [54].
Also for blood samples, in 2018, Ask et al. [35] developed a new extraction approach based on dried blood spots (DBS) with a previous clean-up step by parallel artificial liquid membrane extraction (PALME) for the determination of amitriptyline. They used sample amounts as low as 5 µL of blood (up to 20 µL) and detection by ultra-high performance liquid chromatography-tandem mass spectrometry (ESI), obtaining recoveries between 74% and 78% and an LOQ of 2.9 ng/mL. Using the same DBS extraction technique, followed by SPE, Moretti et al. [55] developed a methodology for the determination of 20 antidepressants in 85 µL post-mortem blood samples with chromatographic analysis by LC-MS/MS (ESI); the stability of the samples was evaluated over a period of three months. This new method allowed LOD values between 0.1 and 3.2 ng/mL to be obtained for all compounds and permitted the quantification of 9 of them. From the data provided by the authors, recoveries between 32.1% and 120.0% were obtained, and they concluded that DBS might represent a good complementary sample storage device in forensic investigations. Furthermore, the DBS technique presents other advantages: since it is a dried blood sample, its transport and storage are particularly easy, it is stable at room temperature, and there is a reduced infection risk for the professional who collects or handles this type of sample. Along with this, it requires a minimal volume of biological sample. On the other hand, given its characteristics, if the sample is contaminated by the external environment, does not dry sufficiently, or has a reduced volume, the final concentration may be considerably affected [56]. Using PALME, DBS were processed allowing desorption, extraction, and high-efficiency clean-up to occur simultaneously in 96-well plates [35].
In 2020, Behpour et al. [57] developed a methodology for the determination of desipramine and citalopram in serum and breast milk samples combining a gel electromembrane extraction (GEL-EME) method with switchable hydrophilicity solvent-based homogeneous liquid-liquid microextraction (SHS-HLLME). With analysis performed by GC-FID, they obtained LOD values of 0.7 and 0.3 ng/mL for desipramine and citalopram, respectively, and recoveries between 75.4% and 83.5%. GEL-EME is more advantageous than EME essentially because the membrane is prepared using an environmentally friendly process that does not use toxic organic solvents. The combination of these two methods gives better results owing to the low volume of organic solvent used, and the GEL-EME step makes complex matrices cleaner. The authors concluded the work by emphasizing the advantages of the GEL-EME/SHS-HLLME system, since the injection of water into the GC is avoided by the use of an organic solvent as the extraction solvent in the SHS-HLLME.
Microfluidic systems were also used to determine antidepressants; in fact, Hedeshi et al. [58] have recently published their work concerning the use of modified paper extractive phases in a microfluidic device for the determination of some compounds in urine, including a number of antidepressants (amitriptyline, trimipramine, and clomipramine). This approach presented excellent relative recoveries, ranging from 95% to 103%. Detailed information concerning the use of microfluidic systems, paper-based substrates, and gel electromembranes, as well as other microextraction techniques, for the determination of antidepressants is given in Table 1.
The most commonly used equipment for the analysis of antidepressants is HPLC-UV, applied both to urine and to plasma samples. (In Table 1, when the LOQ value was not reported, the lowest point of the calibration curve was considered.) Fresco-Cala et al. (2018) [59] developed a new micro solid-phase extraction technique (microSPE) with the incorporation of carbon nanotubes for the determination of four antidepressants in urine samples, reporting LOQ values between 14 and 30 ng/mL and recoveries between 72% and 108%. The authors concluded that, owing to the easier retention resulting from additional π interactions, the carbon nanotubes in the monolith improve the sensitivity towards the antidepressants, making this method simple and economical. Cai et al. [60] developed, in 2017, a polymer monolith microextraction technique (PMME) (with polyoxometalate) for the determination of three compounds in urine samples, achieving lower LOQ values, between 2.2 and 4.7 ng/mL, and recoveries ranging from 83% to 105%. The authors highlight the economy, solvent savings, ease of operation and convenience, as well as the ease of preparation, of this microextraction technique. In addition, the authors of [61] used a dispersive solid-phase extraction technique (DSPE) for the determination of five antidepressants in urine samples and classify this technique as a clean-up method based on SPE; however, the sorbent is added directly to the extract without any conditioning or pretreatment, in contrast to SPE. Hamidi et al. (2017) [41] also developed a new dispersive solid-phase extraction technique (ultrasound-assisted dispersive magnetic solid-phase extraction, UADM-SPE), obtaining recoveries between 69% and 84%, and of around 90%, for some of the same analyzed compounds. The advantages of UADM-SPE range from its cost-effectiveness, ease of operation and short extraction period to the low consumption of toxic organic solvents and the short analysis time.
More recently, analytical techniques such as mass spectrometry and tandem mass spectrometry coupled to gas and liquid chromatography have been increasingly implemented in the determination of antidepressants in biological samples, owing to their specificity and sensitivity and to unambiguous molecular weight-based separation. In addition, powerful high-resolution techniques associated with mass spectrometry can be used, which can further improve the advantages mentioned above by improving analyte resolution. An example of this is the work developed by Majda et al. in 2020 [62], who developed a methodology for the determination of 8 antidepressants (amitriptyline, desipramine, imipramine, nortriptyline, venlafaxine, citalopram, fluoxetine, and paroxetine) in 200 µL samples of post-mortem blood and bone marrow with extraction by direct immersion solid-phase microextraction (DI-SPME) and chromatographic analysis by liquid chromatography time-of-flight mass spectrometry (LC-TOF-MS). The authors achieved LOD values between 2.98 and 9.98 ng/mL and LOQ values between 8.95 and 29.95 ng/mL for these complex biological matrices. DI extraction achieves higher extraction efficiency for the analytes under study, even those with low volatility, since the fiber is in direct contact with the sample. This, however, results in a reduced fiber lifetime due to increased wear; thus, the probability of contamination rises, along with the carry-over effect between extractions. For these reasons, DI should be avoided in the analysis of complex matrices and rather be used for cleaner samples [51,63]. Figure 3 summarizes microextraction approaches for the sample preparation of antidepressants.
Detection of antidepressants
The increase in the prescription and consumption of antidepressants has led to the need to develop analytical methods for the detection and quantification of this class of medicines in a wide range of matrices. Among the methods published in scientific journals, the most described are gas chromatography (GC) coupled to mass spectrometry (GC-MS) [19,42,66,68,75,77,80,83] or to a flame ionization detector (GC-FID) [34,36,39,57], high performance liquid chromatography (HPLC) coupled to an ultraviolet detector (HPLC-UV) [41,[59][60][61]70,72], and liquid chromatography (LC) coupled to tandem mass spectrometry (LC-MS/MS) [25,31,53,55] or ultra-high performance liquid chromatography coupled to tandem mass spectrometry (UHPLC-MS/MS) [33,35]. Mass spectrometric detection provides better sensitivity and specificity, allowing the separation of co-eluting compounds and the use of deuterated analogues as internal standards. In the case of GC-MS, particularly for the detection of metabolites, a derivatization step is often deemed necessary prior to chromatography, which usually makes the procedures more time consuming and laborious [19]. Taking into account the chemical structure of these compounds (namely secondary amines), the main derivatizing agents used are N-methyl-N-(trimethylsilyl)trifluoroacetamide (MSTFA), N,O-bis(trimethylsilyl)trifluoroacetamide (BSTFA), and trimethylchlorosilane (TMCS). Trifluoroacetyl and heptafluorobutyryl agents can also be used; however, these agents are relatively unstable and damage capillary GC columns. Another problem usually associated with the GC analysis of antidepressants is their poor fragmentation pattern, since the main ions usually have a low m/z (an example of this is fragment m/z 58, which is common to many of these drugs). This poor fragmentation means that the other ions have to be carefully chosen for qualification (positivity criteria). Rosado et al. [19] evaluated, in 2017, the contributions of different ions in a multi-analyte method that allowed the determination of 15 antidepressants and metabolites by GC-MS; to address this problem, different mixtures of antidepressant drugs were used.
Most of these drawbacks were overcome with the implementation of liquid chromatographic-mass spectrometric procedures, which allow the detection of metabolites at very low concentrations. In addition, no derivatization is needed with this type of instrumentation, decreasing the overall analysis time [31,35]. In most applications, the compounds are ionized in the positive electrospray ionization (ESI+) mode of the mass spectrometer, despite this mode being more prone to ion suppression phenomena than the far less used atmospheric pressure chemical ionization (APCI). The influence of matrix effects must be carefully evaluated during method validation, since they are capable of impairing sensitivity, precision, and accuracy. Compounds usually associated with ion suppression include carbohydrates, salts, lipids, highly polar compounds, and even metabolites of the analytes being tested.
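One widely used way to quantify matrix effects during validation (sketched here as the classical post-extraction addition scheme; the reviewed studies may have used other protocols) compares three sets of samples:

$$\mathrm{ME}(\%) = \frac{B}{A}\times 100, \qquad \mathrm{RE}(\%) = \frac{C}{B}\times 100, \qquad \mathrm{PE}(\%) = \frac{C}{A}\times 100,$$

where A is the analyte peak area in a neat standard solution, B the area in blank matrix spiked after extraction, and C the area in matrix spiked before extraction. ME is the matrix effect (values below 100% indicate ion suppression, values above 100% indicate enhancement), RE the recovery of the extraction step, and PE the overall process efficiency.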
The mobile phases were always a binary or ternary combination of some of the following reagents: methanol, acetonitrile, acetate or phosphate buffer, and water.
Meanwhile, ~1% of various additives (formic acid, trifluoroacetic acid, triethylamine, phosphoric acid, ammonium acetate, ammonium formate, etc.) was often added to the solvent systems to improve peak shape. However, as also occurs in GC-MS, different classes of antidepressants have very similar masses, and as such comparable transitions may be expected. Therefore, with an open window of alternatives for both targeted and non-targeted analysis, it is advantageous to use highly sensitive, high-resolution techniques, such as LC-TOF-MS, as published by Majda et al. [62] in 2020, UHPLC-QTOF, as published by He et al. [65], or analytical techniques using HRMS (Orbitrap analysers). Indeed, these instruments are considered a better choice as they present high sensitivity and selectivity and also provide accurate mass measurements. Owing to their high efficiency in generating and recording a relevant quantity of information, these instrumental techniques offer a new perspective in chemical analysis. Furthermore, they have the flexibility to expand the panel of analytes, decrease the volume of sample used, detect trace concentrations, and also allow a better understanding of metabolism patterns [86,87], thanks to the possibility of accurate mass measurements.
Conclusion and future perspectives
Antidepressant drugs are widely used and increasingly prescribed by health professionals as a common practice for the treatment of several pathologies. The overuse of these medications increases the importance of developing methods to monitor their concentrations in patients, and it is essential that laboratories are able to respond to requests for the determination of these compounds in biological samples. Several extraction procedures have been applied to antidepressants, such as LLE and SPE, but also their miniaturized versions. The trend towards the development of new identification and quantification methods follows the idea of using smaller amounts of biological samples and lower volumes of organic solvents, with reusable materials and less waste, resulting in fast, simple, and efficient techniques. This must be combined with robust analyses, for which high-resolution techniques have been gaining more and more interest, allowing maximum specificity and sensitivity and making the identification of the analytes possible even when they are present at low concentrations. It is also of great interest to automate the extraction and/or analysis procedures. The main objective will be the improvement of patients' lives, whether by better managing each individual's medical condition or by improving the treatment of other patients through the compilation of numerous data, and to support public health authorities.
Research funding: This work was partially supported by CICS-UBI that is financed by National Funds from Fundação para a Ciência e a Tecnologia (FCT) and Community Funds (UIDB/00709/2020). S. Soares acknowledges the FCT in the form of fellowships (SFRH/BD/148753/2019).
Conflicts of interest:
Authors state no conflict of interest.
"year": 2021,
"sha1": "d3e899118afc4bef58b7ee3c37cee49b46f3f004",
"oa_license": "CCBY",
"oa_url": "https://www.degruyter.com/document/doi/10.1515/revac-2021-0124/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f1d761762cfaf3ab89fa44ee2da32dba3be681da",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
Dishevelled nuclear shuttling
Structure-function analysis of the Dishevelled (Dsh) protein in frog embryos has defined sequences that regulate Dsh nuclear localization, which proves critical for Wnt signaling.
A classical way to investigate the functions of a protein is to start by defining where it is distributed. Membrane-spanning proteins often function as receptors involved in recognition and cell adhesion, whereas nuclear proteins frequently play a role in regulating gene expression and transcription. But it is becoming increasingly clear that protein subcellular localization can be extremely dynamic, allowing key proteins to play different roles in different compartments. Now, in Journal of Biology [1], Sergei Sokol and colleagues show that the Dishevelled (Dsh) protein of the Wnt signaling pathway can shuttle in and out of the nucleus (see 'The bottom line' box for a summary of the work and 'Background' for further explanations and definitions). These observations challenge the conventional thinking about Dsh function and suggest that Dsh might do very different things depending on where it is in the cell.
Canonical and non-canonical Wnt pathways
During growth, development and disease, extracellular signals are communicated, or transduced, into the cell and in such a way as to elicit a particular cellular response. Many key signal transduction pathways have been dissected using genetic and biochemical approaches; such studies have defined the molecules that ensure signals initiated at the cell surface are efficiently transmitted to the cell nucleus, where they often result in the induction of a specific gene-expression program. Many signal transduction pathways are composed of modules that are remarkably conserved across species, such that lessons from different experimental model organisms have contributed to the understanding of molecular hierarchies that control signal communication in many cellular contexts.
Studies of the Wnt pathway provide a wonderful example of how researchers from different fields have contributed to a detailed understanding of a key signal transduction pathway [2]. The Wnt pathway is critical for development and homeostasis of animals from Hydra to human [3]; Wnt signaling regulates cell proliferation, cell polarity and cell-fate determination. The Wnt signaling machinery is tightly regulated, and disruption of components of the signaling pathway have been implicated in diseases including cancer [2,4].
The first step in the Wnt signal occurs when extracellular Wnt ligand binds Frizzled receptors on the cell surface, leading to the activation of several distinct transduction pathways (see Figure 1). The canonical Wnt pathway involves stabilization of the intracellular protein β-catenin. The degradation of β-catenin is regulated by interaction with a number of proteins including Axin, glycogen synthase kinase 3 (GSK3) and the adenomatous polyposis coli protein (APC). The degradation machinery is inhibited by Dsh, leading to the accumulation of β-catenin, which in turn translocates to the nucleus and initiates a gene expression program by interacting with transcription factors such as T-cell-specific transcription factor (TCF). Frizzled receptors can also initiate an independent 'noncanonical' Wnt pathway that diverges to regulate complex developmental events involved in planar cell polarity and convergent extension movements during embryo development, via small GTPases and the JNK kinase. The intracellular protein Dishevelled is common to both canonical and non-canonical signaling pathways, raising the question of how this mysterious protein acts at the signal crossroads.
Dishevelled distribution
The Dishevelled protein was first discovered in flies, and several homologs have been found in other organisms including mammals [3]. Analysis of Dsh sequence alignments revealed the presence of three conserved domains called DIX, PDZ and DEP [5,6], which are implicated in protein-protein interactions and targeting to subcellular sites. The modular design of the Dsh protein suggested that different domains might function to route Wnt-Frizzled signals in different directions.
Sokol's group at Harvard Medical School decided to test this hypothesis by making mutant forms of Dsh that lack different domains and fusing them to green fluorescent protein (GFP) to track their subcellular localization (see the 'Behind the scenes' box for more of the rationale for the work). When they expressed the full-length Dsh-GFP protein in Xenopus ectoderm cells they observed spotty staining in the cytoplasm, but a Dsh protein lacking the DEP domain appeared in the nucleus. Initially perplexed, Sokol and colleagues then scanned the Dsh polypeptide for a leucine-rich nuclear export sequence (NES). They found one and mutated it to demonstrate that normally Dsh is efficiently exported from the nucleus by means of this sequence. The NES mutant accumulated in the nucleus, but it could still function normally, inducing a secondary dorsal axis when injected into frog embryos (a classical developmental assay for Dsh biological activity). The team confirmed the nuclear shuttling of Dsh using drugs that block active nuclear export; these drugs led to nuclear accumulation of endogenous Dsh proteins in mammalian cells.
"Once we had seen nuclear export we wanted to find out if there was an import signal," recalls Sokol. His team hunted for a conventional nuclear localization signal (NLS). "This was probably the most difficult part of the study, because the Dsh NLS doesn't match the known consensus." The team began a mutagenesis program, hacking away at the protein until they narrowed the NLS down to a short stretch of residues between the PDZ and DEP domains. "The Dsh NLS is Figure 1).
• Wnt signaling is implicated in many biological processes including cell proliferation, cell polarity and cell-fate specification. The 'canonical' Wnt/ -catenin signaling pathway links Wnt signaling to stabilization of the -catenin protein; -catenin in turn translocates to the nucleus and interacts with T cell-specific transcription factor (TCF) to drive the expression of target genes. Wnt signaling can also lead to activation of at least one non-canonical pathway involved in regulating planar cell polarity.
• Dsh is composed of three conserved domains: the amino-terminal DIX domain, which is found in Dsh and Axin; the central PDZ domain (found in Postsynaptic density-95, Discs-large and Zonula occludens-1 proteins); and the DEP domain (found in Dsh, Egl-10 and pleckstrin).
• The dynamics of protein localization inside the cell can be influenced by the activity of nuclear localization signals (NLS) and nuclear export signals (NES) that regulate protein shuttling into and out of the nucleus, respectively.
atypical; it doesn't look like anything else," comments Sokol. "But it's very conserved between Dsh proteins across species, so it may be a specialized way for Dsh to get into the nucleus." Removing the NLS blocked nuclear import. NLS-mutant proteins also failed to induce secondary axes in frog embryos, or to stabilize -catenin and activate downstream target genes. When the Dsh NLS was replaced with an unrelated NLS from a viral protein, Dsh activity was restored, as was canonical Wnt signaling. In contrast, the Dsh NLS mutation did not affect non-canonical Wnt signaling. Finally, Sokol's group showed that endogenous Dsh relocates to the nucleus in mammalian cells upon Wnt stimulation.
Dishevelled's nuclear shuffle
Sokol notes that some early reports mentioned nuclear localization of Dsh [7,8], but these did not address the functional importance of Dsh nuclear accumulation in Wnt signaling. Randall Moon's group at the University of Washington in Seattle had noticed Dsh in the nucleus in association with the Dapper protein [8]. "What is interesting is the Sokol finding that blocking nuclear export leads to accumulation in the nucleus, suggesting that Dsh nuclear accumulation is regulated," says Moon. "This is a careful study which provides compelling evidence for Dsh nuclear import," adds Howard Hughes Investigator Norbert Perrimon, who (independent from Sokol) works at Harvard Medical School.
"These results bring a new level of complexity to the regulation of the Wnt signaling pathway," agrees Patricia Salinas from University College London, UK. Her group previously showed that Dsh binds to microtubules and locally regulates signaling events in neuronal axons [9]. She notes that a large number of Wnt signaling components have recently been found in the nucleus. "These results fit very well with our view that Dsh regulates distinct signaling events in specific cellular compartments. Our task now is to elucidate how the localization of Dishevelled is regulated. For example, what determines its re-localization to the nucleus or to microtubules?" Perrimon notes that "the issue of Dsh nuclear localization needs to be reexamined in the other systems to find out how general this is." Sokol's results have met with some resistance from the Wnt community. Moon notes that often it takes years to change peoples' ideas about Wnt signaling. He cites the example of Frizzled receptors signaling via heterotrimeric G proteins, which he proposed years ago and which has only recently been clearly demonstrated. Sokol points out that -catenin itself was originally described in cell adhesion. "People didn't believe that it goes to the nucleus until much later." Sokol is sure that many of the Wnt signaling proteins have nuclear functions.
"What remains completely opaque is what Dsh is doing in the nucleus and with whom," says Moon. Dsh has been reported to interact with over a dozen proteins, including several kinases [6]. Sokol speculates that Dsh may have nuclear roles beyond the stabilization of -catenin. For example, he notes that the recently identified Frodo protein binds to Dsh and Tcf proteins independent of -catenin and may serve as a bridge to regulate gene expression [10]. "I would say we are just at the tip of the iceberg," says Sokol. "There may be huge Dsh nuclear complexes that control chromatin structure or assembly." All in the field appear to agree that cellular localization offers possibilities for distinct functions in different compartments. "It remains a question whether Dsh mobilization to the What prompted you to study Dishevelled localization? Dishevelled (Dsh) is known to be involved in different signaling pathways and one explanation might be that different signaling events take place in different cell compartments. When we started to break the Dsh protein into domains and to analyze the properties of the individual domains, we found that the Dsh mutant missing the DEP domain ended up in the nucleus and at the same time it was fully active in terms of canonical signaling to Wnt target genes. So, we figured that this might be critical to understanding Dsh regulation.
How long did it take to do the experiments and what were the steps that ensured success?
We obtained the key result, the accumulation of Dsh protein in the nucleus upon treatment with a nuclear export inhibitor, quite early on. This assured us that the project was viable, and most of the other experiments were done in a few months. We had to establish a reliable cell fractionation technique to separate nuclei from cytoplasm and to obtain antibodies that specifically recognize endogenous Dsh. Another critical step was to show that Dsh can translocate to the nucleus in response to Wnt ligands. The use of Xenopus embryos allowed us to assess the functional activity of different Dsh mutants in reproducible assays that take only a few weeks.
What was your initial reaction to the results and how were they received by others?
We were quite surprised initially and set out to show that nuclear Dsh is functional. Our results don't fit the general consensus view in which β-catenin is the major factor that controls activation of Wnt target genes in the nucleus. We suggest that Dsh in the nucleus may function in a new branch of the Wnt pathway that is independent of β-catenin stabilization. Many Wnt researchers were also surprised, as this raised questions about what Dsh might be doing in the nucleus.
What are the next steps?
We would still like to know how Dsh functions and what it is doing in the nucleus. We are investigating the function of proteins that bind to Dsh at different locations, so that we can separate different branches of Dsh control pathways. We would like to explore the possibility that Dsh regulates chromatin structure and associates with chromatin or nuclear complexes.
"year": 2005,
"sha1": "789a521fa0a9ad75fa57d813b6731fe5af9f226d",
"oa_license": "CCBY",
"oa_url": "https://jbiol.biomedcentral.com/track/pdf/10.1186/jbiol21",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "789a521fa0a9ad75fa57d813b6731fe5af9f226d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Technical and agronomic efficiency of nitrogen use on the yield and quality of oat grains
Abstract: The efficiency of nitrogen use by oats, in association with climatic conditions, is fundamental to the development of more sustainable management practices that preserve yield and quality. The objectives of this study were to define the agronomic efficiency of nitrogen by the ratio of the dose provided and product obtained, estimate the maximum technical efficiency of the nutrient on grain yield, and, for the optimum dose, simulate the expression of the straw and industry yields, protein and total fiber under different conditions of the agricultural year in a soybean/oat system. The study was conducted from 2011 to 2016, in Augusto Pestana, RS, Brazil, in a randomized block design with four repetitions in a 4 x 2 factorial comprising nitrogen doses (0, 30, 60 and 120 kg ha-1) and oat cultivars (Barbarasul and Brisasul) in a soybean/oat system. Nitrogen increased grain, straw, and industry yields and total grain protein, with agronomic efficiencies of 7.8, 19.7 and 3.3 kg ha-1 and 0.10 g kg-1, respectively, and reduced total fiber by 0.05 g kg-1 per kg of N supplied. The dose of maximum technical efficiency in the expression of grain yield depends on the weather conditions during cultivation. In general, the maximum efficiency of grain productivity was obtained with 86 kg ha-1 of N, with linear equations showing increased straw and industry yields and total protein, and a reduction of the fiber content of oat grains, with nitrogen use.
Introduction
Inadequate nitrogen management is one of the factors that most strongly affects grain yield and quality, making the production process unsustainable. For maximum expression of the yield and quality of oat grains, the adjustment of management technologies is recommended to improve the efficiency of nitrogen absorption and use.
Meteorological conditions directly influence nitrogen losses, either by nitrate leaching or ammonia volatilization, thereby affecting nitrogen absorption by roots and reducing nitrogen use efficiency (Mamann et al., 2020). In addition, when applied in small doses, it limits yield, but in high doses, although it maximizes yield, it also promotes the lodging of plants thereby making harvesting difficult. This is accompanied with losses in grain yield and quality, resulting in economic and environmental damage (Marolli et al., 2018).
Advances are required in the development of strategies that promote better use of nitrogen in oats, adding efficiency with less environmental impact. From this perspective, the agronomic efficiency (the ratio of product obtained to input supplied) and the technical efficiency used to estimate the optimal nutrient dose can assist decision making and promote more sustainable nitrogen management in oats.
The objectives of this study were to define the agronomic efficiency of nitrogen by the ratio of dose provided and product obtained, estimate the maximum technical efficiency of the nutrient on grain yield, and for the optimum dose, simulate the expression of the straw and industry yields, protein and total fiber in different conditions of the agricultural year in a soybean/oat system.
Material and Methods
The experiments were conducted in a field, from 2011 to 2016, in the county of Augusto Pestana, RS, Brazil, geographically located at 28° 26' 30'' S latitude and 54° 00' 58'' W longitude. Soil obtained from the experimental area was classified as Oxisol and the climate of the region, according to the Köppen classification, is of the Cfa type, with hot summer without dry season (Kuinchtner & Buriol, 2001;Santos et al., 2006). The area in which the experiment was installed had a consolidated no till farming system. During summer, the area was occupied by soybean, characterized as the most used crop precedent in Southern Brazil. Ten days before each sowing of oat, a soil analysis was carried out, identifying, on average, the following chemical characteristics of the site (Tedesco et al., 1995): pH = 6.3; P = 34.1 mg dm -3 ; K = 231 mg dm -3 ; OM = 3.2%; Al = 0 cmol c dm -3 ; Ca = 6.6 cmol c dm -3 and Mg = 2.9 cmol c dm -3 .
Sowing was carried out between the first and the second week of June with mechanized seeder-fertilizer. Each plot comprised five lines of 5 m in length and spacing between lines of 0.20 m, forming an experimental unit of 5 m 2 . The population density used was 400 viable seeds per square meter. The seeds of the selected genotypes were submitted to germination and vigor tests in the laboratory, in order to correct the density of plants to constitute the desired population. During sowing, 45 and 30 kg ha -1 of P 2 O 5 and K 2 O were applied, respectively, based on the levels of P and K in the soil with the expectation of grain yield of 3 t ha -1 , and with 10 kg ha -1 of N at sowing (except in the control treatment), with the remainder of each dose applied as urea (N = 45%) topdressing, to complete the proposed doses of N-fertilizer applied in the phenological stage of the expanded fourth leaf, with the source urea. During the study, tebuconazole fungicide was applied at a rate of 0.75 L ha -1 and weeds were controlled with metsulfuron-methyl herbicide at a dose of 2.4 g ha -1 and additional hoeing when necessary.
The experimental design was a randomized block with four replicates, following a 4 x 2 factorial scheme referring to doses of N-fertilizer (0, 30, 60 and 120 kg ha -1 ) and oat cultivars (Barbarasul and Brisasul), respectively. In each year of cultivation in the soybean/oat system, two experiments were conducted, one to quantify the total biomass yield (straw + grains) and the other to estimate grain yield. The biomass yield (BY, kg ha -1 ) was obtained by cutting the three central lines of each plot close to the soil at the stage of physiological maturity. The biomass samples were dried in a forced air oven at a temperature of 65 °C until reaching constant weight, and the values were converted to kg ha -1 . Grain yield (GY, kg ha -1 ) was obtained by cutting the three central lines of each plot at the stage of harvest maturity, with grain moisture around 22%. Thereafter, the plants were threshed with a stationary harvester and the grains were sent to the laboratory to correct the moisture to 13%. Thus, the straw yield (SY, kg ha -1 ) was obtained by subtracting the grain yield from the biomass yield. The number of grains larger than 2 mm (NG > 2 mm, n) was obtained by counting 100 grains from the sample in each plot, which were placed in a 2 mm mesh sieve, and those above this dimension were counted. The husking index (HI, g g -1 ) was determined by the ratio between the caryopsis mass of 50 grains larger than 2 mm and their grain weight. Industrial yield (INY, kg ha -1 ) was obtained by the product of grain yield with the number of grains larger than 2 mm and the husking index (INY = GY x NG > 2 mm x HI). The determination of total protein (TP, g kg -1 ) and total fiber (TF, g kg -1 ) was conducted using near infrared spectrophotometry (NIRS) on a sample of unhulled grains. The device used was of the Perten brand, model Diode Array DA7200. The air temperature (°C) and rainfall (mm) information for the analysis of the meteorological conditions of the agricultural years was obtained from the Automatic Station installed 500 m from the experiment. It is worth mentioning that the meteorological conditions, together with grain yield, were used to classify the agricultural years as favorable, intermediate and unfavorable for the cultivation of oats. Data were subjected to analysis of variance to detect the main and interaction effects (not shown) and to linear regression analysis, fitting equations to estimate the agronomic efficiency of oats as kilograms of product obtained per kilogram of nitrogen supplied. The grain yield data were also subjected to quadratic regression analysis, in the formulation of equations to estimate the maximum technical efficiency of nitrogen use by oats. Optimal nutrient doses were used to simulate straw and industry yields, protein and total fiber of the grains. Statistical analyses were performed with the aid of the GENES software.
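To make the derived quantities concrete, the following minimal sketch (not the authors' GENES analysis; all numbers are illustrative placeholders rather than data from the experiment, and the count of grains larger than 2 mm per 100 grains is taken as a proportion) shows how the industrial yield is composed from its three factors and how the agronomic efficiency is read off as the slope of a linear fit of a trait against the N dose.

```python
# Illustrative sketch of the derived variables described above; numbers are hypothetical.
import numpy as np

def industrial_yield(gy, prop_grains_gt_2mm, husking_index):
    """INY (kg ha-1) = grain yield x share of grains > 2 mm x husking index."""
    return gy * prop_grains_gt_2mm * husking_index

doses = np.array([0.0, 30.0, 60.0, 120.0])                # N doses of the design, kg ha-1
grain_yield = np.array([2500.0, 2800.0, 3050.0, 3450.0])  # hypothetical means, kg ha-1

# Agronomic efficiency = slope of the linear fit (kg of product per kg of N supplied);
# the intercept gives the response expected without topdressed N.
slope, intercept = np.polyfit(doses, grain_yield, 1)
print(f"agronomic efficiency ~ {slope:.1f} kg grain per kg N (intercept {intercept:.0f} kg ha-1)")

print(f"INY example: {industrial_yield(3000.0, 0.70, 0.72):.0f} kg ha-1")
```

With these placeholder values the slope happens to come out near the mean efficiency of 7.8 kg kg -1 reported below, but the point of the sketch is only to show where the efficiency figures in Table 2 come from.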
Results and Discussion
Based on the temperature, rainfall and mean yield of oat grains presented in Table 1, the years 2011 and 2013 were favorable (FY) to oat cultivation. The year 2011 was marked by well-distributed rainfall during the growing cycle, with volumes similar to the historical average of the last 25 years. Rainfall occurred in the days preceding the application of nitrogen, providing adequate soil moisture for urea solubilization, as shown in Figure 1A. The maximum, minimum and average temperatures were stable throughout the growing cycle. In 2013, rainfall was distributed regularly over the months of the growing season, with a volume below the historical average. At the time of fertilization, the soil moisture from rainfall in the previous days may have favored greater nutrient use by the plant. In addition, temperatures were milder, reducing possible nitrogen losses through volatilization (Figure 1C).
The years 2012 and 2014 showed grain yield much lower than the desired expectation of 3 t ha -1 , justifying their classification as unfavorable agricultural years (UY) for oat growing. In 2012 ( Figure 1B), there was water restriction at the beginning of development, however rainfall increased some days prior to fertilization and temperatures reached close to zero degrees during nutrient management (Table 1). At the end of the growing season, rainfall was frequent with a high accumulated value, promoting days of lower radiation quality and delaying grain harvesting. In 2014 ( Figure 1D), the first days of the growing season were marked by above average rainfall and high air temperatures. These conditions affected the efficiency of photosynthesis and the consequent shoot formation and root growth.
The growing conditions and the grain yield obtained classified the years 2015 and 2016 as intermediate (IY) for oat cultivation. In 2015 (Table 1), the accumulated rainfall was close to the historical average. The rains before fertilization guaranteed soil moisture for nitrogen management; however, a long period of water restriction after fertilization possibly affected the efficiency of nutrient use in the formation of yield components. High temperatures during anthesis may impair the development of the reproductive system (Figure 1E). In 2016, milder and more stable temperatures were recorded throughout the growing season, but with reduced rainfall in the grain filling period and significant rainfall in the final phase of the season, when grain yield was defined, with the possibility of losses in grain quality (Figure 1F). White oats are a highly adaptable species; however, the occurrence of leaf diseases and meteorological restrictions limits the expression of the maximum crop yield. Among the meteorological factors, temperature and rainfall are those that most strongly condition the yield and quality of oat grains (Klink et al., 2014; Trautmann et al., 2020). In oats, a favorable environment is characterized by rainfall of small volumes adequately distributed during the growing season and by mild temperatures from the vegetative phase until grain filling (Souza et al., 2013).
The analysis of variance results showed significant effects in the variables analyzed both for the main effects years and nitrogen doses and for the interaction, configuring the need to analyze the efficiency of nitrogen use in each year of cultivation (data not shown). Table 2 presents analysis of the agronomic efficiency of the kilogram ratio of nitrogen supplied per kilogram of product obtained. In these conditions, grain yield showed efficiency range between 7.0 and 9.8 kg ha -1 of grains per kg of N among the agricultural year conditions, with average trend of 7.8 kg ha -1 . In general, grain yield showed a reduced variation in efficiency due to nitrogen use; however the linear coefficient was significant in indicating the starting point for nitrogen use. This resulted in the classification of 2011 and 2013 as favorable years for oat production.
In the expression of straw yield (Table 2), agronomic efficiency showed greater amplitude, ranging from 8.6 to 28.8 kg ha -1 . The most significant values of this efficiency for nitrogen to straw yield were recorded in the intermediate years. Regardless of the agricultural condition, each kilogram of nitrogen supplied produced a return of 19.7 kg ha -1 of straw yield. In general, in the expression of straw yield, the most expressive intercepts were observed in the intermediate and favorable years to oat cultivation.
In the analysis of industry yield (Table 2), the agronomic efficiency of nitrogen use did not show any relationship with the agricultural year, showing for example, that the favorable year of 2013 and the unfavorable year of 2012, showed similar agronomic efficiency of 5 kg ha -1 of industrial yield per kg of N supplied. Although the year 2016 recorded high agronomic efficiency on grain and straw yield, it showed the lowest efficiency on industry yield. A fact that highlights the importance of the individualized analysis of the variables that make up the estimate of industrial grain yield, such as the husking index and the number of grains greater than 2 mm.
In the expression of total protein (Table 2), the most expressive values of agronomic efficiency were obtained in favorable (2011), unfavorable (2012) and intermediate (2016) years of growing, indicating no relationship with the year of cultivation. In the general equation, regardless of the condition of the agricultural year, the observed agronomic efficiency was 0.10 g of protein per kilogram of grain for each kilogram of nitrogen supplied per hectare, with an initial concentration of 100.9 g kg -1 . However, the increase in nitrogen dose resulted in a decrease in total fiber, indicating an average reduction of around 0.05 g of fiber per kilogram of grain for each kilogram of the nutrient added per hectare in the oat crop, regardless of the growing year condition. Thus, higher fiber contents were found under more restrictive nitrogen use, especially in years unfavorable to cereal cultivation.

[Table 2. Equations of agronomic efficiency and average values of yield and industrial and nutritional quality of oat grains in different years of cultivation. Notes: FY - favorable year; UY - unfavorable year; IY - intermediate year; R² - coefficient of determination; P(bix) - probability of the slope parameter of the line; * - significant at p ≤ 0.05 by t test; ns - not significant at p ≤ 0.05 by t test; averages followed by the same lowercase letters in the column and uppercase letters in the row constitute a statistically homogeneous group by the Scott-Knott test at p ≤ 0.05.]
A previous evaluation of the agronomic efficiency of urea revealed genetic differences in the expression of grain yield, with the greatest efficiency obtained by the cultivar URS Taura, at 4.68 kg of grains produced per kg of N supplied. Although more efficient, this cultivar produced the lowest grain yield, starting from a lower linear (intercept) coefficient when compared to cultivars with lower angular (slope) coefficients. Agronomic efficiency is given by the slope of the linear equation, indicating how much the variable of interest responds per unit of nitrogen (Moll et al., 1982). However, the linear (intercept) coefficient of the equation must also be considered, as it determines the starting point of the nutrient response.
The differences between results of agronomic efficiency have been attributed to the predecessor crop, the cultivar used and the meteorological conditions (Prando et al., 2013). Some authors (Martinez et al., 2010; Hawerroth et al., 2013) have reported that the protein and fiber content of oat grains are also influenced by nitrogen availability and by cultivation conditions with high temperature and reduced air humidity during grain maturation (Monteiro, 2009). Results contradicting those of this research were obtained by Zakirullah et al. (2017), indicating gradual increments in the percentage of crude fiber in oats with an increase in the nitrogen level. Figure 2 shows the estimates of the maximum technical efficiency of nitrogen use for grain yield. From this perspective, the favorable year (2011) indicated a maximum technical efficiency similar to that of the unfavorable year (2012), with 82 and 86 kg ha -1 of N, respectively. However, the 2011 simulation shows a grain yield of 4200 kg ha -1 , compared to the 2012 simulation of 2841 kg ha -1 . Although the nitrogen doses are similar, the difference in the product obtained was very expressive, indicating the importance of environmental relationships. In addition to the need for greater nitrogen use in 2014 for maximum yield, the result obtained was much lower compared to 2013, which required the lowest dose of the nutrient (Figure 2). It is noteworthy that the intermediate years of 2015 and 2016 showed similar optimal doses of nitrogen and similar values of maximum yield. The results suggest that the use of optimal doses for the expression of yield should take into account the environmental conditions at the time of nutrient application and be based on meteorological forecasts during the cultivation cycle, in search of greater economic return and reduced environmental impact, given the ease of losses by volatilization or leaching under restrictive growing conditions.
The maximum technical efficiency of nitrogen use is given by the response of higher yields with less supply of the input . Using 75 kg ha -1 of N, Kolchinski & Schuch (2003) determined the maximum technical efficiency of nitrogen use for the yield of oat grains. Silva et al. (2016) observed that the maximum technical efficiency of nitrogen use is strongly dependent on environmental conditions. These authors reported a technical efficiency of nitrogen use with 86 kg ha -1 , generating an expected grain yield of 4181 kg ha -1 in a favorable year. However, under restrictive conditions, they found maximum technical efficiency in the use of nitrogen with 119 kg ha -1 , with yield of 2930 kg ha -1 of grains. The results are similar to those obtained, corroborating that the wide range of grain yield is associated with the high variability of growing conditions, with the year factor being the component of greatest influence on the variations in expectation (Storck et al., 2014).
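As a brief numerical illustration of the quadratic procedure described above (the coefficients below are hypothetical, merely chosen so that the optimum falls near the overall 86 kg ha -1 and roughly 4200 kg ha -1 simulated for a favorable year; they are not the fitted values of the study), the dose of maximum technical efficiency is simply the vertex of the fitted parabola.

```python
# Vertex of a quadratic yield response y = a*N**2 + b*N + c: N* = -b / (2*a).
a, b, c = -0.25, 43.0, 2350.0        # hypothetical coefficients for one growing season

n_star = -b / (2.0 * a)              # dose of maximum technical efficiency, kg ha-1
y_star = a * n_star**2 + b * n_star + c
print(f"N* = {n_star:.0f} kg ha-1, expected grain yield = {y_star:.0f} kg ha-1")
```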
In Table 3, the nitrogen doses indicated by the maximum technical efficiency for the expression of grain yield by agricultural year condition were used to estimate the expression of the straw and industry yields, protein and total fiber, from the equations that established the behavioral trend. Therefore, the biological interpretation of nitrogen use in these variables is sought considering the optimum dose of grain yield regardless of agricultural year. In this perspective, with 86 kg ha -1 of N there is an expectation of 7539 kg ha -1 of straw, 1412 kg ha -1 of industry yield and 109.5 and 124.9 g kg -1 of protein and total fiber, respectively.
Studies have reported positive results from the use of nitrogen for straw yield; however, high doses of fertilization result in the lodging of plants, causing significant yield losses (Zakirullah et al., 2017;Marolli et al., 2018). Hawerroth et al. (2013), Sunilkumar & Tareke (2016) and Lima et al. (2017) confirmed that the application of nitrogen as topdressing increases the protein concentration in oat grains. However, the nutrient absorption capacity varies between cultivars, soil fertility and meteorological and environmental factors such as humidity, temperature, photoperiod and radiation. Moreira et al. (2001) highlighted that the fiber content of oats does not change with increased nitrogen supply. Silveira et al. (2016) stated that the fiber concentration of oats is predominantly dependent on the cultivar and on the weather conditions during cultivation.
Conclusions
1. Nitrogen increases grain, straw, and industry yields and total grain protein, with agronomic efficiency of 7.8, 19.7 and 3.3 kg ha -1 and 0.10 g kg -1 , respectively, with reduction of the total fiber by 0.05 g kg -1 per kg of the nutrient supplied.
2. The dose of maximum technical efficiency in expressing grain yield is dependent on the weather conditions during the growing season. In general, the maximum efficiency of grain productivity is obtained with 86 kg ha -1 of N, with linear equations that show increase in the productivity of straw and industry yield and the total protein, and reduction of the fiber content of oat grains by use of nitrogen. | 2021-05-21T16:56:04.451Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "7682fe62af746128d0749fcffdde592238e4bd08",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/rbeaa/a/8MfFMCQ6SnChvPT3FwR5vVk/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "3ea9ba82ed9072c6bfab9b8b9405871c53128ec7",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
264466916 | pes2o/s2orc | v3-fos-license | The final plug: novel use of vascular plug for management of bronchoesophageal fistula
Video
Video 1. Successful management of a postsurgical bronchoesophageal fistula with a combination of a self-expanding vascular plug, glue containing cyanoacrylate, and a fully covered metal stent.
INTRODUCTION
Bronchoesophageal fistula (BEF) management consists of surgical and endoscopic interventions, including the use of self-expandable metal stents. 1,2 However, in cases with altered anatomy, such as post-esophagectomy with esophagogastric anastomosis, stents may not suffice because of a mismatch in diameter between the stent and the esophagogastric conduit. 3 Thus, novel solutions are required to address BEFs refractory to initial intervention with stents. We describe the successful management of a postsurgical BEF with a combination of a self-expanding vascular plug, glue containing cyanoacrylate, and a fully covered metal stent (FCMS).
CASE PRESENTATION
We present the case of a 77-year-old man with a history of esophageal cancer status post Ivor Lewis esophagectomy who developed a BEF. Initial management included esophageal stent placement. He maintained his nutritional requirements through percutaneous endoscopic jejunostomy tube feeds. However, the patient experienced a persistent cough after oral intake during the weeks preceding admission. A barium esophagram confirmed leakage from the esophagus into the right bronchus (Fig. 1). The endoscopy team was called for evaluation and management of the persistent BEF.
During the endoscopy, a 4-mm fistula was found at the esophageal anastomosis, evidenced by end-tidal carbon dioxide spikes and air bubbles during positive pressure ventilation (Fig. 2). Because of the stenosed esophagogastric anastomosis with the fistula immediately proximal to it, an over-the-scope clip was not feasible. Moreover, there was concern that this could further compromise the lumen. The decision was made to fluoroscopically deploy a 7-mm self-expanding 3-part wheelbarrow-type vascular plug (AVP II; Abbott Cardiovascular, Plymouth, Minn, USA) across the fistula tract (Video 1, available online at www.videogie.org). The plug was inserted such that the thin wheel was deployed into the bronchial side, the middle broader wheel into the fistula tract, and the third thin wheel facing the esophageal side (Fig. 3). The plug components within the fistula and on the esophageal side were obturated with contrast-laced cyanoacrylate to create a water-resistant seal, prevent leakage, and avoid bronchial obstruction. Cyanoacrylate was chosen because it solidifies fast and has proinflammatory properties when reacting with nickel (a component of the plug). 4 The existing stent was removed and replaced with a 10- x 100-mm FCMS (Viabil; Gore Medical, Flagstaff, Ariz, USA) to cover the fistula, under fluoroscopic guidance (Fig. 4), with no contrast leakage observed (Fig. 5).
At the 6-week follow-up the patient was doing well, tolerating oral intake with no cough. The patient has been symptom free for 11 months. The plug, glue, and stent are biocompatible; therefore, subsequent intervention will be undertaken only if the patient becomes symptomatic.
CONCLUSION
We exploited the use of a vascular plug, glue, and an FCMS to create a water-tight seal for closure of a BEF. The combination of cyanoacrylate and nickel allowed for a proinflammatory reaction and plugging of the fistulous tract. This approach is suitable for patients in whom stents failed because of challenging postsurgical anatomy and a mismatch of stent diameter with the gastric conduit.
DISCLOSURE
The authors disclosed no financial relationships relevant to this publication.
Figure 1. Barium esophagram with arrows illustrating contrast leak into the bronchus, confirming presence of a bronchoesophageal fistula.
Figure 3. Side-by-side images demonstrating endoscopic view of vascular plug deployment across the fistula tract (left) with fluoroscopic guidance (right).
Figure 2. Endoscopic image with arrow illustrating bubbling from bronchus into esophagus during positive pressure ventilation, confirming the location of the bronchoesophageal fistula.
Figure 5. Final location of the vascular plug within the bronchoesophageal fistula tract along with the new fully covered metal stent, demonstrating no contrast leak.
Figure 4. Final location of the vascular plug within the bronchoesophageal fistula tract along with the new fully covered metal stent. | 2023-10-26T15:43:32.156Z | 2023-10-01T00:00:00.000 | {
"year": 2023,
"sha1": "28b8f52f1ad9a138b03b1089b2bb8be56346628c",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.vgie.2023.10.002",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1fdcf55c3eabc3b038e6a2189301cda46ededfcd",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235400774 | pes2o/s2orc | v3-fos-license | The influence of consumer’s “new demand” on commercial building design
With the continuous development of society, the business environment has changed, consumers' consumption concepts have been updated, and consumers now pursue efficient and diversified commercial space experiences. Traditional commercial buildings struggle to meet these new needs, while new materials and new technologies emerge in an endless stream to serve architectural development. This places higher requirements on the design of commercial buildings. At present, new types of space have emerged in commercial buildings, and more architectural technologies are being applied in them. Commercial space is constantly changing and being reconstructed with the development of the times and the needs of consumers.
Introduction
The Internet has played an extremely important role in the commercial wave of the past 20 years. Its rapid development and broad coverage give the Internet great advantages in business competition, and businesses that use e-commerce effectively have made huge profits from it. In recent years, however, the dividend of e-commerce has begun to shrink, and it has become difficult to find a breakthrough in business models based on Internet platforms alone. Many e-commerce enterprises have begun to expand into offline physical stores, seeking a new breakthrough point by integrating online and offline resources. These moves bring obvious customer-traffic advantages compared with the traditional retail industry.
On October 13, 2016, Ma Yun, chairman of the board of directors of Alibaba Group, first proposed the concept of "new retail" at the Hangzhou Yunqi Conference. According to his new retail concept, the business model that relies only on online or offline channels to provide commodity retail services will eventually be replaced by a new retail business model that integrates online, offline and logistics channels. Therefore, online retail enterprises need to gradually establish their own offline shopping scenes, and offline enterprises must combine online technology and channels to become bigger and stronger. The new retail service system is not a simple combination of online services, offline services and warehousing and logistics services, but a deep integration and reconstruction of these elements. On November 11, 2016, the state issued the Opinions on Promoting the Innovation and Transformation of Physical Retail. In promoting the integration of online and offline channels, the Opinions pointed out that the advantages of offline logistics and service experience should be integrated with online information flow, capital flow and business flow to gradually establish an overall layout of the intelligent network. These trends indicate that the retail industry is bound to change in its future development.
The change of commercial form makes the consumer's consumption behavior have new content, which includes the change of consumption mode, consumption content, consumption consciousness and so on, thus forming a new concept of consumption. With the increase of people's income level, the focus of consumption has changed from "buy or not" to "good or not", paying more attention to fashion and experience. Commercial building is the main space for people's consumption activities. Most of the existing commercial building spaces in China have the problems of single combination mode and weak sense of hierarchy. It is difficult to increase consumers' cognitive sense of place. It is not suitable for the existing economic development situation and can not meet the needs of new commercial formats. To improve the shopping environment and thus change the shopping space, it should be updated closely around the characteristics of contemporary consumers' consumption behavior, so that consumers can get a good shopping experience in commercial buildings. This will provide a certain perspective for future commercial space design under the influence of network.
2.Case study
The impact of the current business environment on commercial buildings, and whether existing commercial buildings have made corresponding changes under the current situation, should be investigated; the Mixc in Qingdao is taken here as a case.
The Mixc in Qingdao is close to the central government district, the business district and the Qingdao Municipal Government in Shinan District. The surrounding buildings mainly serve office, business and hotel functions. The Mixc includes a variety of commercial formats, with a total construction area of about 1.2 million square meters. The shopping center has six floors above ground and three below ground, with the second and third basement levels mainly used for parking. It opened in 2015 and remains one of the shopping centers favored by the people of Qingdao; in 2019, it was rated as a five-star shopping center (according to China's shopping center rating standards). It has remained popular with consumers because, since the beginning of the project, the Mixc has kept adjusting its commercial space to the different needs of its tenants and to the trends of the times. For this reason, architects successfully renovated the two-story Apple store in the Mixc, hoping to create more space that draws young consumers into the mall.
The architectural space design of the Mixc focuses on the experience of consumers, and adequate activity space is reserved outside for various forms of commercial activities. The interior space is designed as an "N-power park" interactive shopping area, which integrates commerce, sports and catering into one area and is favored by young consumers. At present, there are many spaces in the Mixc related to current consumption concepts and the development of new retail: photo check-in spots, online purchase with offline pickup, live-stream selling and self-service checkout have all become popular shopping spaces for consumers. At peak times, the photo check-in spots often require queuing, and most consumers then share their photos on social platforms. Online purchase with offline pickup is mostly used by consumers who want fast and efficient delivery: after selecting the goods online, they can pick them up directly and avoid the limitations of express delivery. Live streaming, as an emerging sales method, synchronizes the commercial activities in the physical building to online platforms and encourages consumers to shop online. The self-service checkout eliminates tedious queues in front of the cashier; payment is made by scanning a code online, and after payment is completed, staff quickly check the payment voucher as customers walk out. Some fast fashion brands in the mall play recorded announcements telling customers that online orders can be picked up in the store or delivered to their homes, encouraging consumers to order online.
3.Results & Discussion
Under the new retail trend, consumers' shopping demands have risen further, upgrading from a simple purchase demand to an all-round demand that includes both physical and psychological needs, mainly reflected in vision, shopping efficiency, social communication and other aspects.
Sensory Requirements
The five senses of sight, sound, smell, taste and touch are the main basis on which people construct their initial impression of things, and consumers prefer a shopping environment that makes a strong impression. Commercial buildings are therefore adept at using the five senses to create a commercial image. The same shampoo-like scent is used throughout the interior space of the Mixc and can be smelled on entering. However, smell is often hard to describe, and visual impressions are more likely to shape consumers' first impression. d'strict collaborated with the American jeweler Tiffany & Co. to create a stunning 4D three-dimensional architectural projection for the new flagship store in Beijing, China. The interactive experience feels refreshing and is very effective at attracting people. Such interactive devices are not applied only to the outside of buildings: the Alibaba unmanned hotel has installed a large interactive landscape screen in its lobby, giving consumers a strong visual impact. Figure 4. Alibaba unmanned hotel interactive screen
Efficient shopping demand
With the rapid development of urban society, the pace of urban life is also faster than before. Consumers need to shop more efficiently and avoid meaningless waiting. In the early years, KFC introduced the drive-through restaurant, which saved people the time of finding a parking space and then coming in to buy something. Now for normal commuting consumers, shopping time in the mall is mostly concentrated on weekends or holidays, easy to form the phenomenon of long queues in peak hours. Although the queuing space has been reserved in the supermarket design, it is still difficult to meet the use demand in peak hours. At the same time, online payment has swept through the Chinese market, with large shopping malls and small street vendors all using online payment. In China, the self-service settlement system was pioneered by Hema Xiansheng. Now, many shopping malls begin to use self-service code scanning settlement system to improve the use efficiency of settlement space. The change of the settlement method transforms the settlement space from a strip space to an open space, showing a semi-enclosed state on the whole. When designing this part of the space, the influence of new retail on the shopping mall should be fully considered, and the settlement space should be reconstructed.
Needs for social interaction
At present, one of the reasons consumers still choose physical stores is that the timeliness of logistics affects how soon they receive the goods. What is more, consumers are increasingly inclined to "go shopping" rather than simply "buy". An all-round experience throughout the process of purchasing goods is the key to attracting consumers to physical commercial buildings. The "Zhe You Shan" ("This Has Mountain") shopping mall in Changchun creates a terrain of scattered heights in its interior; when consumers go shopping, they feel as if they are walking in a mountain, and reaching the high point of the mall gives a strong sense of experience. Shopping has become a way of socializing, and interpersonal communication is strengthened in the process. Shopping malls, milk tea shops and coffee shops have all become carriers of consumers' social activities. Starbucks coffee shops, for example, are not only about selling coffee but also about providing a place for people to communicate with each other. The featured spaces of a mall attract consumers to share their photos on social platforms. In the long run, a virtuous cycle follows, and a distinctive space in the mall becomes an influencer check-in destination. After checking in, the consumers attracted to the mall naturally move into other spaces, and the customer flow expands accordingly. From this point of view, the social spaces of commercial buildings are worth the time and effort to design.
4.Conclusions
Under the new retail trend, consumer demand has placed higher requirements on commercial building space, and the relationship between people, goods and place has been gradually reconstructed. In this situation, the function of architectural space is also affected. In the design of commercial buildings, architects should fully consider the new needs of consumers, reflect the characteristics of the times and the humanistic care of commercial buildings, and create commercial buildings with "human interest".
"year": 2021,
"sha1": "e04ec4e15b86d429c2c4fe45ddf780f6eb8da87c",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/787/1/012178",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "e04ec4e15b86d429c2c4fe45ddf780f6eb8da87c",
"s2fieldsofstudy": [
"Business",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
119742232 | pes2o/s2orc | v3-fos-license | Gap Theorem for Separated Sequences without Pain
We give a simple and straightforward proof of the Gap Theorem for separated sequences by A. Poltoratski and M. Mitkovski using the Beurling--Malliavin formula for the radius of completeness.
Introduction and main result
For a real discrete set Λ consider the system of exponentials E_Λ = {e^{iλt}}_{λ∈Λ}. The famous Beurling-Malliavin theorem gives an effective formula for the completeness radius R(Λ) of E_Λ in terms of the so-called upper Beurling-Malliavin density D_BM(Λ) (to be defined below). More precisely, put R(Λ) = sup{a : E_Λ is complete in L²(−a, a)}.
Then the Beurling-Malliavin theorem [1] (for a detailed exposition see [2,3]) states that R(Λ) is determined by D_BM(Λ) (Theorem 1.1). The elegance and finality of this result has impressed mathematicians for over 50 years. Nevertheless, the dual concept of the lower Beurling-Malliavin density D_BM(Λ) found practical use only a few years ago.
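For orientation, the following routine model computation (added here for illustration; it is not part of the original text) shows what the completeness radius measures in the simplest case Λ = Z.

```latex
% Model case (a routine check added for orientation; not from the paper): R(\mathbb{Z})=\pi.
For $\Lambda=\mathbb{Z}$ the system $E_{\mathbb{Z}}=\{e^{int}\}_{n\in\mathbb{Z}}$ is an
orthogonal basis of $L^{2}(-\pi,\pi)$, hence complete there. For any $a>\pi$ the function
$F(z)=\sin(\pi z)\,\sin\bigl((a-\pi)z\bigr)/z$ is entire of exponential type $a$, belongs to
$L^{2}(\mathbb{R})$ on the real axis and vanishes on $\mathbb{Z}$; by the Paley--Wiener
theorem it is the Fourier transform of a nonzero $f\in L^{2}(-a,a)$ orthogonal to every
$e^{int}$. Consequently
\[
R(\mathbb{Z})=\sup\{a:\ E_{\mathbb{Z}}\ \text{is complete in}\ L^{2}(-a,a)\}=\pi .
\]
```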
Let Λ be a separated set, i.e., d(Λ) := inf{|λ − λ′| : λ, λ′ ∈ Λ, λ ≠ λ′} > 0.
The aim of our paper is to show that Theorem 1.2 can be directly derived from Theorem 1.1. So, instead of two difficult results in harmonic analysis essentially we have only one.
It should be noted that for non-separated sequences Λ the formula for gap characteristic was recently found by A. Poltoratski [7]. This formula is much more involved and includes the concept of energy. It is not clear (at least to the authors) whether this formula also can be directly derived from the classical Beurling-Malliavin theory. Here and below we put αZ = {αn : n ∈ Z}. A similar result is true for the completeness radius and the gap characteristic: Given a separated set Λ, we consider its perturbations: The third result shows that some positive perturbations do not change the gap characteristic: Observe, that condition δ < d(Λ)/4 implies thatΛ itself is a separated set. We postpone the proofs of Propositions 2.1-2.3. Now, let us prove Theorem 1.2.
Proof. We consider two cases.
(i) Assume additionally that Λ is a subset of αZ, for some α > 0. In view of Theorem 1.1 and Propositions 2.1 and 2.2, we obtain the required relation, and Theorem 1.2 is proved for the subsequences of αZ.
(ii) Fix any separated set Λ and positive δ < d(Λ)/4. Clearly, there is a set Λ̃ of the form (2.1) satisfying (2.2) and such that Λ̃ ⊂ αZ, for some sufficiently small α > 0. By Proposition 2.3, G(Λ̃) = G(Λ). Using the definition of the lower Beurling-Malliavin density (see below), one may easily check that D_BM(Λ̃) = D_BM(Λ). So, by (i), we conclude that the statement of Theorem 1.2 holds for Λ. Thus, we have used Theorem 1.1 for separated sets to deduce Theorem 1.2. We notice that in fact these two results are equivalent. The converse implication holds as well. To check this, one may use a similar proof where instead of Proposition 2.3 one needs Proposition 2.5. Assume Λ is a separated set. There exists δ > 0 such that for all numbers |ε_λ| < δ, λ ∈ Λ, the set Λ̃ in (2.1) satisfies
Proof of Proposition 2.1
There exist at least five definitions of the upper Beurling-Malliavin density (see paper [4], which is devoted to the equivalence of different definitions). We start with the most well-known.

Definition 1. We will say that the sequence Λ ⊂ R is strongly a-regular if its counting function n_Λ satisfies ∫_R |n_Λ(x) − ax| (1 + x²)^{-1} dx < ∞.

Definition 2. The upper Beurling-Malliavin density D_BM(Λ) is the infimum of numbers a such that the function n_{Λ∪Λ′} is strongly a-regular for some Λ′ ⊂ R.
This definition goes back to J.-P. Kahane. The original definition given by Beurling and Malliavin used the notion of a short system of intervals, see [4, p. 397-398]. We need one more equivalent definition, which was found by R. Redheffer, see [8,9].
Now we give a "dual" definition of the lower Beurling-Malliavin density.
Definition 4. The lower Beurling-Malliavin density D_BM(Λ) is the supremum of numbers a such that the function n_{Λ′} is strongly a-regular for some Λ′ ⊂ Λ.
From the equivalence of Definitions 2 and 3 it follows that if D_BM(Λ) = a, then for every b > a there exists Λ_0 ⊂ b⁻¹Z such that n_Λ − n_{Λ_0} ∈ L¹((1 + x²)⁻¹ dx). Hence, for every b > a the sequence Λ′ in Definition 2 can be taken as a subset of the arithmetic progression b⁻¹Z.
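As a concrete illustration of Definitions 1, 2 and 4 (again a routine check added here, not taken from the paper), one can compute both densities of an arithmetic progression, the model case used in the proof below.

```latex
% Routine check (added for illustration; not from the paper): the Beurling--Malliavin
% densities of an arithmetic progression.
Let $\Lambda=\alpha\mathbb{Z}$ with $\alpha>0$. Since $|n_\Lambda(x)-x/\alpha|\le 1$ for all
real $x$, the integral $\int_{\mathbb{R}}|n_\Lambda(x)-x/\alpha|\,(1+x^2)^{-1}\,dx$ is finite,
so $\Lambda$ is strongly $(1/\alpha)$-regular. Taking $\Lambda'=\varnothing$ in Definition 2
and $\Lambda'=\Lambda$ in Definition 4 shows that the upper density is at most $1/\alpha$ and
the lower density is at least $1/\alpha$. Conversely, for $x>0$ one has
$n_{\Lambda\cup\Lambda'}(x)\ge n_\Lambda(x)\ge x/\alpha-1$ for any $\Lambda'$, and
$n_{\Lambda'}(x)\le n_\Lambda(x)\le x/\alpha+1$ for any $\Lambda'\subset\Lambda$; hence for
$a<1/\alpha$ (resp.\ $a>1/\alpha$) the difference $|n(x)-ax|$ grows linearly for large $x$
and the integral in Definition 1 diverges. Therefore both densities of
$\alpha\mathbb{Z}$ equal $1/\alpha$.
```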
Let us now prove Proposition 2.1. For simplicity, using re-scaling, we may assume that α = 1.
If the system E_Γ := {e^{iγt}}_{γ∈Γ} is not complete in L²(0, 2a), 0 < a < π, then there exists a non-trivial function f ∈ L²(R) which vanishes outside (0, 2a) and f ⊥ E_Γ. Take any small positive number ε and consider the convolution g = f * h, where h is a smooth function supported by [0, ε]. Then g is smooth, vanishes outside (0, 2a + ε) and is orthogonal to E_Γ. Since {e^{int}}_{n∈Z} is an orthogonal basis in L²(0, 2π), we obtain g(x) = Σ_{n∈Z} a_n e^{inx} = Σ_{n∈Z\Λ} a_n e^{inx}, with {a_n} ∈ ℓ¹.
Proof of Proposition 2.3
We will use the following well-known fact (see e.g. [6, Lemma 2]). For the sake of completeness we give its proof here. Proof. Let µ be such that ∫_R e^{ibt} dµ(t) = 0, |b| ≤ a. Then, for any z ∈ C, Hence, Conversely, for any b ∈ (−a, a) put Clearly H is an entire function of Cartwright class (which means that its logarithmic integral converges, see [5], Lect. 16). On the other hand, by (5.2) we have lim_{|y|→∞} |H(iy)| = 0. Hence, H(iy) ≡ 0 and the statement follows from (5.2).
We will also need an elementary lemma: Let us, for example, check (iii). Take a positive ε satisfying a + ε < G(Λ), and choose any measure ν with spectral gap on [−a − ε, a + ε]. Then put µ = hν, where h is a fast decreasing function whose spectrum lies on [−ε, ε]. Now, we prove Proposition 2.3.
"year": 2015,
"sha1": "e582d7642b41f3c2bc4242a3a16189ac152a05ad",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e582d7642b41f3c2bc4242a3a16189ac152a05ad",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
264286919 | pes2o/s2orc | v3-fos-license | Human perception of spatial frequency varies with stimulus orientation and location in the visual field
Neuroanatomical variations across the visual field of human observers go along with corresponding variations of the perceived coarseness of visual stimuli. Here we show that horizontal gratings are perceived as having lower spatial frequency than vertical gratings when occurring along the horizontal meridian of the visual field, whereas gratings occurring along the vertical meridian show the exact opposite effect. This finding indicates a new peculiarity of processes operating along the cardinal axes of the visual field.
Wladimir Kirsch & Wilfried Kunde
Performance in many visual tasks is inferior for stimuli occurring in the periphery rather than the center 1 and along the vertical than along the horizontal meridian of the visual field 2 . These differences in visual perception are related to neuroanatomical characteristics of the visual system. While the link between anatomy and performance is not fully understood, the increasing size of cortical receptive fields (RF) and decreasing cortical magnification (cortical surface dedicated to a particular region of the visual field) can account for a drop in visual performance with stimulus eccentricity 3-5 . In a similar vein, larger cortical magnification and smaller RFs in the horizontal than in the vertical meridian have been linked to better performance along the horizontal than along the vertical meridian 6-8 . Here we report a psychophysical finding that indicates a yet unknown peculiarity of this link between perception and brain anatomy.
We assessed the perception of spatial frequency (SF), i.e., how coarse or fine-grained stimuli are perceived. It is known that the same physical stimulus can appear more or less fine-grained to human observers. For example, objects appear coarser after adapting to a stimulus of a higher SF 9 , with a higher level of stimulus luminance 10 , with longer presentation duration 11 , and with a decrease of eccentricity 12 .
In the present study, we focused on another striking observation, namely that horizontal gratings are perceived as coarser than vertical gratings of the same frequency 12-14 . This finding was originally attributed to the "vertical-horizontal illusion", the tendency to see vertical lines as longer than equally long horizontal lines 13 . The core idea was that spatial distances are perceived as smaller along the horizontal than along the vertical meridian, indicating that perceptual space is compressed along the horizontal meridian relative to the vertical meridian. This assumption seems consistent with the asymmetries in visual performance and cortical tissue between the horizontal and vertical meridians mentioned above. Notably, in the original experiments, stimuli were usually presented pairwise side by side, i.e., along the horizontal meridian of the visual field 12-14 . We wondered whether the reported difference in perception between horizontally and vertically oriented gratings holds true when stimuli are presented along the vertical meridian, and thus examined how this effect depends on whether the stimuli vary along the horizontal or the vertical meridian of the visual field.
We first confirmed previous observations that horizontal gratings appear coarser than vertical gratings when these gratings occur along the horizontal meridian. Yet, when stimuli were presented along the vertical meridian, the exact opposite was true. This challenges the proposed link between perception and anatomy, as a single visual anisotropy alone, such as the suggested perceptual compression along the horizontal meridian relative to the vertical meridian (or a smaller size of RFs or larger cortical magnification for the horizontal than for the vertical meridian), cannot explain these results.
Apparatus
Exp. 1 was an online experiment in which participants performed the experiment on their own computers. The spatial resolution of most screens was 1920 × 1080 pixels (16 participants). The remaining screens had resolutions of 1920 × 1200 (2 participants), 1600 × 900 (2 participants), 2256 × 1504 (one participant) and 1368 × 768 (one participant). Except for one screen (75 Hz), the refresh rate was about 60 Hz. Exp. 2 was performed in a normally illuminated laboratory. In Exp. 2, participants sat in front of a 23-inch monitor (Eizo, EV2303W; 1920 × 1080 pixels (px); 1 px = 0.2655 mm) at a distance of 56 cm. Participants' heads were supported by a chin rest. Program files were written using E-Prime software (Version 3.0; Psychology Software Tools, Pittsburgh, PA).
Stimuli
All stimuli were presented on a gray background (with RGB color space coordinates 128, 128, 128). The fixation cross (7 × 7 px) and the number-sign symbols (18 px in height) were light gray, and the question mark (22 px) was displayed in green. These stimuli were presented in the center of the screen. The main stimuli were Gabor patches with raised-cosine (Hanning) envelopes of 162 × 162 px in size (4.4° in Exp. 2). The gratings were black-and-white on a gray background (with RGB coordinates 128, 128, 128) and were oriented either horizontally (90°) or vertically (0°). In each trial, one of the two Gabor patches (standard stimulus) always had a constant SF of 0.10 cycles per pixel (3.7 cycles per degree of visual angle in Exp. 2). The SF of the other patch (test stimulus) varied between 0.05 and 0.15 cycles/px (1.8 and 5.5 cycles/degree in Exp. 2) in steps of 0.01 (0.368 cycles/degree in Exp. 2). The standard stimulus could appear either left or right, or above or below the fixation cross (counterbalanced for both dimensions). If the standard stimulus was a grating with vertical (horizontal) orientation, then the test stimulus was a grating with horizontal (vertical) orientation. The gratings appeared at a distance of 250 px (6.8° in Exp. 2) from the fixation cross.
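For illustration, the following minimal sketch (not the authors' E-Prime stimulus code; the exact taper is not specified beyond "raised-cosine", so a full Hann window over the patch radius is assumed here) generates such a patch from the parameters given above.

```python
import numpy as np

def gabor_patch(size_px=162, cycles_per_px=0.10, orientation_deg=0.0, phase=0.0):
    """Grating in a raised-cosine (Hann) envelope on a mid-gray background."""
    half = size_px / 2.0
    y, x = np.mgrid[-half:half, -half:half] + 0.5           # pixel-centred coordinates
    theta = np.deg2rad(orientation_deg)
    u = x * np.cos(theta) + y * np.sin(theta)                # axis of luminance modulation
    carrier = np.cos(2.0 * np.pi * cycles_per_px * u + phase)
    r = np.sqrt(x**2 + y**2) / half                          # radial distance, 1 at the edge
    envelope = np.where(r < 1.0, 0.5 * (1.0 + np.cos(np.pi * r)), 0.0)
    return np.clip(128 + 127 * carrier * envelope, 0, 255).astype(np.uint8)

standard = gabor_patch(cycles_per_px=0.10, orientation_deg=90.0)  # horizontal grating
test = gabor_patch(cycles_per_px=0.07, orientation_deg=0.0)       # one test level, vertical
```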
Procedure and task
Three number-sign symbols were initially displayed for 1 s. This arbitrarily chosen symbol combination indicated a new trial (as in our previous studies, e.g. 15 ). Following an interval of 500 ms in which a fixation cross was visible, a pair of Gabor patches was flashed for 100 ms (while the fixation cross remained visible). The Gabor patches were presented either left and right of the fixation cross or above and below it (random order). Finally, a question mark appeared and the participants had to indicate the location of the Gabor patch with the higher SF (see Fig. 1A). They were told to indicate the stimulus with a higher density of lines (i.e., that with thinner and more lines). The judgment was made by pressing arrow keys of the keyboard (left arrow key for the left stimulus, right arrow key for the right stimulus, up arrow key for the upper stimulus, and down arrow key for the lower stimulus). If participants pressed a wrong key (e.g., when gratings appeared left and right of the fixation cross and the upper or lower arrow key was pressed), error feedback was provided and the trial was repeated.
Design
Overall, there were 44 experimental conditions: 2 location dimensions × 2 stimulus orientations × 11 levels of the test stimulus. Each of these conditions was repeated 10 times (i.e., there were 10 trials per condition) and was presented in a random order. The main experiment had four blocks of 110 trials each. Participants were asked to take breaks after each block. Before the main experiment started, participants performed 20 practice trials, in which we provided visual feedback about whether the judgment was correct or not. These trials were not included in the analyses. During the main experiment, no feedback was given. Participants were asked not to move their eyes and to always look at the fixation cross. In the online experiment (Exp. 1), we also asked the participants to close all open applications and to ensure that they were not disturbed by other persons or their mobile phone. No specific instructions relating to viewing distance or lighting conditions were given.
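A minimal sketch of the resulting trial structure (an assumption for illustration, not the E-Prime script actually used) is:

```python
import itertools, random

locations = ["horizontal_meridian", "vertical_meridian"]
standard_orientations = ["horizontal", "vertical"]
test_frequencies = [round(0.05 + 0.01 * i, 2) for i in range(11)]  # cycles per pixel

conditions = list(itertools.product(locations, standard_orientations, test_frequencies))
trials = conditions * 10                      # 44 conditions x 10 repetitions = 440 trials
random.shuffle(trials)
blocks = [trials[i:i + 110] for i in range(0, 440, 110)]   # four blocks of 110 trials
assert len(conditions) == 44 and all(len(block) == 110 for block in blocks)
```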
Data analysis
For each level of the test stimulus, we computed the proportion of trials in which the test stimulus was judged as having a higher SF. This was done for each location and orientation condition. A local model-free fitting procedure 16 was then used to estimate psychometric functions and to determine the points of subjective equality (PSE). The data of one of the participants of Exp. 2 had to be excluded from the analyses due to low discrimination performance (mean r² below 3 SD of the mean of all participants). The mean r² of the remaining data amounted to 0.95 (SD = 0.04) in Exp. 1 and 0.97 (SD = 0.02) in Exp. 2. PSE values were analyzed using analyses of variance (ANOVA) and t-tests (two-tailed; reported p-values are not corrected for multiple comparisons).
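The study used a local model-free fit (ref. 16) to obtain the PSEs; as a simple stand-in for illustration (with made-up response proportions rather than the actual data), a cumulative-Gaussian psychometric function fitted by least squares yields the PSE as its 50% point.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

test_sf = np.arange(0.05, 0.155, 0.01)            # the 11 test levels, cycles per pixel
p_test_higher = np.array([0.02, 0.08, 0.15, 0.30, 0.45,
                          0.62, 0.75, 0.85, 0.92, 0.97, 0.99])  # illustrative proportions

def psychometric(x, pse, sigma):
    """Probability of judging the test stimulus as having the higher SF."""
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, test_sf, p_test_higher, p0=[0.10, 0.02])
print(f"PSE ~ {pse:.3f} cycles/px (standard stimulus: 0.10 cycles/px)")
```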
Results
Consistent with previous reports 12-14 , we observed that horizontal gratings were judged as having a lower SF than vertical gratings (of the same frequency). However, this was the case only when the stimuli were presented along the horizontal meridian. For stimuli presented along the vertical meridian, horizontal gratings were judged as having a higher SF than vertical gratings. This relationship was expressed in a significant interaction between both critical experimental factors (i.e., orientation of the standard stimulus and location of the stimuli) in a within-subjects ANOVA of PSEs in both experiments, F(1, 21) = 50.91, p < 0.001, η_p² = 0.708 and F(1, 20) = 32.14, p < 0.001, η_p² = 0.616 (see Fig. 1B,C,E,F and Fig. S1 in the supplementary materials). For Exp. 1, the ANOVA also revealed a significant main effect of location, F(1, 21) = 5.99, p = 0.023, η_p² = 0.222, indicating slightly larger PSEs when stimuli appeared above and below the fixation cross (vs. to the left and right of it).
In Exp. 1, pairwise comparisons revealed significant differences between horizontal and vertical orientations of the standard stimulus for stimuli presented along the horizontal meridian, t(21) = 2.61, p = 0.016, as well as for stimuli presented along the vertical meridian, t(21) = 2.31, p = 0.031. In Exp. 2, only the difference for the vertical meridian was significant, t(20) = 2.45, p = 0.024, but the difference for the horizontal meridian was not, t(20) = 1.72, p = 0.100.
Moreover, the judgment differences between horizontal and vertical orientations of the standard stimulus observed under both stimulus location conditions correlated highly with each other across participants in both experiments, r = 0.767, p < 0.001 and r = 0.735, p < 0.001 (see Fig. 1D,G). Participants who perceived horizontal gratings as coarser in one stimulus location condition tended to show the same effect in the other stimulus location condition. This result indicates a consistency of individual judgments irrespective of whether the stimuli appeared along the horizontal or the vertical meridian. Note that this outcome does not contradict the analyses of the mean values described above (as mean values reflect the intercept of a linear relation rather than its slope).
Discussion
Visual performance varies across the visual field. It usually decreases with retinal eccentricity and is better along the horizontal than along the vertical meridian. These performance asymmetries are explained, in essence, by changes of the spatial resolution of the visual system, expressed, e.g., in larger RF size and smaller cortical magnification for the visual periphery and along the vertical meridian 1-8 . Such visual field asymmetries imply not only differences between distant locations but also perceptual and neuroanatomical differences for neighboring locations of the same area of the visual field. The visual processing of an object such as a square presented in the center of the visual field, e.g., should go along with higher spatial resolution along its width than along its height (due to the horizontal-vertical asymmetry). The present results, we believe, capture such an asymmetrical distribution of spatial resolution along the horizontal and vertical directions of an object in the peripheral regions of the visual field. In other words, they suggest local distortions of cortical maps in the visual periphery.
Such local horizontal-vertical differences are usually not considered when visual performance or neuroanatomy are compared across different spatial locations. Accordingly, a generally better performance along the horizontal than the vertical meridian, e.g., does not necessarily imply a better spatial resolution of the horizontal than of the vertical dimension for an object presented somewhere in the visual field. As illustrated in Fig. 2A, a roughly circular shape of the visual field rather suggests that the spatial resolution along the cardinal directions in a cortical map can be lower than in the opposite directions. Such a local asymmetry can explain the present results. Vertical gratings are perceived to have a lower SF (than horizontal gratings) when they appear along the vertical meridian because the spatial resolution at the stimulus location is higher along the horizontal than along the vertical, and vice versa. Figure 2B illustrates the corresponding basic model.
It is important to note that the present results demonstrate an effect in perceived SF. Accordingly, the suggested explanation may be limited to visual appearance and not necessarily generalize to visual performance. Yet, we believe that our conclusions may also hold for visual performance. Consider, e.g., that performance differences across the visual field are reported for different orientations and tilt angles of stimuli 17-21 . This fact is taken into account in Fig. 2A by an overall decrease in spatial resolution with eccentricity and an overall higher spatial resolution at the horizontal than at the vertical meridian (cf. the extents of areas surrounded by lines either along each meridian or between them). To better visualize a horizontal meridian advantage in visual discrimination of a vertically oriented stimulus, e.g., just mentally shift the x-coordinates of the circular lines to the left so that the spatial distances between them become even smaller along the horizontal than the vertical meridian, while the local asymmetry (indicated by the arrows in Fig. 2A) remains preserved. The more important prediction here is that, in addition to this general advantage of the horizontal meridian, vertically oriented stimuli should be more easily discriminated along the vertical meridian than horizontally oriented stimuli, and vice versa, due to local asymmetries in spatial resolution.
In fact, the present results resemble the so-called radial bias, an enhanced perceptual sensitivity to radially oriented as compared to tangentially oriented contours (i.e., to stimuli oriented collinear as compared to orthogonal to a line intersecting the fixation point) [22][23][24]. Horizontally (vertically) oriented gratings can be construed as oriented radially (tangentially) when presented along the horizontal meridian. The opposite is true when the same gratings are presented along the vertical meridian. Considered from this perspective, the present results revealed that gratings oriented radially were perceived as having a lower SF than gratings oriented tangentially. This outcome is consistent with the radial bias if a higher spatial resolution entails a lower apparent SF, as we suggest (see Fig. 2). In other words, our explanation delineated in Fig. 2 can also be applied to and explain the radial bias. Accordingly, enhanced sensitivity to radially oriented contours at a certain visual location is basically due to higher spatial resolution along the tangential direction at this location (i.e., due to local cortical distortions). As a result, deviations of the stimulus orientation from the radial orientation, e.g., can be detected better than deviations from the tangential orientation 24. This reasoning points to an interesting question for future studies, namely whether spatial frequency is perceived as coarser along the radial direction when stimuli are presented beyond the cardinal meridians.
Our approach is consistent with earlier observations of lower apparent SF for horizontal than for vertical gratings presented along the horizontal meridian 13,14. The proposed link between (low) spatial resolution and (small) apparent size of objects can explain why apparent SF increases with eccentricity 12, and is in line with the tendency to perceive stimuli as smaller in peripheral compared to central vision as well as along the vertical compared to the horizontal meridian 25,26. Because the overall spatial resolution gradually decreases from the horizontal to the vertical meridian (i.e., along the arcs, see Fig. 2A), our approach also indicates why perceptual and anatomical asymmetries hold especially for the cardinal axes of the visual field and decrease gradually with deviations from these axes 6. This property predicts a gradual drop in the extent of the horizontal-vertical anisotropy at intercardinal meridians, in principle, for any stimulus orientation, including tilt angles of 45° from the horizontal (e.g., ref. 18; cf. also the penultimate paragraph). But note here that, beyond changes in overall spatial resolution, local asymmetries should affect visual performance consistent with the radial bias mentioned in the previous paragraph. Interestingly, it has been reported that apparent SF is higher for gratings presented along the horizontal than along the vertical meridian 27. In that study, however, only vertically oriented gratings were used. If we only consider vertically oriented gratings in our experiments (and ignore the horizontally oriented gratings as if they served as a reference stimulus), then our results approximate the results of that study. As shown in Fig. 1 (black bars in C and F), the PSE for the vertically oriented gratings is higher for the horizontal than for the vertical meridian, suggesting that the perceived SF of vertical gratings was higher at the horizontal than at the vertical meridian. Moreover, the explanation we suggest for our results also applies to the main results of the mentioned study. The perceived SF of vertical gratings is higher at the horizontal meridian because the spatial resolution is higher along the vertical than along the horizontal axis of the stimuli, while the opposite is true for the vertical meridian (higher horizontal resolution; see also population responses to the vertical gratings in Fig. 2B).
It should be noted that in Exp.2 the PSE difference between the orientation conditions for stimuli presented along the horizontal meridian did not reach the significance threshold (when a two-tailed test was applied). Thus, one part of the results of the online experiment (Exp.1) was not replicated in the lab (Exp.2). This could question the reliability of the findings. This particular aspect of the data (i.e., judgments of the horizontal gratings as coarser when presented along the horizontal meridian) has already been reported previously [12][13][14] (see also "Introduction"). Thus, the effect seems reliable, but its size was obviously underestimated in Exp.2. One possible reason for this can be seen in Fig. 1D,G: there was a substantial number of participants who showed an opposite pattern of results as compared to the results of the means. This individual variability may also limit the generalizability of the main findings. Interestingly, participants were rather consistent in their judgment behavior regardless of the grating locations (see correlation analyses and Fig. 1D,G), which points to interindividual differences in brain anatomy 8,25,28. For example, in participants who tended to perceive horizontal gratings as coarser under both location conditions, the local spatial resolution could generally be higher along the vertical than along the horizontal (this can be imagined by mentally removing an oblique line next to the vertical line in Fig. 2A, which would inverse the asymmetry between the horizontal and vertical resolutions indicated by the vertical arrow).
Another possible issue that can be raised is related to potential eye movements, which might systematically affect the results. This possibility is rather unlikely due to the timing and the specific design of the experiments, which aimed to avoid such influences. For example, the stimuli were presented pairwise and for a duration that is below the usual reaction time of eye movements. Moreover, to solve the task the participants had to attend to both stimuli, and it was unpredictable whether the stimuli would appear along the horizontal or vertical meridian and which of them would be horizontally or vertically oriented.
Overall, the main limitation of the current study is that our approach rests on an assumed link between psychophysical effects and their neurophysiological basis that is not fully understood. Accordingly, the suggested explanations have to be considered as preliminary and appropriate to the extent this link is justified.
Figure 1. Procedure and results of the study. Main trial events (A) and results of Exp.1 (B-D) that was conducted online, and of Exp.2 (E-G) that was performed in a lab. Participants judged which of two Gabor patches presented either left and right of the fixation cross, or above and below it, is of a higher spatial frequency. (B,E) Shows the judgment data of a single participant for each experiment and corresponding psychometric functions. (C,F) Illustrates mean PSE values of all participants derived from psychometric functions fitted to the individual judgment data. (D,G) Shows individual PSE differences between horizontal and vertical orientations of the standard stimulus under both stimulus location conditions. Arrows denote the participants whose data are shown in (B,E). Error bars are standard errors. Asterisks denote statistical significance (p < 0.05). Stimuli shown in (A) are not drawn to scale. https://doi.org/10.1038/s41598-023-44673-8
Figure 2. An explanation for why apparent spatial frequency varies depending on orientations and locations of two gratings. (A) Schematically illustrates an overall decrease in spatial resolution with eccentricity and a higher spatial resolution along the horizontal meridian in the right upper quadrant of the visual field (indicated by the density of adjacent lines). The map entails an asymmetry between the resolutions along the vertical and the horizontal axes (indicated by the arrows) that decreases for intercardinal meridians. (B) Shows how such a local asymmetry can be expressed in the perception of SF. Gray bell-shaped curves denote tuning curves (i.e., RF) of hypothetical neurons coding neighboring retinal locations. Black bell-shaped curves indicate population responses to a single strip of a grating. A higher spatial resolution along a certain dimension (i.e., smaller RF, larger density of RF, larger cortical magnification) produces a perceptual expansion along this dimension due to stronger population responses to single strips of the gratings. Accordingly, when the spatial resolution is lower (higher) along the horizontal than along the vertical axis for a certain location, an object at this location is perceived to have a lower (higher) SF along the vertical than along the horizontal. | 2023-10-19T06:18:16.061Z | 2023-10-17T00:00:00.000 | {
"year": 2023,
"sha1": "bf15079dfc54ba5c656851c7ac4a0d42b2593886",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-023-44673-8.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "50e62c2d5640854735d4ce75aecf455b1e5d1430",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252454136 | pes2o/s2orc | v3-fos-license | Promoting Online Civility Through Platform Architecture
This study tests whether the architecture of a social media platform can encourage conversations among users to be more civil. It was conducted in collaboration with Nextdoor, a networking platform for neighbors within a defined geographic area. The study involved: (1) prompting users to move popular posts from the neighborhood-wide feed to new groups dedicated to the topic and (2) an experiment that randomized the announcement of community guidelines to members who join those newly formed groups. We examined the impact of each intervention on the level of civility, moral values reflected in user comments, and users' submitted reports of inappropriate content. In a large quantitative analysis of comments posted to Nextdoor, the results indicate that platform architecture can shape the civility of conversations. Comments within groups were more civil and less frequently reported to Nextdoor moderators than the comments on the neighborhood-wide posts. In addition, comments in groups where new members were shown guidelines were less likely to be reported to moderators and were expressed in a more morally virtuous tone than comments in groups where new members were not presented with guidelines. This research demonstrates the importance of considering the design, structure, and affordance of the online environment when online platforms seek to promote civility and other pro-social behaviors.
What happens on platforms is not limited to the digital sphere. Social media conversations also affect the offline communities in which platform users live, work, and play. Online platforms such as Facebook can provide a way for people to seek emotional support when faced with a cancer diagnosis (Bender, Jimenez-Marroquin, and Jadad 2011), while sites such as Tinder can facilitate lifelong romantic partnerships. At the local level, platforms such as Front Porch Forum can build online communities to allow neighbors to interact with each other to discuss anything from improving trash collection to spending local tax dollars. Similarly, Nextdoor is an online platform dedicated to "bringing neighbors and organizations together, [to] cultivate a kinder world where everyone has a neighborhood they can rely on" (Nextdoor 2022a). Thus, whether connecting users to friends across the world or neighbors down the street, these platforms facilitate diverse forms of collaborative and productive social interactions.
There is no shortage of high-profile examples demonstrating the unintended consequences and negative externalities that can occur when the entire world connects online. Online bullying, cyber stalking, expressions of hate speech, and coordinated disinformation campaigns can have negative psychological and behavioral implications for users (Gahagan, Vaterlaus, and Frost 2016;Rieger et al. 2021). For example, community members in the groups we studied on Nextdoor faced racist and homophobic slurs, belittling scorn, and even overt threats. Recognition of this possibility has driven significant investments and efforts by researchers and online platforms to identify and regulate various forms of undesirable speech and behaviors. Trying to moderate undesirable content to reduce harm is an ongoing challenge.
When social media platforms emerged, many were initially viewed as providing an opportunity to connect with people around the world. The goal was not harm reduction, but to build social relations both on and off platforms, thereby improving individual psychological well-being and enhancing the social, political, and economic vitality of real-world communities. However, scholars have focused on examining platform design and regulation that seeks to reduce negative psychological and behavioral impacts for users (Jhaver et al. 2019;Tyler et al. 2021); comparatively little research has examined how to encourage positive behaviors on social media platforms. Hence, the goal of this study is to refocus on the potential of social media platforms to promote individual and community well-being. This research was conducted in collaboration with the neighborhood-based social media platform Nextdoor. It analyzes comments and platform behavior to test whether platform architecture-the designs, affordances, and structures that shape user interactions on a platform-can positively influence the civility and moral values of online discussions.
Specifically, this study examines the relationship between two interventions on the neighborhood-based social media platform Nextdoor-(1) creating a group dedicated to discussing a given topic and (2) announcing (or not) community guidelines to new members of the group-and the civility and moral values of online discussions. For the first intervention, authors of posts that generated active conversations within the community were asked if they would like to create a group dedicated to the topic of their post. For those who chose to create a new group, we conducted a pretest-posttest analysis to examine what happens when these popular conversations move into a group setting by comparing the levels of civility and moral values exhibited in the user comments on the original post (Neighborhood Post Comments) with the comments made in the newly formed group (Group Post Comments) on Nextdoor (see Table 1 on page 6 for definitions). In the second intervention, we used a randomized controlled experiment to test the effect of showing basic community guidelines to users joining these newly formed groups on the civility of group interactions.
The results demonstrate that platform architecture can be used to encourage users to engage in more civil interactions. The comments made in the new groups were associated with higher levels of civility, more virtuous moral values, and fewer incidents of user reports relative to the comments on the original Neighborhood Post. In addition, showing guidelines to new group members caused an increase in comments with more virtuous moral values and a decrease in user reports of comments. These findings demonstrate the need for social media platforms to design and architect platforms that can clearly communicate expectations and norms to users to encourage more civil interactions.
The Effect of Social Media Platform Architecture and Affordances on User Discussions
Thus far, scholars have explored how various aspects of social media platform architecture-the design components, affordances, and structures-influence and shape users' behaviors in online and social media environments. First, a group of scholars has focused on how the existence of anonymity on online platforms affects the civility of users' discussions (Coe, Kenski, and Rains 2014;Santana 2013;Ruiz et al. 2011). Coe, Kenski, and Rains (2014) and Ruiz et al. (2011) found that user registration (which connects individual usernames with personally identifiable information) discouraged hostile comments on newspaper websites. Santana (2013) also found that non-anonymous comments were more civil than anonymous comments in online newspaper discussion forums. Rowe (2014) compared comments on political news between the Washington Post site and the Washington Post Facebook page and found that comments on Facebook were more civil and polite than on the news site due to the lack of anonymity. However, Hille and Bakker (2014) found that anonymous comments on news sites were more elaborate than non-anonymous comments on Facebook. These mixed results suggest that anonymity itself is ethically neutral (Reader 2012) and should not be the only factor affecting users' civil interactions.
In addition to anonymity, scholars have focused on how other affordances can enable or constrain user behaviors and affect civil discussions (Jaidka, Zhou, and Lelkes 2019). Jaidka, Zhou, and Lelkes (2019) analyzed a large volume of tweet responses to U.S. politicians and found that doubling the character limit of tweets encouraged users to engage in less uncivil, more polite, and constructive discussions. Seering et al. (2019) conducted a survey experiment and found that CAPTCHAs containing stimuli designed to prime positive emotions and mindsets could increase the positivity of sentiments and the levels of complexity and social connectedness in participants' comments on politically charged comment threads.
Other studies have investigated how online platforms' specific moderation policies affect users' civil discussions (Ksiazek 2015;Jhaver et al. 2019;Lampe et al. 2014;Matias 2019;Ribeiro, Cheng, and West 2022;Tyler et al. 2021). For example, Ksiazek (2015) analyzed online public discussions on U.S. news organization websites and found that pre-moderation (i.e., an automatic filter system) and post-moderation (i.e., flagging) increased civility, while offering a private messaging option boosted hostility. Lampe et al. (2014) tested how distributed moderation systems affect civil conversations on Slashdot, a membership-based online news and discussion site. Ribeiro, Cheng, and West (2022) assessed community participation in Facebook groups after the group admins turned on post approvals (requiring a group admin to review and approve posts before they were shown to group members); they found that adopting the post approvals feature reduced the number of posts to the group, however these posts received more comments and were reported less often by community members.
Finally, Matias (2019) conducted a large-scale field experiment to examine how community rules influence who chooses to join a group and how they behave on Reddit (r/science). Their results demonstrated that announcing the community guidelines increased norm compliance among first-time participants in the group. Jhaver et al. (2019) also found that users who had posts removed from Reddit were less likely to have future posts removed when provided with explanations. Similarly, Tyler et al. (2021) determined that users who were provided education about platform rules in the week following their post removal were less likely to have future posts removed. Katsaros, Yang, and Fratamico (2022) conducted a field experiment on Twitter that asked users who posted content with offensive language if they would like to reconsider their post. This intervention resulted in 31% of participants editing or deleting their post.
While much prior research has suggested that platform architecture and affordances affect online user discussions across different platforms, our study explores whether similar approaches can be used to shape civil and moral interactions on the neighborhood-based platform Nextdoor.
Civility in Online Discussions
Previous studies have employed different approaches to explore the abstract concept of civility as a norm, custom, moral obligation, strategy, formality, set of requirements, and mechanism (Calhoun 2000;Papacharissi 2004;Waldron 2013). For instance, Whitman (2000) suggested that civility is related to showing respect to others, which requires individuals to acknowledge each other as equals. Waldron (2013) defined civility as the hard work of staying present in the discussion, even when facing deep-rooted disagreement. Civility has also been linked to politeness (Walter and Lipsitz 2021), but unlike mere politeness, civility entails explicitly affirming another's values or ideas, even those one finds disagreeable (Han, Brazeal, and Pennington 2018). There is a general consensus that civility is a necessary component to maintaining and promoting an effective democracy by respecting different views (Smith and Bressler 2013).
However, establishing a definition of civility is challenging because the notion involves conforming to socially created norms or rules (Calhoun 2000). As Papacharissi (2004) suggested, civility can be regarded as respect for the collective traditions of democracy that are accepted by particular local cultures. Because the concept is tied to particular, contingent, and contextually specific social rules regarding behavior, critiques have pointed out how civility as a concept reinforces the status quo and imposes the norms of a dominant group on minorities, which can be morally ambiguous (Jamieson et al. 2017; Zurn 2013). 1 Still, civility does not only involve following a particular norm or rule and conforming to the local culture. It should also be regarded as a general moral virtue itself in communicating with others, such as demonstrating tolerance and respect toward others beyond conforming to a specific set of social rules (Calhoun 2000). Civility can be tied to a specific culture by avoiding unnecessarily disrespectful words toward the discussion participants. Many contemporary efforts to conceptualize civility focus on the nature of deliberation and discussions involving appropriate forms of disagreement on moral matters (Jamieson et al. 2017). A willingness to listen to others based on tolerance and respect of others in discussions is also an important moral aspect of civility (Rawls 1996).
By focusing more on the mode of interaction, our study defines civility as demonstrating tolerance and respect toward others in the discussion-even in the face of those who have opposing views and ideas. The extant literature on civility has highlighted these two prominent and general components of civility-tolerance and respect toward others (Han, Brazeal, and Pennington 2018;Waldron 2013;Whitman 2000). Specifically, tolerance involves recognizing that others have different views and ideas and providing a neutral environment in which citizens can exchange viewpoints. By acknowledging that others can have different viewpoints, tolerance enables us to disagree on controversial subjects and debate these differences in a civil and nonviolent manner. Such debates are necessary for resolving disputed issues and developing fair and enlightened public policies. Respect toward others is defined as acknowledging the autonomy and dignity of every citizen as a free and equal human being, regardless of his or her specific traits or opinions. In other words, to uphold the value of respect toward others, online and offline discussion communities should be inclusive, and every member should be able to share his or her ideas and challenge those of others (Benn and Benn 1988).
As use of social media has grown, particularly platforms that allow anonymous usage, so too has the prevalence of rude, uncivil discussion and hate speech (Santana 2013).
Prior studies in this area have analyzed the degree of civil and uncivil language conveyed in online discussions (Coe, Kenski, and Rains 2014;Papacharissi 2004), various factors affecting individuals' uncivil discussions online (Blom et al. 2014), and the effect of online platform policies, including user registration, anonymity, and moderation of civil discussion in online spaces (Ksiazek 2015;Lampe et al. 2014;Santana 2013). Although there has been considerable research on the nature and extent of incivility online, there is startlingly little scholarship on the prevalence and mechanisms of civility online. More importantly, scholars have commonly operationalized civility as the absence of incivility (Papacharissi 2004;Santana 2013) or focused on incivility or aggressive language use in online discussions (Coe, Kenski, and Rains 2014;Ksiazek 2015) rather than focusing on the original concept itself. This study takes a more pro-social approach by (1) developing our own codebook to measure civility and (2) defining and measuring civility as a moral virtue.
Moral Values in Online Discussions
Based on cultural psychology, the Moral Foundation Theory (MFT) defines moral values (i.e., morality) as a set of values, practices, institutions, and psychological mechanisms that suppress selfishness and regulate social life for group cohesiveness and harmony (Haidt 2008). The core of MFT is that different cultures share basic moral values. Haidt and Joseph (2004) originally identified four "moral modules" that they later refined into five "moral foundations": (1) care/harm, (2) fairness/cheating, (3) loyalty/betrayal, (4) authority/subversion, and (5) sanctity/degradation (Haidt and Graham 2007). Graham and Haidt (2012) has since added a sixth foundation, liberty/oppression, and others have recommended additional foundations such as equality (as distinct from proportionality).
The care/harm foundation is related to basic concerns about others' suffering by caring, nurturing, and protecting vulnerable individuals (Graham, Haidt, and Nosek 2009;Haidt, Graham, and Joseph 2009). The fairness/cheating foundation is based on concerns about meritocracy-and, to a lesser extent, equality-and generates the idea of justice (Haidt, Graham, and Joseph 2009;Graham and Haidt 2012). The loyalty/betrayal foundation is closely connected to commitment to and self-sacrifice for the sake of a group. The authority/subversion foundation is linked to the social order and obligations of hierarchical relationships, including deference and respect for tradition, leaders, and hierarchical organization (Haidt, Graham, and Joseph 2009). The sanctity/degradation foundation is based on concerns about "physical and spiritual contagion, including virtues of chastity, wholesomeness, and control of desires" (Haidt, Graham, and Joseph 2009). The first three foundations are closely connected to individuals' freedom and rights (i.e., individualizing foundations), while the other three bind individuals to a group or collective (i.e., binding foundations). Walter and Lipsitz (2021) suggest that those who hold individualizing foundations tend to have a stronger emotional response to uncivil discussion than those who hold binding foundations. The new liberty/oppression foundation is based on the resentment one feels towards domination, bullying, and oppression (Graham and Haidt 2012).
Previous linguistic and computer science studies have analyzed large volumes of textual data to develop moral dictionaries (Araque, Gatti, and Kalimeri 2020;Hoover et al. 2021) and examined the types of moral values represented in social media users' discussions (Grover et al. 2019). In addition, scholars have investigated how individual characteristics (especially political predisposition) affect the endorsement of each moral value (Graham, Haidt, and Nosek 2009;Haidt and Graham 2007). More importantly, moral values have been regarded as the set of values and practices that suppress selfishness and regulate civil life for group cohesiveness and harmony (Haidt 2008), and eventually lead people to engage in pro-social behaviors (Nilsson, Erlandsson, and Västfjäll 2016;Welsch 2020). Thus, we use MFT to measure the degree of moral values in comments as an indicator of social media users' pro-social behavior on the platform.
Hypotheses
Building on previous literature, we analyzed comments and platform behavior to test whether an online platform's design architecture can positively influence the degree to which discussions among users are civil and reflect moral values. We did so through one pretest-posttest analysis and one field experiment conducted in collaboration with the neighborhood-based social media platform Nextdoor involving two interventions.
First, we analyzed the result of Nextdoor encouraging authors of highly commented Neighborhood Posts to create a new group dedicated to the topic of their post. For this intervention, we used a single-group pretest-posttest analysis design to compare comments made on the original Neighborhood Post to comments made in the newly formed Group to test the following hypotheses:
• H1a: Group Post Comments are more civil than comments on the corresponding Neighborhood Posts.
• H1b: Group Post Comments include more virtuous moral values than comments on the corresponding Neighborhood Posts.
• H1c: Group Post Comments have fewer comments reported by users than comments on the corresponding Neighborhood Posts.
In the second intervention we used a random assignment experiment design to test the effect of announcing community guidelines to new group members on interactions within that newly formed group. Previous findings indicate that announcing guidelines or rules can affect users' behaviors (Matias 2019;Jhaver et al. 2019;Tyler et al. 2021). While many prior studies focus on reinforcing a particular set of rules aimed at reducing antisocial behaviors, we investigate whether similar approaches can be used to shape civil interactions and increase pro-social behaviors. This part of the experiment tests our second set of hypotheses:
• H2a: Providing members with guidelines before they enter a newly formed group will result in more civil Group Comments in groups with guidelines compared to Group Comments in groups without guidelines.
• H2b: Providing members with guidelines before entering a newly formed group will result in more virtuous moral values in Group Comments in groups with guidelines compared to Group Comments in groups without guidelines.
• H2c: Providing members with guidelines before entering a newly formed group will result in fewer user reports of Group Comments in groups with guidelines compared to Group Comments in groups without guidelines.
Study Design
In our study conducted in collaboration with Nextdoor, we tested the influence of two important architectural features: (1) prompting authors of highly engaging Neighborhood Posts to create a new group dedicated to a specific issue and (2) announcing community guidelines to new members of this newly formed group. For clarity, we have provided a table which defines terms for specific content types analyzed in this study shown in Table 1 on the preceding page.
Nextdoor's privacy boundaries are designed to replicate physical geographic neighborhood boundaries: users on Nextdoor can only see the posts, comments, and activities from their actual neighbors. To register for a Nextdoor account, users must confirm their location through a physical piece of mail sent to their address. As a result, interactions on this platform can connect people who share membership in a particular geographical area. The platform's goal is to leverage this shared membership to create positive and constructive interactions about shared problems and issues in users' communities.
Neighborhood Posts can only be seen and commented on by other users in the post author's neighborhood. While anyone in the neighborhood can engage with Neighborhood Posts, groups can be a way to discuss a specific topic among a smaller subset of the neighborhood. These groups can be open (anyone in the neighborhood can join the group and participate in discussions) or private (anyone in the neighborhood can view the group and request to join, but the group admin must approve a membership request before a member can participate in discussions) (Nextdoor 2022b).
Intervention 1: New Group Formation
For our group formation intervention, when any Neighborhood Post received its 70th Neighborhood Post Comment within our study, the platform messaged the Neighborhood Post author. Neighborhood Posts with 70 Neighborhood Post Comments indicated that the conversation was of significant interest to the neighborhood. 2 This message indicated that their post appeared to be generating a lot of conversation within the community, and invited the post author to create a new group dedicated to the issue(s) discussed in the post. For this intervention, we used a single-group pretest-posttest analysis design to compare the first 70 Neighborhood Post Comments (before the Neighborhood Post author was asked to create a new group) to Group Comments made in the newly formed group.
Intervention 2: Guidelines vs. No Guidelines
For the second intervention, all of these newly formed groups were randomly assigned into one of two conditions: Guidelines or No Guidelines. In the Guidelines condition, any new member joining the newly formed group was shown a set of guidelines; those in the No Guidelines condition were not shown any guidelines. These guidelines were minimally intrusive on a new member's experience (a single page shown before entering the group for the first time). Members were provided four short guidelines designed to promote more civil interactions within the group (see Figure 1 on the next page). The guidelines reflect the four antecedents of procedural justice: voice, respect, neutrality, and trustworthiness (Tyler, Jackson, and Bradford 2014).
Civility
To measure the level of civility in user discussions on Nextdoor, we developed a codebook through a literature review and an iterative labeling process (more details available in the supplementary material). A team of 14 undergraduate students used the codebook to label 7,816 comments over a 2-month period. The final codebook consisted of 13 civil labels and 13 uncivil labels. As noted above, in building our codebook we did not define civility as merely the absence of uncivil language (and vice versa). Two binary classifications were generated for each comment reviewed: (1) a civil classification (either "civil" or "not civil") indicating the presence (or lack thereof) of civil discussion and (2) an uncivil classification (either "uncivil" or "not uncivil") indicating the presence (or lack thereof) of uncivil discussion. As such, a single comment receives one label from each of the two pairs, yielding four possible combinations. Figure 2 on the next page displays a confusion matrix showing this overlap for all labeled comments. It shows that nearly two-thirds of all comments we labeled were either only civil (42.0%) or only uncivil (17.4%), while over one-third contained neither (37.3%); few comments contained both civil and uncivil language (3.3%).

Figure 1: Group Guidelines shown to members entering newly formed groups on Nextdoor.
Three student labelers blindly reviewed each comment, and a majority vote of the three was used to classify the comment. In total, 7,816 individual comments were labeled, totaling 23,448 distinct comment-labeler pairs. S1 in the supplementary materials describes the coding procedures in more detail and provides inter-rater reliability measures.
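A minimal sketch of how a majority vote over three annotators might be aggregated is shown below; the data layout and names are illustrative assumptions, not the study's actual pipeline.

```python
from collections import Counter

def majority_vote(labels):
    """Return the label chosen by at least two of the three annotators."""
    label, _count = Counter(labels).most_common(1)[0]
    return label

# Each comment gets a civil and an uncivil classification (a comment can
# carry both, one, or neither label), each decided by three annotators.
annotations = {
    "comment_1": {"civil": ["civil", "civil", "not civil"],
                  "uncivil": ["not uncivil", "not uncivil", "uncivil"]},
}
for comment_id, votes in annotations.items():
    final = {dimension: majority_vote(v) for dimension, v in votes.items()}
    print(comment_id, final)
```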
Moral Values
Labeling individual comments using our civility guidelines can provide very high fidelity data, but it comes at a high operational cost and results in only a small amount of data being analyzed, which limits our ability to detect potentially small effect sizes that may exist. As such, we also needed a more automated method to generate quantitative metrics for measuring civility across a larger set of comments.
We employed the Moral Foundation Dictionary version 2 (MFD-2) (Frimer et al. 2017) to measure the level of moral values present in a given comment. For each of the five foundations, a comment receives a score along a vice-to-virtue continuum: lower scores indicate that a comment is more similar to the vice, and positive values closer to 1 indicate a comment is more similar to the virtue. We also computed the average across the five foundations for each comment and call this the "average MFD score." The supplementary material contains more technical details on how these scores were calculated.
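As a purely illustrative sketch (the study's actual scoring follows MFD-2 and is documented in its supplementary material), a simple dictionary-count formulation of a single foundation score could look like the following; the word lists and the normalization are assumptions made only for this example.

```python
import re

# Toy virtue/vice word lists for one foundation (care/harm); the real
# MFD-2 lexicon is far larger, and the paper's exact scoring may differ.
care_virtue = {"care", "protect", "compassion", "help", "helping"}
care_vice = {"harm", "hurt", "attack", "cruel"}

def foundation_score(text, virtue_words, vice_words):
    """Score leaning toward -1 for vice-like wording and toward 1 for virtue-like wording."""
    tokens = re.findall(r"[a-z']+", text.lower())
    virtue_hits = sum(t in virtue_words for t in tokens)
    vice_hits = sum(t in vice_words for t in tokens)
    total = virtue_hits + vice_hits
    return 0.0 if total == 0 else (virtue_hits - vice_hits) / total

print(foundation_score("Thanks for helping to protect our neighbors", care_virtue, care_vice))
```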
Before looking into our hypotheses, analyses investigated whether there is any relationship between the civility labeling data and MFD scores. MFD scores were calculated for all comments that were reviewed and labeled by students. Figure 3 on the next page shows the distribution of the average MFD score for comments with civil and uncivil labels. Across all five foundations, comments labeled as civil tended to have higher MFD scores (closer to the virtue) than those labeled as not civil. The average MFD score for comments labeled civil was 0.053 compared to 0.032 for those labeled not civil (Cohen's d = 0.52). A Welch's two sample t-test confirms that the difference in mean is statistically significant (p < 2e-16). Similarly, comments labeled uncivil tended to have lower MFD scores (closer to the vice) than comments labeled not uncivil. The average MFD score for comments labeled uncivil was 0.018 compared to 0.047 for comments labeled not uncivil (Cohen's d = 0.88). A Welch's two sample t-test confirms that the difference in mean is statistically significant (p < 2e-16). While the civility labeling can provide a more accurate insight into our specific definition of civility, the MFD approach appears to have enough overlap with our civility labeling to prove useful in analyzing much larger datasets.
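A minimal sketch of the kind of Welch's t-test and Cohen's d comparison reported above, using simulated scores rather than the study's data:

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Cohen's d using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

# Hypothetical per-comment average MFD scores grouped by the civility label.
rng = np.random.default_rng(1)
civil_scores = rng.normal(0.053, 0.04, 3000)
not_civil_scores = rng.normal(0.032, 0.04, 4000)

t, p = stats.ttest_ind(civil_scores, not_civil_scores, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3g}, d = {cohens_d(civil_scores, not_civil_scores):.2f}")
```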
User Reports
Nextdoor users can report other users' comments they deem offensive or otherwise inappropriate for the platform (Nextdoor 2022c). A user can choose to report a comment for any reason, though the platform provides tools for a user to indicate why they are reporting that comment. Comments that are reported are not necessarily uncivil, inappropriate, or otherwise offensive. Reported comments are sometimes reviewed and removed by other users in the neighborhood, while in some cases the platform reviews and removes them. Our dataset from Nextdoor included information on whether a comment was reported, but not the result of the report (whether or not it was deemed to violate any platform rules).

Figure 3: Distribution of the algorithmically calculated MFD scores (low scores indicate language associated with moral vice; high scores indicate language associated with moral virtues) on comments that were labeled using the civility labeling task. On the left, comments assigned a civil label (blue) had a higher average MFD of 0.053, compared to 0.032 for comments that did not have a civil label (yellow). On the right, comments assigned an uncivil label (yellow) had a lower average MFD of 0.018, compared to 0.047 for comments that did not have an uncivil label (blue).
Of the three types of metric used here, user reports are particularly important because they reflect some action users took. The first two indices used in this study (civility labels and MFD scores) are inferences made about the civility of the online discussion based on its content. By contrast, complaining to the platform through these reports is an action taken by a user to flag incivility. Unfortunately, there is no corresponding action that users can take to signal civil discussions.
Dataset
This study collected two different datasets from Nextdoor. The first, smaller, dataset was used exclusively to source comments for the civility labeling task to test H1a and H2a. This dataset was comprised of 100 Neighborhood Posts that were randomly sampled among all the Neighborhood Posts created in October 2020 within the study that became groups after prompting from Nextdoor. Half of these sampled Neighborhood Posts (50 of 100) were converted into a group that was assigned to the Guidelines condition, while the other 50 were assigned to the No Guidelines condition. This dataset also included the corresponding Group Comments.
The second dataset was a much larger dataset used for the more quantitative analyses. This dataset was used to calculate the MFD scores to test hypotheses H1b and H2b and to analyze the user reports to test hypotheses H1c and H2c.
Neighborhood Post Comments vs. Group Comments
To investigate what happens when a popular conversation moves into a group setting (H1a/b/c), this study analyzed Neighborhood Posts in which the author chose to create a group after being prompted to do so. We compared our three measures between the first 70 Neighborhood Post Comments-before any group had been created-to the Group Comments made within the group that was later created.
We randomly sampled and manually labeled 4,000 comments. First, 20 Neighborhood Post Comments were randomly sampled from among the first 70 Neighborhood Post Comments on 100 different Neighborhood Posts, totaling 2,000 labeled Neighborhood Post Comments made before the post author was asked to create a group. We then compared these comments to 20 randomly sampled Group Comments made from each of the 100 newly formed groups that were created, totaling 2,000 labeled Group Comments. This allowed us to compare the civility of comments made on the Neighborhood Posts before any intervention from Nextdoor (pre-test) to the civility of Group Comments made in the group created to discuss the same topic (post-test).
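A minimal sketch of this stratified sampling step, with hypothetical identifiers standing in for the actual posts and comments:

```python
import random

def sample_comments(comments_by_post, k=20, seed=42):
    """Randomly draw k comments from each post or group (hypothetical data layout)."""
    rng = random.Random(seed)
    return {post_id: rng.sample(comments, k)
            for post_id, comments in comments_by_post.items()
            if len(comments) >= k}

# e.g., 100 Neighborhood Posts, each with its first 70 comments.
neighborhood = {f"post_{i}": [f"comment_{i}_{j}" for j in range(70)] for i in range(100)}
labeled_pretest = sample_comments(neighborhood)   # 100 posts x 20 comments = 2,000 comments
print(sum(len(v) for v in labeled_pretest.values()))
```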
Using a chi-squared test, we found a significant difference in the proportion of both civil and uncivil comments. The proportion of comments classified as "civil" on the Neighborhood Post Comments was 0.41 compared to 0.56 for Group Comments (p < 2e-16). Similarly, the proportion of comments classified as "uncivil" was 0.23 for Neighborhood Post Comments compared to 0.15 for Group Comments (p = 1.3e-13) (Figure 4).

A similar analysis was conducted using the much larger dataset for MFD scores and user reports. Again limiting ourselves to Neighborhood Posts that created groups after prompting from Nextdoor, we compared MFD scores on the first 70 Neighborhood Post Comments to scores on all Group Comments in the newly formed groups. Across all five foundations, we observed statistically significant increases in MFD scores for Group Comments compared to Neighborhood Post Comments (Table 3).

Note: Cohen's d is the size of the difference in mean relative to the standard deviation of the data. A larger value indicates a stronger relative effect size. All differences in mean in the above tables are statistically significant under hypothesis testing.
Lastly, we used the larger dataset to analyze reports of comments made by users. While reporting comments is a relatively rare occurrence, given the number of comments in our dataset, we still observe meaningful differences. The proportion of Group Comments with one or more reports was 0.2%, while 1.4% of the first 70 Neighborhood Post Comments received one or more reports (p < 2e-16). This indicates that Nextdoor members were significantly more likely to report comments made on Neighborhood Posts than Group Comments.
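A minimal sketch of the two-proportion chi-squared comparison used in this section, with hypothetical counts chosen only to approximate the reported proportions:

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 contingency table: civil vs. not-civil counts per setting.
# (The study's exact counts are not given here.)
counts = np.array([
    [820, 1180],   # Neighborhood Post Comments: civil, not civil (~0.41 civil)
    [1120, 880],   # Group Comments: civil, not civil (~0.56 civil)
])
chi2, p, dof, expected = stats.chi2_contingency(counts, correction=False)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```

The same test structure applies to the report-rate comparisons, substituting reported versus non-reported comment counts for each setting.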
Across the three methods used-human labeling, MFD scores, and member reports-we observed consistent results. Comments made within these groups were more likely to be labeled civil, less likely to be labeled uncivil, had relatively higher MFD scores, and were reported less often than comments made on the Neighborhood Posts. All of these results showed strong support for H1a, H1b, and H1c.
The Effect of Announcement of Group Guidelines on Civil Discussion
When Neighborhood Post authors chose to create a new group from their Neighborhood Post, that newly formed group was randomly assigned to either a Guidelines or No Guidelines condition in which new members joining this new group were shown or not shown a set of guidelines (Figure 1 on page 9). Comments made in these newly formed groups were sampled and labeled using the civility codebook. Twenty Group Comments were randomly sampled from each of 100 groups for a total of 2,000 labeled Group Comments. These 2,000 Group Comments were evenly split between groups in each condition (20 Group Comments from 50 Guidelines groups and 20 Group Comments from 50 No Guidelines groups). Comparing these two sets of Group Comments using a chi-squared test reveals a small and significant difference in the proportion of comments classified as civil; the No Guidelines group had a slightly higher proportion of Group Comments classified as civil. However, there was no statistically significant difference in the proportion of Group Comments labeled uncivil. The proportion of Group Comments with civil labels in Guidelines groups was 0.

Given the small effect on civility that was observed in the manually labeled data, the much larger dataset provides us an opportunity to more easily detect smaller effects that may result from the presentation of guidelines. Across all five moral foundations, there was a small but statistically significant increase in MFD scores for Group Comments in Guidelines groups compared to No Guidelines groups (Table 4 on the next page).
Lastly, we found that Group Comments in Guidelines groups were less likely to be reported by members than Group Comments in No Guidelines groups. The proportion of Guidelines groups' Group Comments that had one or more reports was 0.33%, compared to 0.72% for the Group Comments in No Guidelines groups (p < 2e-16). This indicates that Nextdoor members were significantly less likely to report Group Comments made in groups with guidelines than in those without guidelines.

Note: Cohen's d is the size of the difference in mean relative to the standard deviation of the data. A larger value indicates a stronger relative effect size. All differences in mean in the above tables are statistically significant under hypothesis testing.
Our second set of hypotheses examined the effect of showing or not showing guidelines to new group members on their participation in the newly formed groups. Here, the results are mixed. With the relatively small amount of human labeling, we saw a very small decrease in the proportion of civil comments but no difference in the proportion of uncivil comments in groups with guidelines compared to those without. Therefore, H2a was not supported. However, we did observe statistically significant changes in the MFD scores and member reports of Group Comments, which supports H2b and H2c. Overall, we found some support for H2: showing guidelines to new members does appear to have a positive, albeit small, impact on the conversations that follow in those groups.
Discussion
The nature of online platform content has become a widespread concern among policy makers, legal scholars, and the general public (Gorwa 2019;Klonick 2017). These concerns have led to normative questions about whether and how content should be managed alongside empirical research on how such management might be possible. Many platforms have utilized traditional legal frameworks to manage certain platform behaviors, which involve progressively severe sanctions ranging from takedowns to user suspensions and exclusions (Tyler et al. 2021). These efforts have been primarily directed at managing and regulating the amount of antisocial behavior and negative experiences. This focus on reducing negative or otherwise offensive content can often come at the cost of exploring how to foster more positive, pro-social content and connections on the platforms. This study works to fill that gap by promoting more positive, civil content in online interactions.
Our results demonstrate that more civil interactions among users can be encouraged by altering the design and architecture of the online environment within which the interaction occurs. The level of civility and moral values of Neighborhood Post Comments increased, while the number of reports of those comments made by users and the level of incivility decreased when conversations about a given topic were moved away from Neighborhood Posts and into new groups. Both architectural features-encouraging the formation of a group and proactively providing guidelines about civil interactions-were associated with improvements in discussions and behaviors. The findings complement previous research (Matias 2019;Tyler et al. 2021) which shows that announcing community guidelines or providing education not only decreases antisocial behaviors (as shown in previous research); it can also lead to more pro-social interactions among users. This research advances the existing literature by demonstrating the need for online environments that clearly communicate expectations and norms to users in order to encourage more pro-social interactions.
Regarding group formation, there have been general concerns about decreased membership in civic groups and civic engagement in recent decades (Putnam 2000). Although groups on social media may enhance online user engagement and offline participation (especially in politics) (Conroy, Feezell, and Guerrero 2012), these groups have been criticized for spreading misinformation and hate speech, as well as creating group polarization (Del Vicario et al. 2016;Merrill and Åkerlund 2018). However, our findings suggest that online groups might encourage users to participate in more civil interactions if appropriate guidelines are provided to new group members.
Although calls for civility have a strong moral appeal, we acknowledge that such an approach may be used to silence and harass feared or subordinate groups (Jamieson et al. 2017;Zurn 2013). Incivility might sometimes be beneficial in terms of drawing attention and passionate engagement from others, or might even be required for some groups to get their point across (Cohen 1960). However, hate speech or uncivil discussion on social media platforms is often cited as a reason that many choose not to engage in discussions online (Kruse, Norris, and Flinchum 2017); minority groups may be more likely to be targeted with hate speech and incivility online (Vogels 2021). Moreover, compared to content moderation, such as removing uncivil content from platforms (which potentially suppresses free speech), encouraging civil behavior on social media platforms via indirect platform interventions would help promote individual and community well-being without silencing minority views.
This study makes theoretical contributions by providing empirical evidence of how platform architecture influences users' behaviors by analyzing a large volume of social media user data. In addition, future studies can use the codebook we developed for this study to operationalize and measure the concept of civility to examine how any number of platform architectures and strategies not only decrease uncivil conversations but also increase civil interactions. Finally, by collaborating with a social media platform, this study applied the theory of civility in a robust and practical setting.
However, this study is not without limitations. First, our analyses focused on conversations in which individuals chose to create groups from Neighborhood Posts. Since this was a relatively rare occurrence (only 4.1% of Neighborhood Posts resulted in a group being created), we cannot assume that all conversations on a given social media platform would be affected by similar interventions. The conversations that became groups appeared to, on average, start at a higher level of civility than the majority of conversations that did not become groups. On the one hand, this could suggest that such platform architecture interventions could have an even greater impact on the conversations that did not opt in to this intervention in our study. On the other hand, it could suggest that the civil conversations that self-selected into this group-forming intervention were more easily nudged towards even greater civility. As is the case with many interventions designed to shift community norms, there will never be a "silver bullet," and the two interventions presented here should be considered alongside other design patterns as platforms build their online environments.
A second limitation is that this study assumes that users' moral and pro-social beliefs are reflected in the language they use in social media posts and comments. Language has typically been regarded as the most common and reliable way for people to indicate their thoughts and emotional states, thereby reflecting who they are and their social relationships (Tausczik and Pennebaker 2009). However, scholars have argued that social media users use different types of language based on their perceptions of a post's potential audience. Future studies could examine how social media users employ specific types of language to indicate their moral and pro-social beliefs, depending on the specific contexts. In addition, individual coders who conducted our civility labeling might have differing moral beliefs that affect their interpretation of civility in the comments they evaluated. To avoid this influence, we conducted multiple training sessions and tested inter-coder reliability. Nevertheless, it would be worth examining how individual coders' moral beliefs may affect their coding of the moral and pro-social beliefs reflected in the posts.
Another area for future inquiry is the exact pathway through which the group setting encourages more civil conversations. In this study, authors of popular posts were encouraged to create groups to facilitate a more focused discussion of a topic that was clearly resonating with their neighborhood. Our results indicate that these groups facilitated more civil conversations than the equivalent discussions that occurred at the neighborhood-wide level. The most basic distinction between these two settings (neighborhood-wide and group) is the size of the audience and the number of participants. However, groups also require those conversing to opt in to having a conversation about a given topic with others in the group. The group setting can also permit organizers to moderate some aspects of the discussion. Our second intervention illustrated that basic guidelines and ground rules for a conversation can be established. Further research should explore which factors within these group settings may contribute to the civility of conversations, and how.
Given that online engagement can be related to offline civic engagement (Putnam 2000), platform-level interventions might have other potential benefits to society. Future work could examine other consequences of group formation and announcing guidelines to new group members, including users' perceptions of or general attitudes toward the platform and their offline civic engagement or participation.
Finally, most previous efforts to measure the content of online interactions have focused on identifying negative content (e.g., hate speech, nudity). This study demonstrates that there are several viable mechanisms for capturing both positive and negative content, including creating a theory-based set of indicators of civility/incivility and drawing on existing dictionaries based on models of positive/negative words and phrases. This study found that these two approaches converged in their identification of both civil and uncivil content, suggesting that both are valid indicators of platform discussions. | 2022-09-23T15:34:32.073Z | 2022-09-20T00:00:00.000 | {
"year": 2022,
"sha1": "53d8cd4b9bf63696d238e10f6ab6e03bbe526f8d",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.tsjournal.org/index.php/jots/article/download/54/37",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "780ec1fd58ad5b721a7b3a657e4fc3bbcdee16f8",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
136152915 | pes2o/s2orc | v3-fos-license | Characteristics of cold atmospheric plasma source based on low-current pulsed discharge with coaxial electrodes
This work investigates the characteristics of the gas discharge system used to create an atmospheric pressure plasma flow. The plasma jet design, with a cylindrical graphite cathode and an anode rod located on the axis of the system, allows a regularly reproducible spark breakdown mode to be realized with a frequency of ∼ 5 kHz and a duration of ∼ 40 μs. The device generates a cold atmospheric plasma flame about 1 cm in diameter in the flow of various plasma-forming gases, including nitrogen and air, at an average discharge current of about 100 mA. In the described construction, the cathode spots of individual spark channels move randomly along the inner surface of the graphite electrode, creating a secondary plasma stream that is, on time average, distributed over the whole exit aperture area after the decay of numerous filamentary discharge channels. The results of the spectral diagnostics of the plasma in the discharge gap and in the stream coming out of the source are presented. Despite the low temperature of atoms and molecules in the plasma stream, the operation of cathode spots with a temperature of ∼ 4000 °C at the graphite electrode inside the discharge system enables the plasma to be saturated with CN radicals and atomic carbon when nitrogen is used as the working gas.
1. Introduction
The source under discussion belongs to the class of devices referred to in the literature as «Cold Plasma Jet». This term is used for sources of a non-equilibrium atmospheric-pressure plasma flow with a heavy-particle temperature in the range of 300-1000 K, which is at least one order of magnitude lower than the electron temperature in such plasma [1]. These devices allow the treatment of thermosensitive materials at lower cost than low-pressure plasma technologies, since no vacuum installations are needed.
Studies on the modification of materials in cold atmospheric plasma [2,3] show that such treatment most often leads both to the development of surface morphology (an increase in roughness after plasma-chemical etching) and to an increase in hydrophilicity owing to the grafting of polar functional atomic groups containing oxygen and nitrogen onto the surface. An important advantage of cold atmospheric plasma treatment, compared with thermal treatment or electrolytic processes, is that only the surface layer is modified, without mechanical or thermal damage to the bulk of the material. Energy consumption and the environmental hazard of the production process are reduced as well.
This work studies the characteristics of a new type of source that creates a jet of cold atmospheric plasma about 1 cm in diameter for different working gases, including nitrogen and atmospheric air. The main feature of the source operation is the chaotic movement of the cathode spots of a low-current pulsed discharge inside a cylindrical graphite cathode, with the creation of a secondary plasma flow after the decay of numerous spark current channels.
2. Methods of experiment
The plasma source diagram is presented in figure 1. The discharge system with coaxial geometry contains a steel rod anode on the axis and a hollow tubular cathode made of isotropic graphite, mounted at the end of a quartz pipe through which the working gas is fed into the discharge gap under excess pressure. During source operation, decaying plasma is forced out of the discharge gap by the flow of the working gas through the 8 mm cathode aperture, forming a plasma flame. Technically pure argon and nitrogen were used as working gases, as well as atmospheric air of natural humidity pumped by a compressor. Electric power for the discharge system was supplied by a high-voltage pulse power source with a pulse rate regulated in the range of 10 Hz to 7.5 kHz. To prevent the discharge from transitioning into the arc mode, a 4 kΩ ballast resistor was installed in the discharge circuit. The amplitude of the high-voltage pulses of positive polarity was 5 kV. The open-circuit voltage pulses have a rise time of ~10 μs and a fall time of approximately 100 μs. The duration of the discharge current pulses was 40 μs.
Average values of the discharge current and discharge voltage were measured with moving-coil (magneto-electric) meters. The pulsed discharge current was measured with a 1.3 Ω shunt. Low discharge currents were measured from the potential drop across the ballast resistor. The pulsed voltage across the discharge gap was measured with a high-voltage probe with galvanic decoupling of the input and output signals. Current and voltage pulses were registered using a digital oscilloscope.
An ISP-30 spectrograph equipped with a MORS-6 photo-electronic cassette, covering the spectral range 196-930 nm, was used for spectral diagnostics of the plasma. The exposure could be varied from 240 ms to 25 s, which made it possible to register spectra in the linear regime of intensity variation of the studied spectral lines. The measurements were performed at a distance of 20 cm from the spectrograph slit: along the hollow-cathode axis and along the axis of plasma-flame expansion. Figure 2a presents a photograph of the glow generated by the plasma jet of the source when compressed air is used as the working gas. The maximum length of the plasma jet, estimated from the zone of visible glow, reaches 0.5-2 cm and varies depending on the features of the pulsed discharge burning: the frequency, amplitude, and duration of the discharge current pulses, the kind of working gas, and its flow rate. The operating range of the average discharge current (pulse frequency) in which the plasma jet is formed is 50-150 mA (2.5-7.5 kHz). The lower limit is connected with substantial weakening of the plasma jet when the working zone of the discharge on the inner surface of the graphite cathode shrinks at low average currents (low frequency). The upper limit of the average current was restricted by the maximum power of the pulse power supply used. The gas flow providing stability and cooling (non-equilibrium) of the plasma jet varies in the range 1-10 l/min. A photograph of the sequence of radially distributed spark current channels (viewed from the end of the discharge system) is presented in figure 2b. The parameters of the gas-discharge system and the gas flow are selected in such a way that the chaotic movement of the cathode spots inside the aperture of the graphite cathode generates a flow of secondary plasma distributed over the total area of the cathode outlet after the decay of numerous filamentary current channels. According to [4], low-current cathode spots on isotropic graphite are characterized by chaotic movement and lead to uniform erosion of the whole working surface of the electrode.
3. Results and discussion
The increase in the area of the inner surface of the cylindrical cathode over which cathode spots are generated and move chaotically can be observed visually as the pulse frequency of the power supply (and, with it, the average discharge current) is increased. When operating at the minimum power-supply frequency of ~10 Hz, the spark channels occupy only the shortest path between the cathode and the anode. The attachment point of the current channels on the end of the rod anode then remains almost stationary from pulse to pulse.
Figure 3 compares two pairs of oscillograms of the discharge current and of the voltage drop across the ballast resistor, recorded with nitrogen for two pulse frequencies of the power supply: 70 Hz and 5 kHz. At the high working frequency, which provides an average discharge current of about 100 mA, a weak glow-discharge current of 2-3 mA is observed between pulses. When the high-voltage pulse is applied, spark breakdown of the gap occurs with a monotonous current increase up to ~460 mA. At the low frequency (70 Hz), there is no current between pulses, and the main spark current grows only after a pre-breakdown pulse with an amplitude of 18 mA has passed, the main pulse being delayed by 10 μs.
Figure 4 presents the waveforms of the discharge current and of the voltage across the gap at an average current of 100 mA and a frequency of 5 kHz when working with nitrogen. The voltage of the low-current glow discharge between pulses remains practically equal to the discharge voltage (~500 V) during the passage of the main current pulse. The amplitude and duration of the current pulses change little with the kind of gas and its flow rate, remaining at the level of 500 mA and 40 μs (for an average discharge current of 100 mA and a frequency of 5 kHz). The gas temperature in the plasma jet was evaluated from the spectral-diagnostics data on the basis of the rotational temperature of N2 molecules. Such a temperature estimate is valid for nitrogen at atmospheric pressure because the condition of equilibrium between the rotational and translational temperatures of the molecules [5] is fulfilled. The rotational temperature was estimated from the 337.13 nm band of the second positive system C³Πu→B³Πg of nitrogen. The determined temperature of the heavy plasma particles is 370 ± 19 K at a discharge current of 150 mA. The gas temperature in the jet can be decreased to 50 °C by reducing the average discharge current and increasing the distance from the cathode outlet.
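The reported pulse parameters can be cross-checked against the stated average discharge current with a simple duty-cycle estimate. The sketch below (in R, assuming an idealized rectangular current pulse, which the oscillograms show is only an approximation) reproduces the ~100 mA average from the ~500 mA amplitude, 40 μs duration, and 5 kHz repetition rate quoted above.

```r
# Back-of-envelope consistency check (rectangular-pulse approximation,
# not the authors' analysis): average current from the quoted pulse parameters.
i_peak  <- 0.5      # A, pulse amplitude (~500 mA)
t_pulse <- 40e-6    # s, pulse duration (~40 us)
f_rep   <- 5e3      # Hz, pulse repetition rate
i_glow  <- 2.5e-3   # A, weak inter-pulse glow current (2-3 mA)

duty  <- t_pulse * f_rep                       # fraction of time the spark is on
i_avg <- i_peak * duty + i_glow * (1 - duty)   # time-averaged current
cat(sprintf("duty cycle = %.0f %%, I_avg ~ %.0f mA\n", 100 * duty, 1e3 * i_avg))
# ~100 mA, in line with the reported average discharge current at 5 kHz.
```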
The spectral-diagnostics data also made it possible to estimate the temperature of the material in the cathode spots on the graphite cathode; its maximum value under the conditions of the study is ~4000 °C. According to the obtained optical emission spectra of the plasma jet in a nitrogen flow (figure 5), carbon sublimation during the operation of the cathode spots on isotropic graphite yields a plasma containing a substantial fraction of CN radicals as well as ions of atomic carbon. In addition to CN molecules, the highest peaks in the recorded spectra of the plasma jet were generated by N2, NO, and N2+ molecules. For the plasma jet in an air flow, besides the radiation bands of the excited CN, N2, NO, and N2+ molecules, lines of atomic oxygen also have appreciable intensity. The presence of these components in the plasma jet can be used for modification of properties and for plasma-chemical etching of the surfaces of materials, including polymers, because the developed plasma source permits the selection of a moderate-temperature treatment regime.
To study the effect of the plasma jet on carbon materials, a series of experiments on the treatment of carbon-fiber and pyrolytic-graphite samples in air and nitrogen plasma was carried out. Figure 6 shows electron-microscope images of carbon fiber treated in the plasma.
As a result of plasma treatment of the samples, a regular surface relief develops (figure 6), leading to an increase in roughness. Measurements of the roughness parameter Ra, obtained with a probe microscope when scanning 1 × 1 µm² areas on treated and untreated pyrolytic-graphite samples, show an approximately threefold increase in roughness after processing by the air plasma of the studied source.
4. Conclusion
The characteristics of the discharge and plasma of an atmospheric plasma jet source based on a pulsed spark discharge with a coaxial hollow graphite cathode have been studied. The conditions for stable formation of the plasma jet at the source outlet were determined for different plasma-generating gases. It was shown that the source plasma based on air or nitrogen is non-equilibrium and that the temperature of the heavy particles does not exceed ~100 °C at average discharge currents up to 150 mA.
In the composition of the plasma jet in an air or nitrogen flow, besides the main excited molecules typical of this class of devices, a substantial fraction of CN radicals was observed. Spectroscopic temperature measurements on the cathode surface indicate the presence of cathode spots on the graphite with a temperature sufficient for carbon sublimation. The operation of such spots leads to saturation of the plasma with CN molecules in a nitrogen or air flow.
The hybrid composition of the cold plasma jet, which includes, besides the plasma-forming gas, atoms and molecules formed during graphite sublimation, broadens the possibilities for using cold atmospheric plasma sources for materials treatment and coating deposition.
"year": 2017,
"sha1": "4655b943c6b38f290ba8f3a482c7081b23f0620d",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/830/1/012051",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "90ba32478ce8550f70fbe8030f14de4df924af24",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
THE STATUS OF Oryctes rhinoceros Nudivirus (OrNV) INFECTION IN Oryctes rhinoceros (Coleoptera: Scarabaeidae) IN INDONESIA
Oryctes rhinoceros is a major problem on oil palm in Indonesia, especially during replanting. Oryctes rhinoceros Nudivirus (OrNV) is a virus that infects both larvae and adults of O. rhinoceros. An extensive survey of OrNV infection on O. rhinoceros in Indonesia has not been conducted. The objective of the research is to identify the rate of OrNV infection in its host from various sampling sites in Indonesia. Adults and larvae of O. rhinoceros were collected from Sumatra, Belitung, Java, Kalimantan and Sulawesi. Infected larvae were determined by their physical character, i.e. prolapsed rectum, while infected adults were determined by dissection to observe the swollen midguts. The incidence of OrNV infection in larvae was difficult to estimate, as only 11 out of 417 larvae showed prominent symptoms. OrNV infection rates in adults O. rhinoceros in oil palm plantations in Sumatra, Belitung and Kalimantan were between 64% and 90%, and female O. rhinoceros could still produce eggs even when they were infected by OrNV. In Sulawesi and Java, which are not major oil palm plantation centres, OrNV infection rates were below 16%. It is suspected that most of the O. rhinoceros population from areas intensively cultivated with oil palm is persistently or latently infected by OrNV and the beetles remain fertile.
INTRODUCTION
Oryctes rhinoceros is currently the main pest of coconut and oil palm plantations in Indonesia (Susanto et al., 2012), and harmful outbreaks of O. rhinoceros frequently develop in replanted areas (Zelazny et al., 1992; Chenon and Pasaribu, 2005; Salim and Hosang, 2013). Decomposed coconut and oil palm trunks provide extensive O. rhinoceros breeding sites, and zero-burning regulations have contributed to rising O. rhinoceros populations (Purba and Sudharto, 2000; Abidin et al., 2014). Extremely large numbers of O. rhinoceros have been found in oil palm plantations: a 2 x 2 x 0.2 m³ sample of decomposed empty fruit bunches was reported to contain 4441 larvae, 12 pupae and 201 adults (Purba et al., 1999). Larval populations in natural coconut breeding sites were smaller than in oil palms, at only 3-48 individuals per 1 x 1 x 0.5 m³ (Indriyanti et al., 2017).
The discovery of the pheromone active ingredient, ethyl 4-methyl octanoate, as an attractant for O. rhinoceros adults revolutionised the trapping of adults (Renou et al., 1998; Chenon et al., 2001). Chenon and Pasaribu (2005) reported that thousands of adults had been trapped with pail pherotraps in North Sumatra. Combining several control methods for O. rhinoceros has been suggested, such as manual collection of larvae from decomposed palm trunks, bio-traps made from decomposed empty fruit bunches, Metarhizium anisopliae application in breeding sites, camphor treatment of young shoots, manual collection of adults from their tunnels, and pheromone traps (Purba et al., 1999; Susanto et al., 2012). Although various control methods have been used, O. rhinoceros remains a substantial problem. Pheromones have been found to be effective but expensive, and so far only large private oil palm companies have been able to use them for control. The cost factor has prevented the use of commercial pheromone products by small farmers.
OrNV is a non-occluded baculovirus belonging to the family Nudiviridae, genus Alphanudivirus (Jehle et al., 2013). OrNV infects the larval and adult stages of O. rhinoceros in the midgut epithelial cells and the fat body (Huger, 2005; Jehle et al., 2013). Bedford (2014) states that OrNV has been used as a main control of O. rhinoceros in non-endemic areas such as the Pacific; in areas such as Indonesia and Malaysia, however, OrNV has not been a main method of control. Hopes for effective OrNV utilisation lie in the differences in pathogenicity between isolates. There are slight differences in the EcoRI and HindII restriction patterns of Indonesian isolates (Kobayashi and Somowiyarjo, 1995). The most virulent OrNV isolate, isolate B, has delivered significant results for O. rhinoceros control in Malaysia (Ramle et al., 2005). The benefits of OrNV as a biological control of O. rhinoceros are that it is environmentally friendly, cheap and permanent.
A survey of OrNV incidence across a wide area of Indonesia has not been conducted. Indonesia is a place of origin of OrNV (Huger, 1966; Bedford, 2014), and the virus is thought to occur as various isolates with different levels of pathogenicity. This article presents preliminary results on the OrNV infection rate in O. rhinoceros obtained from the survey. Preserved raw O. rhinoceros organs containing OrNV will be used for pathogenicity tests.
Collection of O. rhinoceros Adults and Larvae
Adults were trapped by pail pherotraps equipped with vanes. Larvae were collected from decomposed coconut and oil palm trunks, decomposed sawdust and other organic materials. Collected samples were sent to the Insect Pathology Laboratory, Plant Protection Department, Bogor Agricultural University between January and April 2018.
Observation of OrNV Infection in Larvae and Adults by Physical Characters
Larvae were identified using a simplified field key for O. rhinoceros (Beaudoin-Ollivier et al., 2000). Infected larvae were identified by swollen, transparent abdomens. Sometimes the abdomen has a white, shiny, pearlescent appearance (Huger, 1966). Heavily infected larvae also show signs of prolapsed rectum (Huger, 2005). Larvae were observed for the OrNV infection by external characters as described by Huger (1966;2005) and also on their digestive tracts. The digestive tract was cleaned with sterilised aquadest and the midgut sections were observed. Adults were measured from horn to abdominal tip. Fresh adults were opened along the line between the dorsal and ventral abdomen. Infected adults were characterised by a whitish swollen gut (Crawford and Zelazny, 1990;Burand, 1998;Huger, 2005) and their eggs were counted.
OrNV Infection Rate in Larvae and Adults
Fifty larvae were obtained from each of the nine sampling points except for Morowali, Central Sulawesi, from which only 17 were collected (Table 1). A prolapsed rectum was found in only three larvae from Riau, while only four larvae with white swollen abdomens were found in each of the Riau and Belitung sites (Figures 1 and 2). Larvae with a prolapsed rectum had more hemolymph liquid, but there were no differences between their guts and those of healthy larvae. Infected larvae can only be identified by their external body symptoms. There were 406 larvae that had no external symptoms, and a molecular method might be more accurate for OrNV detection than reliance on physical observation. Adults were collected from eight sampling sites (Table 1). There were 207 adults with white swollen midguts (Figure 6); the rest had transparent, beige, brownish, or black midguts (Table 2; Figures 3, 4 and 5). Adults with white midguts had gut diameters ranging from 2 to 4 mm. Those with a larger midgut diameter contained a large amount of white liquid and were fragile. Adults with brown, beige, and transparent midguts were assumed to be healthy (Crawford and Zelazny, 1990). Those with black midguts appeared to be ailing and were moving less than healthy specimens. The adult populations from the eight sampling sites had different degrees of OrNV infection. Adults from Riau and North Sumatra had high rates of OrNV infection, at 90.2% and 89.5%, respectively, while adults from Java and Sulawesi had low OrNV infection rates of below 16%. OrNV infection rates in Belitung and Central Kalimantan were also high, at above 60% and 70% (Figure 7). It has been suggested that there is an interaction between the intensity of oil palm plantation and the incidence of OrNV infection.
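Because the infection rates are proportions estimated from modest sample sizes, it can be useful to attach exact binomial confidence intervals to them. The R sketch below illustrates this; the per-site counts are hypothetical values chosen only to reproduce the reported percentages approximately, since the raw per-site counts are not reproduced here.

```r
# Illustrative sketch: infection rate per site with an exact (Clopper-Pearson)
# confidence interval. Counts below are HYPOTHETICAL, chosen only so that the
# resulting rates are close to the percentages reported in the text.
site_counts <- data.frame(
  site     = c("Riau", "North Sumatra", "Java", "Sulawesi"),
  infected = c(46, 34, 3, 3),
  examined = c(51, 38, 25, 20)
)

rate_with_ci <- function(x, n) {
  bt <- binom.test(x, n)    # exact binomial test; conf.int is Clopper-Pearson
  c(rate = x / n, lower = bt$conf.int[1], upper = bt$conf.int[2])
}

cbind(site_counts,
      t(mapply(rate_with_ci, site_counts$infected, site_counts$examined)))
```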
Oil palms have been cultivated since 1911 in North Sumatra, 1922 in South Sumatra, and 1981 in Kalimantan (Pamin, 1998; Suprianto et al., 2016). High OrNV incidence was found to be concomitant with intensive oil palm cultivation. In contrast, Java and Sulawesi are not intensively cultivated with oil palms and have low OrNV incidence. Areas under oil palm cultivation are 36 163 ha in West Java and 354 000 ha in Sulawesi (Directorate General of Estate Crop RI, 2016; Gapki, 2015). Bedford (2013) has reported low OrNV incidence of 7%-25% in the Philippines and medium-to-high OrNV incidence of 41%-75% in India. Oil palm is the most important commodity crop in Malaysia with total planted areas of over 4.9 million hectares (MPOB, 2011). OrNV incidence in Malaysia has been reported as high as 75%-100% (Ramle et al., 2005). High OrNV incidence in Malaysia is also concomitant with intensive oil palm cultivation.
Number of Eggs in Infected Females
Healthy female O. rhinoceros and those infected by OrNV had almost the same number of eggs, 128 and 122, respectively. Of the females with OrNV infection, 57% had eggs in their abdomens (Figure 8). Female O. rhinoceros from areas of intensive oil palm cultivation, i.e. Sumatra, Belitung and Kalimantan, still produced eggs even when they were infected by OrNV (Figure 9). In contrast, infected females from Java and Sulawesi tended not to have eggs (Figure 10).
Although OrNV incidence was high in Sumatra, Belitung and Kalimantan, O. rhinoceros still survived in good reproductive condition. Zelazny and Alfiler (1991) stated that OrNV was an important control of O. rhinoceros in South-east Asia, but O. rhinoceros outbreaks continue to occur even though some individuals in the populations are infected. Evidence suggests that the extensive availability of organic material at replanting is the reason for O. rhinoceros outbreaks (Chenon and Pasaribu, 2005; Salim and Hosang, 2013). OrNV may have had significant opportunities to change genetically and to circulate among the abundant hosts. Crawford and Zelazny (1990) stated that the OrNV genome always changed with non-lethal infections.
OrNV is a species in the genus Alphanudivirus, family Nudiviridae (Jehle et al., 2013). Nudiviruses adopt both vertical and horizontal modes of infection (Williams et al., 2017). Vertical infection is also known as latent or persistent infection (Wang and Jehle, 2009) and renders the virus asymptomatic. Latent infections become active depending on host environmental resistance or host physiological stresses. Viruses activated from a symptomless condition cause lethal infections, and eventually horizontal transmission occurs (Cory, 2015). It is suspected that most Indonesian O. rhinoceros populations from intensively cultivated oil palm areas are persistently or latently infected by OrNV and the beetles remain fertile.
Adults Attracted by Pheromone and Size of the Adults
Pheromones attracted more females than males, with the highest percentage of 77.2% in North Sumatra and an average of 61.5% across the eight sampling sites (data not shown). Other research has also indicated that females are more strongly attracted by pheromones than males, with results of 68% in India, 60% in Malaysia and 81% in North Sumatra. These females may have been searching for mates or seeking breeding sites (Bedford, 2014; Zelazny and Alfiler, 1991; Morin et al., 1996). Attracted females may come from breeding sites or from the crowns of oil palms. Emergent adults fly to the crowns of palms for five weeks, then fly off to lay eggs in breeding sites where they stay for seven weeks, before returning to drill into oil palm crowns to feed (Norman and Basri, 2004).
Both male and female O. rhinoceros had a median length of 4 cm. First-quartile (Q1) female length was 3.8 cm, with 4.3 cm for the third quartile (Q3). Q1 male length was 3.7 cm, with 4.2 cm for Q3. Adult length depends on nutrition during the larval phases (Pallipparambil, 2015). Neither the number of female O. rhinoceros attracted by the pheromone nor the size of the adults captured in the field appears to be affected by the presence of OrNV infection. OrNV-infected larvae can be identified by external body characteristics, while infected adults can be identified by internal identification of a whitish, swollen midgut. The high incidence of OrNV infection in O. rhinoceros can be correlated with intensive oil palm cultivation such as that found in Sumatra, Belitung and Kalimantan. In some cases, infected adults from Sumatra, Belitung and Kalimantan were found to be alive and with eggs in their abdomens.
"year": 2020,
"sha1": "e035f7eb1ef7f4fc5317cd783223e11d6263d7da",
"oa_license": null,
"oa_url": "http://jopr.mpob.gov.my/wp-content/uploads/2020/12/joprv32dec2020-sat.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "9526b9f23c996e7dbbfa093789662463dbcab6b0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
Active tactile exploration enabled by a brain-machine-brain interface
Brain-machine interfaces (BMIs)1,2 use neuronal activity recorded from the brain to establish direct communication with external actuators, such as prosthetic arms. While BMIs aim to restore the normal sensorimotor functions of the limbs, so far they have lacked tactile sensation. Here we demonstrate the operation of a brain-machine-brain interface (BMBI) that both controls the exploratory reaching movements of an actuator and enables the signalling of artificial tactile feedback through intracortical microstimulation (ICMS) of the primary somatosensory cortex (S1). Monkeys performed an active-exploration task in which an actuator (a computer cursor or a virtual-reality hand) was moved using a BMBI that derived motor commands from neuronal ensemble activity recorded in primary motor cortex (M1). ICMS feedback occurred whenever the actuator touched virtual objects. Temporal patterns of ICMS encoded the artificial tactile properties of each object. Neuronal recordings and ICMS epochs were temporally multiplexed to avoid interference. Two monkeys operated this BMBI to search and discriminate one out of three visually undistinguishable objects, using the virtual hand to identify the unique artificial texture (AT) associated with each. These results suggest that clinical motor neuroprostheses might benefit from the addition of ICMS feedback to generate artificial somatic perceptions associated with mechanical, robotic, or even virtual prostheses.
Until now, BMIs have relied almost exclusively on visual feedback. Prosthetic sensation has been studied in the context of sensory substitution 14 and targeted reinnervation 15 ; however, these approaches have limited application range and channel capacity. To provide a proof-of-concept method for sensorizing neuroprostheses, we implemented a BMBI that extracts movement commands from the motor areas of the brain while delivering ICMS feedback in somatosensory areas 1,2,16 to evoke discriminable percepts [17][18][19][20] . This idea received support from our pilot study 16 in which a monkey responded to ICMS cues with the movements of a BMI-controlled cursor. However, the ICMS cue did not provide feedback of object-actuator interactions in this previous demonstration.
The BMBI developed here enabled active tactile exploration 21 during BMI control (Fig. 1a). Two monkeys (M and N) received multielectrode implants in M1 and S1 (Fig 1b). They explored virtual objects with either a computer cursor or a virtual image of an arm ( Supplementary Fig. 1a,b). In hand control (HC), the monkeys moved a joystick with their left hands to position the actuator. They searched through a set of virtual objects, selected one with a particular artificial texture (AT) conveyed by ICMS, and held the actuator over that object to obtain reward ( Fig. 1a; Supplementary Fig. 1c,d). During brain control (BC), the joystick was disconnected and the actuator was controlled by the activity of righthemisphere M1 neurons 9,22,23 . The behavioural tasks varied in the number of objects on the screen, ATs employed, and the actuator type ( Fig. 2a) and were more difficult than previously reported BMI tasks because of the presence of multiple objects in the workspace, a prolonged object selection period, and the necessity of interpreting ICMS feedback.
ICMS was delivered through two pairs of microwires to the hand representation area of S1 in monkey M (Fig. 1c) and through one pair of microwires to the leg representation in monkey N. Each AT consisted of a high-frequency pulse train presented in packets at a lower secondary frequency (Fig. 1d; Supplementary Fig. 2a). The rewarded AT (RAT) consisted of 200 Hz pulse trains delivered in 10 Hz packets. The comparison ATs were represented by 400 Hz pulse trains delivered in 5 Hz packets (unrewarded artificial texture, UAT) or by an absence of ICMS (null artificial texture, NAT).
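A minimal sketch of how such packeted pulse trains could be parameterized is shown below in R. The pulses-per-packet values are taken from the Methods (nine for RAT and 18 for UAT per stimulation sub-interval), but mapping one packet to one sub-interval is an assumption made here for illustration, not an explicit statement in the text.

```r
# Sketch of artificial-texture pulse timing (illustrative, not the authors' code).
# Each texture is a packet of carrier-frequency pulses repeated at the packet rate.
at_pulse_times <- function(carrier_hz, packet_hz, pulses_per_packet, duration_s = 1) {
  packet_starts <- seq(0, duration_s, by = 1 / packet_hz)
  unlist(lapply(packet_starts,
                function(t0) t0 + (0:(pulses_per_packet - 1)) / carrier_hz))
}

# RAT: 200 Hz pulse trains delivered in 10 Hz packets.
# UAT: 400 Hz pulse trains delivered in 5 Hz packets.
# pulses_per_packet follows the Methods (9 and 18) under the assumption above.
rat <- at_pulse_times(carrier_hz = 200, packet_hz = 10, pulses_per_packet = 9)
uat <- at_pulse_times(carrier_hz = 400, packet_hz = 5,  pulses_per_packet = 18)
length(rat); length(uat)   # pulses delivered per second for each texture
```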
The major challenge solved here was the real-time coupling of ICMS feedback with the BMI decoder. As ICMS artefacts masked neuronal activity for 5-10 ms after each pulse (Fig. 1d, e), we multiplexed neuronal recordings and ICMS with a 20 Hz clock rate (Supplementary Fig. 2a). The interleaved intervals proved adequate for online motor control and artificial sensation, a result that was not clear a priori since S1 stimulation could have affected M1 processing through the connections between these areas. BMBI performance improved with training. In Task I (Fig. 2a-I), monkey M surpassed chance performance after nine sessions; monkey N after four (P < 0.001, one-sided binomial test). Improvement continued with more difficult tasks (Tasks II-V) (Fig. 2a, b; Supplementary Fig. 3a). In particular, the time spent exploring unrewarded ATs decreased (Fig. 2c; Supplementary Fig. 3b). Additionally, performance improved within daily experimental sessions (Fig. 2d). Psychometric analysis of RAT stimulation amplitudes indicated that at least 8 nC per ICMS waveform phase (100 μs wide current pulses of 80 μA) was needed for the AT discrimination (P < 0.001, one-sided binomial test). Performance was at chance level for catch trials (in Task II), where ICMS was not delivered (P = 0.90, one-sided binomial test).
Additional hallmarks of active exploration were seen in the conditional probabilities of selecting different ATs (Figure 3b,d). During HC trials, the monkeys stayed over the first encountered AT (arrows that loop back to the same AT in Fig. 3b,d) with high probability if it was RAT (P=0.70 for monkey M and 0.76 for monkey N), but with low probability if it was UAT (0.05 and 0.01) or NAT (0.0 and 0.0) (Fig. 3b,d, left). After examining the second AT, the monkeys could identify the correct AT either by apprehending it directly or through a process of elimination. This follows from the increase in the probability of moving to RAT from NAT or UAT from chance to approximately 0.7 and the decrease in the probability of revisiting UAT or NAT to approximately 0.2 (Fig. 3b,d right). Similar effects were observed for BC (Fig. 3e, red text).
In BCWOH, task requirements were eased: the object selection period was reduced to 300-500 ms and monkeys were allowed to overstay at an incorrect object. Performance for monkey M, measured as the number of rewards per minute, steadily improved from 1.021 ± 0.007 to 2.962 ± 0.005 (mean ± s.e.m.) (Fig. 2d). Similar improvements were observed for HC and BCWH (inset in Fig. 2d). The average frequency of actuator displacements, calculated from power spectra, was correlated with the improvement in performance during BCWOH (R 2 =0.16 for the X-coordinate and R 2 =0.26 for the Y-coordinate, P<0.001, F-test), which indicated that the monkey modulated its brain activity to scan the targets faster. This behaviour was not random, as the exploration interval for NAT (3,620 ± 350 ms, mean ± s.e.m.) was significantly shorter (P<0.02, Wilcoxon rank sum test) than for UAT (4,270 ± 310 ms). The exploration of RAT (2,255 ± 94 ms) was the shortest due to the reduced selection period. For monkey N, BCWOH performance (2.084 ± 0.085 rewards per minute) did not change within sessions, and the differences in exploration intervals were not significant.
In agreement with others 26-30 , we observed that M1 neurons represented the movements of the actuator even when it was passively observed by the monkey (Supplementary Fig. 7). Actuator movements (task V) replayed for the monkeys could be reconstructed from M1 activity, using a separately trained decoder (Fig. 4d), with similar accuracy to reconstructions made for HC (Fig. 4c). M1 representation of the passively viewed actuator is consistent with our suggestion that a neuroprosthetic limb might become incorporated in brain circuitry 1 .
Our BMBI demonstrated direct bidirectional communication between a primate brain and an external actuator. As both the afferent and efferent channels bypassed the subject's body, we propose that BMBIs can effectively liberate a brain from the physical constraints of the body. Accordingly, future BMBIs may not be limited to limb prostheses, but may include devices designed for reciprocal communication between and among neural structures and with a variety of external actuators.
METHODS SUMMARY
All animal procedures were performed in accordance with the National Research Council's Guide for the Care and Use of Laboratory Animals and were approved by the Duke University Institutional Animal Care and Use Committee. Two rhesus monkeys were implanted with micro-wire arrays in both hemispheres. These implants were used for both recordings and ICMS (symmetric, biphasic, charge-balanced pulse trains; 100-200 μs, 120-200 μA). Monkeys manipulated a joystick to produce reaches with an actuator (computer cursor or a virtual reality arm) towards up to three objects displayed on a computer monitor. The task required searching for the object with particular artificial tactile properties. Objects consisted of a central response zone and a peripheral feedback zone. Artificial tactile feedback was delivered when the actuator entered the feedback zone and continued in the response zone. Monkeys held the actuator over the correct object for 0.8-1.3 s to receive a fruit juice reward. Holding over an incorrect object cancelled the trial. In brain control trials, the actuator was controlled by cortical ensemble activity decoded using an Unscented Kalman Filter 23 . An interleaved scheme of alternating recording and stimulation subintervals (50 ms each, 50% duty cycle) was implemented to achieve concurrent afferent and efferent operations. In all offline analyses, ICMS periods were excluded from calculations of neuronal firing rates. The virtual reality arm was animated using Motion Builder (Autodesk, Inc., San Rafael, CA).
Subjects and implants
Two adult rhesus macaque monkeys (Macaca mulatta) participated in this study. Each monkey was implanted with four 96-micro-wire arrays constructed of stainless steel 304. Each hemisphere received two arrays: one in the upper and one in the lower limb representation areas. These arrays sampled neurons in both primary motor (M1) and primary sensory (S1) cortex. We used recordings from the right hemisphere arm arrays in each monkey, since both manipulated the joystick with their left hands. Within each array, microwires were grouped in two 4-by-4 uniformly spaced grids of 16 triplets of electrodes. The separation between triplets of electrodes was 1 mm. The electrodes of each triplet had three different lengths, staggered at 300 μm intervals. The penetration depth of each triplet was adjusted with a miniature screw. After adjustments during the month following the implantation surgery, the depth of the triplets was fixed. The longest electrodes in all triplets protruded to 2 mm in length measured from the cortical surface.
Tasks
The monkeys were trained to manipulate a computer cursor or a virtual reality arm and to make reaches towards objects displayed on a computer monitor. The objects were visually identical, but exhibited different tactile properties as conveyed by ICMS of S1. In manual control, each trial commenced when the monkey held the joystick with their working hand. Then, a target appeared in the centre of the screen. The monkey had to hold the actuator (cursor or virtual reality monkey arm) within that centre target for a random hold time uniformly drawn from the interval 0 to 2 s. After this, the central target disappeared and was replaced by a set of virtual objects radially arranged about the centre of the screen. Each of these consisted of a central response zone and a peripheral feedback zone, distinguished by their shading (Supplementary Fig. 1c). Tactile feedback was delivered in the feedback zone or corresponding response zone. For monkey M, the radius of the response zone varied from 1.5 to 4.0 cm and the radius of the feedback zone varied from 4.5 to 7.25 cm, across all tasks and sessions. For monkey N, the radius of the response zone varied from 1.5 to 4.5 cm and the radius of the feedback zone varied from 4.75 to 9.5 cm, across all tasks and sessions. A trial was concluded when the monkey placed the actuator within the response zone for a hold interval (800 to 1300 ms for HC, depending on the session; 300 to 500 ms for BC) or the monkey released the joystick handle (in manual control trials). The next trial could commence after an inter-trial interval (500 ms). The sequence of events was the same during brain control trials. In some brain control sessions, the joystick was removed from the behavioural setup. For these, each new trial commenced following the previous inter-trial interval without the requirement for the monkey to hold the joystick. For Tasks I through III, monkeys chose from a set of two objects. For Task I, the monkeys had to choose between RAT and NAT for fixed object locations. For Task II, RAT and NAT were presented on the screen at different angular locations on each trial. For Task III, object number and spatial arrangement were the same as in Task II, but RAT and UAT were used. For Task IV, three objects were used (RAT, UAT and NAT), whose arrangement on the screen varied from trial to trial. Finally, for Task V, the virtual reality monkey arm replaced the computer cursor.
Psychometric measurements
Psychometric measurements determined the minimum ICMS amplitude that the monkeys could discriminate. In these measurements, the ICMS amplitude was different on every trial. In each psychometric session a range of amplitudes was selected so that about half were in a range clearly above the monkeys' threshold for discrimination and half were in a range of unknown discriminability.
Catch trials
In some sessions, a small percentage of trials (typically 1%) were designated for catch trials. For these trials, the microstimulator delivered pulse trains with zero amplitude, however all other aspects of the behavioural task remained the same. This allowed us to confirm that there were no unintentional sources of information that the monkeys could use to perform the tasks.
Algorithms
An Nth-order Unscented Kalman filter (UKF) 23 was used for BC predictions. Up to a 10th-order UKF was used in some sessions, but in most sessions we found that the 3rd-order UKF was sufficient. The filter parameters were fit based on the hand movements of the monkeys while they performed the task using a joystick, or based on passive observation of actuator movements while the monkeys' arms were restrained.
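For readers unfamiliar with this family of decoders, the sketch below shows a single predict/update step of a standard linear Kalman filter in R. It is deliberately simplified: the study used an Nth-order unscented Kalman filter with a richer tuning model, whereas this toy example uses a linear observation model and hypothetical matrix dimensions.

```r
# Simplified stand-in for the decoder: one predict/update step of a linear
# Kalman filter (NOT the Nth-order UKF used in the study).
kf_step <- function(x, P, z, A, W, H, Q) {
  # Predict the actuator state forward in time
  x_pred <- A %*% x
  P_pred <- A %*% P %*% t(A) + W
  # Update with the binned firing-rate vector z
  K <- P_pred %*% t(H) %*% solve(H %*% P_pred %*% t(H) + Q)
  x_new <- x_pred + K %*% (z - H %*% x_pred)
  P_new <- (diag(nrow(P)) - K %*% H) %*% P_pred
  list(x = x_new, P = P_new)
}

# Toy dimensions: 2-D cursor position as state, 10 neurons as observations.
set.seed(1)
A <- diag(2); W <- 0.01 * diag(2)     # state transition / process noise
H <- matrix(rnorm(20), nrow = 10)     # tuning model (would be fit from training data)
Q <- diag(10)                         # observation noise
state <- list(x = matrix(0, 2, 1), P = diag(2))
z     <- matrix(rnorm(10), 10, 1)     # one 100 ms bin of firing rates
state <- kf_step(state$x, state$P, z, A, W, H, Q)
```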
ICMS
Symmetric, biphasic, charge-balanced pulse trains were delivered in a bipolar fashion across pairs of microwires. The channels selected had clear sensory receptive fields in the upper limb (monkey M, two pairs of microwires with synchronous pulse trains) or lower limb (monkey N, one pair of microwires). For monkey M, the anodic and cathodic phases of stimulation had a pulse width of 105 μs; for monkey N the pulse width was 200 μs. The anodic and cathodic phases of the stimulation waveforms were separated by 25 μs.
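The stated current amplitudes and phase widths imply the charge delivered per waveform phase, which can be checked against the ~8 nC discrimination threshold reported in the Results. A trivial R calculation is shown below, using values quoted in the text; the specific amplitude-width pairings beyond those stated are illustrative.

```r
# Charge delivered per waveform phase = current amplitude x phase width
# (simple cross-check using values from the text, not the authors' calculation).
charge_nC <- function(amp_uA, width_us) amp_uA * 1e-6 * width_us * 1e-6 * 1e9

charge_nC(80, 100)    # ~8 nC: the psychometric discrimination threshold
charge_nC(120, 105)   # monkey M: 105 us phases, lower end of the 120-200 uA range
charge_nC(200, 200)   # monkey N: 200 us phases, upper end of the amplitude range
```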
Interleaved ICMS and recordings
We implemented an interleaved scheme of alternating recording and stimulation intervals (Supplementary Fig. 2a). Our BMI had a 10 Hz update rate; that is, 100 ms of past neural data were used to make predictions about the desired state of the actuator. We broke up each 100 ms interval into two 50 ms sub-intervals. In the first sub-interval (Rec), neural activity was recorded as usual and the measured spike count was used to estimate the firing rate for the whole 100 ms interval. The second sub-interval (Stim) was reserved exclusively for delivering ICMS; all spiking activity occurring in this sub-interval was discarded. Whenever the actuator was in contact with a virtual object at the start of a Stim interval, an ICMS pulse train was delivered. For RAT, nine pulses of ICMS were delivered; for UAT, 18 pulses of ICMS were delivered; for NAT, no pulses of ICMS were delivered. The neural activity in the Stim interval was discarded even in the case of NAT, so that there would be no bias induced by ICMS-occluded neural data.
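The interleaving described above can be laid out as an explicit timeline. The R sketch below builds such a schedule for a few update cycles; the function and column names are illustrative, while the pulse counts per Stim sub-interval are taken from the text.

```r
# Sketch of the interleaved recording/stimulation schedule (illustrative names):
# 10 Hz update rate, each 100 ms cycle = 50 ms record + 50 ms stimulate.
interleave_schedule <- function(n_cycles, texture_at_cycle) {
  pulses <- c(RAT = 9, UAT = 18, NAT = 0)   # pulses per Stim sub-interval (from the text)
  data.frame(
    cycle       = seq_len(n_cycles),
    rec_ms      = (seq_len(n_cycles) - 1) * 100,        # start of Rec sub-interval
    stim_ms     = (seq_len(n_cycles) - 1) * 100 + 50,   # start of Stim sub-interval
    texture     = texture_at_cycle,
    icms_pulses = pulses[texture_at_cycle]
  )
}

# Example: the actuator touches the rewarded texture during cycles 3-5.
interleave_schedule(6, c("NAT", "NAT", "RAT", "RAT", "RAT", "NAT"))
```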
Virtual reality monkey arm
In Task V, we introduced a novel brain-controlled virtual reality arm with realistic kinematic movements and spatial interactions. The control loop rate was 50 Hz, with visual refreshing at 30 Hz. The arm model was designed to depict a rhesus conspecific. We presented a first person perspective of the virtual reality arm to the monkey, who controlled the position of the hand. Arm posture was controlled using a mixture of direct control of end effectors and inverse kinematics, constrained by the physical interdependencies of the joints.
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.
"year": 2011,
"sha1": "aa021b1cf63517a570a975626f29f9b136c17e4a",
"oa_license": "unspecified-oa",
"oa_url": "https://europepmc.org/articles/pmc3236080?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "6a90c2ccd7b86a9a06041ed9f0024771534ec8ae",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": []
} |
Identification of ubiquitination-related gene classification and a novel ubiquitination-related gene signature for patients with triple-negative breast cancer
Background: Ubiquitination-related genes (URGs) are important biomarkers and therapeutic targets in cancer. However, URG prognostic prediction models have not been established in triple-negative breast cancer (TNBC) before. Our study aimed to explore the roles of URGs in TNBC. Methods: The Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) and the Gene Expression Omnibus (GEO) databases were used to identify URG expression patterns in TNBC. Non-negative matrix factorization (NMF) analysis was used to cluster TNBC patients. The least absolute shrinkage and selection operator (LASSO) analysis was used to construct the multi-URG signature in the training set (METABRIC). Next, we evaluated and validated the signature in the test set (GSE58812). Finally, we evaluated the immune-related characteristics to explore the mechanism. Results: We identified four clusters with significantly different immune signatures in TNBC based on URGs. Then, we developed an 11-URG signature with good performance for patients with TNBC. According to the 11-URG signature, TNBC patients can be classified into a high-risk group and a low-risk group with significantly different overall survival. The predictive ability of this 11-URG signature was favorable in the test set. Moreover, we constructed a nomogram comprising the risk score and clinicopathological characteristics with favorable predictive ability. All of the immune cells and immune-related pathways were higher in the low-risk group than in the high-risk group. Conclusion: Our study indicated URGs might interact with the immune phenotype to influence the development of TNBC, which contributes to a further understanding of molecular mechanisms and the development of novel therapeutic targets for TNBC.
Introduction
Breast cancer ranks first in terms of incidence among all cancers according to statistics from the International Agency for Research on Cancer (IARC) (Siegel et al., 2022). Triple-negative breast cancer (TNBC) is the most malignant and aggressive molecular subtype of breast cancer that lacks the expression of the estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2). Compared to other molecular subtypes of breast cancer, TNBC exhibits highly aggressive biological behavior including early recurrences, distant metastases, and a poor survival rate (Waks and Winer, 2019). As endocrine therapy and anti-HER2-targeted therapy are unsuitable for TNBC, chemotherapy and surgery remain the first-line treatments for TNBC, with limited efficacy. Although novel therapies including targeted therapy and immune therapy are implemented in clinical practice and clinical trial design (Keenan and Tolaney, 2020;Vagia et al., 2020;Bianchini et al., 2022;Tarantino et al., 2022), clinical outcomes for TNBC remain unsatisfactory. Therefore, the identification of molecules that contribute to risk stratification and clinical decision-making is critical to improving the prognosis of TNBC.
Ubiquitination is one of the most common and important posttranslational modifications (PTMs). The ubiquitin-proteasome system is a highly specific, ATP-dependent pathway regulating specific protein degradation in eukaryotes. Ubiquitination is a reversible process that is mediated by three types of enzymes, namely, E1 ubiquitin-activating enzyme, E2 ubiquitin-conjugating enzyme, and E3 ubiquitin ligase (Scheffner et al., 1995). E1 activates ubiquitin and transfers it to its activation site Cys in an ATP-dependent manner. E2 transports ubiquitin to E2 itself by binding E1. E3 recognizes substrate proteins and catalyzes ubiquitin transfer from E2 to the substrate. Proteins labeled with ubiquitin are finally taken to the proteasome for degradation. There are other ubiquitin-like modifications, including small ubiquitin-like modifier (SUMO) modification (SUMOylation), pupylation, and ISGlation (Hochstrasser, 2009). The process can be reversed by using deubiquitinating enzymes (DUBs) to cleave ubiquitin and ubiquitin-like molecules from the substrate. In addition, ubiquitin also has many non-degradative functions (Chen and Sun, 2009). As reported by other studies (Ulrich and Walden, 2010;Berndsen and Wolberger, 2014), ubiquitination plays important roles in many cell signaling pathways and biological processes, such as protein activation and transactivation, DNA replication and repair, cell cycle, chromatin dynamics, transcription signaling transduction, autophagy, and immune response, suggesting they are important biomarkers and therapeutic targets. One study constructed a SUMO-related prognostic classifier based on the expression of SUMO1/2/3 and the disease-free survival of TNBC patients (Lin et al., 2021). However, ubiquitination-related gene (URG) prognostic prediction models have not been established in TNBC before.
In the present study, we used the Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) and Gene Expression Omnibus (GEO) databases to screen prognostic URGs. Based on these prognostic URGs, we identified a novel URG-based molecular classification of TNBC. Moreover, we constructed the 11-URG signature with good performance for patients with TNBC. Our analysis suggests that URGs play important roles in TNBC and are potential prognostic biomarkers and therapeutic targets.
Data collection and processing
The gene expression quantification data (HTSeq-FPKM) and corresponding clinic data of patients with TNBC were retrieved from the METABRIC database (http://www.METABRIC.org/) and the GEO database (http://www.ncbi.nlm.nih.gov/geo/). We excluded patients with a survival time of less than 30 days. The gene expression profiles included 297 TNBC patients in METABRIC and 106 TNBC patients in GSE58812. We combined the METABRIC and GSE58812 and removed the batch effects using the ComBat function in the "sva" package (Johnson et al., 2007). We applied principal component analysis (PCA) to test the batch effects. The URGs were downloaded from the ubiquitin and ubiquitin-like conjugation database (Gao et al., 2013) (UUCD) (http://uucd. biocuckoo.org). We merged the URGs and gene expression profiles of TNBC patients to acquire URG expression in both the METABRIC database and the GSE58812 dataset. As a result, 403 TNBC patients with 525 URG expression data and baseline data were included for subsequent analysis.
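A hedged sketch of this merging and batch-correction step is given below in R. The object names (expr_metabric, expr_gse58812) are placeholders, and the call assumes gene-by-sample expression matrices restricted to the shared URGs, which is the layout ComBat expects.

```r
# Hedged sketch of the cohort merging and batch correction (object names are
# placeholders; matrices are genes x samples, restricted to the shared URGs).
library(sva)

expr_all <- cbind(expr_metabric, expr_gse58812)
batch    <- c(rep("METABRIC", ncol(expr_metabric)),
              rep("GSE58812", ncol(expr_gse58812)))

expr_adj <- ComBat(dat = as.matrix(expr_all), batch = batch)

# PCA before/after to confirm that the batch effect has been removed
pca_before <- prcomp(t(expr_all), scale. = TRUE)
pca_after  <- prcomp(t(expr_adj), scale. = TRUE)
plot(pca_after$x[, 1:2], col = factor(batch), pch = 19)
```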
Classification of TNBC based on URGs
The univariate Cox proportional hazard regression analysis was used to explore the association of URGs with TNBC patients' overall survival (OS) and OS time. Those URGs with p-value <.01 were considered prognostic URGs. To identify the value of prognostic URGs, we performed non-negative matrix factorization (NMF) analysis to cluster the 297 METABRIC patients and 106 GSE58812 patients. The clustering number K was set as 2-10. We determined the average profile width of a common member matrix by using the NMF package (Gaujoux and Seoighe, 2010) in R with the minimum member numbers of each subclass set to 10. The optimal number of clusters was determined according to indexes including cophenetic, dispersion, evar, residuals, rss, silhouette, and sparseness. Then, we performed Kaplan-Meier curve and log-rank method analysis to evaluate the survival difference between clusters by applying the survminer package in R language. We also calculated the human leukocyte antigen (HLA) expression of different clusters. Based on the Estimation of Stromal and Immune cells in Malignant Tumors using Expression data (ESTIMATE) algorithm (Yoshihara et al., 2013), the immune score, stromal score, ESTIMATE score, and tumor purity of different clusters were determined. Next, we analyzed the enriched pathways between different clusters by applying gene set variation analysis (GSVA) in R. By using "GSEABase" and GSVA R packages (Hänzelmann et al., 2013), we performed single-sample gene set enrichment analysis (ssGSEA) to quantify the extent of the immune-related infiltration of each sample. From a previous study, we collected the gene sets for the evaluation of immune-related characteristics including different types of human immune cell subtypes and immune-related activities (Charoentong et al., 2017;Ru et al., 2019). The enrichment scores calculated using the ssGSEA algorithm indicated the relative degree of each immune-related characteristic expression in each sample. Finally, we applied microenvironment cell populations-counter (Becht et al., 2016) (MCP-counter) and cell-type identification by estimating relative subsets of RNA transcript (Newman et al., 2015) (CIBERSORT) methods to assess the distribution of immune cell infiltration in different clusters.
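The NMF consensus-clustering step can be sketched as follows in R. The expression object, number of runs, and seed are placeholders; the rank survey and the eventual choice of K = 4 follow the procedure described above.

```r
# Hedged sketch of the NMF consensus clustering over K = 2-10 (placeholders
# for the expression object and run settings; rank-survey metrics such as
# cophenetic and silhouette are inspected to choose K).
library(NMF)

# expr_prog: non-negative expression matrix of the 17 prognostic URGs (genes x samples)
rank_survey <- nmf(expr_prog, rank = 2:10, nrun = 30, seed = 123456)
plot(rank_survey)                      # cophenetic / dispersion / silhouette curves

fit_k4   <- nmf(expr_prog, rank = 4, nrun = 30, seed = 123456)
clusters <- predict(fit_k4)            # cluster assignment for each sample
table(clusters)
```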
Construction, evaluation, and validation of the URG signature in TNBC
The prognostic URGs were entered into the least absolute shrinkage and selection operator (LASSO) analysis to identify the prognostic multi-gene signature by using the glmnet (Friedman et al., 2010) package in R. Based on the corresponding coefficients and expression of the selected genes, the URG signature was constructed as follows: risk score = β1 × Exp(Gene 1) + β2 × Exp(Gene 2) + β3 × Exp(Gene 3) + … + βn × Exp(Gene n), where β represents the coefficients in the LASSO Cox regression analysis. We then calculated the risk score for each TNBC patient and classified TNBC patients into a high-risk group and a low-risk group according to the median risk score. The Kaplan-Meier curve and log-rank method were used to evaluate the OS difference between the high-risk and low-risk groups. The distribution of risk scores, survival statuses of TNBC patients, and expression profiles of the prognostic URGs were exhibited using R software. We used a time-dependent receiver operating characteristic (ROC) curve to assess the sensitivity and specificity of the URG signature by calculating the area under the curve (AUC) (Heagerty et al., 2000;Blanche et al., 2013). We applied PCA and t-distributed stochastic neighbor embedding (t-SNE) analysis to explore the distribution of the 11 URG expression profiles between the high-risk and low-risk groups.
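The signature construction and median-split grouping can be sketched in R as below. The input objects (x as a samples-by-genes matrix of the 17 prognostic URGs, clin with survival time and status) are placeholders, and lambda.min is one common choice of penalty; the paper does not state which lambda was used.

```r
# Hedged sketch of the LASSO Cox signature and the median risk-score split
# (object names are placeholders; lambda.min is an assumption).
library(glmnet)
library(survival)

y     <- Surv(clin$os_time, clin$os_status)
cvfit <- cv.glmnet(x, y, family = "cox", alpha = 1)
beta  <- coef(cvfit, s = "lambda.min")            # sparse coefficient vector

sel        <- which(as.numeric(beta) != 0)        # genes retained by LASSO
risk_score <- as.numeric(x[, sel] %*% beta[sel])  # sum(beta_i * expression_i)
group      <- ifelse(risk_score > median(risk_score), "high", "low")

fit <- survfit(y ~ group)                         # Kaplan-Meier curves by risk group
plot(fit, col = c("red", "blue"))
survdiff(y ~ group)                               # log-rank test between risk groups
```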
Estimation of chemotherapy drug sensitivity in TNBC
We used the GDSC database and the pRRophetic package (Geeleher et al., 2014) to estimate the sensitivity of TNBC patients to chemotherapy drugs. We compared the half-maximal inhibitory concentration (IC50) of chemotherapy drugs between high-risk and low-risk group patients. The IC50 values of drugs were negatively correlated with drug sensitivity.
Construction and evaluation of the nomogram model in TNBC
To verify the independence of the prognostic value of the URG signature and clinicopathological factors (including age, grade, tumor size, lymph node, Nottingham prognostic index (NPI), cellularity, tumor mutation burden (TMB), menopause status, and breast surgery procedure), we performed univariate and multivariate Cox regression analysis to explore their associations with OS of TNBC patients. Factors with p-value <.05 in the univariate Cox regression analysis were selected to construct the nomogram model. We used the concordance index (C-index) of 1,000-sample bootstrap and ROC curve to evaluate the prognostic prediction ability of the nomogram model. We also applied calibration curves to further validate the nomogram model.
Functional enrichment analysis
In order to reveal the heterogeneity between high-risk and low-risk group patients, we performed gene set enrichment analysis (GSEA). Gene Ontology (GO) items of biological process (BP), cellular component (CC), molecular function (MF), the reactome pathway, and the hallmark gene set were selected as the reference gene sets. The results of GSEA were visualized using the enrichplot package in R.
Immune infiltration and tumor immune microenvironment analyses
To further explore the immune phenotype between high-risk and low-risk group patients, we conducted immune infiltration and tumor immune microenvironment analyses. The infiltrating score of 16 immune cells and the activity of 13 immune-related pathways were determined by the ssGSEA function of the "gsva" package in R. Based on the ESTIMATE algorithm, we calculated the tumor purity, stromal score, immune score, and ESTIMATE score between high-risk and low-risk group patients. In addition, we performed Spearman's analysis to explore the correlation of the risk score with the tumor microenvironment.
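A hedged sketch of the ssGSEA scoring and the between-group comparison is shown below in R. The gene-set list and expression object are placeholders, and the call uses the older gsva(..., method = "ssgsea") interface; newer GSVA releases wrap this in a parameter object.

```r
# Hedged sketch of the ssGSEA scoring of immune cells and immune-related
# pathways (placeholders for inputs; older GSVA interface assumed).
library(GSVA)

# immune_sets: named list of gene vectors (16 cell types + 13 pathways)
ssgsea_scores <- gsva(as.matrix(expr_adj), immune_sets, method = "ssgsea")

# Compare each signature between risk groups with the Wilcoxon rank-sum test
apply(ssgsea_scores, 1, function(s)
  wilcox.test(s[group == "high"], s[group == "low"])$p.value)
```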
Statistical methods
All statistical analyses were performed by R software (version 3.6.1). The Wilcoxon rank-sum test was used to compare the difference between the two groups, and the Kruskal-Wallis test was performed to compare the difference among the four groups. The Kaplan-Meier curve and log-rank method were performed to evaluate the OS difference between groups. The ROC curves were plotted to assess the sensitivity and specificity of the URG signature and nomogram. The correlation between two sets of quantitative data was estimated by Spearman's correlation test. A two-tailed p-value <.05 was considered statistically significant.
Identification of prognostic URGs in TNBC
The workflow of the present study is presented in Figure 1. Based on the URGs and gene expression profiles of TNBC patients in both the METABRIC database and the GSE58812 dataset, we identified 403 TNBC patients with 525 URG expression data and baseline data. The results of PCA showed that the METABRIC database and the GSE58812 dataset had notable batch effects (Figure 2A), which were removed using the ComBat function in the "sva" package (Figure 2B). These URGs were further assessed for their association with the survival of TNBC. A total of 17 URGs were found to be significantly associated with the OS of TNBC by univariate Cox proportional hazard regression analysis (Figure 2C). Among these 17 URGs, UBA1, PIAS4, TRIM3, PCGF1, RNF123, LRSAM1, STC1, GRWD1, GNB2, USP30, OTUB2, and ATXN3L were found to be the risk factors for TNBC patients [hazard ratio (HR) > 1], and BIRC3, EED, STAMBPL1, and PARP11 were the protective factors of TNBC patients (HR < 1).
Classification of TNBC based on prognostic URGs
A total of 17 prognostic URGs were used as variables of consensus clustering by using the NMF package. Figure 3A depicts the consensus matrix heatmap with K from 2 to 10. According to the cophenetic, dispersion, and silhouette curves (Figure 3B), the optimal number of subgroups was determined as 4 (K = 4). Then, 403 TNBC patients could be classified into four robust clusters, including 41 patients in cluster 1, 154 patients in cluster 2, 164 patients in cluster 3, and 44 patients in cluster 4. The expressions of the 17 prognostic URGs among the four clusters are significantly different (Supplementary Figure S1). The Kaplan-Meier curve showed that patients in different clusters have significantly different prognoses (p < .0001, Figure 4A). Cluster 2 patients had the worst OS, and cluster 4 patients had the best OS among all clusters. To explore the mechanism of the survival difference between clusters, we analyzed HLA expression, the immune microenvironment, and pathways between these four clusters. As to HLA expression, 14 of 15 HLA genes were lowly expressed in cluster 2 and highly expressed in cluster 4. Among the four clusters, cluster 2 patients had higher tumor purity and lower immune and ESTIMATE scores, while cluster 4 patients had lower tumor purity and higher immune and ESTIMATE scores. Most immune cell subsets were relatively depleted in cluster 2, while M2 macrophages were significantly upregulated in cluster 2. Immune-related activities including cytolytic activity, inflammation-promoting activity, and type II IFN response were inhibited in cluster 2. Cluster 2 exhibited the immune desert phenotype, and cluster 4 exhibited the immune-enriched phenotype, which may account for the OS difference between clusters 2 and 4.
Construction, evaluation, and validation of the URG signature in TNBC
A total of 17 prognostic URGs were fit into the LASSO Cox analysis to identify the optimal prognostic URGs in the training group (METABRIC). We identified 11 URGs (HECTD3, PCGF1, RNF123, STC1, GRWD1, USP30, OTUB2, ATXN3L, BIRC3, STAMBPL1, and PARP11) using LASSO Cox analysis and constructed a prognostic signature by integrating the 11 URG expression profiles and corresponding Cox regression coefficients (Figures 6A, F). We calculated the risk score for each patient in the training group and ranked them into a high-risk group (n = 148) and a low-risk group (n = 149) according to the median risk score. The Kaplan-Meier curve showed that patients in the high-risk group have significantly worse OS than patients in the low-risk group (p < .001, Figure 6B). The prognostic power of the 11-URG signature was evaluated by calculating the AUC. The results showed that the AUC of the 11-URG signature for predicting 3-, 5-, and 8-year survival of TNBC patients was 0.708, 0.702, and 0.744, respectively, which indicated good performance (Figure 6G). PCA and t-SNE analysis showed that TNBC patients in the high-risk and low-risk groups can be distinguished well according to this signature (Figures 6D, E). To verify the reliability of the 11-URG signature in TNBC, we applied this signature to the test set (GSE58812). We calculated the risk score for each patient in the test group and ranked them into a high-risk group (n = 46) and a low-risk group (n = 60). As presented in Figure 6C, patients in the high-risk group have significantly worse OS than patients in the low-risk group (p = .002). The AUC of the 11-URG signature for predicting 3-, 5-, and 8-year survival in the test set was 0.662, 0.738, and 0.720, respectively (Figure 6H).
The results of PCA and t-SNE analysis in the test group also showed that patients in the highrisk and low-risk groups were distributed in two directions according to this signature ( Figures 6I, J). The distribution of risk score, survival status of TNBC patients, and the expression profiles of 11 prognostic URGs in the training and test sets are displayed in Figure 7. The mortality was much higher for patients with a high risk score than those with a low risk score, and patients in the high-risk group have a tendency toward higher expression of HECTD3, PCGF1, RNF123, STC1, GRWD1, USP30, OTUB2, and ATXN3L and lower expression of BIRC3, STAMBPL1, and PARP11.
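As a minimal sketch of how such a LASSO-derived signature is typically applied downstream, the snippet below combines the 11 gene expression values with their Cox coefficients into a risk score, splits patients at the median, and compares the two groups by a log-rank test. The gene ordering, the coefficient vector, and the use of the lifelines package are assumptions for illustration; the authors' actual coefficients are those shown in their Figure 6.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

URGS = ["HECTD3", "PCGF1", "RNF123", "STC1", "GRWD1", "USP30",
        "OTUB2", "ATXN3L", "BIRC3", "STAMBPL1", "PARP11"]

def risk_score(expr, coefs):
    """expr: patients x 11 expression matrix (columns ordered as URGS);
    coefs: matching LASSO Cox coefficients. Score = sum(coef * expression)."""
    return np.asarray(expr) @ np.asarray(coefs)

def split_and_compare(score, time, event, cutoff=None):
    """Median split (or a supplied cutoff, e.g. the training-set median when
    scoring a test cohort) followed by a log-rank test. `time` and `event`
    are assumed to be numpy arrays so boolean masking works."""
    cutoff = np.median(score) if cutoff is None else cutoff
    high = score > cutoff
    kmf = KaplanMeierFitter()
    for label, mask in (("high risk", high), ("low risk", ~high)):
        kmf.fit(time[mask], event_observed=event[mask], label=label)
        print(label, "n =", int(mask.sum()), "median OS =", kmf.median_survival_time_)
    test = logrank_test(time[high], time[~high],
                        event_observed_A=event[high],
                        event_observed_B=event[~high])
    return high, test.p_value
```

Whether the test cohort is split at its own median or at the training-set median is a design choice the sketch leaves to the `cutoff` argument.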
Construction and evaluation of the nomogram model in TNBC
Univariate Cox regression analysis suggested that the 11-URG signature-based risk score, lymph node status, NPI, menopausal status, and breast conserving surgery were significantly associated with patients' survival ( Figure 8A). We further performed multivariate Cox regression analysis using these factors. The results revealed that the 11-URG signature was the only factor independently related to the OS of TNBC patients ( Figure 8B). Using age, menopausal status, lymph node status, tumor size, surgery, NPI, and risk score, we constructed a prognostic nomogram model to predict the OS of individual TNBC patients ( Figure 9A). The AUC of this nomogram for predicting 1-, 3-, and 5-year OS was 0.695, 0.733, and 0.760, respectively ( Figure 9B). We applied calibration curves to further assess the predictive effect of the nomogram model on the OS of TNBC patients. As shown in Figure 9C, the nomogram-predicted OS of TNBC patients had good consistency with the actual OS of TNBC patients. The C-index of the nomogram model and 11-URG signature-based risk score were higher than those of other clinicopathological factors ( Figure 9D), suggesting the favorable predictive ability of the nomogram model and 11-URG signature-based risk score.

As presented in Supplementary Figure S4, high-risk group patients were more sensitive to A-443654, JW-7-52-1, NSC-87877, and PF-4708671 therapy, while low-risk group patients were more sensitive to the remaining chemotherapy drugs such as AZD2281 (olaparib), gefitinib, and nilotinib. AZD2281 targets PARP1/2 to influence genome integrity, gefitinib targets the EGFR signaling pathway, and nilotinib targets the ABL signaling pathway. The PI3K/mTOR signaling pathway is the target of A-443654, JW-7-52-1, and PF-4708671, which indicates that high-risk group patients may benefit from therapy targeting the PI3K/mTOR signaling pathway. To explore the difference in biological characteristics between high-risk and low-risk group patients, we performed GSEA. The GSEA results are presented in Figure 10. The BP terms of immune response, T-cell activation, and differentiation were enriched in the low-risk group, while the BP terms of DNA repair, DNA replication, and meiotic cell cycle were enriched in the high-risk group. The CC of genes in the low-risk group was mainly enriched in endocytic vesicles, while the CC of genes in the high-risk group was mainly enriched in chromosomal regions and microtubules. Chemokine and cytokine activities were the enriched MF in the low-risk group; however, ATP hydrolysis activity and catalytic activity were the mainly enriched MF in the high-risk group. Interferon α response, IL6-JAK-STAT3 signaling, complement, and inflammatory response were the mainly enriched hallmark gene sets in the low-risk group, while E2F target, G2M checkpoint, glycolysis, and MYC target were the mainly enriched hallmark gene sets in the high-risk group. As for the reactome pathways, chemokine receptors that bind chemokines and complement cascades were the mainly enriched pathways in the low-risk group, while the cell cycle-related pathway and DNA repair were the mainly enriched pathways in the high-risk group.
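The univariate-then-multivariate Cox workflow described above can be sketched as below. Column names ('time', 'event', 'risk_score', 'npi', ...) are hypothetical placeholders, categorical covariates such as menopausal status would need numeric encoding first, and the 0.05 screening threshold is an assumption rather than a value stated by the authors.

```python
import pandas as pd
from lifelines import CoxPHFitter

def cox_screen(df: pd.DataFrame, candidates, alpha=0.05):
    """Univariate Cox screen followed by a joint multivariate model.
    `df` needs 'time' and 'event' columns plus one numeric column per
    candidate covariate (e.g. 'risk_score', 'npi', 'lymph_node_status')."""
    significant = []
    for cov in candidates:
        uni = CoxPHFitter()
        uni.fit(df[["time", "event", cov]], duration_col="time", event_col="event")
        if uni.summary.loc[cov, "p"] < alpha:   # keep for the joint model
            significant.append(cov)
    multi = CoxPHFitter()
    multi.fit(df[["time", "event"] + significant],
              duration_col="time", event_col="event")
    # hazard ratios, confidence intervals, and p values per covariate,
    # plus the model's concordance index (C-index)
    return multi.summary, multi.concordance_index_
```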
FIGURE 11
Immune infiltration and tumor immune microenvironment analyses. (A,B) ssGSEA scores of 16 immune cells and 13 immune-related functions between the high-risk and low-risk groups. (C-F) Violin plot of tumor purity, stromal score, immune score, and ESTIMATE score between the high-risk and low-risk groups. (G-J) Scatter plot of the correlation of the risk score with tumor purity, stromal score, immune score, and ESTIMATE score. *p < .05; **p < .01; ***p < .001.
Immune infiltration and tumor immune microenvironment analyses
To further identify the potential mechanism of the heterogeneity between high-risk and low-risk group patients, we conducted tumor immune microenvironment and immune infiltration analyses. All of the immune cells and immune-related pathways were higher in the low-risk group than in the high-risk group (Figures 11A, B). Compared with the low-risk group, the high-risk group exhibited higher tumor purity ( Figure 11C) and lower stromal score ( Figure 11D), immune score ( Figure 11E), and ESTIMATE score ( Figure 11F). In addition, the risk score was positively correlated with tumor purity ( Figure 11G) and negatively correlated with the stromal score ( Figure 11H), immune score ( Figure 11I), and ESTIMATE score ( Figure 11J). As shown in Supplementary Figure S5, the MCP-counter analysis showed that most immune cells (except fibroblasts and neutrophils) are enriched in the low-risk group, which is consistent with ssGSEA. The CIBERSORT analysis (Supplementary Figure S6) showed that B cells, plasma cells, CD4 T cells, and DC cells are enriched in the low-risk group, and M2 macrophage is enriched in the high-risk group. Therefore, both CIBERSORT and MCP-counter analyses are consistent with ssGSEA and ESTIMATE analyses.
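A compact way to reproduce this kind of tumor microenvironment comparison, given per-patient scores from ESTIMATE, ssGSEA, MCP-counter, or CIBERSORT, is sketched below. The choice of Spearman correlation and a Mann-Whitney U test is an assumption for illustration; the article does not specify which statistics underlie Figures 11C-J.

```python
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

def tme_vs_risk(risk, high, metrics):
    """risk: per-patient risk scores (numpy array); high: boolean high-risk
    mask; metrics: dict mapping a name (tumor purity, stromal/immune/ESTIMATE
    score, an ssGSEA score, ...) to a per-patient numpy array."""
    results = {}
    for name, values in metrics.items():
        rho, p_rho = spearmanr(risk, values)            # monotonic association
        _, p_diff = mannwhitneyu(values[high], values[~high],
                                 alternative="two-sided")
        results[name] = {"rho": rho, "p_corr": p_rho, "p_group": p_diff}
    return results
```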
Discussion
In the present study, we identified 17 prognostic URGs and constructed four molecular classifications of TNBC. The immune signatures were significantly different among distinct TNBC clusters. Then, we developed the 11-URG signature with good performance for patients with TNBC. According to the 11-URG signature, TNBC patients can be classified into a high-risk group and a low-risk group with a significantly different OS. The predictive ability of the 11-URG signature was validated in the test set (GSE58812). Univariate and multivariate Cox regression analyses showed that the 11-URG signature was an independent risk factor for TNBC patients. Moreover, we constructed a nomogram comprising the risk score and clinicopathological characteristics with favorable predictive ability. GSEA showed that enriched GO terms, hallmark gene sets, and reactome pathways were evidently different between the high-risk and low-risk groups. In addition, tumor immune microenvironment and immune infiltration analyses also exhibited a significant difference.
TNBC is a heterogeneous cancer. Tailored treatment based on molecular subtypes is meaningful for improving the outcomes of TNBC. In our study, we constructed four clusters of TNBC patients, including 41 patients in cluster 1, 154 patients in cluster 2, 164 patients in cluster 3, and 44 patients in cluster 4. As cluster 2 patients had the worst OS and cluster 4 patients had the best OS among all clusters, we conducted further analysis to reveal the mechanism. GSVA of the pathway showed that glycolysis, cholesterol homeostasis, hypoxia, and DNA repair pathways were the mainly upregulated pathways in cluster 2, while interferon response and inflammatory response were the mainly upregulated pathways in cluster 4. As is known, metabolic reprogramming is one of the hallmarks of cancer. Metabolic reprogramming is used by TNBC to fulfill bioenergetic and biosynthetic demands; maintain the redox balance; and further promote oncogenic signaling, cell proliferation, and metastasis (Wang P et al., 2020). Metabolic reprogramming mainly consists of glycolysis, amino acid metabolism, and lipid metabolism. It has been reported that TNBC cells predominantly use glycolysis for energy production regardless of abundant oxygen availability (Wu et al., 2020). The upregulation of enzymes involved in the glycolytic pathway including hexokinase (Lucantoni et al., 2018), phosphofructokinase (Coelho et al., 2011), pyruvate kinase (Wahdan-Alaswad et al., 2018), and lactate dehydrogenase (McCleland et al., 2012) contributes to the "Warburg effect" in TNBC. The glycolytic phenotype favors TNBC to synchronize with an accelerated rate of proliferation, migration and invasion, and chemotherapy resistance (Arundhathi et al., 2021;Wiggs et al., 2022). Cholesterol, a component of cell membranes, also serves as a precursor for steroid hormones, bile acids, and vitamin D. As a critical molecule for cell growth and function, cholesterol has been recognized as a characteristic of some malignancies (Tosi and Tugnoli, 2005). Statins and hypocholesterolemic drugs that selectively inhibit hydroxymethylglutaryl coenzyme A reductase (HMGCR) also show anticancer activity (Clendening and Penn, 2012). Nevertheless, the function of cholesterol in breast cancer is conflicting (Nelson, 2018;Garcia-Estevez and Moreno-Bueno, 2019). Some researchers found that cholesterol has a protective effect, while other authors concluded that cholesterol is a risk factor, and some found no effect. Importantly, deregulation of cholesterol homeostasis leading to an imbalance of intracellular cholesterol is a crucial regulator for breast cancer (Nazih and Bard, 2020). Hypoxia has long been considered one of the hallmarks of cancer (Gilkes et al., 2014). Several studies (Kapinova et al., 2018;Zheng et al., 2020;Sun et al., 2021;Yang et al., 2021) have systematically analyzed the hypoxia-related gene and constructed prognostic models based on hypoxia-related genes in TNBC. As a result, the difference in enriched pathways between clusters 2 and 4 may be one of the reasons that cluster 4 patients have better OS than cluster 2 patients. We further analyzed the immune microenvironment in four clusters. First, we found that cluster 2 TNBC patients had high tumor purity, a low immune score, and low expression of HLA. 
In addition, the results of infiltration analysis showed that T cells, CD8 T cells, NK cells, myeloid dendritic cells, cytotoxic lymphocytes, and B lineage were significantly downregulated in cluster 2 and upregulated in cluster 4, while M2 macrophage was significantly upregulated in cluster 2. Finally, immune-related activities including cytolytic activity, inflammation-promoting activity, and the IFN response were also inhibited in cluster 2. Taken together, cluster 2 exhibited the immune desert phenotype, and cluster 4 exhibited the immune-enriched phenotype, which may account for the OS difference between clusters 2 and 4. Our analysis provides new insight into the molecular classification of TNBC patients.
To the best of our knowledge, there are numerous specific gene-based prognostic prediction models for TNBC such as the hypoxia-related gene signature (Zheng et al., 2020), immune-related gene signature (Wang Z et al., 2020;Sun and Zhang, 2022), and autophagy-related gene signature (Yan et al., 2022). However, few have considered the integrated roles of the URG set in TNBC. Herein, we developed the 11-URG signature with good performance for patients with TNBC. Of the 11 URGs, HECTD3, PCGF1, RNF123, STC1, GRWD1, USP30, OTUB2, and ATXN3L are the risk factors of TNBC, and BIRC3, STAMBPL1, and PARP11 are the protective factors of TNBC. We further discuss the functions of these URGs. In reviewing the literature, no data were found on the association of breast cancer with PCGF1, RNF123, GRWD1, USP30, ATXN3L, and PARP11. The remaining five URGs were reported to be implicated in breast cancer. HECTD3 is an oncogene and could promote breast cancer cell survival (Li et al., 2013;Jiang et al., 2020), which is in line with the result of our study that HECTD3 is a risk factor for TNBC. Our study supports the evidence that STC1 is a biomarker of breast cancer and promotes tumor growth and metastasis (Chang et al., 2015;Avalle et al., 2022). OTUB2 was reported to promote the progression of gastric cancer and colorectal cancer (Ouyang et al., 2022;Yu et al., 2022). As for breast cancer, a recent study reported that OTUB2 deubiquitinated and activated YAP/TAZ to promote cancer stemness and metastasis (Zhang et al., 2019), which confirms the reliability of our results. Interestingly, BIRC3 can play a tumor-suppressing role or act as an oncogene in different types of cancer (Frazzi, 2021). As for breast cancer, the function of BIRC3 has not yet been fully characterized. Our results corroborate the findings that BIRC3 functions as a tumor suppressor. The study by Liu et al. (2022) found that STAMBPL1 interacts with MKP-1 and stabilizes MKP-1 via deubiquitination, further promoting breast cancer cell resistance to cisplatin. Moreover, STAMBPL1 could regulate snail stability by deubiquitination mechanisms in breast cancer (Ambroise et al., 2020). The aforementioned studies showed that STAMBPL1 is a risk factor for breast cancer, which is contrary to our result. Nonetheless, whether STAMBPL1 influences the prognosis of breast cancer patients is unclear and requires further research. Although the 11 URGs have been suggested to be involved in multiple cancers, studies concerning the effects of these URGs on TNBC are lacking. Therefore, the roles of these URGs in TNBC remain unexplored. According to the 11-URG signature, TNBC patients can be classified into a high-risk group and a low-risk group with significantly different OS. We first analyzed the sensitivity of TNBC patients to chemotherapy drugs. Low-risk group patients are more sensitive to the majority of the chemotherapy drugs than patients in the high-risk group, which may partially explain the preferable OS of low-risk group patients. Then, we conducted GSEA to explore the mechanism. What stands out in the GSEA is the pathway difference between the high-risk and low-risk groups. Cell cycle- and glycolysis-related pathways were the mainly enriched pathways in the high-risk group, while chemokine receptors that bind chemokines, the inflammatory response, and complement cascades were the mainly enriched pathways in the low-risk group.
As discussed earlier, the glycolytic phenotype promotes the proliferation, migration, and invasion of TNBC cells. Deregulation of the cell cycle is also a hallmark of cancer that enables limitless cell division and is frequently observed in breast cancer (Thu et al., 2018;Sofi et al., 2022). Therefore, the pathway difference may also explain the prognosis difference between the high-risk and low-risk groups. Furthermore, all of the immune cells and immune-related pathways were higher in the low-risk group than in the high-risk group. The OS difference between the high-risk and low-risk groups could be attributed to the immune infiltration difference between the two groups.
Conventional clinicopathological predictors such as age, gender, and the TNM staging system are insufficient to predict the prognosis of breast cancer patients due to the molecular complexity and biological heterogeneity of breast cancer. To provide a quantitative tool for predicting the survival rate of TNBC patients, we constructed a nomogram comprising the 11-URG signature-based risk score and clinicopathological characteristics. The ROC curve and calibration curve suggested that the nomogram is a stable and reliable predictor for OS of TNBC patients.
Admittedly, our study has some limitations because it was based only on high-throughput RNA-sequencing, array profiles, and data analysis. The roles of these prognostic URGs require further in vitro and in vivo studies given their strong relevance to the prognosis of TNBC. In conclusion, we established a novel URG-based molecular classification of TNBC comprising four clusters and constructed the 11-URG signature with good performance for patients with TNBC. Our analysis suggests that URGs play important roles in TNBC and are potential prognostic biomarkers and therapeutic targets.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author. | 2023-01-07T15:16:18.781Z | 2023-01-06T00:00:00.000 | {
"year": 2022,
"sha1": "01a3eff29e2806e0b7f08d2bcc85275c468ab3b2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "01a3eff29e2806e0b7f08d2bcc85275c468ab3b2",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
225168949 | pes2o/s2orc | v3-fos-license | POLR1C variants dysregulate splicing and cause hypomyelinating leukodystrophy
Objective To further clarify the molecular pathogenesis of RNA polymerase III (Pol III)-related leukodystrophy caused by biallelic POLR1C variants at a cellular level and potential effects on its downstream genes. Methods Exome analysis and molecular functional studies using cell expression and long-read sequencing analyses were performed on 1 family with hypomyelinating leukodystrophy showing no clinical and MRI findings characteristic of Pol III–related leukodystrophy other than hypomyelination. Results Biallelic novel POLR1C alterations, c.167T>A, p.M56K and c.595A>T, p.I199F, were identified as causal variants. Functional analyses showed that these variants not only resulted in altered protein subcellular localization and decreased protein expression but also caused abnormal inclusion of introns in 85% of the POLR1C transcripts in patient cells. Unexpectedly, allelic segregation analysis in each carrier parent revealed that each heterozygous variant also caused the inclusion of introns on both mutant and wild-type alleles. These findings suggest that the abnormal splicing is not a direct consequence of the variants, but rather reflects the downstream effect of the variants in dysregulating splicing of POLR1C, and potentially other target genes. Conclusions The lack of characteristic clinical findings in this family confirmed the broad clinical spectrum of Pol III–related leukodystrophy. Molecular studies suggested that dysregulation of splicing is the potential downstream pathomechanism for POLR1C variants.
RNA polymerase III (Pol III)-related leukodystrophy is characterized by hypomyelination in the CNS with various additional manifestations such as hypogonadotropic hypogonadism, hypodontia, cerebellar ataxia, and atrophy of the corpus callosum. After the proposal of multiple clinical entities, 1,2 the discovery of pathogenic variants in genes encoding 2 major subunits of Pol III, POLR3A and POLR3B, 3,4 in the majority of patients led to the emergence of the concept of Pol III-related leukodystrophy. 5 Recently, variants in yet another gene coding for the Pol III complex, POLR1C, were identified in patients who were negative for, but showed clinical features similar to those with, POLR3A and POLR3B variants. 6 Here, we report a patient with novel POLR1C pathogenic variants, who showed clinical and imaging features compatible with hypomyelinating leukodystrophy without additional features characteristic of Pol III-related leukodystrophy. We also propose a potential molecular mechanism of POLR1C variants involving dysregulation of splicing.
Methods
This study was approved by the Institutional Review Board of the National Center of Neurology and Psychiatry. Genomic DNA and total RNA were extracted from the peripheral blood of the patient and parents. For DNA diagnostic testing, we performed quantitative PCR for the screening of PLP1 duplication, followed by exome sequencing for Mendelian disease panel (TruSight One, Illumina), as we previously performed according to the manufacturer's protocol. 7 POLR1C complementary DNAs were obtained by reverse transcriptase (RT)-PCR, which were cloned into an expression vector, pcDNA3.1 (Invitrogen) with FLAG-tag at the N-terminus, for subsequent Sanger sequencing and transient expression studies in HeLa cells for Western blotting and fluorescent immunostaining. For long-read next-generation sequencing, barcoded RT-PCR products (control, father, mother, and patient) were sequenced on a single MinION R9.4 flow cell (Nanopore).
Data availability
Any data not published within the article will be shared by request from any qualified investigator.
Case report
The patient was a Japanese boy without a family history of neuromuscular diseases and had normal neurodevelopment during infancy. At age 2 years, he developed action tremor of his fingers, had difficulty in writing, and showed early signs of motor dyspraxia. At age 3 years, he developed amblyopia secondary to hypermetropia and astigmatism. Myopia was not noted. At age 3 years and 10 months, he presented with action tremors in fingers, but there were no other neurologic abnormalities. He showed a developmental quotient of 105 (Enjoji analytical developmental test for infants and toddlers). Subsequently, his neurodevelopmental abilities stagnated, he regressed in his daily activities, and nystagmus became apparent. On his visit at age 5 years and 9 months, he had short stature (101.4 cm, −2.1 SD) without apparent microcephaly, facial abnormalities, or ambiguous genitalia. No delay or abnormal order in dentition was noted. He exhibited lateral nystagmus, action tremors, and slurred speech. The finger-to-nose, pronosupination, and tandem walk tests showed mild dysmetria and ataxia. Deep tendon reflexes of the lower limbs were increased. He exhibited a staggering wide-based gait and was unable to stand on 1 leg for more than 2 seconds. Both parents were intellectually and physically normal with no neurologic findings.
We performed several tests at age 5-6 years. Laboratory tests revealed prepubertal patterns of pituitary gonadotropins and testosterone. The Wechsler Intelligence Scale for Children, fourth edition, showed regression with a full-scale intelligence quotient of 70. Peripheral nerve conduction velocities and auditory brainstem responses were normal. EEG was normal. MRI showed diffuse T2 hyper-and T1 iso-intensities in the white matter, indicating hypomyelination (figure 1, A and B). T1 and T2 shortening in the optic radiation, the ventrolateral thalamus, and the dentate nucleus was noted, as typically observed in Pol III-related leukodystrophy (figure 1, A-C). 8 Cerebellar atrophy or thinning of the corpus callosum was not evident (figure 1, C and D). MRIs of the parents were not available.
Results
After PLP1 duplication was excluded, the panel exome sequencing identified 2 novel heterozygous missense variants in exon 3 and exon 6 of the POLR1C gene (NM_203290.3: c.167T>A, p.M56K and c.595A>T, p.I199F, respectively; figure 1E). Parental segregation analysis confirmed compound heterozygosity. In silico prediction analyses revealed both variants to be pathogenic at different levels ( figure 1F). RT-PCR using the patient's sample showed increased proportion of splicing variants with a combination of full intron 3 and/or half/full intron 4 inclusions, all of which are presumably nonfunctioning variants with premature termination codons on sequence validation (figure 2, A and B). The patient's major transcript was the variant including both intron 3 and intron 4. Three representative variants expressed in HeLa cells showed that p.M56K alone did not change the protein stability, but the nuclear localization was modestly diminished (figure 2, C-G). p.M56K with intron 3 and intron 4 inclusion significantly decreased the protein level. Meanwhile, p.I199F
caused cytosolic punctation and reduced protein expression (figure 2, C-H). The punctation did not overlap with lysosomal marker Lamp1, an autophagosome marker LC3, or with the proteosome marker, ubiquitin (data not shown).
To our surprise, both parents also showed increased proportion of the intron 3/4 inclusion variant (figure 2A), which prompted us to use long-read next-generation sequencing to obtain deep reads of all variants with allelic segregation. Mapping patient POLR1C transcripts on genome demonstrated that they were biallelic, and more than 85% of correctly mapped transcripts were intron-containing variants (figure 3A). Both parents also showed apparently increased proportion of intron-containing variants (64% in the father and 52% in the mother). These findings suggested 2 possible mechanisms: (1) both c.167T>A and c.595A>T variants directly affected the splicing to properly remove intron 3/4, or (2) splicing abnormality resulted from impaired function of Pol III target genes that play roles in the maintenance of splicing. To delineate these 2 possibilities, the parental reads of each allele were remapped to determine whether each variant affected splicing in cis or trans. Surprisingly, the proportion of intron-containing variants was equivalent between wild-type and mutant alleles in both parents, indicating that both variants affect splicing in trans (figure 3B). Because it is unlikely that the variant in one allele directly affects the splicing of the other, this trans effect is probably driven by the latter hypothesis.
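The cis/trans question addressed here amounts to counting intron-containing transcripts per allele. The minimal sketch below assumes the long reads have already been genotyped at the heterozygous position and flagged for intron 3/4 retention (for example, from the MinION alignments); the function name and input format are illustrative, not the authors' code.

```python
from collections import Counter

def intron_fraction_by_allele(reads):
    """reads: iterable of (allele, retains_intron) pairs, where `allele` is the
    base observed at the heterozygous position (e.g. 'T' vs 'A' for c.167T>A)
    and `retains_intron` is True if the read keeps intron 3 and/or intron 4.
    Similar fractions on the two alleles argue for an effect in trans."""
    counts = Counter(reads)
    fractions = {}
    for allele in {a for a, _ in counts}:
        kept = counts[(allele, True)]
        total = kept + counts[(allele, False)]
        fractions[allele] = kept / total if total else float("nan")
    return fractions

# e.g. intron_fraction_by_allele([("T", True), ("A", True), ("T", False), ...])
```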
Discussion
In this study, we reported 1 family with novel POLR1C compound heterozygous variants. Clinically, our patient so far had no characteristic features of Pol III-related leukodystrophy, such as dental abnormality and hypogonadism, and no cerebellar atrophy and thinning of the corpus callosum. Although these features are common in patients with POLR3A or POLR3B variants, 9 thereby serving as diagnostic key features, findings other than hypomyelination are not necessarily present in all cases. 10 Previously reported 32 cases with POLR1C variants presented with at least one of these features. 6,11,12 Thus, the present case also suggested that lacking characteristic features of Pol III-related leukodystrophy does not exclude the presence of POLR1C variants.
The molecular mechanisms underlying POLR1C variants causing hypomyelination remain unknown. We showed that the POLR1C variant on each allele not only affects the subcellular localization and/or amount of the protein but also affects the splicing that removes intron 3/4. Allelic segregation studies using a long-read sequencing technology in the patient revealed a large proportion of abnormal splicing variants. Moreover, the analyses in the parental samples showed 2 unexpected findings. First, in addition to the exon 3 variant, the exon 6 variant also affected splicing even in a heterozygous status, indicating that haploinsufficiency of POLR1C caused molecular deficits despite the autosomal recessive mode of inheritance. Second, each allelic variant caused the inclusion of introns on both alleles. This most likely resulted from altered transcription of Pol III target genes that play a role in the regulation of splicing of POLR1C. As one such candidate, we examined the expression of U6 snRNA, which plays a central role in the spliceosome, 13 but it was not altered in either the patient or his parents (data not shown).

FIGURE 2 (legend fragment) Actin served as the loading control. Cells were harvested after 24 hours of transfection. An anti-FLAG antibody was used to visualize exogenous POLR1C. The size of the M2 band appears to be the same as WT, suggesting that these introns are partially spliced out before translation. Truncated protein was not observed, presumably due to removal by nonsense-mediated messenger RNA decay before translation. (E) Quantification of the POLR1C protein level. Experiments were performed in triplicate. POLR1C was normalized to EGFP. The y-axis indicates the value relative to the average of wild type. (F) Fluorescent immunostaining of HeLa cells transiently expressing wild-type and mutant POLR1C. Subcellular localization of exogenous protein was determined using an anti-FLAG antibody. Bar indicates 20 μm. Wild-type POLR1C showed strong nuclear expression (WT). The M56K mutant showed reduced nuclear staining (M1). The I199F mutant showed prominent cytosolic punctations (yellow arrowheads; M3). DAPI nuclear staining shows blue signals. Images were obtained using a Keyence BZ-X710 fluorescence microscope (Keyence, Japan). (G) Quantification of nuclear localization. Using imaging software (Keyence), the ratio of signal intensities of the nucleus and cytosol was measured (n = 30 cells per group). (H) The proportion of cells with cytosolic punctations was calculated in more than 300 cells (10 fields [30-40 cells per field] at 200× magnification). Error bars indicate standard errors. *p < 0.05, **p < 0.01, ***p < 0.001. One-way analysis of variance.

FIGURE 3 (legend fragment) The c.167T and c.167A allele reads were selected from sequence reads of the father, and the c.595A and c.595T allele reads were selected from those of the mother. Ten thousand reads of each allele were aligned. There was no obvious difference in the proportion of variants with intron inclusions between the 2 alleles in each parent.
In conclusion, we reported 1 family with hypomyelinating leukodystrophy caused by novel POLR1C variants. Both pathogenic variants resulted in changes in subcellular localization and reduction in protein levels, as well as inclusion of introns, which presumably resulted in loss of function. Allelic segregation analyses of full-length transcripts in both patient and parents revealed that the aberrant splicing variants are not direct consequences of the coding variants, but rather reflect the downstream effect of the variants in dysregulating splicing of POLR1C, and potentially other target genes. | 2020-10-28T19:08:50.957Z | 2020-10-13T00:00:00.000 | {
"year": 2020,
"sha1": "310cdcb55b1163c71111595123c9e6798dfa2c19",
"oa_license": "CCBYNCND",
"oa_url": "https://ng.neurology.org/content/nng/6/6/e524.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "83dd13c1a9d8111e0883ba644cc5f99903e8456b",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
233381636 | pes2o/s2orc | v3-fos-license | Polymeric metal-containing ionic liquid sorbent coating for the determination of amines using headspace solid-phase microextraction.
This study describes the design, synthesis, and application of polymeric ionic liquid sorbent coatings featuring nickel metal centers for the determination of volatile and semi-volatile amines from water samples using headspace solid-phase microextraction. The examined polymeric ionic liquid sorbent coatings were composed of two ionic liquid monomers (tetra(3-vinylimidazolium)nickel bis[(trifluoromethyl)sulfonyl]imide [Ni2+ (VIM)4 ] 2[NTf2 - ] and 1-vinyl-3-hexylimidazolium [HVIM+ ][NTf2 - ]), and an ionic liquid crosslinker (1,12-di(3-vinylimidazolium)dodecane [(VIM)2 C12 2+ ] 2[NTf2 - ]). With these ionic liquid monomers and crosslinkers, three different types of coatings were prepared: PIL 1 based on the neat [Ni2+ (VIM)4 ] 2[NTf2 - ] monomer; PIL 2 consisting of the [Ni2+ (VIM)4 ] 2[NTf2 - ] monomer with addition of crosslinker, and PIL 3 comprised of the [HVIM+ ][NTf2 - ] monomer and crosslinker. Analytical performance of the prepared sorbent coatings using headspace solid-phase microextraction GC-MS was compared with the polydimethylsiloxane and polyacrylate commercial coatings. The PIL 2 sorbent coating yielded the highest enrichment factors ranging from 5500 to over 160000 for the target analytes. The developed headspace solid-phase microextraction GC-MS method was applied for the analysis of real samples (the concentration of amines was 200 μg L-1 ), producing relative recovery values in the range of 90.9-120.0 % (PIL 1) and 83.0-122.7 % (PIL 2) from tap water, and 84.8-112.4 % (PIL 1) and 79.2-119.3 % (PIL 2) from lake water. This article is protected by copyright. All rights reserved.
sources (biomass burning, oceans, and vegetation) [1]. Amines are easily released to the environment through groundwater, rivers, lakes, and soil as well as industrial effluents or chemical decomposition products [2]. Most compounds of this class are hazardous and toxic to humans and animals. They can react with nitrosylating agents and, as a consequence, be converted to carcinogenic N-nitrosamines [3]. Even though contamination of amines can exist in the environment at trace levels, they may have a mutagenic and toxic effect on animals and humans. Therefore, the concentration of amines in the surrounding water must be continuously monitored, and it is vital to find fast, easy, effective, and sensitive methods for their determination. SPME is a solvent-free extraction technique and is an alternative to conventional extraction methods, such as LLE and SPE [4]. The procedure for extracting analytes from an aqueous solution using SPME consists of the following three main steps: (a) exposing the fiber to the sample by direct-immersion (DI) or the headspace (HS) above the sample, (b) absorbing/adsorbing the analytes by the sorbent, and (c) desorption of the analytes from the coating by thermal or solvent extraction. The SPME method has become widely used to determine a wide range of analytes from complex matrices, such as samples of food products, biological substances, pharmaceuticals, and environmental samples [5][6][7][8]. SPME has numerous advantages over other extraction methods, such as being simple, rapid, and easily automated, eliminating the use of toxic solvents, and allowing for the collection of samples in situ and in vivo. One of the most important limitations of the method is the choice of commercially available SPME sorbent coatings. Therefore, on-going research in the field is focused on developing new sorbent coatings that extend the range of analytes that can be effectively extracted. Various materials such as nanoparticles of noble metals, sorbents based on silica (silicon dioxide), ionogels, molecularly imprinted polymers, conductive polymers, carbon nanotubes, metal and/or metal oxide nanoparticles, graphene and graphene oxide, metal organic frameworks, ionic liquids (ILs), and polymeric ionic liquids (PILs) [9][10][11][12][13][14][15][16][17] hold promise due to their desired tunability. The above drawbacks are related to the general limitations of the SPME method. However, in the case of amine determination by SPME coupled to GC, a number of additional problems arise due to their high aqueous solubility, volatility, polarity, and highly basic character (i.e., stronger sorption to polar stationary phases is observed with the decrease in the molecular mass of amines) [18].
ILs are a well-known group of materials that have been used as SPME sorbent coatings. These compounds consist of a bulky organic cation and a smaller inorganic/organic anion and are present as liquids below 100 • C. The physical and chemical properties of ILs are highly tunable based on selecting an appropriate anion and cation pair. When ILs are used as a polymerizable monomer, PILs are formed and retain many of the unique physicochemical properties of ILs as well as additional advantages that include higher thermal and chemical stability, and a negligible change in viscosity when subjected to high temperatures [19,20]. PILs have been used for the determination of various classes of analytes, including polycyclic aromatic hydrocarbons, fatty acid methyl esters, esters and benzene derivatives, carbon dioxide, estrogens, alcohols and amines, genotoxic or structurally alerting alkyl halides and aromatics, pyrethroids, and contaminants of emerging concern [10,11,21].
Another interesting subclass of ILs is metal-containing ionic liquids (MCILs). MCILs are formed by incorporating transition and/or rare earth metals into their chemical structure [22]. In addition to possessing fundamental properties of ILs, the paramagnetic metals in MCILs impart magnetic, catalytic, and optical properties, which significantly increases the scope of their applications [23,24]. MCILs have been used in various areas of analytical chemistry including extractions and microextractions, chromatographic separations, membrane applications and gas absorption, electrochemistry, and sensors [25,26]. Incorporation of metal ions (Ni 2+ , Mn 2+ , Co 2+ , Dy 3+ , Gd 3+ , Nd 3+ ) into the IL chemical structure significantly influences their interactions with analytes originating from different organic subclasses such as alcohols, ketones, chlorinated alkanes, aromatic compounds, and amines [22,27]. It has been observed that nickel-containing MCILs exhibit unique selectivity toward amines [27]. The viscosity of MCILs drops significantly with an increase in temperature, rendering them impractical in SPME. To overcome this obstacle, the creation of a polymerizable sorbent coating is required. A procedure using vinylimidazole ligands coordinated to silver ion and subsequent free radical polymerization to form a polymeric MCIL was previously reported [28].
Using this approach, nickel-based PILs were synthesized in this study with or without the addition of cross-linker to investigate its effect on amine extraction performance. According to a previous study by Ho et al. [29], cross-linked PILs possess higher durability, stability, and robustness compared to linear PIL-based coatings. The effect of nickel cation in the PIL chemical structure was studied by further comparing the extraction performance to a structurally similar PIL coating lacking the metal center. To benchmark the sorbent coatings, extractions of the targeted amine analytes were compared to commercially available fibers consisting of polydimethylsiloxane (PDMS) and polyacrylate (PA) sorbent coatings. The developed HS-SPME-GC-MS method was finally applied for the analysis of real samples, including tap and lake water.
The commercial SPME fibers, featuring PA (85 μm) and PDMS (100 μm) coatings, were provided as gifts from Millipore-Sigma. Elastic nitinol wires were used as solid supports in the preparation of the SPME fibers (Nitinol Devices & Components, Fremont, CA, USA). Blank SPME assemblies (24 Ga) were provided by Millipore-Sigma (Bellefonte, PA, USA).
Real water samples were collected from the laboratory tap in Ames (IA, USA) and from Lake LaVerne on the campus of Iowa State University, respectively. The samples were stored in the dark using glass bottles at 4 • C before use. Prior to analysis, NaCl was added to a concentration of 30% (w/v).
Instrumentation
Amine separations were carried out using an Agilent Technologies 7890B GC equipped with a 5977A MS detector (single quadrupole) on a CP-Sil8 CB capillary column (length 30 m, 0.25 mm ID, 0.25 μm film thickness). Ultrapure helium was used as carrier gas at a flow rate of 1 mL/min. The inlet was operated in splitless mode with an inlet temperature of 190 • C. The following oven program was used: initial temperature equal to 40 • C, then the temperature was increased at 2 • C/min up to 80 • C, followed by an increase to 270 • C at 30 • C/min, and finally held for 1 min. The MS employed electron ionization (EI) at 70 eV and gain factor mode. The transfer line temperature was set at 250 • C, while the source and quadrupole temperatures were fixed at 230 and 150 • C, respectively. Data were acquired in single ion monitoring mode. Identification of the amines was accomplished by considering the retention time, presence of quantifier and qualifier ions for each analyte (see Supporting Information Table S1), and the ratio between those ions. The peak area corresponding to the quantifier ion was used for quantitative purposes. Scanning electron microscopy (SEM) images of the nickel-based PIL fibers were obtained using a FEI Quanta-250 microscope (FE-SEM). The IL monomer and cross-linker were synthesized according to previously reported procedures [30,31]. The monomer and cross-linker were characterized using 1 H NMR and ESI-MS and spectra are shown in Supporting Information Figures S1-5.
Preparation of SPME fibers
All coatings were prepared by on-fiber UV-initiated polymerization following previously described procedures [29]. The composition and mass ratio of components used for the preparation of the IL-based coatings are shown in Table 1 and Figure 1. PIL 3, prepared from the [HVIM + ][NTf 2 -] monomer and cross-linker, was used as a reference to study the effect of the Ni 2+ metal center in the PIL on extraction performance. The IL monomer, IL cross-linker (if applied), and free radical initiator were mixed in appropriate proportions at 55 • C. The homogenous mixture was manually placed on the surface of a previously derivatized nitinol support. Derivatization of elastic nitinol wires was carried out according to a previously published method [32], and the wires were glued into a commercial SPME device. The coatings were exposed to UV irradiation (360 nm) for 2 h to promote polymerization. Finally, the obtained fibers were thermally conditioned in the GC injection port at 200 • C for 30 min.
HS-SPME procedure
Samples for HS-SPME were placed in 20-mL glass vials closed with open-top caps and polytetrafluoroethylene/ silicone septa (Supelco). For all experiments, the sample volume was maintained at 10 mL. Before exposure of the SPME fiber, samples were thermostated at the temperature of extraction for 15 min using a hotplate with an accuracy of ±0.5 • C. Subsequently, samples were spiked with required volumes of the stock solution and mixed at 800 rpm for 5 min. A Corning PC-420D magnetic stirring hotplate (Corning, NY, USA) and a stir bar (1 cm length × 0.5 cm diameter; Fisher Scientific) were used for mixing. The fiber was exposed to the HS of the sample solution for 10-70 min at 25-70 • C. After extraction, the fibers were immediately inserted into the GC injection port for thermal desorption at 190 • C for 10 min. The temperature and desorption time were optimized in advance by studying carry-over effects.
The amino functional groups within amine molecules are responsible for strong and specific interactions with silane groups and siloxane bridges. Consequently, this leads to their strong retention, which often results in broad, asymmetrical chromatographic peaks and low sensitivity. To avoid these issues, it is necessary to choose an optimal GC column. The following capillary columns were examined to identify the most effective separation of amines:
Characterization of nickel-based polymeric IL fibers
The introduction of transition metals into the IL chemical structure may significantly affect its interactions with organic compounds, thereby influencing extraction properties. This feature was exploited in the determination of phenolics, polycyclic aromatic hydrocarbons, insecticide compounds, and lipophilic organic UV filters using MCILs in micro-liquid extraction techniques [33,34]. More systematic studies evaluating the selectivity of MCILs were performed by studying the retention of analytes obtained from inverse gas chromatography when the MCIL is used as a stationary phase [27]. MCILs with anions containing metals (Ni 2+ , Mn 2+ , Co 2+ , Dy 3+ , Gd 3+ , Nd 3+ ) with acetylacetonate ligands were investigated. The results indicated an exceptionally high affinity of amines (especially pyridine) for nickel-containing MCILs. However, due to their liquid state, these compounds cannot be directly applied as extraction phases in SPME. In this study, the structural features of the MCIL were modified by incorporating 1-vinylimidazole ligands featuring terminal double bonds that can be polymerized while exploiting the amine function of the ligand to coordinate to the nickel metal ion. A similar approach has been used successfully to produce UV curable SPME PIL coatings featuring silver ion for the selective extraction of unsaturated compounds [28].
In the current study, the following three types of SPME sorbent coatings were prepared: (1) PIL 1, based on the neat [Ni 2+ (VIM) 4 ] 2[NTf 2 - ] monomer; (2) PIL 2, consisting of the [Ni 2+ (VIM) 4 ] 2[NTf 2 - ] monomer with added IL cross-linker; and (3) PIL 3, comprised of the [HVIM + ][NTf 2 - ] monomer and IL cross-linker (Table 1).
As the analytes should be released from the fiber by thermal desorption, the thermal stability of PIL sorbent coating is very important. The fibers were tested by exposure to the GC inlet. All PIL fibers were stable up to 200 • C. In comparison, nonpolymerized ILs with the same anion and containing Ni 2+ ion showed significantly lower decomposition temperatures of 164 and 178 • C, respectively [22]. It can be concluded that the increased thermal resistance in the tested coatings results from the formation of a stable polymer structure [35].
The visual appearance, thickness, and regularity of the developed nickel-based PIL sorbent coatings were evaluated using SEM. Figure 2 shows the images of fibers with and without added cross-linker.
Optimization of the extraction procedure
When optimizing extraction methods in SPME, the most important parameters are extraction and desorption times, sampling and desorption temperatures, salt concentration, and sample agitation. In the case of compounds that dissociate in aqueous solutions, sample solution pH should also be taken into account.
F I G U R E 2 Scanning electron microscopy (SEM) images of the nickel-based PIL fibers examined in this study: PIL 1 (500×) ≈ 8 ± 1.3 μm (A) and PIL 2 (500×) ≈ 11 ± 1.9 μm (B)
Amines are weak bases and can undergo hydrolysis in aqueous solutions. To facilitate their transfer to the HS, they must remain undissociated. Thus, the pH of the aqueous samples must be adjusted accordingly. The pKa values describing the acidity of protonated amines are listed in Supporting Information Table S1. Tripropylamine lies at the upper limit with a pKa equal to 10.58. According to the general rule of equilibrium, it is widely accepted that keeping the pH of the solution two units above the pKa value of an analyte ensures its quantitative presence in the neutral form [36]. Therefore, the pH of the aqueous solution was increased to 13 by the addition of a strong base (NaOH).
The values of some parameters mentioned above were assumed a priori. The thermal desorption temperature of the analytes should permit their quickest possible transfer to the chromatographic column. Typically, the limitation is the thermal stability of the sorbent material and analytes. Based on the previously determined durability of the fibers (approximately 200 • C), it was assumed that analytes could still be desorbed at a slightly lower temperature of 190 • C.
The optimal time for thermal desorption was determined using a pre-determined temperature of the GC injection port. For optimization purposes, extraction of analytes using the tested fibers was carried out from samples using a concentration two times higher than the planned working range (i.e., 400 μg/L). The desorption studies were carried out using times ranging from 2 to 20 min. The longest desorption time of 10 min was required for PIL 1 to avoid carry-over effects. To ensure uniform conditions for all investigated fibers, the identical desorption time was applied in all further experiments.
Stirring using a magnetic stir bar was utilized as the sample agitation method. It is well known that increasing agitation of the sample is often accompanied by an increase in the mass transfer caused by convection [5]. In this study, the highest stirring speed was limited by the formation of sample droplets on the fiber surfaces caused by too vigorous agitation. Thus, an optimal stir rate of 800 rpm was used for all experiments in the study.
Influence of salt content
Previous HS-SPME studies reported that the addition of salts can affect the extraction efficiency due to the salting out effect [37]. The interaction of water with these additional ions may have a significant impact on the extraction process as their solubility in water can be expected to increase or decrease. Amines may be particularly susceptible to the salting out effect [5]. Since the change in solubility of analytes in water is related to the sample HS equilibrium, the salting out effect was investigated with fibers containing the PIL 1 and PIL 2 sorbent coatings. Aqueous samples that did not contain NaCl yielded very low extraction efficiency and high experimental error (RSD > 15%). Hence, NaCl concentrations of 5, 15, and 30% (w/v) were evaluated at 40 • C with an extraction time of 30 min. The obtained results are shown in Supporting Information Figure S6. The results indicate that in the case of the tested amines, the addition of salt significantly reduced their solubility in water and increased their extraction efficiency from the HS. The most pronounced effect was observed for pyridine using PIL 1 where the chromatographic peak area increased by 44-fold; in contrast, the extraction efficiency for 2,6-DTBP using PIL 2 increased only by twofold. For the analyzed analytes, the effect of NaCl addition was qualitatively and quantitatively similar for both PIL 1 and PIL 2. Based on obtained results, solutions with a concentration of 30% (w/v) were chosen for further experiments.
Influence of extraction temperature
It is known that an increase in the extraction temperature often leads to an increase in the diffusion coefficient, thereby increasing the rate of mass transfer to the HS [5]. The effect of extraction temperature on the extraction efficiency was investigated in the temperature range from 25 to 70 • С (in 15 • С increments) for 30 min using 30% (w/v) NaCl in the aqueous sample solution. The obtained extraction temperature profiles are shown in Supporting Information Figure S7 of SM. In the case of the HS mode, an equilibrium is formed for the analyte between the aqueous solution, HS, and sorbent coating. Partitioning of the analytes between the solution and the fiber depends on the values of the partition coefficients between the sample and the HS as well as the HS and fiber coating. As can be observed, the extraction efficiency evolves with increasing temperature in four ways: reaches a maximum (3-ethylpyridine, pyridine), decreases (triallylamine, 2,6-DTBP), increases (tripropylamine), and remains constant (aniline). Increasing extraction temperature may also impede analyte absorption on sorbents because of it being an exothermic process [38]. A decreasing trend in extraction efficiency for volatile analytes (i.e., triallylamine, 2,6-DTBP, and pyridine) at higher temperatures is likely due to their absorption on PIL sorbents being more hindered compared to the case of less volatile analytes at the given conditions. The profiles obtained for both fibers (PIL 1 and PIL 2) follow a similar trend, except for tripropylamine, where in the case of the cross-linked sorbent coating the extraction efficiency increases significantly with temperature. A compromised temperature condition of 45 • C was identified and chosen for all amines.
Influence of extraction time
The optimal extraction time was determined by analyzing the analyte extraction efficiency upon exposing the fiber to the sample HS at times varying from 10 to 70 min using an extraction temperature of 45 • С and 30% (w/v) NaCl in the aqueous sample solution. Supporting Information Figure S8 shows the effect of extraction time on the extraction efficiency of PIL 1 and PIL 2 sorbent coatings. It can be observed that for different analytes, various trends were obtained. For aniline and 2,6-DTBP, the extraction efficiency did not change significantly when extraction times longer than 10 min were tested. Pyridine, tripropylamine, and 3-ethylpyridine were more efficiently extracted at longer extraction times, while a drop in extraction efficiency occurred for triallylamine.
There is no single physicochemical property that explains the differences in time profiles obtained for different analytes, as the most volatile analyte (pyridine) does not reach equilibrium in the tested time range, while the least volatile 2,6-DTBP undergoes equilibration after 10 min. Also, it could be expected that for compounds with higher values of the enrichment factor (EF), their transport within the fiber coating will require an extended equilibration time. Meanwhile, triallylamine even shows a slight decrease in extraction efficiency in PIL 2 with time, while the less efficiently extracted tripropylamine is far from equilibrium. In determining the optimal extraction time, the compounds that did not reach equilibrium within the studied time (tripropylamine, 3-ethylpyridine, and pyridine) were taken into account. For those compounds, extending the extraction time from 50 to 70 min increased the sum of chromatographic peak areas by less than 20%. Thus, in further studies, an extraction time of 50 min was used for all fibers.
Analytical performance of the developed HS-SPME method and evaluation of extraction efficiency
Partial validation of the HS-SPME method utilizing the PIL and commercial fibers involved determination of coefficient of determination (R 2 ), LODs, LOQs, repeatability (RSD), and relative recovery (RR). The working ranges for all amines and each of the investigated SPME fibers are presented in Supporting Information Table S2. A working range from 5 to 200 μg/L was used for the studied PIL and commercial fibers. For all fibers within the working range, satisfactory linearity with the R 2 values above 0.990 was found (see Supporting Information Table S2). Supporting Information Table S3 shows the sensitivities of the methods expressed by the calibration slope for each fiber. In the case of most amines (with the exception of 2,6-DTBP), the nickel-based fibers (PIL 1 and PIL 2) exhibited higher sensitivity than commercial fibers and the PIL 3 fiber.
The LODs were calculated on the basis of an S/N of 3, and the LOQs as ten times the above-mentioned ratio; the resulting values are summarized in Table 2.
TA B L E 2 LOD (S/N = 3) and RSD obtained for amines when performing HS-SPME-GC-MS using different SPME fibers. LOD determined using samples spiked to 5 and 50 μg/L. RSD calculated using results obtained for samples with a concentration of analytes equal to 200 μg/L
LODs of pyridine, triallylamine, tripropylamine, and aniline obtained with the PIL-based SPME method were similar to or lower than the LOD values measured with other analytical methods such as capillary electrophoresis, GC-MS, and dispersive liquid-liquid microextraction coupled with GC-MS, which ranged from 0.07 to 42 μg/L [39][40][41][42]. The LOQ values for all fibers are shown in Supporting Information Table S4. The repeatability of the methods was calculated by performing triplicate extractions of aqueous samples spiked with amines at a concentration of 200 μg/L; the resulting RSD values are presented in Table 2. The RSD values for PIL fibers did not exceed 15% for all analytes and are comparable with those obtained using commercial fibers. RR was calculated as the ratio of the concentration of the spiked solution to the value determined in the course of the analytical procedure. Calculated RR values are summarized in Supporting Information Table S5. The RRs ranged from 84.5 to 116.3% for all fibers, proving the usefulness of the PIL fibers in analytical applications. The performance of each type of PIL fiber began to decrease after ∼70 extraction/desorption cycles. Therefore, 60 cycles was selected as the optimal lifetime of the sorbent coatings.
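The S/N-based validation figures above can be reproduced with a few one-line formulas, sketched below. Two assumptions are made explicit: the signal (and hence S/N) is taken to scale linearly with concentration near the detection limit, and the LOQ is taken at S/N = 10; RR is expressed with the usual found-over-spiked convention.

```python
def conc_at_target_sn(spike_conc, measured_sn, target_sn):
    """Concentration expected to give `target_sn`, assuming S/N scales
    linearly with concentration near the detection limit."""
    return spike_conc * target_sn / measured_sn

def lod(spike_conc, measured_sn):
    return conc_at_target_sn(spike_conc, measured_sn, 3.0)    # S/N = 3

def loq(spike_conc, measured_sn):
    return conc_at_target_sn(spike_conc, measured_sn, 10.0)   # S/N = 10 (assumed convention)

def relative_recovery(found_conc, spiked_conc):
    """RR in percent, found concentration relative to the spiked level."""
    return 100.0 * found_conc / spiked_conc

# e.g. a 5 ug/L spike measured at S/N = 60 gives LOD = 0.25 ug/L and LOQ ~ 0.83 ug/L
```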
In addition, a comparison was made between the extraction efficiency of the developed nickel-based PIL fibers and selected commercial fibers (PA and 100 μm PDMS) using the EF [43]. This parameter is a suitable tool for comparing the extraction ability by considering the nature of the sorption coatings of fibers, regardless of their geometrical dimensions. The EF parameter was calculated as the ratio of the analyte concentration in the fiber and the analyte concentration in the aqueous sample. The obtained values of EFs are shown in Supporting Information Table S6 and represented in Figure 3. An example chromatogram is shown in Supporting Information Figure S9. The EF values for both nickel-based PIL fibers (PIL 1 and PIL 2) are higher than those obtained for both commercial fibers (PA and PDMS) and the standard PIL fiber (PIL 3) in the case of all target analytes, with the exception of 2,6-DTBP for the PIL 1 fiber. For most analytes, the EF values of PIL 2 were higher than those of PIL 1. This is due to the increased surface area of the cross-linked PIL 2 resulting in enhanced analyte-sorbent interactions [44,45]. Therefore, the PIL 2 fiber was most suitable for the extraction of amines by HS-SPME in this work. This sorbent coating may be a particularly attractive alternative to commercially available coatings for the determination of volatile amines.
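As a minimal sketch of the normalization the EF provides, the snippet below converts an extracted mass and a coating volume into a coating-phase concentration and divides by the sample concentration. The annular-film geometry used to estimate coating volume, and all numbers in the comments, are illustrative assumptions rather than the authors' values.

```python
import math

def coating_volume_ul(length_cm, support_radius_um, thickness_um):
    """Volume of an annular sorbent coating on a cylindrical support, in uL.
    Geometry inputs are illustrative placeholders."""
    r_in = support_radius_um * 1e-4                 # um -> cm
    r_out = r_in + thickness_um * 1e-4
    return math.pi * (r_out ** 2 - r_in ** 2) * length_cm * 1e3   # cm^3 -> uL

def enrichment_factor(extracted_ng, coating_vol_ul, sample_conc_ng_per_ul):
    """EF = analyte concentration in the coating / concentration in the sample."""
    return (extracted_ng / coating_vol_ul) / sample_conc_ng_per_ul

# coating_volume_ul(1.0, 100, 11) -> ~0.073 uL for an 11 um film on a
# 100 um-radius wire over 1 cm (hypothetical dimensions); extracting 10 ng
# from a 0.2 ng/uL (200 ug/L) sample into that volume gives EF ~ 6.9e2.
```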
Analysis of real samples
Lake and tap water were analyzed to evaluate the matrix effect of environmental samples on the performance of the developed nickel-based PIL fibers and the HS-SPME-GC-MS method for the determination of amines. The water samples were spiked with amines at a concentration of 200 μg/L and extracted under optimal conditions using the PIL 1 and PIL 2 fibers. The matrix effect was evaluated in terms of RR and repeatability (
CONCLUDING REMARKS
In this work, two nickel-based PIL sorbent coatings were successfully developed and applied in HS-SPME coupled to GC-MS for the determination of volatile and semivolatile amines, including pyridine, triallylamine, tripropylamine, 3-ethylpyridine, aniline, and 2,6-DTBP, from water samples. After optimizing the extraction parameters, the extraction efficiency of the developed fibers (PIL 1 and PIL 2) was evaluated and compared with commercial fibers using the EF normalization parameter.
The nature of the nickel-based sorbent coatings provided higher efficiencies than the conventional PIL coating as well as the commercial PA and PDMS coatings for all of the analytes studied. Additionally, the results from this study show that the analytical performance of the nickel-based PIL, PA, and PDMS (100 μm) fibers was comparable in terms of working range, coefficient of determination, LOD, LOQ, reproducibility, and RR. Low LODs were achieved for the developed fibers, ranging from 0.003717 to 1.56 μg/L and from 0.000077 to 0.84 μg/L for PIL 1 and PIL 2, respectively. Finally, the developed HS-SPME-GC-MS method was applied for the analysis of real samples, including tap and lake water. The RRs of PIL 2 ranged from 83.0 to 122.7% for tap water and from 79.2 to 119.3% for lake water. | 2021-04-25T06:16:18.934Z | 2021-04-23T00:00:00.000 | {
"year": 2021,
"sha1": "019bcb51b1b997c574a0645fedf3e0cf9d60599e",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jssc.202100119",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "4bc17e826cbc6169441a7d5b0f3f147689f74cc8",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
23966121 | pes2o/s2orc | v3-fos-license | Human xeroderma pigmentosum group A protein interacts with human replication protein A and inhibits DNA replication.
Human replication protein A (RPA; also known as human single-stranded DNA binding protein, or HSSB) is a multisubunit complex involved in both DNA replication and repair. While the role of RPA in replication has been well studied, its function in repair is less clear, although it is known to be involved in the early stages of the repair process. We found that RPA interacts with xeroderma pigmentosum group A complementing protein (XPAC), a protein that specifically recognizes UV-damaged DNA. We examined the effect of this XPAC-RPA interaction on in vitro simian virus 40 (SV40) DNA replication catalyzed by the monopolymerase system. XPAC inhibited SV40 DNA replication in vitro, and this inhibition was reversed by the addition of RPA but not by the addition of DNA polymerase α-primase complex, SV40 large tumor antigen, or topoisomerase I. This inhibition did not result from an interaction between XPAC and single-stranded DNA (ssDNA), or from competition between RPA and XPAC for DNA binding, because XPAC does not show any ssDNA binding activity and, in fact, stimulates RPA's ssDNA binding activity. Furthermore, XPAC inhibited DNA polymerase α activity in the presence of RPA but not in RPA's absence. These results suggest that the inhibitory effect of XPAC on DNA replication probably occurs through its interaction with RPA.
Replication protein A (RPA; also known as human single-stranded DNA binding protein, or HSSB) is a eukaryotic single-stranded DNA binding protein that contains three tightly associated subunits of 70, 34, and 11 kDa (p70, p34, and p11, respectively) (1)(2)(3). It is required for DNA replication, nucleotide excision repair, and homologous recombination (1)(2)(3)(4)(5)(6), suggesting that it has multiple functions in DNA metabolic processes. The p34 subunit of RPA is phosphorylated at the G1/S boundary and dephosphorylated during mitosis (7,8). This phosphorylation event can also be induced by DNA damage (9,10). Since DNA damage induces the inhibition of replication, RPA and the phosphorylation of its p34 subunit may play a role in the regulation of DNA replication (10).
During the initiation of simian virus 40 (SV40) DNA replication, RPA interacts with SV40 large tumor antigen (T-ag) and the DNA polymerase α-primase complex (pol α-primase) (11,12), which appears to be essential for DNA unwinding (12). Human RPA cannot be replaced at the initiation of replication by RPA from other species, suggesting that the interaction of RPA with other replication proteins may be crucial in this process. After unwinding, RPA is believed to both stabilize the unwound DNA and stimulate DNA polymerase α (pol α) and DNA polymerase δ (pol δ) activities, as determined by the elongation of primed DNA templates (13).
In nucleotide excision repair, the requirement for RPA can be bypassed by incising DNA with the E. coli UvrABC enzyme. This observation suggests that RPA is involved in an early stage of UV excision repair (14). Although the role of RPA in repair is not yet well defined, the protein complex cannot be replaced by RPA from other species, indicating that specific interactions between RPA and other repair proteins are involved in the repair process (14).
Xeroderma pigmentosum (XP) is a genetically recessive human disorder. Patients with XP are defective in excision repair of ultraviolet light (UV)-damaged DNA and consequently suffer from a high incidence of skin cancer. At least seven complementation group proteins (XP-A to XP-G) have been identified thus far (15,16). The XP group A complementing protein (XPAC) is involved in an early stage of nucleotide excision repair and is also a key protein in the recognition of UV-damaged DNA (17)(18)(19). The XPAC gene contains a zinc finger motif that is required for XPAC function in repair (20,21). XPAC was recently shown to interact with rodent excision repair cross-complementing protein 1 (ERCC1) and ERCC4 (XP-F) (22,23).
In this report, we show that XPAC also interacts with RPA. Further, XPAC inhibits SV40 DNA replication in vitro, and this inhibition can be reversed by the addition of RPA. XPAC inhibited pol α activity in the presence of RPA but did not inhibit this polymerase in RPA's absence. Taken together, these results indicate that the XPAC-RPA interaction alters RPA's ability to stimulate pol α activity, which, in turn, results in the inhibition of DNA replication. We discuss how these observations support the hypothesis that the repair and replication functions of RPA are differentially regulated.
RPA Interaction with XPAC-Protein interaction was determined using the ELISA described previously (12). Briefly, well plates were coated with 1.0 μg of protein (XPAC, SV40 T-ag, or bovine serum albumin) and incubated overnight at 4°C. The wells were then washed with PBS and blocked with 3% bovine serum albumin in PBS for 1 h at 37°C. After blocking, various amounts of RPA were added, and the plates were reincubated for a further hour at 37°C before being washed extensively with PBS. The amount of bound RPA was measured by incubating the ELISA plates with a peroxidase-conjugated monoclonal antibody to RPA p70 (70C; see Ref. 27) for 1 h at 37°C. After extensive washing with PBS, the chromogenic substrate, 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid), and hydrogen peroxide were added, and the colorimetric reaction was monitored at 415 nm.
In Vitro SV40 DNA Replication-The reactions were carried out as described previously (28). In brief, reaction mixtures (40 μl) contained 40 mM creatine phosphate/di-Tris salt (pH 7.7), 1 μg of creatine kinase, 7 mM MgCl2, 0.5 mM DTT, 4 mM ATP, 200 μM UTP, GTP, and CTP, 100 μM dTTP, dGTP, and dCTP, 20 μM [α-32P]dATP (specific activity, 20,000 cpm/pmol), 0.8 μg of SV40 T-ag, 0.3 μg of SV40 origin-containing DNA (pSV01ΔEP), and various amounts of pol α-primase, topo I, and RPA. In the SV40 dipolymerase system, various amounts (see Fig. 3B) of PCNA, A1 (RF-C), pol δ, and topo II were also added. The reaction mixtures were incubated for 90 min at 37°C and then stopped with 80 μl of a stop solution containing 20 mM EDTA, 1% SDS, and E. coli tRNA (0.5 mg/ml). One-tenth of the reaction mixture was used to measure the acid-insoluble radioactivity. Replication products in the remaining reaction mixture were analyzed by electrophoretically separating the isolated DNA in a 1.2% alkaline agarose gel (40 mM NaOH and 1 mM EDTA) for 12-14 h at 2 V/cm as described previously (28). The gel was subsequently dried and exposed to x-ray film.
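As an illustration of how DNA synthesis is quantified from the acid-insoluble radioactivity in such assays, the sketch below converts a hypothetical scintillation reading into pmol of dAMP incorporated using the specific activity stated above; the count value, the assumption of 100% counting efficiency and the fourfold scaling used to estimate total nucleotide incorporation are illustrative assumptions, not figures from the paper.

    # Specific activity of the [alpha-32P]dATP tracer, from the reaction conditions above
    SPECIFIC_ACTIVITY = 20_000.0      # cpm per pmol dATP

    measured_cpm = 45_000.0           # hypothetical counts from the aliquot that was spotted
    fraction_counted = 0.1            # one-tenth of the reaction mixture was counted (per the protocol)

    pmol_dAMP = (measured_cpm / SPECIFIC_ACTIVITY) / fraction_counted
    # Assuming roughly equal incorporation of the four dNTPs (an approximation),
    # total nucleotide incorporation is about four times the dAMP value.
    pmol_total_dNMP = 4 * pmol_dAMP

    print(f"dAMP incorporated in the full reaction: {pmol_dAMP:.0f} pmol")
    print(f"estimated total dNMP incorporated: {pmol_total_dNMP:.0f} pmol")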
RESULTS
RPA Interacts with XPAC-In SV40 replication, a defined origin sequence is recognized by the origin-binding protein SV40 T-ag, which interacts with RPA and pol α-primase to form an initiation complex (31)(32)(33)(34). This complex is essential for DNA replication because mutant RPA that poorly interacts with SV40 T-ag cannot effectively support DNA replication (12). RPA is also required for nucleotide excision repair, wherein the DNA lesions are specifically recognized by the repair initiator protein, XPAC (19). We reasoned that RPA may function in repair by interacting with XPAC. Accordingly, we examined whether these two proteins interact with each other in vitro. The RPA complex was purified to near homogeneity from insect cells coinfected with recombinant baculoviruses encoding all three subunits (70-, 34-, and 11-kDa subunits) (Fig. 1A; Ref. 25), while bacterially produced histidine-tagged XPAC was induced by isopropyl-1-thio-β-D-galactopyranoside from an XPAC expression vector. As described by others (19), the final stage of XPAC preparation contained a protein doublet, and both bands reacted with antisera raised against peptides deduced from the cDNA of XPAC (Ref. 17; S-HL data not shown). Both RPA and XPAC purified from these expression systems were functionally active in replication and repair, respectively (Refs. 19 and 25; data not shown). An ELISA that successfully detected the interaction of RPA with SV40 T-ag (11,12) was used to detect the interaction between RPA and XPAC. As with SV40 T-ag, XPAC interacted with RPA (see Fig. 1B).
XPAC Inhibits SV40 DNA Replication in Vitro-Having established that XPAC interacts with RPA, we examined the effect of XPAC on SV40 DNA replication in vitro using a reconstituted SV40 replication system. Addition of increasing amounts of XPAC quantitatively inhibited SV40 DNA replication catalyzed by the monopolymerase system (the monopolymerase system contains SV40 T-ag, DNA pol α-primase complex, topo I, and RPA), whereas buffer alone had no apparent effect (Fig. 2A), indicating that the inhibition was indeed due to XPAC. In the monopolymerase system, as described previously (35), pol α alone can synthesize both the leading (half the length of the plasmid; 1.4-1.6 kilobases) and lagging strands (200-300 nucleotides long), which are shown as two discrete bands (Fig. 2B, lanes 2-6). The syntheses of both strands were inhibited by XPAC. However, since RPA is involved in both the initiation and elongation stages of replication, it is not clear which particular stage XPAC inhibits.
We also examined the effect of XPAC on the dipolymerase system, which contains, in addition to the monopolymerase components, pol δ, PCNA, and activator 1 (RF-C). Again, DNA synthesis was quantitatively inhibited by XPAC (Fig. 3), albeit to a lesser extent than with the monopolymerase system. For example, in the presence of 1.2 μg of XPAC, 82% of the replication activity was inhibited in the monopolymerase system, whereas only 24% was inhibited in the dipolymerase system (Fig. 2A versus Fig. 3A). XPAC affected the sizes of the replication products produced in the SV40 dipolymerase system in that the size of the lagging strand increased as the concentration of XPAC increased. There was also a significant diminution of the leading strand synthesis (Fig. 3B).
XPAC Inhibition Can Be Reversed by the Addition of RPA-If this inhibition targets the function of a particular protein, then reversal of inhibition may simply require the addition of excess targeted protein. The effect of XPAC on the SV40 monopolymerase system was effectively reversed by RPA addition but not by the addition of SV40 T-ag, pol α-primase, or topo I (Fig. 4, A and B). This supports the idea that the inhibition of replication by XPAC may result from its interaction with RPA. The size distribution of products in the reversed reaction (Fig. 4B, lanes 5-7) is somewhat different from that of the control reaction (Fig. 4B, lane 1), in that the products of leading strand DNA synthesis diffused into the smaller products. This can be explained in terms of the RPA concentration in these reactions; excessive amounts of RPA inhibit leading strand synthesis in the monopolymerase system (36). Alternatively, RPA alone may not be able to completely overcome the observed inhibition.
[Fig. 4 legend fragment: in lanes 2-10, 0.8 μg of SV40 T-ag was included; in lanes 3-6, increasing volumes of buffer were added as described in the legend to Fig. 2; once the reactions were complete, the reaction mixtures were analyzed for acid-insoluble radioactivity (A) and in a 1.2% alkaline agarose gel (B); ssl, position of the single-stranded linear plasmid DNA; n.t., nucleotides.]
XPAC Does Not Inhibit RPA ssDNA Binding Activity-It has been shown previously that XPAC preferentially binds to UV-irradiated double-stranded DNA (17,19). It is possible that XPAC competes with RPA for binding to ssDNA nonspecifically and that this nonspecific interaction leads to the inhibition of DNA replication. Alternatively, XPAC may interact specifically with RPA to produce the inhibitory effect. To distinguish between these possibilities, we examined whether XPAC binds to ssDNA or interferes with RPA's ssDNA binding property. RPA, XPAC, or a mixture of both proteins was incubated with 5′-32P-labeled (dT)70 and analyzed for ssDNA binding activity using a gel mobility shift assay (Fig. 5). As reported previously, RPA binds to ssDNA generating two distinct bands (12,30). XPAC, however, did not bind to ssDNA in our gel mobility shift assay. Moreover, XPAC did not inhibit RPA's ssDNA binding activity; rather, it stimulated the ssDNA binding activity of RPA, supporting the belief that the inhibitory effect of XPAC on SV40 replication results from its interaction with RPA.
XPAC Inhibits pol α Activity Only in the Presence of RPA-Since RPA stimulates both pol α and pol δ activities during the elongation stage (13), we examined whether XPAC affects RPA's ability to stimulate either of these polymerases.
XPAC had no effect on pol α activity in the absence of RPA, but in its presence increasing amounts of XPAC quantitatively inhibited pol α activity (Fig. 6A). This result suggests that the XPAC-RPA interaction prevents RPA from stimulating pol α activity. In contrast, XPAC did not affect pol δ activity regardless of the presence or absence of RPA (Fig. 6B). Together, these results strongly suggest that the inhibitory effect of XPAC on SV40 replication (Figs. 2 and 3) is likely due to the interaction of XPAC with RPA, which in turn obstructs RPA's stimulation of pol α activity.
DISCUSSION
We have examined the interaction of two proteins, XPAC and RPA, that are involved in the early stages of the repair process. We reasoned that because XPAC is a UV-damage recognition protein, RPA may be recruited to damaged DNA sites through its interaction with XPAC. The resulting RPA-XPAC complex might then form multiprotein complexes at the damaged sites to promote recruitment of other repair proteins required for nucleotide excision repair. Recently, XPAC has been shown to interact with ERCC1 (22) or the ERCC1-ERCC4 (XP-F) complex (23), a putative endonuclease complex that is necessary for the 5′ incision (37). Although the XPAC-ERCC1-ERCC4 complex did not show damaged site-specific incision (23), it is possible that XPAC, RPA, ERCC1-ERCC4, and other repair proteins, such as the 3′ incision endonuclease XPG (37), form a multiprotein complex at the damaged DNA site that is necessary for accurate 3′ and 5′ incisions.
In addition to its potential role in repair, we found that XPAC inhibited SV40 DNA replication in vitro. This inhibition was reversed by the addition of excess RPA but not by topo I, pol α-primase, or SV40 T-ag, indicating that the inhibition and its reversal are physiologically relevant. The inhibition is unlikely to be the result of competition between XPAC and RPA for DNA binding because: (i) two known DNA binding proteins, human Rad51 protein (42) and EBNA1 protein (43), fail to interact with RPA or inhibit the SV40 monopolymerase replication system (data not shown), and (ii) XPAC itself did not show any stable ssDNA binding activity in the gel mobility shift assay; however, it did stimulate RPA's ssDNA binding activity (Fig. 5). RPA binds as a multimer to ssDNA more than 30 nucleotides in length (30). It is therefore possible that the XPAC-RPA interaction stabilizes the binding of RPA to ssDNA, allowing stable monomeric RPA-ssDNA complexes to form and leading to the increased amount of RPA-DNA complex that can be seen in Fig. 5. XPAC did not stimulate the ssDNA binding activity of T4 phage ssDNA-binding protein (T4 gene 32), suggesting that the stimulation of RPA's ssDNA binding activity by XPAC occurs through their protein-protein interaction (data not shown). In any event, this result strengthens our belief that the inhibition of replication by XPAC is a result of its interaction with RPA rather than its nonspecific binding to ssDNA.
XPAC binds dsDNA weakly (19); however, this inhibition is unlikely to have resulted from XPAC's interaction with dsDNA because, if this were the case, we would expect to see the same degree of inhibition regardless of the replication system (monopolymerase or dipolymerase) used in the experiments. It is also unlikely that the inhibition resulted from an interaction between XPAC and pol α because: (i) XPAC did not interact with pol α in our ELISA, (ii) addition of excess pol α-primase did not reverse the inhibition of replication (Fig. 4), and (iii) XPAC inhibited pol α activity in the presence, but not in the absence, of RPA. Therefore, the most likely explanation for this inhibition is that XPAC interacts with RPA, altering RPA's ability to stimulate pol α.
This belief is further supported by the fact that the inhibitory effect of XPAC is more evident with the monopolymerase system, which relies exclusively on pol α activity, than with the dipolymerase system, which contains both pol α and pol δ (Fig. 2 versus Fig. 3). Pol δ activity was not affected by XPAC (Fig. 6). In the monopolymerase system, pol α is responsible for both leading and lagging strand synthesis; in the dipolymerase system, pol α is only partly responsible for lagging strand synthesis, while pol δ is responsible for leading strand synthesis and probably also for part of the lagging strand synthesis (28,38,39). On the other hand, we should point out that XPAC had little effect on SV40 replication with HeLa cell cytosolic extracts (data not shown). This lack of inhibition in the crude extracts raises the possibility that our observations are limited to the specific model systems used.
In view of the fact that both RPA and XPAC function in repair, our results would support the hypothesis that the XPAC-RPA complex, once formed, is used in repair rather than in DNA replication. It would be of interest to know whether the XPAC-RPA complex, which is stable enough to be isolated, can still recognize UV-damaged DNA. Since the completion of this work, two articles demonstrating specific interactions between RPA and XPAC have been published (40,41). | 2018-04-03T02:58:25.029Z | 1995-09-15T00:00:00.000 | {
"year": 1995,
"sha1": "b4395a5675e167dbefe1ffbf4e53738f67fba7df",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/270/37/21800.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "4305729856a2a3de9531e44dc0ec3e467b2ff49c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
258007907 | pes2o/s2orc | v3-fos-license | Ventricular function and biomarkers in relation to repair and pulmonary valve replacement for tetralogy of Fallot
Objective Cardiac surgery may cause temporarily impaired ventricular performance and myocardial injury. We aim to characterise the response to perioperative injury for patients undergoing repair or pulmonary valve replacement (PVR) for tetralogy of Fallot (ToF). Methods We enrolled children undergoing ToF repair or PVR from four tertiary centres in a prospective observational study. Assessment—including blood sampling and speckle tracking echocardiography—occurred before surgery (T1), at the first follow-up (T2) and 1 year after the procedures (T3). Ninety-two serum biomarkers were expressed as principal components to reduce multiple statistical testing. RNA Sequencing was performed on right ventricular (RV) outflow tract samples. Results We included 45 patients with ToF repair aged 4.3 (3.4 – 6.5) months and 16 patients with PVR aged 10.4 (7.8 – 12.7) years. Ventricular function following ToF repair showed a fall-and-rise pattern for left ventricular global longitudinal strain (GLS) (−18±4 to −13±4 to −20±2, p < 0.001 for each comparison) and RV GLS (−19±5 to −14±4 to 20±4, p < 0.002 for each comparison). This pattern was not seen for patients undergoing PVR. Serum biomarkers were expressed as three principal components. These phenotypes are related to: (1) surgery type, (2) uncorrected ToF and (3) early postoperative status. Principal component 3 scores were increased at T2. This increase was higher for ToF repair than PVR. The transcriptomes of RV outflow tract tissue are related to patients’ sex, rather than ToF-related phenotypes in a subset of the study population. Conclusions The response to perioperative injury following ToF repair and PVR is characterised by specific functional and immunological responses. However, we did not identify factors relating to (dis)advantageous recovery from perioperative injury. Trial registration number Netherlands Trial Register: NL5129.
WHAT IS ALREADY KNOWN ON THIS TOPIC
⇒ Cardiac surgery with cardiopulmonary bypass can lead to (temporarily) impaired ventricular performance and myocardial injury, which may affect long-term outcomes. The mechanisms have been studied scarcely.
WHAT THIS STUDY ADDS
⇒ We characterised the functional and immunological biomarker response to perioperative injury for patients undergoing surgical repair and pulmonary valve replacement for tetralogy of Fallot. We identified a biomarker phenotype related to the response to perioperative injury.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
⇒ Improved characterisation of perioperative injury and subsequent recovery may provide biomarkers to identify patients at risk for adverse events. Furthermore, it may provide novel targets for therapy, such as inhibitors of disadvantageous immune responses following cardiopulmonary bypass or perioperative myocardial protective strategies.
INTRODUCTION
Tetralogy of Fallot (ToF) is the most common type of cyanotic congenital heart disease, with an incidence of 0.34 per 1000 live-born children. 1 Surgical repair can be achieved with excellent long-term survival. 1 However, lifetime morbidity of these patients remains high. Surgical repair of ToF is currently generally performed between 3 and 11 months of age. 1 Relatively earlier repair is considered to minimise the time the right ventricle (RV) is exposed to increased pressure load and cyanosis. However, earlier repair more often requires a transannular patch, which may result in worse long-term outcomes. 1 Furthermore, neonatal repair is associated with a more complicated postoperative course. 2 Palliative procedures, such as a modified Blalock-(Thomas-)Taussig shunt (mBT), prior to repair may limit cyanosis and allow for pulmonary vascular growth. 1 However, there may be associated risks with repeat interventions. 1 There is currently no consensus on the optimal treatment strategy or timing of repair for ToF, but treatment strategy in early life affects lifelong outcomes of ToF repair. 1 2
Surgical procedures for ToF expose the heart to injury. Cardiopulmonary bypass (CPB) is required to gain intracardiac access. CPB exposes the heart to, among others, ischaemia and reperfusion injury. 3 Other aspects of perioperative conditioning relating to injury include oxidative stress, surgical trauma and inflammation. 3 4 Following surgery, an extensive immune response is observed and ventricular function may be impaired for several months. 5 RV function is more severely impaired compared with left ventricular (LV) function, which may relate to (1) the abnormally loaded RV in ToF, (2) the RV's impaired metabolic and antioxidant response to hypoxia, 5 6 (3) the anterior position of the RV, which may expose the RV to room temperature, limiting the protective effects of cooling, 7 8 and (4) the coronary blood supply of the RV, which is more sensitive to increased afterload than that of the LV. 9 The recovery of ventricular function and the role of the immune system in the recovery from perioperative injury are poorly understood but may have important implications for long-term biventricular function.
We performed a multicentre prospective study to characterise the functional and immunological response to perioperative injury for patients undergoing surgical repair or pulmonary valve replacement (PVR) for ToF. Furthermore, we characterised the transcriptome-that is, the complete set of coding and non-coding RNA transcripts-of the right ventricular outflow tract (RVOT) from tissue samples obtained during ToF repair. Transcriptome analysis provides detailed phenotype information, which may relate to differences in patient characteristics or differences in the response to perioperative injury.
We hypothesise that functional and immunological biomarkers may identify specific phenotypes of (dis)advantageous recovery from, and vulnerability to, injury and that these patterns may differ between patients undergoing ToF repair and PVR.
Study design and subjects
We performed a multicentre prospective observational study. The study protocol was published in the Netherlands Trial Register (NL5129). From December 2015 to September 2019, patients undergoing ToF repair and surgical PVR were recruited from the Erasmus MC Sophia Children's Hospital, Rotterdam; Willem Alexander Children's Hospital, Leiden; Wilhelmina Children's Hospital, Utrecht and Beatrix Children's Hospital, Groningen. Exclusion criteria were multiple congenital anomaly syndromes or pulmonary atresia. The study protocol was approved by the research ethics committees of the participating centres (protocol no MEC-2014-326/NL48188.078.14). All patients, and/or their legal guardians, provided written informed consent prior to inclusion in accordance with Dutch legislation. The public was not involved in the study design or conduct. Subjects were assessed at three time points: before surgery (T1), at the first outpatient follow-up or during the second postoperative week (T2) and at 1-year follow-up (T3). At each of the study time points, subjects underwent physical examination, echocardiography and blood sampling. Tissue samples of the RVOT were obtained during ToF repair.
Echocardiography
Transthoracic echocardiography was performed in accordance with the study echocardiography protocol in all participating centres. Studies were performed by experienced cardiac sonographers on a Vivid7 or Vivid E9 cardiac ultrasound system (General Electric Vingmed Ultrasound, Horten, Norway). No subjects were sedated for the echocardiography study. Postprocessing of images was performed using commercially available software (EchoPac V.11.2; General Electric Vingmed Ultrasound). To limit interobserver errors, all postprocessing was performed in two core centres (Erasmus Medical Center-Sophia Children's Hospital and Willem Alexander Children's Hospital-Leiden University Medical Center). All M-mode and pulsed wave tissue Doppler measurements were calculated as an average value from three consecutive heartbeats. Speckle tracking myocardial strain analysis was performed (EchoPac V.11.2; General Electric Vingmed Ultrasound). The end-diastolic phase was identified automatically by the software. Global longitudinal peak systolic strain of the LV was obtained from available segments from the apical two-chamber, three-chamber and four-chamber views. RV global longitudinal peak systolic strain was obtained from the free wall on the apical four-chamber view. Biventricular global longitudinal strain (GLS) was considered the primary measure of ventricular function, as this parameter relies on few geometric assumptions, and such assumptions may not be applicable in ToF. Ventricular dimensions were indexed according to published paediatric references. 10 11
Blood sample analysis
At each study time point, blood samples were collected in EDTA tubes, centrifuged and plasma was stored at −80°C. Samples were analysed using a protein biomarker panel of 92 cardiovascular and immunological biomarkers (Olink Cardiovascular panel III; Olink Bioscience, Uppsala, Sweden). 12 Biomarker concentrations were assessed using a proximity extension assay, which has previously been described in detail. 13 This panel was chosen as it contains many biomarkers of interest, as previously determined by a literature study. 14 Concentrations are expressed as normalised protein expression (NPX), a measure of relative concentration, rather than absolute concentrations. NPX is a logarithmic scale, where a 1 unit increase represents a doubling in concentration. If the limit of detection was not reached for a sample, the reported NPX (under the limit of detection) was used, in consultation with Olink. Biomarkers for which the limit of detection was not reached in more than half of the subjects were excluded entirely from data analysis. N-terminal pro-brain natriuretic peptide (NT-proBNP) obtained by biomarker panel analysis was compared with NT-proBNP assessed by the participating centres' clinical laboratories.
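Because NPX is a relative log2-type scale in which a one-unit increase corresponds to a doubling of concentration, differences in NPX translate directly into fold changes; the short sketch below shows that conversion with invented NPX values.

    # NPX is a relative log2 scale: a difference of 1 NPX unit = a 2-fold difference in concentration.
    npx_before = 3.3      # hypothetical NPX of one biomarker before surgery
    npx_after = 5.4       # hypothetical NPX of the same biomarker early after surgery

    delta_npx = npx_after - npx_before
    fold_change = 2 ** delta_npx
    print(f"delta NPX = {delta_npx:.1f} -> about {fold_change:.1f}-fold higher concentration")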
Myocardial tissue analysis
During ToF repair, RVOT tissue was resected as part of the normal surgical treatment. Samples of the tissue were directly frozen in liquid nitrogen in the operating room and were subsequently stored at −80°C. Due to time and resource constraints, RVOT tissue samples from 20 patients had to be selected for RNA sequencing (RNA-Seq), rather than all collected tissue samples. Patients were selected based on the completeness of follow-up data and clinical characteristics (to cover a broad range of phenotypes). RNA was isolated using the ReliaPrep Kit (Z6112, Promega). RNA quality was determined using the Agilent 2100 Bioanalyzer (G2939BA, Agilent Technologies). The RNA-Seq library was prepared using the KAPA mRNA HyperPrep kit (KK8581, Roche) with an input of 500 ng RNA with an RNA integrity score >8. The library was sequenced on two HiSeq 4000 lanes with single-end 50 bp reads (Illumina).
Statistical analysis
Continuous data are presented as 'mean±SD' for normal distributions and 'median (IQR)' for non-normal distributions. Categorical data are presented as 'count (percentage)'. Paired t-tests and Wilcoxon tests, depending on the distribution, were used for comparisons between study time points. Data analysis was performed in R (R Foundation for Statistical Computing, Vienna, Austria).
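A minimal sketch of the paired comparisons described above, using simulated data; choosing between the paired t-test and the Wilcoxon signed-rank test via a Shapiro-Wilk check on the paired differences is our own illustrative decision rule, not necessarily the exact procedure used in the study.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Simulated paired measurements of one echo parameter at two study time points
    t1 = rng.normal(loc=-19, scale=4, size=30)        # e.g. GLS before surgery
    t2 = t1 + rng.normal(loc=5, scale=3, size=30)     # e.g. GLS at the first follow-up

    differences = t2 - t1
    if stats.shapiro(differences).pvalue > 0.05:      # differences look approximately normal
        statistic, p = stats.ttest_rel(t1, t2)
        test_used = "paired t-test"
    else:
        statistic, p = stats.wilcoxon(t1, t2)
        test_used = "Wilcoxon signed-rank test"
    print(f"{test_used}: statistic = {statistic:.2f}, p = {p:.4f}")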
Serum biomarkers were analysed with principal component analysis (PCA) in R base (prcomp function). This is a statistical method to reduce the number of parameters to be analysed from 92 serum biomarkers to a few principal components (PCs) with minimal information loss. We used this approach to limit the risks of false-positive findings associated with multiple testing. Biomarkers that are linearly correlated with each other are summarised into uncorrelated PCs. For serum biomarker analysis, PCs were considered until less than 5% of variance (ie, information) in the dataset was explained by the PC. Information regarding the individual biomarker's contribution to PCs was abstracted from the PCA analysis. The five highest contributing biomarkers to each PC were used to determine common biological features by protein enrichment. 15 Findings related to PCs were also assessed for each of the five highest contributing biomarkers within a PC individually. For echocardiographic and biomarker analyses, no further corrections for multiple statistical testing were performed, as the number of parameters to be analysed was reduced to an amount that is conventional in clinical research. A p value <0.05 was considered statistically significant.
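The dimension-reduction step can be sketched as follows, assuming the NPX values are arranged in a samples-by-biomarkers matrix; the simulated data, the use of scikit-learn instead of R's prcomp, and the standardisation of each biomarker are our own illustrative choices.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    n_samples, n_markers = 69, 91
    # Simulated NPX matrix in which three latent "phenotypes" drive many biomarkers
    latent = rng.normal(size=(n_samples, 3))
    X = latent @ rng.normal(size=(3, n_markers)) + rng.normal(scale=0.8, size=(n_samples, n_markers))
    biomarker_names = [f"BM{i:02d}" for i in range(n_markers)]

    X_std = StandardScaler().fit_transform(X)          # centre and scale each biomarker
    pca = PCA().fit(X_std)

    # Retain PCs until a component explains less than 5% of the variance (as in the text)
    keep = 0
    while keep < len(pca.explained_variance_ratio_) and pca.explained_variance_ratio_[keep] >= 0.05:
        keep += 1
    pc_scores = pca.transform(X_std)[:, :keep]         # per-sample PC scores used in group comparisons

    # Five biomarkers contributing most to each retained PC (by absolute loading)
    for pc in range(keep):
        top5 = np.argsort(np.abs(pca.components_[pc]))[::-1][:5]
        print(f"PC{pc + 1}: {pca.explained_variance_ratio_[pc]:.1%} of variance, "
              f"top contributors: {[biomarker_names[i] for i in top5]}")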
Furthermore, analyses were performed for each biomarker. P values for these analyses were subsequently adjusted for multiple testing using the Benjamini-Hochberg procedure. 16 An adjusted p value <0.05 was considered statistically significant.
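The Benjamini-Hochberg adjustment used for the per-biomarker analyses can be written out in a few lines; the implementation below is generic and the p values are made up for illustration.

    import numpy as np

    def benjamini_hochberg(p_values):
        """Return Benjamini-Hochberg (step-up FDR) adjusted p values."""
        p = np.asarray(p_values, dtype=float)
        m = p.size
        order = np.argsort(p)
        scaled = p[order] * m / np.arange(1, m + 1)           # p_(i) * m / i
        adjusted = np.minimum.accumulate(scaled[::-1])[::-1]  # enforce monotonicity
        adjusted = np.minimum(adjusted, 1.0)
        out = np.empty_like(adjusted)
        out[order] = adjusted
        return out

    raw_p = [0.001, 0.012, 0.020, 0.030, 0.250, 0.470, 0.810]   # illustrative per-biomarker p values
    for raw, adj in zip(raw_p, benjamini_hochberg(raw_p)):
        print(f"raw p = {raw:.3f} -> adjusted p = {adj:.3f} -> significant: {adj < 0.05}")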
For RNA-Seq analysis of the RVOT, adapter and polyA sequences and low-quality nucleotides were removed using BBDuk. Trimmed reads were mapped against the human genome using STAR, and htseq-count was used to determine read counts. 17 Differential expression analysis was performed with the DESeq2 package. 18 PCA was performed with the DESeq2 package, using default parameters. For transcriptome analyses, p values were corrected for multiple testing using the Benjamini-Hochberg false discovery rate procedure (adjusted p<0.05). 16
Study population
We included 45 patients who underwent ToF repair and 16 who underwent PVR. All surgical procedures were successful. Patient characteristics are shown in table 1. Patients undergoing ToF repair and PVR were comparable with regard to extracardiac defects and prior palliative procedures. Indications for PVR were the following: severe pulmonary regurgitation (PR) (n=10), pulmonary stenosis (PS) (n=3) and combined PR/PS (n=3). Parameters of cardiovascular MRI studies were available for 10 patients with PVR and are shown in table 1.
Assessment at T1 took place 1 (1-6) day before surgery, T2 at 7 (5-13) days after surgery and T3 at 372 (310-444) days after surgery. No patients died during follow-up. In total, 25 patients suffered any complication during hospital stay, which are specified in table 2. One patient was lost to study follow-up at T2, and an additional five patients at T3.
Echocardiography
Echocardiography studies were obtained for 56 patients at T1, 56 at T2 and 52 at T3, and the echocardiography measurements are tabulated per study time point. RV basal and diameter Z scores were decreased following both ToF repair and PVR. RV end-diastolic dimension Z scores at T3 were higher for patients whose ToF repair included a transannular patch (TAP) than for those repaired without one (1.0±0.7 vs −0.4±0.2, p=0.006).
Serum biomarkers
Two samples from one patient with PVR did not pass the quality assessment and were excluded from the analysis.
Panel biomarker analysis was performed for 31 patients at T1, 13 at T2 and 25 at T3. Subjects with available blood samples did not significantly differ from those without. A table of patient characteristics denoting the differences between the aforementioned groups at T2 is included in the online supplemental file. Expression of one biomarker (SPON1) was below the limit of detection in all samples and was excluded from the analysis.
Biomarkers were expressed as three PCs, which together accounted for 59% of the total variance in the dataset. An overview of these PCs is presented in table 4 and figure 1. PC1 scores primarily differentiated between patients undergoing ToF repair and PVR (2.3±5.4 vs −4.3±5.7, p≤0.001, combined for all study time points). As these patient groups differ importantly in age, we investigated the relationship between age and PC1 scores within these patient groups. PC1 scores did not correlate with age for patients undergoing ToF repair (r=0.08, p=0.720) or PVR (r=−0.12, p=0.752).
PC2 scores primarily differentiated between patients before and after ToF repair (−2.7±2.1 for T1 of ToF repair vs 1.2±2.7 for other time points including PVR, p≤0.001). PC2 scores did not relate to hospital or ICU stay at any time point. Furthermore, PC2 scores at T1 correlated with preoperative haematocrit for patients undergoing ToF repair (r=0.56, p=0.006). No relation between O 2 saturation and PC2 scores could be established at this study time point.
PC3 scores were increased at the early postoperative time point (T2) for ToF repair (4.9±1.1 vs −0.8±1.8, p≤0.001). For patients with PVR, PC3 followed a similar pattern, but differences between time points were not statistically significant. PC3 scores at T2 did not relate to total intervention duration, perfusion time, aortic cross-clamp time, hospital stay or ICU stay.
Analyses for individual biomarkers are supplied in the online supplemental file. Biomarkers highly contributing to a certain PC generally behave in a similar fashion to the PC itself. PCs did not distinguish patients with regard to sex, biventricular GLS, staged versus primary ToF repair, RV dimensions or the occurrence of complications. NT-proBNP assessed by the clinical laboratories (after log transformation) correlated well with panel biomarker-derived measurements of NT-proBNP (r=0.84, p≤0.001). For the subset of patients with complete blood samples at each study time point (n=10), two biomarkers differed across study time points as per the repeated measures analysis of variance: suppression of tumorigenicity-2 (ST2) (5.4±1.2 to 6.8±0.9 to 5.4±0.5, adjusted p=0.004) and NT-proBNP (3.3±0.9 to 5.4±2.1 to 3.1±0.5, adjusted p=0.011).
Characterisation of right ventricular outflow tract transcriptomes
We performed whole-tissue RNA-Seq of RVOT samples of 20 patients who underwent ToF repair. Patient characteristics are provided in the online supplemental file. Unsupervised PC analysis shows clustering of the samples according to the sex of the patients, rather than any clinical characteristic (figure 2A). Patient 20 does not cluster with the other patient samples. This sample had higher expression of transcripts associated with fibroblasts, rather than cardiomyocytes. Significantly differentially expressed genes between female and male patients were located on the Y chromosome, such as TTTY14 and TMSB4Y, or are involved in the inactivation of the X chromosome, such as XIST and TSIX, consistent with the sex of the patients (figure 2B). Five genes, among which HECW2, PIGN and ADAMTS9, were differentially expressed between patients above and below the median haematocrit, a surrogate for cyanosis. No genes were differentially expressed between patients with and without a previous palliative procedure. No transcriptome phenotypes were related to the parameters of clinical outcome following ToF repair. Expression levels of NPPB, encoding for the biomarker NT-proBNP, in RVOT transcriptomes correlated with the measured serum concentrations at T1 (r=0.52, p=0.045).
DISCUSSION
In this multicentre prospective observational study, we confirmed temporarily impaired biventricular function for patients with ToF undergoing ToF repair. However, for patients undergoing PVR, only RV function was temporarily impaired. Ninety-one cardiovascular and immunological biomarkers were summarised as three PCs, which related to specific biological and clinical phenotypes. These phenotypes were related to patients undergoing ToF repair and PVR, patients before and after repair, and early postoperative status, respectively. No PC was related to either ventricular function or complications following procedures. RNA-Seq of the transcriptome of whole tissue samples of patients undergoing ToF repair differentiated patients only with regard to sex, rather than clinical features, suggesting that the RVOT, in contrast to other anatomic structures of the RV, may be relatively unaffected by clinical features such as right ventricular pressure overload.
Compared with published paediatric reference values, RV GLS of our study cohort at T1 (−19.3±4.9 for ToF repair and −20.8±2.4 for PVR) was lower than normal (although within the normal range). 19 This may relate to differences in loading conditions, rather than reflecting intrinsically decreased RV contractility. 20 Biventricular function was impaired at T2 for patients undergoing ToF repair and recovered at T3. These findings are in agreement with previous studies describing temporarily decreased ventricular performance following ToF repair. 21 22 Biventricular GLS following PVR did not differ statistically significantly across study time points. TDI TV S′, another parameter of RV function, did follow a pattern of temporarily impaired ventricular function. It should be noted RV GLS and TDI TV S′ followed the same fall-and-rise pattern, although differences between time points were not statistically significant. Temporarily decreased RV GLS may be confounded in this population by RV unloading, which should increase RV GLS. A previous retrospective study found temporarily impaired ventricular function following surgical, but not following transcatheter, PVR. 23 In our present study, perfusion duration in the PVR group was relatively short, aortic cross-clamping was less commonly used and, if employed, aortic cross-clamp duration was relatively short. This may also explain the limited effect on biventricular function observed in our study.
PC1, accounting for 43% of the variance in the dataset, primarily differentiated between patients undergoing ToF repair and PVR. PC1 was mostly influenced by serum levels of ephrin type-B receptor 4 (EPHB4), tartrate-resistant acid phosphatase type 5 (TR-AP), tumour necrosis factor receptors 1 and 2 (TNF-R1/TNF-R2) and urokinase receptor (U-PAR). Age differs importantly between patients undergoing ToF repair (4.3 (3.4-6.5) months) and PVR (10.4 (7.8-12.7) years). Although age did not relate to PC1 scores within these patient groups, we cannot ignore this potentially important confounder. We did not find any relation with other clinical parameters. What differences in patient characteristics cause the difference in biomarker expression is currently unclear. TNF-R1 and TNF-R2 are associated with, among others, pulmonary and aortic valve development. 24 Other biomarkers related to PC1 have been related to outcomes: TR-AP related to outcomes following acute coronary syndrome and cardiac hospitalisation among patients with chronic kidney disease. [25][26][27] EPHB4 marginally related to outcomes following acute coronary syndrome. 26 U-PAR has been related to outcomes in patients with congestive heart failure and coronary disease. 28 29 However, the role of these biomarkers in the perioperative setting is largely unknown.
PC2 differentiated between patients before and after ToF repair. Furthermore-across patients before ToF repair-PC2 correlated with haematocrit levels, which may reflect preoperative cyanosis. PC2 was mostly influenced by serum levels of metalloproteinase inhibitor 4 (TIMP4), interleukin-1 receptor type 2 (IL-1RT2), collagen alpha-1(I) chain (COL1A1), platelet glycoprotein VI (GP6) and P-selectin (SELP). GP6 and SELP regulate the activity of the integrin αIIb-β3 complex. 30 This complex is found in platelets and regulates platelet aggregation. 30 Previous research found that preoperative cyanosis may affect platelet function and increase platelet aggregation. 31 Furthermore, other biomarkers of collagen and other matrix metalloproteinase subtypes have been related to outcomes in congenital heart disease. [32][33][34] However, their role in the perioperative setting has scarcely been studied. Whether PC2 also relates to features of unrepaired ToF other than cyanosis-such as right ventricular pressure load, right ventricular hypertrophy or diminished flow in the pulmonary circulation-could not be established. In mice models, collagen and matrix metalloproteinases have been related to right ventricular hypertrophy resulting from pulmonary hypertension. 35 Biomarkers in PC2 have previously been related to outcomes in acute coronary syndrome (TIMP4, IL-1RT2, COL1A1, SELP), 26 36 congestive heart failure (TIMP) 37 and atherosclerosis development (TIMP4). 38 PC3 is clinically related to early postoperative status (T2). PC3 was influenced by, among others, serum levels of NT-proBNP, ST2, and myoglobin (MB). These biomarkers are expressed in striated (ie, skeletal and cardiac) muscle tissue. NT-proBNP and ST2 are released by the myocardium in response to increased wall stress. 39 40 MB can be released in response to injury to cardiac or skeletal muscle myocytes. 41 Temporarily impaired ventricular function at this time point may lead to the expression of NT-proBNP and ST2. 39 40 Serum MB levels may be reduced at T2 due to muscle wasting related to hospitalisation or due to periprocedural early losses. 41 Patients with PVR had lower PC3 scores at T2 compared with those with ToF repair. This may relate to the limited perfusion and aortic cross-clamp time in this group, as well as the limited impairment of ventricular function at this time point. NT-proBNP and ST2 have received much attention as biomarkers for long-term outcome in cardiovascular and congenital heart disease. 14 40 Perioperative levels of NT-proBNP may predict outcome during 6 months of follow-up. 42 ST2 had not been studied in the perioperative setting for congenital heart disease.
With regard to the RVOT transcriptome at the time of operation, we found most differences related to patients' sex, rather than clinical characteristics, indicating that gene expression in the RVOT is not influenced by clinical characteristics in the relatively homogeneous study population. It should be noted that the RVOT may not be representative of the global RV myocardium, and other anatomic structures may be less preserved. A previous study by Zhao et al, in eight patients with ToF aged 6 (3-10) months, reported upregulation of various HIF1A-regulated hypoxia response genes in RVOT samples obtained from patients with cyanosis when compared with patients without cyanosis. 43 In our present study, these genes were not differentially expressed between patients with and without cyanosis. It should be noted that our present study included both patients with staged and primary ToF repair, whereas Zhao et al only included patients with primary repair. Although the age of repair was similar between the two studies, patient characteristics related to disease severity such as Hb, Ht, RVOT dimensions or transpulmonary valve flow velocities were not reported in the study by Zhao et al. Disease severity may account for the different findings across these studies.
Limitations
Despite our study's strengths, some limitations should be considered. Patients undergoing PVR were significantly older and perioperative conditioning often varied compared with patients undergoing repair. This confounds comparisons between these groups. Strategies to account for confounders such as propensity score matching or weighting are not feasible considering the large inherent age difference between these groups. To minimise age differences, we excluded adult patients undergoing PVR. It should be noted that patients with ToF generally undergo PVR beyond childhood. 1 We used haematocrit as a surrogate marker for preoperative cyanosis. Haematocrit values, in contrast to O2 saturation values, may better reflect the long-term burden of cyanosis, rather than a single saturation measurement, which may be subject to large fluctuations. Furthermore, many patients had O2 saturations of ≥99%, which complicates statistical analysis. It should be noted preoperative PC2 scores correlated with haematocrit, but no relation to O2 saturation at this time point could be established. Factors other than cyanosis such as hydration status and other blood count values also may have influenced haematocrit values.
Our biomarker panel analysis results in relative expressions, which cannot be compared across study populations or with published references. Most of these biomarkers are relatively novel and do not have age-related references. Some of the variance between time points may be explained by normal somatic development (eg, normal NT-proBNP concentration rapidly declines during the first years of life). 44 The analysis of myocardial transcriptome was limited to the RVOT, as this tissue is readily obtained during ToF repair. The transcriptome may differ between the RVOT and other segments of the RV.
Despite these limitations, we provide a comprehensive analysis, including RVOT transcriptome, functional echocardiographic parameters and serum biomarkers, of patients undergoing ToF repair and PVR.
CONCLUSIONS
We provide extensive observations on the functional and immunological response to periprocedural injury for patients with ToF. We identified biomarker phenotypes that clinically relate to (1) ToF repair and PVR, (2) uncorrected versus corrected ToF and (3) early postoperative status. The identified biomarker response at the early postoperative status may relate to recovery from perioperative injury. However, we did not identify any specific functional or immunological response that related to (un)favourable recovery from surgery.
These findings add to our current understanding of periprocedural injury and subsequent recovery. Improved characterisation of perioperative injury and subsequent recovery may provide biomarkers to identify patients at risk for adverse events. Furthermore, elucidating the mechanisms of perioperative injury may identify targets for therapy, such as inhibitors of disadvantageous immune responses following CPB or perioperative myocardial protective strategies. | 2023-04-08T06:17:43.382Z | 2023-04-01T00:00:00.000 | {
"year": 2023,
"sha1": "5620d2c8e2380550267e4a258b3f0df8cc587e38",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "BMJ",
"pdf_hash": "ef65028dd89414e15560a5a9726caacc9574481e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
224960992 | pes2o/s2orc | v3-fos-license | The role of sports and well-being programmes in choosing workplaces in the future
Due to recent changes in the labour market, recruitment and retaining employees have become more important than ever. Research dealing with the appearance of new generations in the labour market has found that they are less loyal to their employers, have high demands, and the key factors that they consider when choosing a job are salary, career opportunities, working environment, and work-life balance. As numerous studies in recent years have proved the importance of a healthy lifestyle in the context of labour, the question has arisen whether opportunities for sport participation and services supporting the well-being of employees have an influence on young people when they are seeking employment. We carried out an online survey to find out what students of the University of Debrecen think about the issue. The results were in line with the findings of previous studies, that is, young people look for high salaries, good working conditions, work-life balance and career opportunities when choosing a job. However, respondents did not identify sports opportunities and well-being benefits as major factors. Yet, we found significant differences between groups in the preference for particular factors, depending on sex, marital status, whether someone does physical exercise regularly, and whether someone works while attending a university course or not.
INTRODUCTION
The financial crisis in 2008 drove unemployment rates up [1,2], but in recent years the number of unemployed people has decreased. According to KSH (Hungarian Central Statistical Office) data, the unemployment rate has been under 4% since the first quarter of 2017 in Hungary, which practically means full employment. The Hungarian labour market is now characterised more by a shortage of workforce [3], which is a serious problem that negatively impacts our economic development. As a result of the above processes and due to the high level of employee turnover, efficient recruitment and retaining employees have become very important for employers [4]. The new (Y and Z) generations' entry into the labour market has made the situation worse. They have high demands in terms of working conditions and salary, but they lack loyalty [5,6]. For them, it is natural to change jobs every 2 or 3 years if they can get a better offer [7]. Further, they are driven by success, ambition and self-fulfilment, and are often highly creative and innovative [7]. According to Tari [8], family is much more important than work for new generations, which makes work-life balance a critical factor.
A similar correlation was found by R. Fedor and co-authors [9,10], who examined the employment and family plans of women with young children and of young people. This area has been researched since the '80s [11], and its significance has been growing ever since. Due to their high demands, these young people are hard to retain. This tendency, combined with the lifestyle and attitude of the new generation, raises serious concerns. This is what justifies research into how job-seekers choose a workplace and how employers can retain employees.
According to studies, members of the Y generation choose jobs mainly on the basis of salary and career opportunities [12], while for the Z generation work-life balance is of utmost importance [13,14]. However, it might be useful to consider new factors in studying the area. For instance, innovative and employee-friendly companies have introduced measures and programmes to improve the satisfaction and health of employees. These so-called wellbeing programmes reduce work-related stress, the number of absence days and turnover, and increase employee satisfaction [15]. Deutch and Gergely [16] found that one of the most frequently used methods for coping with stress is sport. Pfau's research findings [1] show that sport is very important for university students. Therefore, it is a valid hypothesis that well-being programmes and sport opportunities may influence job-seekers in choosing an employer.
Besides the above-mentioned reasons, employers should also consider using well-being programmes for financial reasons. According to studies [17,18], diseases and health conditions related to a sedentary lifestyle cause financial losses both for the national economy and for individual companies. According to Ács [19], physical inactivity may cause a loss of HUF 60 billion for the national budget. This amount could be reduced by HUF 5.6 billion by increasing the number of people doing some sport by 10%. Well-being programmes can be seen as investments. According to Baicker et al. [20], US companies save USD 3.27 in health expenses and USD 2.73 in absence-related expenses for every dollar invested in health improvement. Based on the research findings of Bácsné et al. [21], we can draw the conclusion that Hungarian employers have already realised that it is profitable to invest in their employees' health improvement. Besides satisfying the needs of sport-lovers, it is no less important to motivate physically inactive people to engage in sports. So what role can businesses play in dealing with this issue? In the Eurobarometer survey [22], most people who did not do physical exercise regularly identified lack of time, money, energy or company as reasons. Employers can contribute to the solution of all these problems.
Based on what was said above, it is a valid question whether job-seekers in the future will consider sport and well-being opportunities and benefits when choosing an employer. To justify our research question, we can also refer to previous international [23-25] and Hungarian [26,27] studies, which all emphasize the importance of health improvement in this context.
In the course of our research, we examined young people's physical activity patterns and the factors that influence how they choose between employers. Our hypothesis was that sport opportunities and well-being programmes offered by employers have a significant influence on young people's decisions when they have to choose between employers, and that they may even be among the most important factors.
MATERIAL AND METHODS
We used a survey for our primary examination. We asked students of the University of Debrecen to complete a questionnaire, which means we did not work with a representative sample. The first part included questions about demographic parameters and sport habits. In the second part we asked respondents about how individual factors would influence their decisions when searching for a job. We listed 25 factors and respondents were asked to evaluate their importance on a 1-5 Likert scale. Some of these factors had been identified by previous studies, others had been defined by us. Finally, we asked students whether sport opportunities offered by employers could influence their decisions in job seeking.
Out of the 416 respondents, 45.2% were men and 54.8% were women. Due to their student status, 50% were below 22 years of age, and over 90% were below 28 at the time of the survey. 39.9% were single, and 60.1% were involved in a relationship or married. Many of the respondents lived in county centres, about 30% in towns, 17.5% in villages, and only a few lived in Budapest. About 75% indicated that their financial situation was average or better, and only 26% rated their situation below average. We also asked them what programme they were attending at the university: 53.1% were participating in a BA or BSc programme, 36.3% in an MA or MSc programme, while 7.2% were attending advanced vocational programmes and 3.4% were PhD students. 50.5% were working alongside their studies. As for their future, about the same percentage indicated that they wanted to work in the private, public and non-profit sectors. Two thirds of the respondents did some sport activity regularly (at least once or twice a week) (Table 1).
The data collected during the study were processed using IBM SPSS Statistics 23. Basic descriptive statistical indicators were examined and hypothesis testing was carried out, for which, considering the non-normal distribution of the sample, the Mann-Whitney U test was used. This is the non-parametric version of the two-sample t-test, which allowed us to compare the elements of two groups [28,29]. We ran the hypothesis test with different variables. For example, differences between males and females, and between singles and those involved in a relationship, were examined. We also examined whether there was a significant difference between respondents who were working and those who were not, and between young people who were living an active lifestyle and those who were not.
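For readers who want to reproduce this kind of comparison outside SPSS, a minimal sketch in Python is given below. It assumes the Likert responses are stored in a pandas DataFrame `df` with one numeric column per factor (e.g. "salary") and a grouping column (e.g. "sex"); all column names are illustrative, not taken from the actual questionnaire.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

def compare_groups(df: pd.DataFrame, factor: str, group_col: str, a: str, b: str):
    """Two-sided Mann-Whitney U test between groups a and b on one Likert-scale factor."""
    x = df.loc[df[group_col] == a, factor].dropna()
    y = df.loc[df[group_col] == b, factor].dropna()
    stat, p = mannwhitneyu(x, y, alternative="two-sided")
    return stat, p

# Example: is the importance of "salary" rated differently by men and women?
# stat, p = compare_groups(df, "salary", "sex", "male", "female")
# print("significant at the 5% level" if p < 0.05 else "no significant difference")
```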
RESULTS
As we mentioned in the previous section, more than two thirds of the respondents did physical exercise regularly: 13% every day, 24% three or four times a week, and most of them (30.8%) once or twice a week (which can be linked to the mandatory PE classes introduced at UD). 17.7% did some sport once a month or more rarely, and 13.5% did not do physical exercise at all (Fig. 1).
We also asked physically inactive students about their reasons. Almost 50% said they did not have enough time, 16.4% did not have enough energy, and 14.3% did not have company (Fig. 2). Based on the responses, students who refuse to do physical exercise (or to compete) and those with a health problem account for only about 10% of the sample, which means that most of the respondents could be convinced to engage in sports. Employers could address the remaining problems by offering sport opportunities either during working hours or outside them.
The most important factors that influence job-seekers' decisions when choosing a company were good atmosphere (4.61), good conditions and environment (4.58) and salary (4.54). These were followed by career opportunities (4.49) and work-life balance. Fodor [30] and R. Fedor [31] had published similar results earlier. The reputation of the company, corporate culture, benefits and performance-based bonuses were less important for respondents. Interestingly, our findings are in conflict with some previous statements regarding younger generations [8], namely that they prefer teamwork. Our respondents ranked this factor second to last, which means they valued the opportunity to work independently more. Out of the 25 factors in the list, respondents valued international career opportunities the least,
and sport opportunities (3.44) and recreational or well-being programmes (3.46) were not significant influencing factors either (Fig. 3). Our hypothesis test revealed significant differences within demographic groups for several variables. For example, between men and women in terms of their preference for the following factors: good working environment, environmentally conscious corporate behaviour, the personalities of colleagues, the opportunity for teamwork, honest and open communication on the part of the management, flexible working hours, and corporate social responsibility. All of these factors were more important for women (Table 2).
Also, in most cases, there were significant differences between students who were working and those who were not, with the exception of company reputation, career opportunities, good working environment, training programmes, and the opportunity for independent work. All other factors tended to be preferred by working students ( Table 2).
We also found significant differences between the preferences of physically active and inactive students. Teamwork, international career opportunities, flexible working hours, sabbatical, sport opportunities, well-being programmes, and company image were more important for the physically active students, while inactive students found the personalities of colleagues more important (Table 2).
Finally, the Mann-Whitney U test also revealed significant differences between the preferences of single students and those involved in a relationship or already married in terms of the following factors: other benefits, good working environment, cutting-edge technology, work-life balance, honest and open communication on the part of the management, performance-based bonuses, and well-being programmes (Table 2).
Sport opportunities offered by employers do not substantially influence the decisions of respondents regarding their workplace preference. Only two measures produced averages close to 4 on the scale: sport card (3.89) and health screening programmes (3.79) (Fig. 4). However, we have to emphasize that the hypothesis testing revealed significant differences between physically active and inactive students in all areas. This means that if the number of physically active job-seekers grows in the future, the demand for sport opportunities offered and financed by employers may grow as well, and employers should prepare for this development.
CONCLUSIONS
The new generations' entry into the labour market has changed the landscape of how job-seekers choose between companies, and has therefore posed a new challenge for employers. In line with previous studies, we found that the key factors in choosing a workplace are atmosphere, workplace environment and salary, followed by work-life balance and career opportunities. However, our findings are in conflict with the results of previous studies, which came to the conclusion that younger generations prefer teamwork to independent work; we reached the same conclusion only in the case of young people who do regular physical exercise. In recent years many researchers have studied the health-improvement benefits offered by companies and their importance. Yet, our findings do not support the idea that sport opportunities and well-being programmes have a significant effect on choosing an employer. However, we found that sport opportunities and well-being programmes may be important factors for those who do regular physical exercise. This means that employers should pay attention to health programmes, because if the ratio of sporty young people grows, the related programmes and benefits offered by companies may become more attractive for job-seekers. For this reason, we suggest that employers carry out a survey to find out what needs their employees have in relation to sport and recreation, since many previous studies have shown that health-improvement programmes may enhance employee satisfaction, which may in turn lead to lower levels of turnover. | 2020-10-19T18:08:20.823Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "1b15db7badcf4548035383376fa89500dbb173eb",
"oa_license": "CCBYNC",
"oa_url": "https://akjournals.com/downloadpdf/journals/1848/11/3/article-p280.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "81c1e873def7064a46462d85f2d031014fe26422",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
} |
230435938 | pes2o/s2orc | v3-fos-license | Latitudinal variation of methane mole fraction above clouds in Neptune's atmosphere from VLT/MUSE-NFM: Limb-darkening reanalysis
We present a reanalysis of visible/near-infrared (480-930 nm) observations of Neptune, made in 2018 with the MUSE instrument at the Very Large Telescope (VLT) in Narrow Field Adaptive Optics mode, reported by Irwin et al., Icarus, 311, 2019. We find that the inferred variation of methane abundance with latitude in our previous analysis, which was based on central meridian observations only, underestimated the retrieval errors when compared with a more complete assessment of Neptune's limb darkening. In addition, our previous analysis introduced spurious latitudinal variability of both the abundance and its uncertainty, which we reassess here. Our reanalysis of these data incorporates the effects of limb-darkening based upon the Minnaert approximation, which provides a much stronger constraint on the cloud structure and methane mole fraction, makes better use of the available data and is more computationally efficient. We find that away from discrete cloud features, the observed reflectivity spectrum from 800-900 nm is very well approximated by a background cloud model that is latitudinally varying, but zonally symmetric, consisting of a H$_2$S cloud layer, based at 3.6-4.7 bar with variable opacity and scale height, and a stratospheric haze. The background cloud model matches the observed limb darkening seen at all wavelengths and latitudes and we find that the mole fraction of methane at 2-4 bar, above the H$_2$S cloud, but below the methane condensation level, varies from 4-6\% at the equator to 2-4\% near the south pole, consistent with previous analyses, with an equator/pole ratio of $1.9 \pm 0.2$ for our assumed cloud/methane model. The spectra of discrete cloudy regions are fitted, to a very good approximation, by the addition of a single vertically thin methane ice cloud with opacity ranging from 0 to 0.75 and pressure less than $\sim 0.4$ bar.
Introduction
The visible and near-infrared spectrum of Neptune is formed by the reflection of sunlight from the atmosphere, modulated primarily by the absorption of gaseous methane, but also to a lesser extent by H$_2$S (Irwin et al., 2018). Measured spectra can thus be inverted to determine the cloud structure as a function of location and altitude, provided we know the vertical and latitudinal distribution of methane. Although for some years the vertical profiles of methane determined from Voyager 2 radio-occultation observations were used at all latitudes, HST/STIS observations of Uranus recorded in 2002 (Karkoschka & Tomasko, 2009) and similar observations of Neptune recorded in 2003 (Karkoschka & Tomasko, 2011) both showed that the tropospheric cloud-top (i.e., above the H$_2$S cloud) methane mole fraction varies significantly with latitude on both planets, later confirmed for Uranus by several follow-up studies (Sromovsky et al., 2011, 2014, 2019). These HST/STIS observations used the collision-induced absorption (CIA) bands of H$_2$-H$_2$ and H$_2$-He near 825 nm, which allow variations of CH$_4$ mole fraction to be differentiated from cloud-top pressure variations of the H$_2$S cloud. Karkoschka and Tomasko (2009, 2011) found that the methane mole fraction above the main observable H$_2$S cloud tops at 2-4 bar varies from ∼4% at equatorial latitudes to ∼2% polewards of ∼40° N,S for both planets.
More recently, an analysis of VLT/MUSE Narrow-Field Mode (NFM) observations (770 -930 nm) along Neptune's central meridian (Irwin et al., 2019) found a similar latitudinal variation of cloud-top methane mole fraction, with values of 4-5% reported at equatorial latitudes, reducing to 3-4% at polar latitudes, but with considerable pixel-to-pixel variation that was not understood at the time. In this study we reanalyse these data using a new limb-darkening approximation model, which makes much better use of all the data from a given latitude, observed at many different zenith angles, and which we find considerably improves our methane mole fraction determinations. We also find that having constrained the smooth latitudinal variation of opacity of the tropospheric cloud and stratospheric haze, we are able to efficiently retrieve the additional opacity of discrete upper tropospheric (0.1 -0.5 bar) methane clouds seen in our observations.
MUSE Observations
As reported by Irwin et al. (2019), commissioning-mode observations of Neptune were made on 19th June 2018 with the Multi Unit Spectroscopic Explorer (MUSE) instrument (Bacon et al., 2010) at ESO's Very Large Telescope (VLT) in Chile, in Narrow-Field Mode (NFM). MUSE is an integral-field spectrograph, which records 300 × 300 pixel 'cubes', where each 'spaxel' contains a complete visible/near-infrared spectrum (480-930 nm) with a spectral resolving power of 2000-4000. MUSE's Narrow-Field Mode has a field of view of 7.5" × 7.5", giving a spaxel size of 0.025", and uses Adaptive Optics to achieve a spatial resolution less than 0.1". These commissioning observations are summarised in Table 1 of Irwin et al. (2019). The spatial resolution was estimated to have a full-width-half-maximum of 0.06" at 800 nm. The observed spectra were smoothed to the resolution of the IRTF/SpeX instrument, which has a triangular instrument function with FWHM = 2 nm, sampled at 1 nm, in order to increase the signal-to-noise ratio without losing the essential shape of the observed spectra. This resolution was also more consistent with the spectral resolution of the methane gaseous absorption data used, which are described in section 3.2.
In our previous analysis of these data (Irwin et al., 2019), spectra recorded from single pixels along the central meridian of one of the longer integration time observations (120s) were fitted with our NEMESIS retrieval model (Irwin et al., 2008) to determine latitudinal variations of methane and cloud structure. The cloud-top (i.e., immediately above the H 2 S cloud) methane mole fractions were found to be consistent with HST/STIS observations (Karkoschka & Tomasko, 2011), but were not very well constrained with significant latitudinal variation that we attributed at the time to the random noise from single-pixel retrievals. However, they also did not make full use of the limb-darkening behaviour visible in these IFU observations, although we verified that our cloud parameterization reproduced the observed limb-darkening well at 5 -10 • S.
Since making our initial report on the MUSE-NFM Neptune observations, an analysis of HST/WFC3 observations of Jupiter has been conducted by Pérez-Hoyos et al. (2020), which makes much better use of the limb-darkening information content of multi-spectral observations using a Minnaert limb-darkening approximation scheme. We have adapted this technique for use with our Neptune MUSE-NFM observations and find that it greatly improves the quality of our fits and our estimates of the latitudinal variation of cloud-top methane mole fraction at 2-4 bar in Neptune's atmosphere. This reanalysis has also highlighted an erroneous retrieval artefact in our previous work (Irwin et al., 2019) at some locations, which can now be explained.
Minnaert Limb-darkening analysis
The dependence of the observed reflectivity from a location on a planet on the incidence and emission angles can be well approximated using an empirical law first introduced by Minnaert (1941). For an observation at a particular wavelength, the observed reflectivity I/F (where I/F is πR/F, R being the reflected radiance in W cm⁻² sr⁻¹ µm⁻¹ and F the incident solar irradiance at the planet in W cm⁻² µm⁻¹) can be approximated as
$$ I/F = (I/F)_0 \, \mu_0^{k} \, \mu^{k-1}, \qquad (1) $$
where $(I/F)_0$ is the nadir-viewing reflectivity, $k$ is the limb-darkening parameter, and $\mu$ and $\mu_0$ are, respectively, the cosines of the emission and solar incidence angles. With this model a value of $k > 0.5$ indicates limb-darkening, while $k < 0.5$ indicates limb-brightening.
Taking logarithms, Eq. 1 can be re-expressed as
$$ \ln(\mu \, I/F) = \ln(I/F)_0 + k \ln(\mu \mu_0), \qquad (2) $$
and we can see that it is possible to fit the Minnaert parameters $(I/F)_0$ and $k$ if we perform a least-squares fit on a set of measurements of $\ln(\mu I/F)$ as a function of $\ln(\mu\mu_0)$.
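As an illustration only (not the authors' actual code), this straight-line fit can be written in a few lines of Python; `i_over_f`, `mu` and `mu0` are assumed to be arrays of reflectivities and angle cosines for the unmasked pixels of one latitude band at one wavelength, and the 0.09 cut on µµ0 follows the criterion used later in the text.

```python
import numpy as np

def fit_minnaert(i_over_f, mu, mu0, min_mu_mu0=0.09):
    """Fit ln(mu*I/F) = ln(I/F)_0 + k*ln(mu*mu0); return ((I/F)_0, k)."""
    i_over_f, mu, mu0 = map(np.asarray, (i_over_f, mu, mu0))
    keep = (mu * mu0 >= min_mu_mu0) & (i_over_f > 0)   # drop points too close to the disc edge
    x = np.log(mu[keep] * mu0[keep])
    y = np.log(mu[keep] * i_over_f[keep])
    k, ln_if0 = np.polyfit(x, y, 1)                    # slope = k, intercept = ln (I/F)_0
    return np.exp(ln_if0), k
```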
We analysed the same 'cube' of Neptune as was studied by Irwin et al. (2019), namely Observation '3', recorded by VLT/MUSE at 09:43:21(UT) on 19th June 2018. We analysed the spectra in this cube in the wavelength range 800 -900 nm and Fig. 1 shows the observed appearance of the planet at 830 and 840 nm, which are wavelengths of weak and strong methane absorption, respectively. The limb-darkening behaviour of the observed spectra were analysed in latitude bands of width 10 • , spaced every 5 • to achieve Nyquist sampling. For each latitude band, the observed reflectivities were used to construct plots of ln(µI/F ) against ln(µµ 0 ) and straight lines fitted to deduce (I/F ) 0 and k for each wavelength. Locations on the disc where there were bright clouds were masked out ( Fig. 2) and examples of the fits at 830 and 840 nm for latitude bands centred on the equator and 60 • S are shown in Fig. 3. Here it can be seen that the Minnaert empirical law provides a very accurate approximation of the observed dependence of reflectivity with viewing zenith angles. Although all the measurements are plotted, only those measurements with µµ 0 ≥ 0.09 (i.e., µ, µ 0 >∼ 0.3) were used to fit (I/F ) 0 and k to make sure that the fitting procedure was not overly affected by points measured near the disc edge and thus potentially more 'diluted' with space. Also plotted in Fig. 3 are the reflectivities calculated with our radiative transfer and retrieval model from our best-fit retrieved cloud and methane mole fractions at these latitudes, reported in section 3.3. It can be seen that there is very good agreement between the reflectivities calculated with our multiple-scattering matrix operator model and the Minnaert limb-darkening approximation to the observations for zenith angles less than ∼ 70 • .
Extending this analysis to all wavelengths under consideration, Fig. 4 shows a contour plot of the fitted values of (I/F)_0 and k for all wavelengths and latitude bands. It can be seen that at wavelengths near 830 nm the fitted k values are greater than 0.5, indicating limb darkening, while at longer wavelengths values of k less than 0.5 are fitted, indicating limb brightening. It can also just be seen in Fig. 4 that the reflectance peak of (I/F)_0 is noticeably wider at latitudes southwards of 20-40° S, a trend that is also just discernible in the fitted k values near the reflectance peak.
Having fitted values of (I/F)_0 and k for all wavelengths and latitude bands, it is then possible to reconstruct the apparent image of the planet at any observation geometry. Using the measured observation values of µ and µ0 we reconstructed the images of Neptune at 830 and 840 nm, where for each location with latitude φ and cosines of the zenith angles µ0 and µ the reflectivity is calculated as
$$ I/F = R(\phi) \, \mu_0^{k(\phi)} \, \mu^{k(\phi)-1}, $$
where R(φ) and k(φ) are the interpolated values of the Minnaert parameters (I/F)_0 and k at that latitude. We compare these reconstructed images with the observed images in Fig. 1, which also shows the differences between the observed and reconstructed images. As can be seen, the observed general dependence of reflectivity with latitude and position on disc is well reproduced compared with the original MUSE observations and the differences are very small, except at: 1) locations of known artefacts in the reduced data; 2) locations of the small discrete clouds, which were masked out when fitting the zonally-averaged Minnaert limb-darkening curves; and 3) off-disc, where the observed images are not corrected for the instrument point-spread function (PSF). We will return to these discrete clouds in section 3.5.
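A minimal sketch of that reconstruction step is given below, assuming the fitted (I/F)_0 and k values are stored per latitude band in increasing-latitude order; all array names are illustrative.

```python
import numpy as np

def reconstruct_reflectivity(lat, mu, mu0, band_lats, band_if0, band_k):
    """Per-pixel I/F from Minnaert parameters interpolated to the pixel latitude."""
    if0 = np.interp(lat, band_lats, band_if0)   # (I/F)_0 at this latitude
    k = np.interp(lat, band_lats, band_k)       # limb-darkening parameter at this latitude
    return if0 * mu0**k * mu**(k - 1.0)
```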
Retrieval model
Having applied the Minnaert model to the observations, we then used the fitted (I/F)_0 and k parameters to reconstruct synthetic spectra of Neptune for all visible latitude bands and fitted these as synthetic 'observations' using our radiative transfer and retrieval model, NEMESIS (Irwin et al., 2008). There are two main advantages in doing this: 1) the spectra reconstructed using the fitted Minnaert parameters have smaller random error values, as they have been reconstructed from values fitted to a combination of all the points in a latitude band; and 2) we can reconstruct the apparent spectrum of Neptune at any set of angles that is convenient for modelling, which can greatly reduce computation time. In our previous approach (Irwin et al., 2019), where we did not assume to know the zenith-angle dependence, we tried to fit simultaneously to the observations at several different zenith angles near the equator. For modelling near-infrared reflectivity observations, NEMESIS employs a plane-parallel matrix operator multiple-scattering model (Plass et al., 1973). In this model, integration over zenith angle is done with a Gauss-Lobatto quadrature scheme, while the azimuth integration is done with Fourier decomposition. For most calculations, not too near to the disc edge, we have found that five zenith angles are usually sufficient, and the reflectivity at a particular zenith angle is linearly interpolated between calculations done at the two closest zenith angles. Although this provides a general purpose functionality, this approach has some drawbacks: 1) it requires two sets of calculations at two different solar zenith angles for each location; 2) the linear interpolation can lead to interpolation errors at larger zenith angles; and 3) for points near the disc edge, the number of Fourier components needed to fully resolve the azimuth dependence increases, which can greatly increase computation time. By reconstructing spectra using the fitted Minnaert (I/F)_0 and k parameters we can simulate spectra measured as if they exactly coincided with the angles in our quadrature scheme, thus avoiding interpolation error. In addition, if we assume the Minnaert approximation to be true, which has a linear dependence in logarithmic space, we only need to fit spectra calculated at two different angles, not several, and can test how well the linear approximation applies in post-processing. Hence, in the retrievals presented here we reconstructed two spectra for each latitude band, with the viewing zenith angle θ_V, solar zenith angle θ_S and azimuth angle φ chosen to give zenith angles of 0° and 61.45°, where 61.45° is one of the zenith angles of the quadrature scheme listed in Table 1 and θ = 0° is the fifth.
This second zenith angle is sufficiently high to probe limb-darkening or limb-brightening, but is not so high that we need an excessive number of Fourier components in the azimuth-angle decomposition to properly model it, which would make the computation excessively slow. For each latitude band the two synthetic observations at 0° and 61.45° zenith angle were then fitted simultaneously to determine the vertical cloud structure and tropospheric methane mole fraction.
Errors in the fitted (I/F)_0 and k parameters were propagated into the errors on these reconstructed spectra at 0° and 61.45° zenith angle as normal, and were seen to increase towards the poles, where the curves were less well sampled. However, even then, because the synthetic spectra are derived from linear fits to a large number of data points, the random error is very small and we found that we were unable to fit the synthetic observations to within these errors. We attribute this to 'forward-modelling' systematic errors due to deficiencies in, for example, the absorption data of Karkoschka and Tomasko (2011) and also in our chosen cloud and methane parameterization schemes, described below. In order to achieve final χ²/n fits of ∼1 at all latitudes in our retrievals (necessary to derive representative error values on the retrieved parameters) we found it necessary to multiply these errors by a factor of ∼15. Although this may appear to be an alarmingly high factor, we will see later in Fig. 11 that this leads to error bars on the synthetic I/F reflectivity spectra of only 0.5-1.0%, which is perfectly reasonable given the likely accuracy of the absorption coefficients used and also the simplicity of our retrieval scheme. We also decided that this approach was more appropriate than our usual procedure of simply adding a forward modelling error (which here would have been 0.5-1.0% at all wavelengths and locations), since this would miss the fact that the Minnaert-fitting errors are dependent on both wavelength and latitude.
As with our previous analysis (Irwin et al., 2019), we modelled the atmosphere of Neptune using 39 layers spaced equally in log pressure between ∼10 and 0.001 bar. We ran NEMESIS in correlated-k mode and for methane absorption used a methane k-table generated from the band model of Karkoschka and Tomasko (2010). The collision-induced absorption of H$_2$-H$_2$ and H$_2$-He near 825 nm was modelled with published coefficients (e.g., Borysow et al., 2000), assuming a thermally-equilibrated ortho:para hydrogen ratio. Rayleigh scattering was included as described in Irwin et al. (2019) and the effects of polarization and Raman scattering were again justifiably neglected at these wavelengths. We used the solar spectrum of Chance and Kurucz (2010), smoothed with a triangular line shape of FWHM = 2 nm, and took Neptune's distance from the Sun on the date of observation to be 29.94 AU. The reference temperature and mole fraction profile is the same as that used by Irwin et al. (2019) and is based on the 'N' profile determined by Voyager-2 radio-occultation measurements (Lindal, 1992), with He:H$_2$ = 0.177 (15:85), including a 0.3% mole fraction of N$_2$.
For the methane profile, we adopted a simple model with a variable deep mole fraction, limited to 100% relative humidity above the condensation level and further limited to a maximum stratospheric mole fraction of 1.5 × 10⁻³ (Lellouch et al., 2010), as shown in Fig. 5. An alternative, smoother "descended"-type profile (of the kind favoured by Karkoschka and Tomasko, 2011) was also considered; this profile is compared with our "step" model in Fig. 5. Although this profile is smoother and may be more physically plausible than the "step" model, we do not have the vertical resolution in the MUSE data to be able to discriminate between the two, and we also cannot see clearly through the H$_2$S cloud to deeper pressures. This can be seen in Fig. 5, where we have also plotted the two-way vertical transmission to space through the cloud only, which shows the fitted cloud to be nearly opaque. In addition, we have also plotted in Fig. 5 the functional derivatives with respect to methane abundance, i.e., the rate of change of the calculated radiance spectrum with respect to the methane abundance at each level if we were to assume a continuous profile. Here we can see that we are only significantly sensitive to the methane abundance in the 1-4 bar region. In fact, we see that with the MUSE data we are only really sensitive to the column abundance of methane above the H$_2$S cloud, and this column abundance will depend on the vertical distribution of both the methane mole fraction and the cloud; since we do not have precise constraints on either we thus have a degeneracy. Hence, in this study we decided to use the simpler "step" model for methane, which has the added advantage of returning a mean value for the methane mole fraction in the 2-4 bar region, which is easy to understand, interpret and compare with previous studies. It is worth noting that Tollefson et al. (2019), who were able to probe to slightly deeper pressures than we were, also adopted a simple "step" model of methane.
For clouds/hazes we again adopted the parameterized model used by Irwin et al. (2016) to model VLT/SINFONI and Gemini/NIFS H-band observations of Neptune, which was found to provide good limb-darkening/limb-brightening behaviour. In this model, particles in the troposphere are modelled with a cloud near the H$_2$S condensation level (which is thus presumed to be rich in H$_2$S ice (Irwin et al., 2019)), with a variable base pressure (∼3.6-4.7 bar) and a scale height retrieved as a fraction of the pressure scale height (called the fractional scale height). Scattering from haze particles is modelled with a second layer with base pressure fixed at 0.03 bar and fixed fractional scale height of 0.1. Although the base pressure of the stratospheric haze may in reality vary with latitude, we found that the precise pressure level did not significantly affect the calculated spectra at these wavelengths (since the transmission to space is close to unity at the tropopause level from 800 to 900 nm) and so fixed it to the representative value stated. The scattering properties of the cloud were calculated using Mie scattering and a retrievable imaginary refractive index spectrum. For cases where we allow the imaginary refractive index to vary with wavelength (as in our previous report) we use a Kramers-Kronig analysis to construct the real part of the refractive index spectrum, assuming n_real = 1.4 at 800 nm. Here, however, for simplicity we forced the imaginary refractive indices to be the same at all wavelengths across the 800-900 nm range considered, and hence the real refractive index was fixed to 1.4 over the whole range also. The Mie-calculated phase functions were again approximated with combined Henyey-Greenstein functions for computational simplicity and also to smooth over features peculiar to spherical particles, such as the back-scattering 'glory'.
Retrieval analysis
To 'tune' our retrieval model we first concentrated on the latitude bands at the equator and 60° S. We were aware from our previous study that there was likely to be a high degree of degeneracy in our best-fit solutions with respect to assumed particle sizes and other parameters in the 800-900 nm range. Hence, we first analysed these two latitude bands for a grid of preset values of: 1) mean cloud particle radius; 2) variance of the cloud radius distribution; 3) cloud imaginary refractive index; 4) mean haze particle radius; 5) variance of the haze radius distribution; 6) haze imaginary refractive index; and 7) cloud base pressure, described in Table 2. From Table 2 we can see that the number of grid values is 4 × 2 × 3 × 4 × 2 × 3 × 3 = 1728 setups for each latitude band. For each setup we retrieved simultaneously four variables from the synthetic spectra reconstructed at 0° and 61.45° emission angle: 1) cloud opacity; 2) cloud fractional scale height; 3) haze opacity; and 4) the cloud-top methane mole fraction. In addition, we checked to see if the limb-darkening curves modelled with our radiative transfer model at all zenith angles were consistent with the Minnaert law and found very good correspondence for zenith angles less than ∼70° (noted in Fig. 3) for schemes using both five and nine zenith angles, adding confidence to our approach.
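The bookkeeping for such a grid search is straightforward; the sketch below simply enumerates the 1728 combinations. The three cloud base pressures are those quoted later in the text, while the remaining grid values are placeholders, since Table 2 itself is not reproduced here.

```python
from itertools import product

grid = {
    "cloud_radius_um": [0.05, 0.1, 0.5, 1.0],   # 4 values (placeholders)
    "cloud_variance":  [0.05, 0.3],             # 2 values (placeholders)
    "cloud_n_imag":    [1e-4, 1e-3, 1e-2],      # 3 values (placeholders)
    "haze_radius_um":  [0.05, 0.1, 0.5, 1.0],   # 4 values (placeholders)
    "haze_variance":   [0.05, 0.3],             # 2 values (placeholders)
    "haze_n_imag":     [0.01, 0.1, 0.3],        # 3 values (placeholders)
    "cloud_base_bar":  [3.65, 4.15, 4.66],      # 3 values (quoted in the text)
}
setups = [dict(zip(grid.keys(), values)) for values in product(*grid.values())]
assert len(setups) == 1728   # one retrieval of the four free variables per setup
```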
(Figure caption fragment: the first three panels of the top row show the χ² values of all the fits for different fixed values of the mean cloud particle radius, cloud particle imaginary refractive index, and variance of the cloud radius distribution; see Table 2.)
Although there is a wide range of best-fit χ² values, it is apparent that the best fits are achieved for a methane cloud-top mole fraction of ∼4-6% at the equator and ∼2-4% at 60° S. However, although we clearly retrieve lower methane mole fractions near the south pole than at the equator, it can be seen that there is a wide range of possible cloud solutions that give equally good fits to the data, but rather different methane abundances.
Hence, although it would appear that the polar methane cloud-top mole fraction is ∼ 0.5 times that at the equator we can be less certain of the absolute cloud-top methane mole fraction at the equator and pole.
Having surveyed the range of cloud properties that best match the observed limb darkening at the equator and 60° S, we then took one of the best-fit setup cases and applied this to all latitudes. We chose to fix p_base = 4.66 bar and r_cloud = 0.1 µm with 0.05 variance and imaginary refractive index n_imag = 0.001. For the haze we chose to fix r_haze = 0.1 µm with 0.3 variance and imaginary refractive index n_imag = 0.1. We then fitted to the synthetic spectra generated from our fitted Minnaert limb-darkening coefficients for all the latitude bands sampled by the Neptune MUSE observations and fitted once more for 1) cloud opacity; 2) cloud fractional scale height; 3) haze opacity; and 4) the cloud-top methane mole fraction. The resulting fitted methane cloud-top mole fractions as a function of latitude are shown in Fig. 8, where we also show the methane mole fraction variation derived in our previous analysis (Irwin et al., 2019) and that derived by Karkoschka and Tomasko (2011).
In Fig. 8 we can see that our derived latitudinal methane distribution for our default model (indicated as Model 1) varies much more smoothly with latitude than in our previous analysis (Irwin et al., 2019) and has more smoothly-varying error bars. In addition, it can be seen that our new retrieved methane variation more closely resembles that determined by Karkoschka and Tomasko (2011). The greatest discrepancy occurs at 20-40° S, where we found that our fits had the highest χ²/n values. To introduce additional flexibility into our model, we ran our retrievals a second time, but additionally allowed the model to vary the imaginary refractive indices of the cloud and haze particles (Model 2); Fig. 9 shows the resulting fitted (latitude-altitude) cloud structure. We can see that n_imag for the cloud is poorly constrained, but that of the haze is well estimated, and it would appear that to best match the observations at 20-40° S the haze particles are required to have slightly lower n_imag values than those found at other latitudes. This is easily understood looking at Fig. 1, where we can see that this latitude has numerous bright, high, discrete clouds. Although we masked the observations to focus the retrievals on the smooth background latitudinal variation, to mask completely the brighter clouds at these latitudes would have left us with no data to analyse at all (Fig. 2). Hence, we would expect Model 2, which allows the cloud/haze particle reflectivity to vary, to better incorporate the additional reflectivity from these upper tropospheric methane clouds and so fit the observations more accurately and also retrieve a more reliable latitudinal variation in cloud-top methane mole fraction. Please note that the cloud opacity plot in Fig. 9 shows opacity below the 4.66-bar cloud base pressure for two reasons: 1) we assume the opacity to diminish with a scale height of 1 km below the condensation level rather than cutting off sharply; and 2) we show here the opacity in the 39 model atmospheric layers, which are split equally between ∼10 and 0.001 bar and so do not coincide exactly with the base pressures of the cloud and haze.
(Fig. 8 caption, continued: ... and 2) where the imaginary refractive indices of the haze and cloud are allowed to vary (kept constant with wavelength). Also shown are the methane mole fractions estimated by Karkoschka and Tomasko (2011), scaled to match our estimates, and recalculations with Model 2 where the cloud base pressure has been reduced to 4.15 and 3.65 bar, respectively. The difference is not large for p_base = 4.15 bar, but for p_base = 3.65 bar it can be seen that the methane retrieval becomes unstable, for the reasons described in the text.)
In addition to providing a better constrained retrieval of cloud-top methane mole fraction, showing its mole fraction to decrease from equator to south pole, our new retrieval scheme appears to detect noticeably lower mole fractions of methane near 60° S. This is more easily seen in Fig. 10, which shows the spatial variation of tropospheric cloud opacity, tropospheric cloud fractional scale height, stratospheric cloud opacity and cloud-top methane mole fraction projected onto the disc of Neptune as seen by VLT/MUSE. It is difficult to be certain if this is a real feature, as we have much less geometrical coverage of the limb-darkening curves as we approach the south pole. If it is a real feature, then it might be related to the South Polar Feature (SPF) (e.g., Tollefson et al., 2019).
We will return to this question in the next section.
Finally, Fig. 11 illustrates why the errors in the synthetic observations had to be inflated to enable the retrieval model to fit to an accuracy of χ²/n ∼ 1: even when inflated, the reflectivity errors are still small (∼0.5%) compared with the likely accuracy of the gaseous absorption coefficients used and the simplicity of our cloud parameterization scheme.
Comparison with previous retrievals
The differences between our new methane retrievals and our previous estimates (Irwin et al., 2019) are for some locations greater than 3-σ, although it should be remembered that these are random errors only, and do not account for systematic errors arising from differences in the assumed methane/cloud models. These differences mostly occur in the cloud belt near 20-40°, where we have unaccounted-for upper tropospheric methane ice clouds, but we wondered whether there might be other effects that might explain the sharper latitudinal changes in methane abundance and estimated errors of Irwin et al. (2019). Our previous retrievals assumed a base cloud pressure of p_base = 4.23 bar, rather than p_base = 4.66 bar as we assumed here. Hence, we re-ran our retrievals using the two other base pressures listed in Table 2. The results for Model 2 (where we also fit for n_imag of both cloud and haze) for all three cloud base pressures are shown in Fig. 8. As can be seen, the results for p_base = 4.15 bar are very similar to those for p_base = 4.66 bar, but those for p_base = 3.65 bar are very different and appear, in terms of scatter and inflated error bars, more like our previous results (Irwin et al., 2019). We believe this to be caused by an artefact of our retrieval model, where we have assumed a single cloud with fixed base pressure and variable scale height combined with our simple "step" methane model. For Model 2 with p_base = 4.66 bar it can be seen in Fig. 9 that the retrieved level of unit cloud optical depth is in the range 3-4 bar, depending on latitude, comfortably greater than the methane condensation pressure. However, when the cloud base pressure is lowered to p_base = 3.65 bar the cloud opacity has to be greater at lower pressure levels in order to give enough overall reflectivity. This pushes the level of unit optical depth to lower pressures and, depending on the deep methane abundance, this level can at some latitudes become similar to the methane condensation pressure. In such circumstances the sensitivity of the calculated reflectivity to the deep methane mole fraction is reduced and the retrieved mole fraction may need to be greatly increased (and have greater error bars) to give enough methane absorption, exactly as we see. The retrieved pressure levels of unit optical depth from Irwin et al. (2019) are shown in Fig. 9 of that paper to be in the range 1.8-3 bar, which is indeed rather close to the methane condensation pressure level and so would unfortunately have suffered from this same systematic artefact. However, Irwin et al. (2019) assumed p_base = 4.23 bar, a value that gave consistent results in our new retrievals, which indicates that there must be an additional difference between the two analyses that caused Irwin et al. (2019) to retrieve unit optical depth values near the methane condensation level. We have identified this difference to be that, rather than using a priori values of n_imag = 0.001 and 0.1 for the cloud and haze respectively, as used in this study, Irwin et al. (2019) assumed the imaginary refractive index spectra to be fixed at all latitudes to those retrieved from their limb-darkening analysis at 5-10° S. Figure 6 of Irwin et al. (2019) shows that the haze particles were found at the equator to be rather dark (n_imag in the range 0.076 to 0.159, depending on wavelength).
These darker, wavelength-dependent haze particles, combined with the extended wavelength region of 770-930 nm (compared with the 800-900 nm considered here), led to larger retrieved haze opacities and consequently required larger cloud opacities to match the peak reflectivity at continuum wavelengths. This then led to the retrieved unit cloud optical depth levels approaching the methane condensation pressure.
To demonstrate this effect we repeated the analysis of the central meridian observations of Irwin et al. (2019), using exactly the same setup as used in that previous study, but substituting the a priori cloud scattering properties with those used by our new Model 1 (fixed n_imag) and Model 2 (variable n_imag). Our re-fitted methane mole fractions are shown in Fig. 12. Here, we can see that when reprocessed in this way the set of spectra along Neptune's meridian returns a latitudinal variation in deep methane mole fraction that is much more consistent with our new analysis and also with the HST/STIS determinations (Karkoschka & Tomasko, 2011). The apparently small retrieved errors of the reanalysis towards the south pole should be viewed with caution. Such latitudes are only seen over a very small range of zenith angles and so we cannot say anything about the limb-darkening here. Hence, we are much more dependent at these latitudes on the assumed cloud and methane profile, and so our methane estimates are prone to larger systematic error. In addition, for the central meridional analysis we used the MUSE pipeline radiance errors (scaled to give χ²/n ∼ 1 for fits to spectra near the equator), which are smaller near the pole, giving smaller apparent methane mole fraction errors. In contrast, our new limb-darkening analysis has fewer points to define the limb-darkening near the pole and so assigns larger error bars to the reconstructed spectra there. Hence, at these latitudes the deep mole fraction is retrieved with a larger error.
Returning to the question of the apparent minimum of methane at 60° S in our new analysis, with larger retrieval errors towards the south pole the solution might be expected to partially relax back to the a priori value of (4 ± 4)% (note that this parameter is treated logarithmically within NEMESIS, and hence it is the fractional error, i.e., 1.0, that is used in the covariance matrix). However, when we repeated the retrievals using a lower methane mole fraction of (2 ± 2)% (i.e., the same fractional error), the same latitudinal behaviour was determined, as can be seen in Fig. 12 (Model 2A), so this cannot be the cause. Instead, we believe this apparent methane feature may arise from the limited range of zenith angles sampled to fit the Minnaert parameters at these latitudes, since they appear only at higher zenith angles and our analysis further omits observations with µ < 0.3 (µ is the cosine of the zenith angle) to avoid locations too near the disc edge. Figure 4 shows (bottom right panel) that at these latitudes the Minnaert limb-darkening parameter, k, appears to tend to ∼0.5 at ∼80° S at all methane-absorbing wavelengths, but there is no clear difference in the appearance of Neptune at this latitude at any wavelength. Hence, we believe this effect to be a geometrical artefact of our limb-darkening analysis, in the same way that the central meridional analysis shows a continuing decrease towards the pole as we view the locations at higher and higher zenith angle. Only observations recorded with even higher spatial resolution would be able to better constrain the latitudinal variability of methane at such high latitudes. In the meantime, our new determinations of methane mole fraction at polar latitudes have larger retrieval errors, properly indicating this greater uncertainty.
Finally, we return to the question of the assumed methane and cloud profile parameterizations and the absolute accuracy of our methane retrievals. As noted earlier, with spectral observations such as these in this wavelength region, what we are actually sensitive to is the column abundance of methane above the cloud top. Sadly, the vertical resolution of such nadir/near-nadir observations cannot physically be less than ∼ one scale height, which means that it is very difficult to discriminate between the "step" methane model used here and more sophisticated models such as the "descended profile" model favoured by, e.g., Karkoschka and Tomasko (2011) and Sromovsky et al. (2019). This is especially the case when considering that we do not have a good ab initio model for the vertical cloud structure either. It is apparent that for models with higher cloud opacity at lower pressures, the mole fraction of methane will need to be higher to give the same column abundance, and so it can be seen that there exists a wide range of possible cloud and methane vertical distribution models that could fit our observations equally well and give the same methane column abundance for a given latitude. For the nominal Model 2 retrievals presented here, with cloud base at 4.66 bar, we see a clear reduction in the column abundance of methane towards the pole, which we interpret here in terms of a deep mole fraction varying from (5.1 ± 0.3)% at the equator to (2.6 ± 0.2)% at 60° S, i.e., a reduction factor of 1.9 ± 0.2. However, in terms of the latitudinal dependence of the mole fraction of methane, this depends on the methane profile, cloud profile and the reference pressure level, and so the cloud-top methane mole fraction could conceivably vary by as much as ∼ ±1%. Hence, here we estimate the equatorial deep mole fraction of methane to be in the range 4-6%, with the abundance at polar latitudes reduced from that at equatorial latitudes by a factor of 1.9 ± 0.2.
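For completeness, the quoted ratio follows from standard error propagation, assuming the equatorial and polar estimates and their errors are independent:
$$ \frac{5.1}{2.6} \approx 1.96, \qquad \sigma_{\rm ratio} \approx 1.96\,\sqrt{\left(\frac{0.3}{5.1}\right)^2 + \left(\frac{0.2}{2.6}\right)^2} \approx 0.19, $$
which rounds to the stated $1.9 \pm 0.2$.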
Extension to discrete cloud retrievals
Having greatly improved our fit to the background atmospheric state, we wondered if it might be possible to retrieve, in addition, the cloud profiles of the discrete cloud regions masked out in our analysis so far. Discrete clouds such as these are known to exist at pressures from 0.5-0.1 bar (e.g., Irwin et al., 2011) and as such are almost certainly clouds of methane ice. It can be seen from Fig. 1 that in our observations these clouds are mostly restricted to latitudes 30-40° S, are of highly variable reflectivity and only cover a small range of central meridian longitudes. Hence, our Minnaert limb-darkening analysis, which assumes that the clouds at a particular latitude are zonally symmetric and do not vary with central meridian longitude, was not appropriate for analysing these discrete clouds. Instead, assuming that the cloud properties of the background tropospheric and stratospheric clouds would not be different from their zonally-retrieved Minnaert values, we looked to see what opacity and pressure level additional discrete methane ice clouds would need to have to best match the observed spectra in the previously masked pixels. We assumed that the methane ice particles at such low pressures were likely small and assumed a size distribution with mean radius 0.1 µm and variance 0.3. Methane ice is highly scattering at short wavelengths (Martonchik & Orton, 1994; Grundy et al., 2002) and so the complex refractive index was set to 1.3 + 0.00001i, with scattering properties calculated via Mie theory and phase functions again approximated by combined Henyey-Greenstein functions.
For vertical location, we assumed that the cloud had a Gaussian dependence of specific density (particles/gram) with altitude and had a variable peak pressure and opacity. The a priori pressure level of the opacity peak was set to 0.3 bar. We reanalysed the areas that had previously been masked and extended the area slightly to capture some of the thinner discrete clouds seen (Fig. 2). Then, for each pixel in this extended area, we set the tropospheric cloud, stratospheric haze and cloud-top methane mole fraction to that determined from the zonally-averaged Minnaert analysis for that latitude and retrieved the opacity and peak pressure of an additional methane ice cloud.
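A minimal sketch of such a parameterized profile is given below, assuming for illustration a Gaussian in log-pressure (the text specifies a Gaussian in altitude) with an arbitrary width, since no width value is quoted; the a priori peak pressure of 0.3 bar is taken from the text.

```python
import numpy as np

def discrete_cloud_opacity(pressure_bar, peak_pressure_bar=0.3, total_opacity=0.1,
                           width_dex=0.15):
    """Layer-by-layer opacity of the added methane ice cloud, normalised to total_opacity."""
    x = np.log10(np.asarray(pressure_bar) / peak_pressure_bar)
    shape = np.exp(-0.5 * (x / width_dex) ** 2)   # Gaussian shape about the peak pressure
    return total_opacity * shape / shape.sum()    # retrievable quantities: peak pressure and opacity
```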
Our retrieved methane ice cloud properties can be seen in Fig. 10, where we retrieve opacities of up to 0.75. Although there are clearly some regions of thick methane ice cloud, the median value of this additional opacity is found to be only 0.0063 and so it only makes a significant difference to the observed radiances in the small discrete regions seen. Figure 10 also shows an apparent variation in the mean pressure of these discrete clouds, remaining near the a priori pressure of ∼0.3 bar for the thinnest clouds, but reducing to as low as 0.15 bar for the thickest clouds. However, we find this pressure variation to be insignificant compared with the retrieval errors. Reviewing the two-way transmission-to-space within the 800-900 nm wavelength region examined here, we find that we are only weakly sensitive to the actual pressure level of detached methane clouds in the 0.5-0.1 bar region. Our chosen wavelength band includes the strong methane absorption band at 887 nm, but even here the two-way transmission to space only reduces to 0.5 at the 0.35 bar level for nominal cloud/haze conditions. To really probe the altitudes of such clouds we need to use observations in the much stronger methane bands at 1.7 µm in the H-band, as has been done by numerous previous authors (e.g., Irwin et al., 2011, 2016; Luszcz-Cook et al., 2016), but which is not observable by MUSE. We examined the raw MUSE observations near 727 nm and 887 nm, but could not see any clear difference in the brightness of the discrete clouds with wavelength for either weak or bright detached clouds. Hence, all we can really say with the MUSE observations is that the discrete clouds (for all opacities) must lie somewhere at pressures less than ∼0.4 bar.
With the addition of discrete methane clouds our forward-modelled reconstructed images of Neptune at 830 and 840 nm are compared with the MUSE observations in Fig. 13.
(Figure 13 caption: As Fig. 1, but here the reconstructed images are generated from NEMESIS forward-modelling calculations using the fitted cloud and methane profiles, including the fitting of discrete cloud regions with an additional thin methane ice cloud layer. As can be seen, the residuals are greatly reduced and are small at all locations on the planet's disc.)
Comparing with Fig. 1, it can be seen that we achieve a very good fit at all locations on Neptune's disc. Although we have only shown the fit at two wavelengths here, the fit was found to be very good for all wavelengths in the 800-900 nm range, and Fig. 14 shows the root-mean-square (RMS) reflectivity differences of our fits at all wavelengths, showing that we match the observed spectra to an RMS of typically 0.05%, increasing to only 0.15% in the brightest methane ice clouds. At these locations it may be that the model is struggling with the fact that the stratospheric haze opacity was set and fixed to that derived from zonally-averaged fits in which the discrete clouds had not been entirely masked.
Conclusions
In this work, we have reanalysed VLT/MUSE-NFM observations of Neptune, made in June 2018 (Irwin et al., 2019), with a Minnaert limb-darkening analysis recently developed for Jupiter studies (Pérez-Hoyos et al., 2020). We find that the new scheme allows us to use the observations much more effectively than simply analysing along the central meridian, as we had previously done, as it also accounts for the observed limb-darkening/limb-brightening at different wavelengths and latitudes. Having fitted the general latitudinal variation of cloud and haze with this analysis, we are then able to fit for the properties of discrete methane clouds seen in our observations, allowing us to fit all locations on the visible disc to a reflectivity (I/F) precision of 0.5-1.0% (RMS < 0.15%). Our main conclusions are:
• We find that we are able to fit the background reflectivity spectrum of Neptune from 800-900 nm with a simple two-cloud zonally symmetric model comprising a deep cloud based at 4.66 bar, with variable fractional scale height, and a layer of stratospheric haze based near 0.03 bar.
• The cloud-top mole fraction of methane at 2-4 bar (i.e., above the H$_2$S cloud, based at 4.66 bar) is found to decrease by a factor of 1.9 ± 0.2 from equator to pole, from 4-6% at the equator to 2-4% at the south pole.
• While this latitudinal decrease in methane mole fraction is well defined, the absolute mole fractions at different latitudes depends on the precise choice of cloud parameterization (and indeed methane parameterization), for which a wide range of setups give similar goodness of fit.
• The previous retrievals along the central meridian of these data reported by Irwin et al. (2019) appear to have suffered from an unfortunate retrieval artefact at some locations due to a less sophisticated incorporation of limb-darkening. Our new methane retrievals are more robust, vary more smoothly with latitude, and give a clearer and conservative estimate of the likely error limits.
• The opacity of both the tropospheric cloud and stratospheric haze is found to be maximum at 20-40° S and 20-40° N, which are also the latitudes of the discrete methane clouds seen. While this may be a real feature, it may also be that the limb-darkening curves analysed by our Minnaert scheme were contaminated by thin discrete clouds at these latitudes.
• Adding localised methane clouds to our zonal model allows us to additionally retrieve the properties of the discrete cloud locations; we find these clouds must lie at pressures less than 0.4 bar and have opacities of up to 0.75.
• The latitudinal variation of cloud-top methane mole fraction observed here in 2018 seems little changed from that seen by HST/STIS in 2003 (Karkoschka & Tomasko, 2011), indicating that this apparent latitudinal distribution of cloud-top methane has not varied significantly in the intervening fifteen years.
Having now developed a scheme that matches the observed spectra of Neptune from 800-900 nm, the next step will be to reproduce the appearance of Neptune at other wavelengths to see if our fitted cloud/methane model is more generally applicable. There will be two main challenges to this:
1. Firstly, extending to shorter wavelengths, the contribution of Rayleigh scattering becomes more important, which limits our ability to see to deeper cloud layers. However, more important for the MUSE-NFM observations is the fact that the full-width-half-maximum (FWHM) of the point-spread function (PSF) becomes significantly worse. It can be seen in Fig. 1 that even at 800-900 nm there is a considerable residual off-disc signal due to the PSF. We could have tried to model this in our fitting procedure, but concluded that it was simpler to limit ourselves to locations not too near the disc edge in the Minnaert analysis. However, this would become more difficult to justify at shorter wavelengths. Instead, in future work we hope to take our existing model, calculate the appearance at shorter wavelengths, and convolve with a PSF model, which still needs to be developed. By thus simulating the shortwave observations we will be able to see if we can discount more of the possible cloud/haze parameterisation setups, which will help refine our methane retrievals.
2. Extending to longer wavelengths, the contribution of Rayleigh scattering becomes smaller and the increasing strength of the methane absorption bands means that it is possible to probe the vertical extent and location of clouds more precisely. We would need to extend to J, H and K-band observations to see which of our cloud/haze setups can be discounted, using the fact that the opacity of small cloud particles falls more quickly with wavelength than that of large particles. However, this will mean analysing observations taken at different apparitions and with different instruments, so the cloud distribution will not be the same and systematic errors in the photometric calibration and PSF characterisation will arise. In addition, the complex refractive indices of the particles will be different at those wavelengths from the values we have derived here.
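As noted in point 1 above, comparing the model with shorter-wavelength MUSE-NFM data will require convolving a synthetic disc image with a PSF model that has yet to be developed. A minimal sketch of that forward step, using a flat synthetic disc and a simple Gaussian PSF purely as assumed stand-ins, might look like the following:

import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_disc(npix=101, radius_pix=35, i_over_f=0.1):
    """Flat-reflectivity planetary disc on a dark background (placeholder model image)."""
    y, x = np.mgrid[:npix, :npix] - npix // 2
    return np.where(x**2 + y**2 <= radius_pix**2, i_over_f, 0.0)

def convolve_with_psf(image, fwhm_pix):
    """Blur a model image with a Gaussian PSF of the given FWHM (in pixels)."""
    sigma = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return gaussian_filter(image, sigma)

model = synthetic_disc()
observed_like = convolve_with_psf(model, fwhm_pix=6.0)   # assume a broader PSF at short wavelengths
print("off-disc signal after blurring:", float(observed_like[0, 50]))

In practice the MUSE-NFM PSF is unlikely to be a single Gaussian, so the Gaussian here would be replaced by the empirically characterised PSF once that model exists.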
Although these difficulties are not insurmountable, they will require careful evaluation and effort to overcome, which is why we leave them as future work. However, it is clear from this study that splitting the atmosphere of Neptune into a background zonal part (that can be fitted with the Minnaert limb-darkening scheme) and an additional spatially varying part due to discrete methane clouds greatly simplifies the retrieval process and allows us to efficiently reconstruct the entire 3-D structure of Neptune's clouds and methane cloud-top mole fraction. Our model is then simultaneously consistent with the observations at all wavelengths under consideration and at all locations on Neptune's disc. The approach could also be applied to observations of Neptune at other wavelengths, and also to Uranus and Saturn, which to a first approximation appear zonally symmetric with additional discrete cloud features at visible/near-infrared wavelengths. It could also be applied to Jupiter observations, building upon the work of Pérez-Hoyos et al. (2020), at wavelengths and latitudes that appear relatively homogeneous. | 2020-12-31T09:02:10.080Z | 2021-01-04T00:00:00.000 | {
"year": 2020,
"sha1": "a55572a35a1a341580cab12434b2dad5f4d4c3fc",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2101.01063",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6eaa2470b6b05e2a76221fe307ce21c48a9cd13e",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
255644420 | pes2o/s2orc | v3-fos-license | Dysfunctional Lipid Metabolism—The Basis for How Genetic Abnormalities Express the Phenotype of Aggressive Prostate Cancer
Simple Summary
Advanced prostate cancer has a higher mortality rate at diagnosis compared to localised prostate cancer. As such, it is critical to understand the mechanisms of development, and potential pathways that may drive research into novel treatments. We aim to review how lipid metabolism relates to advanced prostate cancer.
Abstract
Prostate cancer is the second most frequent cancer in men, with increasing prevalence due to an ageing population. Advanced prostate cancer is diagnosed in up to 20% of patients, and, therefore, it is important to understand evolving mechanisms of progression. Significant morbidity and mortality can occur in advanced prostate cancer where treatment options are intrinsically related to lipid metabolism. Dysfunctional lipid metabolism has long been known to have a relationship to prostate cancer development; however, only recently have studies attempted to elucidate the exact mechanism relating genetic abnormalities and lipid metabolic pathways. Contemporary research has established the pathways leading to prostate cancer development, including dysregulated lipid metabolism-associated de novo lipogenesis through steroid hormone biogenesis and β-oxidation of fatty acids. These pathways, in relation to treatment, have formed potential novel targets for management of advanced prostate cancer via androgen deprivation. We review basic lipid metabolism pathways and their relation to hypogonadism, and further explore prostate cancer development with a cellular emphasis.
Introduction
Prostate cancer (PCa) is the second most frequent cancer and the fifth most common cause of cancer death in men according to the most recent GLOBOCAN data [1]. In 2020 alone an estimated 1,414,000 new cases and 375,304 deaths occurred globally [2]. Fundamental changes in genetics and lipid metabolism drive the growth of PCa cells, leading to development and progression of disease. The mechanisms behind such growth are important to understand given the increasing burden of disease secondary to an ageing population, further emphasizing the need to continually develop treatment strategies. We aim to provide a comprehensive review of historical and contemporary evidence for dysregulated lipid metabolism, its relationship to hypogonadism, PCa cellular pathways and genetic abnormalities.
General Lipid Metabolism
Lipid metabolism involves a balance of synthesis and degradation of structural and functional lipids to satisfy the metabolic needs of the body and maintain dynamic equilibrium [3]. Examples of lipids include fatty acids (FA), phospholipids, glycolipids, cholesterol and prostaglandins. Relevant mechanisms of lipid metabolism for this review involve biosynthesis of steroid hormones and β-oxidation of FA, both increasingly studied in PCa development and progression [4]. Each pathway will be discussed, with relevance to PCa examined in more detail later.
Cholesterol is important in the biosynthesis of steroid hormones, of which testosterone (androgen) and its metabolite dihydrotestosterone (DHT) are classified as sex-steroids. All steroid hormones are generated from cholesterol via a common pathway involving pregnenolone, a steroid precursor. Cholesterol is converted to pregnenolone via cytochrome P450 cholesterol side-chain cleavage (P450scc) enzyme (CYP11A1) [5]. Subsequent formation of dehydroepiandrosterone (DHEA) and androstenedione occurs in the adrenocortical cells of the zona fasciculata and zona reticularis layers, respectively, allowing testosterone formation in testicular Leydig cells [5]. Centralised control of testosterone production occurs in the hypothalamus. The hypothalamus initiates pulsatile release of luteinising hormone-releasing hormone (LHRH) which binds and stimulates LHRH receptors in the anterior pituitary gland causing release of luteinising (LH) and follicle-stimulating hormones (FSH). Leydig cells in the testis are stimulated by LH and induce production of testosterone and subsequent conversion to its more potent form DHT via 5 α-reductase [5].
FA metabolism is considered critical in PCa, influencing several pathways including cell signalling, energy processing and membrane fluidity [6]. Its importance in cancer is elevated by increased energy demands to propagate growth and progression [7]. However, β-oxidation of FAs is considered the most prominent oxidation reaction as it relates to peroxisomal (membrane-bound organelle important in oxidative reactions) β-oxidation. This is required for the initial oxidation of very long chain FAs, branched chain FAs and derivatives not able to be directly oxidised by the mitochondrion [6]. In normal cells, FAs are oxidised to acetyl-coenzyme A (CoA) via multiple pathways in β-oxidation methods in peroxisomes and mitochondria. Peroxisomal β-oxidation can occur via branched chain acyl-CoA oxidase (ACOX2) and/or pristanoyl-CoA oxidase (ACOX3) and via D-bifunctional protein (DBP) [8]. Furthermore, α-methylacyl-CoA racemase (AMACR) is an enzyme in both peroxisomes and mitochondria aiding β-oxidation of branched FAs. Subsequently, oxidation of FAs to acetyl-CoA allows initiation of the tricarboxylic acid cycle (TCA) for production of adenosine triphosphate (ATP), the principal energy substrate intrinsic to cellular function and division [6].
Lipid Metabolism, Hypogonadism and Testosterone Disorders-Chicken or the Egg?
It has long been known from epidemiological data that increased testosterone levels are associated with a favourable lipid profile, that is, a lower total cholesterol, LDL and triglycerides (TG) and a higher HDL [9]. Subsequent interventional trials of testosterone replacement have demonstrated its ability to improve lipid profiles, as it suppresses the proinflammatory cytokines TNFα, IL-1β and IL-6 to reduce the inflammatory state and lower the total cholesterol profile [10]. Meanwhile, testosterone deficiency is associated with an increased risk of developing metabolic disorders and is also highly prevalent in obesity, metabolic syndrome (MetS) and type 2 diabetes. Models of gonadotrophin-releasing hormone deficiency, and of androgen deprivation therapy (ADT) in patients with PCa, suggest that hypogonadotropic hypogonadism contributes to the onset and worsening of metabolic conditions by increasing visceral adiposity and insulin resistance [11]. Therefore, the relationship between lipid metabolism and hypogonadism is bidirectional in the context of testosterone deficiency and metabolic disease, both of which may have a role in each other's pathogenesis.
Hypogonadism in men is characterised by an impairment in gonadal function resulting in circulating testosterone levels below the normal range. The clinical syndrome of testosterone deficiency includes impaired spermatogenesis, sexual dysfunction, reduction in testis volume and gynecomastia, together with anaemia, deterioration of muscle mass and metabolic abnormalities [12]. Hypogonadism can be classified as primary or secondary, as well as organic or functional (Figure 1). In primary hypogonadism, testicular failure is the underlying cause, resulting in increased gonadotropin levels, and this is defined as hypergonadotropic hypogonadism (Hyper-T). In secondary hypogonadism, the central hypothalamus-pituitary function is disrupted, leading to hypogonadotropic hypogonadism (Hypo-H) [12]. Organic hypogonadism is caused by specific pathologies that are well-recognised, such as Klinefelter's Syndrome (KS), Cushing syndrome, pituitary injury, prolactinoma or testicular trauma, and is relatively rare. KS is a prime example of an organic, primary Hyper-T hypogonadism that is strongly associated with MetS. Observational studies in Korean KS patients have shown significantly worsened dyslipidaemia, especially elevated TGs and decreased HDL levels [13].
However, the absence of hypothalamic-pituitary-testicular (HPT) axis pathology in the setting of hypogonadism-like features and lowered circulating testosterone is characteristic of functional hypogonadism [14]. Chronic diseases such as dyslipidaemia, diabetes, depression, renal or liver diseases and obesity are characteristic of men with functional hypogonadism. Furthermore, functional hypogonadism is demonstrated to have an association with ageing, thereby increasing in prevalence. Men aged 50-59 years have 0.6% prevalence, rising to 5.1% in those aged 70-79 [15]. 'Late-onset' hypogonadism has been coined secondary to these observations; as such, these men have modest reductions in circulating testosterone concentrations, to approximately 6-10 nmol/L, compared to reference young healthy men [16]. However, the extent to which lowered testosterone concentrations contribute to the ageing male phenotype is not known. Another theory is that functional hypogonadism may be secondary to an accumulation of age-related co-morbidities and MetS, instead of purely due to ageing [17]. HPT suppression may also simply occur due to poor health, leading to lowered testosterone [14]. As such, age-related testosterone reduction may be preventable through health and lifestyle management. Nevertheless, low testosterone is at the very least a sensitive marker of suboptimal health.
Figure 1. Classified causes of hypogonadism by Grossmann and Matsumoto via PMC Open Access Subset [16]. * is noted to be "combined primary and secondary hypogonadism".
The exact pathophysiological mechanism by which testosterone deficiency leads to metabolic impairment, contributing to obesity, dyslipidaemia and type 2 diabetes, is still unclear. In a study on ageing hypercholesterolaemic men, clinically significant elevations of lipoprotein(a) were found in men with low testosterone [18]. It has therefore been proposed that low testosterone induces lipoprotein(a) lipase activity resulting in increased FA uptake and TG formation in adipocytes, which ultimately stimulate adipocyte proliferation and accumulation of adipose tissue, especially visceral adiposity [19]. On the other hand, mechanistic studies in hypogonadal men have suggested that increased adiposity leads to increased aromatisation of testosterone into oestradiol. As oestradiol increases, testosterone levels decrease, which can then result in a further unfavourable lipid profile [20]. This clearly demonstrates a vicious cycle involving hypogonadism, obesity and an unfavourable metabolic profile (Figure 2). This profile leads to dyslipidaemia, type 2 diabetes and increased visceral adiposity, thus further decreasing testosterone levels. In fact, men with PCa who receive ADT are prime examples of this phenomenon. As ADT is commenced and testosterone levels reduce, a detrimental effect is seen not only in the patient's lipid profile, but in a whole host of cardiovascular perturbations, including MetS, higher blood pressure, left ventricular hypertrophy and overall mortality [21].
Prostate Cancer and Dysregulated Lipid Metabolism
The relationship between lipid metabolism and cancer was first observed by Medes et al. [23] in 1953. Cancer tissue was found to overexpress enzymes to generate FAs and phospholipids via de novo lipogenesis in conjunction with environmental uptake of lipids. De novo lipogenesis, in turn, supports the excess energy requirements for growth and proliferation which has become a notable hallmark of cancer [24]. Cells can utilise FAs for energy generation via β-oxidation to generate ATP, the principal energy molecule. Activation of the de novo lipogenesis pathway affects all levels of lipid enzyme regulation, occurring downstream to known oncogenic abnormalities such as activation of akt, loss of PTEN, mutation or loss of p53 or BRCA1 and steroid hormone activation [4]. Additionally, exogenous lipids from circulation and lipolysis or stored lipids in adipocytes and intracellular lipid droplets can also be utilised [25].
Interestingly, PCa cells differ from various other cancers as FAs are the predominant energy substrate as opposed to glucose [26]. PCa cells undergo dysregulated lipid metabolism comprising increased de novo lipogenesis in the form of steroid hormone biosynthesis and β-oxidation of FAs for energy generation, membrane synthesis and cell division. Furthermore, FAs are stored in lipid droplets or converted to complex phospholipids as key components to cell membranes [4]. Remarkably, epidemiologic data suggests obesity is a significant risk factor for aggressive forms of PCa, further emphasising the role of dysregulated lipid metabolism [27]. Laboratory studies have demonstrated FA synthase (FAS) having similar properties to oncogenes in PCa mouse models and FAS inhibitors having converse effects, limiting PCa growth in similar environments [28,29]. As such, these studies further emphasise lipid metabolism as a significant contributor to PCa development, although theorised mechanisms will be explored in a later section.
Lipid Metabolism and Androgens in Prostate Cancer
Huggins and Hodges [30] first noted in 1941 the improvement of patients with metastatic PCa when chemically castrated with oestrogens. This led to the understanding that PCa is exquisitely influenced by androgenic activity with inhibition occurring with elimination of androgens. Androgen receptor (AR) activation is a key player in PCa growth and stimulation via multiple metabolic pathways, and its link to lipid metabolism has been observed in advanced PCa, whereby accumulation of lipid droplets in the cytoplasm occurs via AR-associated increased synthesis of cholesterol and FAs as demonstrated in Figure 3 [31,32]. Furthermore, an AR antagonist reverses the effects of lipogenesis, which is not seen in AR-negative PCa cells [32]. Subsequently androgens have been found to influence prostate cell lipid profile through synthesis, binding, uptake, metabolism and transport of lipids [4].
The most characterised mechanism for androgen involvement in stimulation of de novo lipogenesis is via indirect regulation of protein expression, through a transcription factor family named sterol regulatory element-binding protein (SREBP). SREBP plays an important part in increasing lipid and cholesterol metabolism and, in turn, aids androgen synthesis [33]. Specifically, SREBP1 exists as two isoforms, SREBP1a and SREBP1c, and the SREBPs are master regulators of lipid homeostasis through their regulation of the enzymes required for lipid synthesis and uptake. Reduction of intracellular sterol levels causes translocation of the SREBP cleavage-activating protein (SCAP)-SREBP complex into the Golgi, where SREBP is further cleaved by proteases (site-1 and site-2). This, in turn, causes SREBP to translocate to the nucleus, binding to sterol-response elements (SREs) and inducing transcription of the key enzymes of de novo lipogenesis, including FAS, 3-hydroxy-3-methylglutaryl coenzyme A reductase (HMG CoA-R) and the LDL receptor (LDLR) [34]. Furthermore, several studies reviewed by Wu et al. [35] have emphasised the importance of FAs as a dominant energy source in PCa, finding increased expression of the enzymes DBP and AMACR, noted earlier to be important in β-oxidation for ATP generation [6]. Given the exquisite relationship of androgens to the stimulation and growth of PCa, the mainstay of treatment of advanced PCa is ADT, which inhibits testicular testosterone production either medically or surgically to reduce circulating levels of androgen [36].
Androgen Deprivation Therapy
Despite the availability of newer targeting agents, such as Enzalutamide or Abiraterone, classic ADT (hormonal therapy) has widespread use for locally advanced to metastatic hormone-sensitive PCa as neoadjuvant or adjuvant therapy with radiotherapy. Furthermore, the addition of ADT to other systemic agents, such as AR-targeted therapy, has recently been noted to improve overall survival in a systematic review comparing systemic treatments for metastatic castration-sensitive (hormone-sensitive) PCa [38]. Wang et al. [38] found that Abiraterone acetate (hazard ratio (HR), 0.61; 95% confidence interval (CI), 0.54-0.70) and Apalutamide (HR, 0.67; 95% CI, 0.51-0.89) may offer the greatest improvement in overall survival when added to ADT. A historic form of ADT, which remains a treatment option, is bilateral orchidectomy. Though superseded by more commonly used non-surgical treatments, surgical castration retains benefits including lower cost, less follow-up and potentially fewer side effects. Furthermore, Weiner et al. [39] found that survival rates are comparable between surgical and medical castration.
ADT targets various portions of the hypothalamic-pituitary-gonadal axis and is broadly classed into antiandrogens, LHRH (i.e., gonadotrophin-releasing hormone (GnRH)) agonists/antagonists and androgen pathway inhibitors [40]. Antiandrogens block the AR to reduce testosterone cellular signalling, whilst androgen pathway inhibitors act along the androgen pathway to reduce AR signalling or inhibit testosterone synthesis. LHRH agonists/antagonists target the LHRH receptor in the anterior pituitary gland. LHRH agonists stimulate the receptor, leading to a temporary LH and testosterone surge, and subsequently downregulate the receptor, causing a reduction in LH and testicular testosterone production. Conversely, LHRH antagonists are competitive, reversible agents that block the LHRH receptors, in turn reducing LH release and therefore testosterone production, which avoids the initial transient rise. The first systematic review and meta-analysis of prospective studies of the effects of ADT on body composition in men with PCa demonstrated a significant increase in body fat of, on average, 7.7% (95% CI, 4.3-11.2, p < 0.0001), body weight (2.1%, p < 0.001) and BMI (2.2%, p < 0.001), with a reduction in lean body mass of −2.8% (95% CI, −3.6, −2.0, p < 0.0001) [41]. These data were corroborated by a more recent systematic review of 39 studies, which similarly found increased body fat mass and body weight with a decrease in lean mass [42]. Indeed, this emphasises the effect of ADT on lipid metabolism and the role of androgen in PCa stimulation and growth as previously described. The authors also note that, given these side effects, ADT may increase the risk of several other co-morbid conditions associated with MetS, as substantiated by recent reviews [43,44]. ADT causes suppression of circulating androgens, with hypogonadism in some cases within 2-3 days of a loading dose, such as with LHRH antagonists (e.g., Degarelix) [45].
Cellular Mechanisms of Prostate Cancer and Dysfunctional Lipid Metabolism
Several mechanisms for dysregulated lipid metabolism in PCa via genetic abnormalities have been hypothesised, and this section will mainly focus on two novel theories. The first focuses on amplification and overexpression of pyruvate dehydrogenase complex (PDC), which is a gatekeeper for conversion of pyruvate to acetyl-CoA and subsequent entry into the mitochondrial TCA cycle [46]. Another focuses on co-deletion of promyelocytic leukemia (PML) and phosphatase and tensin homolog (PTEN) on PTEN-null PCa phenotypes [47]. PTEN is a commonly mutated or lost tumour-suppressor gene in many cancers [48] with partial loss in up to 70% of localised PCa [49] and complete loss linked to metastatic castration-resistant PCa [50]. The PTEN-null transgenic PCa mouse model was utilised in both these studies and emulates high grade intraepithelial prostate tumours at an early age and invasive PCa at late age.
Chen et al. [46] emphasise the association of increased mitochondrial metabolism and cancer pathogenesis and progression through the investigation of PDC. A major component of PDC is PDHA1, which is activated when dephosphorylated by pyruvate dehydrogenase phosphatase (PDP). Conversely, PDC is deactivated when phosphorylated by pyruvate dehydrogenase kinases (Pdks). The authors first establish that subunits of the PDC are amplified and overexpressed in the PTEN-null transgenic mouse model similarly to clinical PCa. In vivo analysis of both mouse and human PCa models demonstrated hampering of PCa progression via suppression of lipid biosynthesis when PDHA1 is inactivated [46]. Furthermore, the authors postulate that the PDC must be functional in both mitochondria and cancer cell nuclei for sufficient support of lipid biosynthesis. In the nucleus, SREBP transcription factor, previously described as important in de novo lipogenesis, has reduced histone acetylation at regulatory regions. However, at the mitochondrial level lipid biosynthesis suppression was noted to be due to a reduction in citrate production, which is important for ATP generation via the TCA cycle [46]. Finally, the authors postulate that PDC, specifically PDHA1, may be a potential therapeutic target for prevention of PCa development. A recent study similarly suggested the therapeutic potential for targeting PDC components and comparatively also found a positive correlation between PDC component (PDHA1, PDP1 and PDK) expression and AR expression [51].
Alternatively, Chen et al. [47] aimed to explore the impact of PML and PTEN co-deletion in metastatic PCa using the PTEN-null transgenic mouse model of PCa. Amplification of lipid metabolism was identified on coordinated loss of these two tumour suppressor genes. Similar to the PDC study [46], this study identifies amplification of an SREBP-dependent lipogenic program, albeit through hyperactivation of MAPK signalling, which is known to be limited by PML [47]. Interestingly, inhibition of SREBP by fatostatin can block metastasis, whereas the metastatic phenotype can be reproduced by a high-fat diet in PTEN-null mice without PML loss [47]. Therefore, the authors note that PML deletion leads to amplification of MAPK signalling and subsequent aberrant lipid metabolism. Current evidence for statin use in PCa has no consensus guidelines; however, a recent systematic review and meta-analysis suggests statins may have a unique role in the reduction of biochemical recurrence of PCa after definitive treatment [52].
Several other genetic drivers of aggressive PCa include p53, retinoblastoma (RB1) and Myc, which are well-established mechanisms of cancer cell survival in nutrient-poor environments. p53, RB1 and Myc may also be associated with dysfunctional lipid metabolism. The tumour suppressor gene (TSG) p53 regulates cellular metabolism and is a transcription factor controlling protein expression in cell cycle arrest, DNA repair, apoptosis and senescence [53]. Mutations in p53 with gain-of-function activities (p53-R273H and p53-R280K) encourage binding of the mutant protein to SREBPs, leading to aberrant lipid metabolism and increased fatty acid synthesis, thereby amplifying tumour progression [53]. Conversely, loss of function (LOF) of the TSG protein RB1 (a critical transcriptional corepressor in the prevention of tumour development and progression) has been associated with progression to castration-resistant PCa [54]. Furthermore, LOF of RB1 is associated with alterations in multiple metabolic pathways including lipid, amino acid and peptide metabolism; however, the specifics of lipid dysregulation are unknown [54]. More recently, Myc has also been associated with key FA synthesis genes including ACLY, ACC1 and FAS in PCa [55]. Hi-Myc transgenic mice with prostate-specific overexpression of Myc demonstrated increased circulating levels of total free FAs [55]. As previously noted, FA metabolism is intricately related to PCa development through β-oxidation for ATP generation, supporting the excess energy requirements for growth and proliferation of cancer cells [24]. In light of these genetic alterations associated with more aggressive disease variants, recent studies have attempted to identify specific molecular features of aggressive PCa.
Aggressive variant PCa (AVPC) refers to AR-independent forms of PCa [56]. Clinically, AVPC is characterised by rapid disease progression including hormone-refractory disease and visceral metastases. PTEN, p53, RB1 and Myc have been found to be more frequently altered in AVPC, which is characterised by a combination of alterations similar to small-cell PCa, and therefore these alterations have direct clinical relevance to platinum-based treatment [56,57]. Hence, treating AVPC in a similar manner to small-cell PCa has demonstrated improvements in progression-free survival (65.4% first-line and 33.8% second-line) and median overall survival (16 months (95% CI, 13.6-19.0 months)) after first-line carboplatin and docetaxel and second-line etoposide and cisplatin, respectively [58].
Novel Pharmacology Treatments
CYP11A is an essential enzyme in catalysing the initial step of steroid hormone biosynthesis. Inhibition of CYP11A has been hypothesised to halt the synthesis of all steroid hormones. A recent study by Karimaa et al. (2022), showed development of the first-in-class oral CYP11A1 inhibitor, ODM-208, which is non-steroidal and selective. Administration to animal models and 6 human patients with metastatic castration-resistant PCa demonstrated rapid, complete and reversible inhibition within a few weeks, in conjunction with steroid hormone replacement. The authors note that ODM-208 administration is feasible with concomitant corticosteroid replacement therapy which is an exciting potential avenue for treatment of castration-resistant PCa [59].
Conclusions
Prostate cancer is intrinsically related to lipid metabolism and androgens through several notable pathways. This review details the multi-level relationship between lipid metabolism and prostate cancer. The understanding of this interconnectivity is continually evolving and offers a clear pathway towards propagating novel therapeutic management of an ever-growing global cancer disease burden.
Author Contributions: M.A. and A.Y., writing and draft preparation. N.L. and D.B., conceptualisation and editing. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-01-12T17:36:16.951Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "0497fe0ec451dd62ea6e8d42e9ade50fa07151d2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/15/2/341/pdf?version=1672847174",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2d466e09d1a7bbc08988ac2f756e90621a36ac3f",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
1469961 | pes2o/s2orc | v3-fos-license | Effect of Dexamethasone and Fluticasone on Airway Hyperresponsiveness in Horses With Inflammatory Airway Disease
Background Airway hyperresponsiveness (AWHR), expressed as hypersensitivity (PC 75 RL) or hyperreactivity (slope of the histamine dose‐response curve), is a feature of inflammatory airway disease (IAD) or mild equine asthma in horses. Glucocorticoids are used empirically to treat IAD. Objectives To determine whether dexamethasone (DEX) (0.05 mg/kg IM q24h) and inhaled fluticasone (FLUT) (3,000 μg q12h) administered by inhalation are effective in decreasing AWHR, lung inflammation, and clinical signs in horses with IAD. Methods A randomized crossover study design was used. Eight horses with IAD were assigned to a treatment group with either DEX or FLUT. Measured outcomes included lung mechanics during bronchoprovocative challenges, bronchoalveolar lavage fluid (BALF) cytology, and scoring of clinical signs during exercise. Results Dexamethasone and FLUT abolished the increase in RL by 75% at any histamine bronchoprovocative dose in all horses after the first week of treatment. However, after 2 weeks of FLUT treatment, 1 horse redeveloped hypersensitivity. There was a significant decrease in the number of lymphocytes after treatment with both DEX and FLUT (P = .039 for both) but no significant differences in other BALF cell types or total cell counts (P > .05). There was no difference in the scoring of the clinical signs during each treatment and washout period (P > .05). Conclusions and Clinical Importance Both DEX and FLUT treatments significantly inhibit airway hypersensitivity and hyperreactivity in horses with IAD. There are no significant effects on the clinical signs or the number of inflammatory cells (except lymphocytes) in BALF. The treatments have no residual effect 3 weeks after discontinuation.
Mild equine asthma (inflammatory airway disease [IAD]), 1 together with recurrent airway obstruction (RAO) and summer pasture-associated obstructive pulmonary disease, comprise the equine inflammatory respiratory diseases. [1][2][3] The prevalence of IAD is high, and the disease can affect horses of any age and discipline. 4 It can impact performance in both racehorses and sport horses. 1 Inflammatory airway disease is defined as a noninfectious inflammatory lung disease with 3 predominant traits: (1) respiratory clinical signs at work, exercise intolerance without clinical signs of labored breathing at rest, or both, even after exposure to moldy hay; (2) evidence of pulmonary dysfunction in the form of airway hyperresponsiveness (AWHR); (3) nonseptic inflammation based on bronchoalveolar lavage (BAL) cytologic evaluation. 1 Airway hyperresponsiveness is one of the main features of IAD and can contribute to the development of clinical signs. Analogous to human asthma, AWHR can be objectively and reliably demonstrated in horses using bronchoprovocative challenge with histamine. [5][6][7] This allows measurement of airway sensitivity (threshold of the bronchoconstriction response) and reactivity (magnitude of the bronchoconstriction response). Airway hyperresponsiveness is a valuable variable for both research and clinical practice because it can be detected before more obvious clinical signs develop. 8 An increased number of mast cells in the BAL fluid, [8][9][10] respiratory clinical signs, and exercise intolerance 7,11 are correlated with AWHR in horses.
The exact etiology of IAD is still unknown. Several studies have demonstrated a link between IAD and a poor environment. 1,8,[12][13][14] There is also evidence that supports allergy as a contributing factor for the disease. 8,15 A connection between infectious airway disease with tracheal inflammation in young racehorses 16,17 and IAD has been suggested. Although many studies have been published on the diagnosis and characterization of the phenotype of IAD in horses, the scientific evidence for treatments of IAD is extremely limited. More recently, 1 study showed that, with dietary supplementation with Omega-3 in addition to environmental modifications, lung inflammation will be controlled more rapidly than with only environmental modification. 18 Despite this gap in our knowledge and because IAD is an inflammatory lung disease, it is common practice to treat IAD with glucocorticoids. In several studies on heaves (RAO), dexamethasone (DEX) has specifically been used as a reference treatment to which other glucocorticoids have been compared. [19][20][21] Both DEX and inhaled fluticasone (FLUT) are effective in relieving clinical signs and significantly decreasing neutrophilia in bronchoalveolar lavage fluid (BALF) in horses with severe asthma (RAO). [20][21][22] The objective of this study was thus to evaluate and compare the effects of DEX and FLUT on the clinical signs, AWHR and BAL fluid cytology in horses with IAD. We hypothesized that both glucocorticoids would improve clinical signs and lung function as well as alter the cytologic findings of BALF.
Abbreviations: AWHR, airway hyperresponsiveness; BAL, bronchoalveolar lavage; BALF, bronchoalveolar lavage fluid; DEX, dexamethasone; E L, pulmonary elastance; FLUT, inhaled fluticasone; IAD, inflammatory airway disease (mild equine asthma); PC 75 R L, airway hypersensitivity; RAO, recurrent airway obstruction (severe equine asthma); R L, pulmonary resistance.
Material and Methods
This study was approved by the Animal Care Committee of the Health Science Centre at the University of Calgary. The authors used the REFLECT statement guidelines to report this study. 23
Horses
Eight adult horses (median body weight 512 kg; range 434-563 kg) with IAD from our research herd were studied. The number of horses was calculated using a power of 0.9 for a difference in measured variables between baseline and treatments of 2 times the within-patient standard deviation. a Horses were various breeds, predominantly Quarter Horses or Thoroughbreds, 4 mares, and 4 geldings of various ages (4-16 years old). Criteria for inclusion were as follows: (1) the presence of respiratory clinical signs during exercise without labored breathing at rest, (2) the absence of increased lung resistance at rest after a challenge with moldy hay, 24 (3) the presence of AWHR measured by an increase in lung resistance (R L ) by 75% at lower doses of nebulized histamine, 25 (4) a BAL with increased percentage of mast cells (>2%) or/and eosinophils (>0.1%) or/and neutrophils (>5%).
Prior to the experiment, horses were conditioned to stand in stocks wearing a mask. The animals were kept in the same outside paddocks for at least 3 weeks before the experiment and the management remained the same throughout the period of the study. The horses were kept on straw and were fed round bale hay and a pellet supplement. None of the horses received treatment for respiratory disease during the 3 months preceding the study.
Bronchoalveolar Lavage
Bronchoalveolar lavages were performed in the morning (8:00-10:00 AM) using a standard protocol. 4 Briefly, horses were sedated with xylazine b (0.8-1.0 mg/kg of body weight, IV) and butorphanol c (10-20 lg/kg of body weight, IV). A videoendoscope (3 m length, 12.9 mm diameter) was then inserted through the nostrils and directed down into the left lung until its tip was wedged in a distal bronchus. Small boluses of 0.5% lidocaine d solution were administered (up to a maximal volume of 120 mL) to desensitize the airway mucosa. Two 250-mL boluses of sterile 0.9% sodium chloride were alternatively instilled under pressure into the bronchus and aspirated via the endoscope biopsy channel by use of a suction pump. The BAL fluid was collected in a 500-mL plastic Nalgene jar and its volume was recorded. A 5-mL sample of the BAL fluid was immediately put into a Vacutainer EDTA tube which was stored on ice until analysis. Cytology slides were prepared within 3 hours of BAL procedure using a cytospin (113 g for 4 minute), then stained with an automatic stainer e using a Modified Wright Giemsa solution for better visualization of mast cells. Differential counts were performed on at least 400 nucleated cells, not including epithelial cells, by 1 author (NF, clinical pathologist), who was blinded to all the results of the study. 26
Lung Function Tests
Baseline lung mechanics measurements and histamine challenges were performed on the horses as previously described, with modifications. 6,[27][28][29] Briefly, standard lung mechanics were measured in unsedated horses before and during the bronchoprovocative challenge using airflow and esophageal pressure data acquisition. Flow rate was measured by a heated pneumotachograph f attached to a custom-made fiberglass mask sealed over the nose of each horse. Transpulmonary pressure (P L ) was obtained by use of a differential pressure transducer, g which was connected to a small-diameter esophageal tube (inside diameter, 2 mm; outside diameter, 4.5 mm) with a balloon sealed over the end and placed in the distal third of the esophagus. The second port of the differential pressure transducer was connected to the mask to subtract the mask pressure from the esophageal pressure. The balloon was distended with 15 mL of air and positioned to obtain the maximal changes in P L during a respiratory cycle (DP L ) and to eliminate cardiac artifacts. The balloon was checked for leaks at the beginning and at the end of each experiment. The system was calibrated for each experiment using a calibrated 3-L syringe h and a water manometer for the flow and pressure signals, respectively. The signals from the transducers were processed and analyzed using the UnitWise and Flexiware data acquisition and analysis system. i In addition to spirometry variables, values of pulmonary resistance (R L ) and elastance (E L ) were calculated at a rate of 200 Hz by applying the data to the multiple regression equation for the single-compartment model of the lung. 27 The coefficients of determination for the fit of the equation to the data were calculated.
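The single-compartment model referred to here relates transpulmonary pressure to flow and volume as P L = R L × flow + E L × volume + P 0, so R L and E L follow from a multiple linear regression of pressure on flow and volume. A rough sketch of that calculation, with simulated signals standing in for the pneumotachograph and esophageal-balloon data (all numbers assumed), is:

import numpy as np

fs = 200.0                                   # Hz, matching the 200 Hz analysis rate described above
t = np.arange(0, 10, 1 / fs)                 # 10 s of breathing
flow = 4.0 * np.sin(2 * np.pi * 0.25 * t)    # simulated airflow, L/s (15 breaths/min)
volume = np.cumsum(flow) / fs                # integrate flow to obtain volume, L

# Simulated transpulmonary pressure for assumed R_L, E_L and offset, plus noise.
true_RL, true_EL, p0 = 0.6, 1.2, 0.1         # cmH2O/L/s, cmH2O/L, cmH2O
pressure = true_RL * flow + true_EL * volume + p0
pressure += 0.05 * np.random.default_rng(1).standard_normal(t.size)

# Multiple linear regression: P_L = R_L*flow + E_L*volume + P_0
X = np.column_stack([flow, volume, np.ones_like(t)])
(RL, EL, P0), *_ = np.linalg.lstsq(X, pressure, rcond=None)

fitted = X @ np.array([RL, EL, P0])
r2 = 1 - np.var(pressure - fitted) / np.var(pressure)
print(f"R_L = {RL:.2f} cmH2O/L/s, E_L = {EL:.2f} cmH2O/L, R^2 = {r2:.3f}")

The coefficient of determination computed at the end corresponds to the goodness-of-fit statistic that the study reports for this regression.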
Airway hyperresponsiveness was evaluated using histamine bronchoprovocation. Briefly, once baseline measurements were calculated based on an average obtained over a minimum of 20 consecutive breaths at steady state, lung mechanics were assessed after nebulization with saline and increasing doses of histamine j (1,2,4,8,16, and maximum 32 mg/mL). Each dose was administered for 90 seconds through a fine-particle jet nebulizer k (0.5 mL/min) powered by a high-pressure (30 psig), high-flow (9 L/min) air compressor. l A connector system with an aerochamber m and 1-way valves was attached between the nebulizer and the facemask. After each nebulization, the connector system was immediately removed from the mask and replaced with the pneumotachograph for data collection. The test was terminated either when the pulmonary resistance (R L ) doubled compared to the baseline resistance or when the maximum histamine dose (of 32 mg/mL) was delivered.
Concentration-response curves were plotted for each bronchoprovocative challenge test as the percentage increase in R L from baseline against the histamine concentration (Fig 1). Airway hypersensitivity and reactivity (slope of the concentration-response curve) were determined as follows (Fig 1): The dose of histamine that evoked a 75% increase of baseline R L (PC 75 R L ), which is an indicator of airway sensitivity, 25 was determined by interpolation or extrapolation of the histamine dose-response curve depending on the increase in R L compared to baseline. In horses for which R L crossed the 75% increase threshold before the maximal dose (32 mg/mL) of nebulized histamine, the PC 75 R L was determined by interpolation of the line between the last 2 points of the concentration-response curve (A-B in Fig 1). In horses for which R L stayed lower than the 75% increase threshold value for all doses of nebulized histamine, a linear regression of the last points (2 or 3, depending on which resulted in a positive value or a more even plateau) of the curve was used and the PC 75 R L value was determined by extrapolation of the line. In addition, if the slope of the last points of the line was negative, we conservatively set the PC 75 R L at 32 mg/mL. Lastly, we calculated airway reactivity by calculating the slope of the concentration-response curves using the same points as for PC 75 R L. Because the baseline value is an important point for the calculations, we averaged the R L values of baseline and saline and also used a linear regression of the first 3 points of the histamine concentration-response curve (averaged pre-post saline, 1 and 2 mg) to determine baseline R L values (Fig 1).
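A simplified numerical illustration of how PC 75 R L and the reactivity slope can be read off a histamine concentration-response curve (the dose-response values below are invented, and the interpolation shown here is a simplified version of the rules described above):

import numpy as np

doses = np.array([0.0, 1, 2, 4, 8, 16, 32])           # histamine, mg/mL (0 = averaged baseline/saline)
rl_increase = np.array([0, 5, 12, 30, 55, 90, 140])   # % increase in R_L over baseline (illustrative)

def pc75(doses, rl_increase, threshold=75.0):
    """Dose at which R_L rises 75% above baseline, by linear interpolation between the
    last point below and the first point above the threshold (assumes the first point is below)."""
    above = np.nonzero(rl_increase >= threshold)[0]
    if above.size == 0:
        return None                                    # threshold never reached; extrapolation needed
    j = above[0]
    i = j - 1
    frac = (threshold - rl_increase[i]) / (rl_increase[j] - rl_increase[i])
    return doses[i] + frac * (doses[j] - doses[i])

slope = np.polyfit(doses, rl_increase, 1)[0]           # crude overall reactivity, % per mg/mL
print(f"PC75RL = {pc75(doses, rl_increase):.1f} mg/mL, reactivity slope = {slope:.1f} % per mg/mL")

In the study itself, the reactivity slope was computed over the same points used for the PC 75 R L determination rather than over the whole curve, so the global slope above is only a rough analogue.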
Clinical Signs
We modified a previously described clinical score 10 to grade respiratory clinical signs before, during, and after exercise ( Table 1). The horses were lunged in an arena with side reins after an exercise protocol of 1-minute walk, 7-minute trot, and 1-minute canter. The arena had a sand and rubber chip footing that was watered for 15-20 minutes prior lunging to minimize dust exposure. The lunging and evaluation of clinical signs were performed by 1 author (TT) who was not blinded to the study results. All horses tolerated the exercise well and could be lunged according to plan throughout the study. Before exercise, the horse's rectal temperature was recorded to exclude infectious respiratory disease. We first scored breathing effort, nostril flare, and nasal discharge before and during lunging. Then, immediately after the canter period, the respiratory rate was recorded for 1 minute and a photograph of both nostrils was taken. Nasal discharge was scored based on the area of the nostril (average of both nostrils) covered with mucus and on its distribution on the upper lip (Table 1B). The number of coughs was also counted throughout the exercise. This scoring system has yet to be validated for IAD; however, the presence of a chronic cough (>3 weeks duration) and nasal discharge can indicate an increased risk for developing IAD 1 ; the scoring system used is described in Table 1.
Experimental Protocol
The study used a controlled randomized crossover design. Randomization was performed by one author (RL) using Microsoft Excel Random Generator function. Two groups with 5 and 3 horses each were subjected to 2 treatment protocols separated in the middle by a washout period. On day À1 of the study, a BAL was performed on all the horses as described above. On day 0, approximately 24 hours after the BAL, baseline lung mechanics and histamine bronchoprovocation challenge were carried out as described above. The treatments with DEX and FLUT were started on day 1 of the study. Dexamethasone n (0.05 mg/kg) was administered intra muscularly once a day in the morning (7:00-8:00 AM) for 15 days. Fluticasone propionate o (3,000 lg) was administered using metered dose inhalers (MDIs) and an Aerohippus p twice daily (7:00-8:00 AM and PM) for 15 days. Lung mechanics and histamine bronchoprovocation challenges were performed on days 8 and 16. A second BAL was performed on day 15. The first treatment phase was followed by a 3-week washout period before switching to the second treatment, for which the same protocol was followed. The horses were lunged, starting on day 1 of the study, every second day during the treatment and every fourth day during the washout period. Day 36, which was the last day of the second washout period, was also the last day of lunging for both groups (Table 3).
Statistical Analysis
Nonparametric tests (Wilcoxon signed-rank test) were used to compare airway hyperreactivity, hypersensitivity, and BAL variables between treatments as well as before and after each treatment. A Friedman 2-way test was used to assess variation in clinical scores over each treatment and washout period, with Bonferroni correction for multiple testing of clinical scores used to determine level of significance for P values. A Spearman rank correlation was used to test for correlation between pulmonary sensitivity or reactivity and BAL cytological parameters. Values were expressed as the median (1st Quartile-3rd Quartile). A P value < .05 was considered significant (lower when Bonferroni correction was applied). Statistical analysis was carried out using commercial software. q
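For reference, the nonparametric tests listed above are all available in open-source software; a sketch using SciPy with dummy paired data (none of these numbers are study data) could look like this:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Dummy paired measurements, e.g., BALF lymphocyte % before vs after a treatment in 8 horses.
before = rng.normal(40, 5, 8)
after = before - rng.normal(5, 2, 8)
w_stat, w_p = stats.wilcoxon(before, after)            # paired, nonparametric
print(f"Wilcoxon signed-rank: P = {w_p:.3f}")

# Dummy clinical scores for 8 horses at 4 time points for the Friedman test.
scores = rng.integers(0, 4, size=(8, 4))
f_stat, f_p = stats.friedmanchisquare(*scores.T)
print(f"Friedman: P = {f_p:.3f}")

# Spearman rank correlation, e.g., PC75RL versus BALF mast cell %.
rho, s_p = stats.spearmanr(rng.normal(10, 3, 8), rng.normal(3, 1, 8))
print(f"Spearman rho = {rho:.2f}, P = {s_p:.3f}")

# Bonferroni-corrected significance level when six clinical variables are scored.
print(f"Bonferroni-corrected alpha = {0.05 / 6:.4f}")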
Results
Lung mechanics, histamine bronchoprovocation challenges, BAL, and lunging procedures were well tolerated by all horses. Seven horses completed the study. One of the 8 horses could only be used in the DEX treatment phase because she was euthanized due to femoral nerve paresis during the FLUT treatment phase (causality unrelated to study).
Histamine Bronchoprovocation Challenges: Airway Hypersensitivity and Hyperreactivity
The coefficients of determination for the regression analysis used to calculate R L and E L during the lung mechanics experiments for DEX and FLUT, respectively, had a median value of 0.95 (0.92-0.97) and 0.95 (0.92-0.97). All horses had airway hypersensitivity at baseline prior to DEX and FLUT treatments, as shown by the low baseline PC 75 R L values (Fig 2). The median PC 75 R L values were 6.7 mg/mL (5.1-13.4) and 14.2 mg/ mL (7.6-24.7) at baseline before DEX and FLUT treatments, respectively (Fig 2). There was no significant difference (P = .23) in the values of R L between treatment baselines for DEX and FLUT (Fig 2). The DEX treatment abolished the 75% increase in R L from baseline at any dose of histamine used in the bronchoprovocation challenge 8 and 16 days after initiation of treatment in all 8 horses. The calculated PC 75 R L median values were 40.2 mg/mL (32.0-182.8) and 257.7 mg/mL (33.8-435.2) after 8 and 16 days of DEX treatment, respectively, which were both significantly different from treatment baseline (P = .01 for both) (Fig 2). There was no significant difference in PC 75 R L between 8 and 16 days of treatment with DEX (P = .15) (Fig 2).
The FLUT treatment abolished the 75% increase in R L from baseline at any dose of histamine used in the bronchoprovocation challenge after 8 days of treatment in all 7 horses, but in only 6 horses after 16 days of treatment (Fig 2). The difference in PC 75 R L between 8 and 16 days of treatment with FLUT was not significant (P = .078) (Fig 2). All horses showed increased concentration-response curve slope values at day 0 in both DEX and FLUT treatments (13.6 (5.7-82.9) and 5.3 (3.3-9.4), respectively), indicative of airway hyperreactivity (Fig 3). There was no significant difference (P = .15) in the slope values between baselines for DEX and FLUT (Fig 3). Compared to baseline, the slope values of the concentration-response curve decreased significantly at day 8 and day 16 for both DEX (P = .008 for both) and FLUT treatments (P = .008 for both) (Figs 3 and 4). There was no significant difference in the slope values between 8 and 16 days of treatment with DEX and FLUT (P = .50 and P = .41, respectively).
Table 1. Clinical scoring system for respiratory signs in horses with inflammatory airway disease: (A) Clinical scoring system for respiratory signs during and after exercise; (B) Details on the scoring of nasal discharge. Respiratory effort: 0 - normal; 1 - mildly increased; 2 - moderately increased; 3 - severely increased. Respiratory rate: 0 - ≤48.
Bronchoalveolar Lavage Cytology
The BALF of all the horses showed neutrophilic inflammation before each treatment (Table 2). Additionally, 4 horses before DEX treatment and 5 horses before FLUT treatment, respectively, had an increased number of mast cells. There was no association between the types of inflammation (neutrophils, mast cells, eosinophils' percentage) in BALF of individual horses and the results of the bronchoprovocative tests (PC 75 R L and reactivity). There was no significant difference between the DEX and FLUT treatment baseline values for the BAL fluid total cell counts or differential cell counts ( Table 2). The lymphocyte percentage decreased significantly in the BAL fluid after both DEX and FLUT treatments (P = .039 for both) ( Table 2). There was no significant difference in total cell count or differential cell count for any other cell type in the BAL fluid between before and after treatment with DEX and FLUT ( Table 2) although there was an evident trend in the decrease of mast cells after both treatments. The BAL sample volume collected after treatment with FLUT was significantly greater than the baseline BALF volume (P = .031); however, there was no significant increase in BALF volume after treatment with DEX ( Table 2).
Clinical Signs Score
There was no significant difference in clinical scores at baseline between the DEX and FLUT treatments. There was no significant change in the total clinical score of the horses over time (Table 3A,B). When analyzing each clinical variable separately, namely respiratory effort, nasal discharge, increase in nasal discharge with exercise, nasal flare, coughing, and respiratory rate (Tables 1 and 3A,B), there was no statistically significant change across all time points for either treatment (increase in nasal discharge for DEX and FLUT treatment had P values of .046 and .029, respectively; respiratory rate and nasal discharge for FLUT treatment had P values of .023 and .042, respectively; all were nonsignificant after Bonferroni correction for multiple testing of clinical scores).
Discussion
The main finding of the present study is that both DEX and FLUT treatments significantly decrease airway hypersensitivity and hyperreactivity in horses with IAD (mild equine asthma). Dexamethasone was administered intramuscularly in this study because it is typically used in practice to ensure adequate dosage while avoiding the need for a catheter or trained personnel for the administration of the medication. No side effects were observed. Aerosol steroid therapy is commonly used in equine medicine; we administered FLUT using MDIs and the Aerohippus device, which is more practical than the Aeromask and has been shown to have a better diffusion of drug particles in the lower airways. 30 Horses tolerated the aerosol therapy very well.
Airway hyperresponsiveness is a well-described feature of human asthma 31,32 and has also been described numerous times in horses with nonseptic respiratory diseases. 1,5,7,8 In healthy human subjects, airway reactivity reaches a plateau or maximal response, where the airway smooth muscles are activated maximally. In subjects with asthma, the plateau response is increased or abolished altogether, as there are no more limitations to maximal airway narrowing. 33 These variables are believed to have independent etiologies although the correlation between the 2 remains unclear. 34 To our knowledge, this distinction between airway reactivity and sensitivity has not previously been characterized in equine airway diseases. There have been minimal references to the plateau response in veterinary studies 8,35 and it is uncertain whether a plateau as seen in human patients also exists in horses. The fact that an abolished plateau response in humans can lead to fatal asthma incidents and that airway hyperreactivity in horses typically does not result in death suggests that the mechanisms behind airway reactivity are different or more severe in humans than in horses.
In this study, we looked at reactivity and sensitivity as separate features of airway responsiveness as well as at a possible plateau response in horses. There is no reference to the physiological slope values (ie, airway reactivity) of the concentration-response curve in horses. Therefore, we evaluated the changes in the values of this variable during treatment periods but did not use the airway reactivity as an inclusion criterion for horses into the study. However, based on the results of this study, we would suggest that a reactivity value of more than 5 (% mg/mL) is indicative of airway hyperreactivity and a value <2 (% mg/mL) can be regarded as normal. Further studies are necessary to validate and establish normal values for airway reactivity in horses.
In a previous study on horses, a plateau was defined as a change in R L of <10% after 3 consecutive doses of histamine. 8 We could observe a plateau in the majority of the horses with airway hypersensitivity inhibited by the therapies but also witnessed a repeated increase in R L with higher doses of histamine after a plateau had been reached in 5 horses. 8 This observation of R L fluctuations of more than 10% during histamine bronchoprovocation could mean that higher doses of histamine are needed to provoke a stable plateau response in horses or that horses do not have a maximal plateau response to specific agonists comparable to humans. We did not use the plateau response in the statistical analysis of our study due to the yet uncertain value of this variable in horses.
Although DEX and MDIs FLUT treatments decreased airway hypersensitivity and reactivity in these horses with IAD, they did not affect the total or the differential cell counts of the BAL fluid. The persistent lower airway inflammation measured by the BAL technique in our study was similar to previous studies on the treatment of RAO in horses with steroids. 27,36,37 Conversely, other studies have shown a decrease in the amount of inflammatory cells after steroid treatment. 21,22 The environmental conditions were not changed in our study, which might contribute to the persistent accumulation of inflammatory cells in the lower airways. However, the lack of a negative control group treated with a placebo in the study does not allow conclusions to be made on the effect of the environment on lung hypersensitivity and inflammation in these horses. We did not find any significant association between types of inflammation in the BALF (neutrophilic, mast cell, eosinophilic) and AWHR in individual horses as has been reported previously. 9 However, this might be due to a lack of power and from the crossover study design that included a small number of horses with each type of inflammation (and none with eosinophilic inflammation). There was a noticeable trend in the decrease of mast cells after both treatments which might have been significant had the environmental conditions been changed in the study. Bronchoalveolar lavage cytology is a good method for measuring the number of inflammatory cells in the lung but it does not give information about the activation level of the various cell types found in the lower airways. Therefore a decrease in airway hypersensitivity and reactivity in spite of a persistent high percentage of inflammatory cells in the lower airways could be due to a decreased activation level of the inflammatory cells. It is also possible that, similarly to human asthma, steroids inhibit neutrophil apoptosis in horses with IAD, 38 thus maintaining greater levels of inflammatory cells in the airways.
Inflammatory airway disease is also defined by poor performance, exercise intolerance, or coughing, with excessive tracheal mucus. 1 These variables have previously been evaluated in horses either by measuring gas exchange 39,40 and the metabolic response to exercise during a treadmill test, or by subjective evaluation of the horses' performance and competition results. 8 The challenge in evaluating clinical signs in nonracehorses is the lack of reference values and standardized tests for horses with different fitness levels and aerobic capacities. In this study, we modified a previously described scoring system 10 ; we evaluated each clinical variable separately and then added them to calculate a comprehensive clinical score during the treatment and washout periods (Tables 1 and 3). The increase in nasal discharge induced by exercise was the only variable that showed a significant increase in the washout period after DEX treatment (Table 3). This result suggests that airway inflammation increased during the washout period after DEX treatment. Other clinical variables remained largely unchanged throughout the study. This might be due to a lack of sensitivity of our scoring system, possibly because the exercise intensity was not great enough to reveal the clinical differences induced by the treatments. Another difficulty in objectively measuring clinical variables is the influence of environmental factors such as weather, dust, or chemical irritants, as well as potentially coincidental factors such as head position, previous coughing, or time of day. Further research is needed to validate objective clinical scoring for submaximal exercise conditions in horses with IAD.
Although there is evidence that airway inflammation is associated with AWHR, 6,8,10,41 the correlation between AWHR and respiratory clinical signs has not been established in horses. The importance of the association between these 2 traits in IAD is largely unknown. The fact that both glucocorticoids in our study significantly decreased AWHR but neither altered the BAL cytology (except the lymphocyte count) nor changed clinical signs might mean that these features of IAD have different etiologies or pathophysiologies and have to be addressed separately in treatment. In our opinion, these results reflect the complex nature of the disease and indicate that more specific diagnostic means are needed to appropriately assess the response to treatment. | 2018-04-03T03:14:24.353Z | 2017-05-31T00:00:00.000 | {
"year": 2017,
"sha1": "0611f64bf60a10ef3ed474b0af9b9411f6418596",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1111/jvim.14740",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0611f64bf60a10ef3ed474b0af9b9411f6418596",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247941435 | pes2o/s2orc | v3-fos-license | Think positive: An interpretable neural network for image recognition
The COVID-19 pandemic is ongoing and is placing an additional burden on healthcare systems around the world. Timely and effective detection of the virus can help to reduce the spread of the disease. Although RT-PCR is still the gold standard for COVID-19 testing, deep learning models that identify the virus from medical images can also be helpful in certain circumstances, in particular when patients undergo routine X-ray and/or CT-scan tests and develop respiratory complications within a few days of such tests. Deep learning models can also be used for pre-screening prior to RT-PCR testing. However, the transparency/interpretability of the reasoning process behind the predictions made by such deep learning models is essential. In this paper, we propose an interpretable deep learning model that uses a positive reasoning process to make predictions. We trained and tested our model on a dataset of chest CT-scan images of COVID-19 patients, normal people, and pneumonia patients. Our model achieves accuracy, precision, recall, and F1-score of 99.48%, 0.99, 0.99, and 0.99, respectively.
Introduction
The COVID-19 pandemic is placing enormous strain on public health systems around the world and is severely affecting the economies of many countries. Although vaccination against the virus is under way, the number of variants of the virus is also increasing. The new variants of the virus can reduce the effectiveness of the vaccines (WHO, 2021). Therefore, along with vaccination, detection of the virus is important to reduce the spread of the disease and the development of mutants of the virus. In addition to the prevalent testing technique, reverse transcription polymerase chain reaction (RT-PCR), deep learning models can also be helpful in efforts to detect the virus. Most deep learning algorithms work as a black box because their reasoning process for their predictions is not transparent/interpretable. However, the interpretation of the reasoning process of a deep learning model involved in a high-stakes decision is important. There have been cases where erroneous data fed into black-box models went unnoticed, due to which wrongful long prison sentences were given (e.g., inmate Glen Rodriguez was denied parole because of a wrong COMPAS score) (Li, Liu, Chen, & Rudin, 2017; Wexler, 2017). The lack of interpretability of the reasoning processes of such deep learning models has become a major issue for whether we can trust the predictions that come from these models. Therefore, we propose an interpretable deep learning model, the quasi prototypical part network (Quasi-ProtoPNet), and trained and tested the model on a dataset of chest CT images.
Related work
In this section, we first discuss works that are related to our paper because of the interpretability of their reasoning process. Second, we provide a brief summary of studies that are related to this work in that they categorize medical images (chest CT-scan and X-ray images). The models in the second category attempt to distinguish medical images of COVID-19 patients from the medical images of pneumonia patients and normal people, but these models are not necessarily interpretable.
Attention-based interpretability is another technique to clarify the reasoning process of neural networks. Instances of this technique include part-based models (Girshick, Donahue, Darrell, & Malik, 2014; Huang, Xu, Tao, & Zhang, 2015; Ren, He, Girshick, & Sun, 2015; Simon & Rodner, 2015; Uijlings, van de Sande, Gevers, & Smeulders, 2013; Xiao et al., 2015; Zhang, Donahue, Girshick, & Darrell, 2014; Zheng, Fu, Mei, & Luo, 2017; Zhou, Sun, Bau, & Torralba, 2018) and class activation maps (CAM) (Zhou, Khosla, Lapedriza, Oliva, & Torralba, 2016). In this approach, the aim of a model is to show the patches of an input image that are the focus of its attention; nonetheless, these models do not present prototypes that resemble the parts of an input image on which the models focus. Recently, a CXR-specific model with class activation maps has also been developed to detect COVID-19 from medical images (Rajaraman, Sornapudi, Alderson, Folio, & Antani, 2020).
Fig. 1. For a given CT-scan image of a COVID-19 patient, Quasi-ProtoPNet identifies the parts of the image that it thinks are similar to the learned prototypes.
Case-based classification techniques that use prototypes (Bien & Tibshirani, 2011;Priebe, Marchette, DeVinney, & Socolinsky, 2003;Wu & Tabak, 2017) or k-nearest neighbors (Papernot & McDaniel, 2018;Salakhutdinov & Hinton, 2007;Weinberger & Saul, 2009) are also related to our work. Throughout this paper, a prototype or a prototypical part will represent a patch of an image. Li et al. (2017) have developed a model that uses full image-sized prototypes and requires a decoder for visualizing prototypes. Chen, Li, Barnett, Su, and Rudin (2018) developed a model ProtoPNet which significantly improved on the model developed in Li et al. (2017).
As shown in Fig. 1, ProtoPNet is able to identify different parts of an input image that are similar to different prototypes, and it classifies an image based on the similarity scores. To classify an input image, ProtoPNet finds the Euclidean distance between each latent patch of the input image and the learned prototypes of images from different classes, where prototypes have spatial dimensions 1 × 1. The maximum of the inverted distances between a prototype and the patches of the input image is called the similarity score of the prototype. Note that, the smaller the distance, the larger the reciprocal, and there will be only one similarity score for each prototype. A weighted combination of similarity scores is used to determine the logits for different classes and these logits are normalized using Softmax to determine the class of the input image. The weights for the correct class and incorrect class of a training image are set equal to 1 and −0.5, respectively. These weights are also called connections of the similarity scores with the classes. The negative weights are assigned to include the negative reasoning process, that is, to reject the incorrect classes. ProtoPNet tries to zero out the negative weights during the training process, and with this assumption of ProtoPNet, a theorem is proved (Chen et al., 2018, Theorem 1.1). However, our experiments show that it is hardly possible to zero out the negative connections during the training process after making a negative connection between the similarity scores and incorrect classes.
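To make this scoring pipeline concrete, the following minimal sketch shows how inverted distances can be turned into per-prototype similarity scores and then into class logits. The array shapes, the simple 1/(d + eps) inversion, and the choice of weights are illustrative assumptions rather than the exact formulation used by ProtoPNet.

import numpy as np

def class_logits(distances, proto_class, n_classes, w_correct=1.0, w_incorrect=-0.5, eps=1e-4):
    """distances: (n_prototypes, n_patches) distances between each prototype and each latent patch.
    proto_class: (n_prototypes,) index of the class that owns each prototype."""
    # Similarity score of a prototype = maximum of the inverted distances over all patches.
    similarity = (1.0 / (distances + eps)).max(axis=1)            # shape (n_prototypes,)
    # Weight matrix: w_correct on a prototype's own class, w_incorrect elsewhere.
    weights = np.full((n_classes, len(proto_class)), w_incorrect)
    weights[np.asarray(proto_class), np.arange(len(proto_class))] = w_correct
    logits = weights @ similarity                                  # shape (n_classes,)
    probs = np.exp(logits - logits.max())
    return logits, probs / probs.sum()                             # softmax-normalized class scores

A positive-reasoning variant such as Quasi-ProtoPNet would simply set w_incorrect = 0 and keep the weights fixed during training.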
The models NP-ProtoPNet (Singh & Yow, 2021c), Gen-ProtoPNet (Singh & Yow, 2021a) and Ps-ProtoPNet (Singh & Yow, 2021b) are variations of ProtoPNet, and we refer to these four models collectively as the ProtoPNet models or the series of ProtoPNet models. The Gen-ProtoPNet model uses a generalized version of the Euclidean distance function; NP-ProtoPNet considers both the negative and the positive reasoning processes but emphasizes the negative reasoning process; and the Ps-ProtoPNet model uses the connections between logits and similarity scores suggested by Singh and Yow (2021b, Theorem 1) and uses the generalized version of the distance function. The theorem (Singh & Yow, 2021b, Theorem 1) uses the more realistic assumption of fixed negative connections between similarity scores and incorrect classes to find the impact of changes in the negative connections on the logits. The impact on the logits arises from the projection of prototypes onto the patches of training images, that is, the replacement of the prototypes with the latent patches of the training images. However, the use of fixed negative connections leads to a decrease in the logit of the correct class and an increase in the logits of incorrect classes; consequently, the accuracy of Ps-ProtoPNet decreases after the projection of prototypes. In particular, the impact is more severe when the number of classes is small, see Singh and Yow (2021b, Theorem 1). In summary, each model of the series of ProtoPNet models uses the negative reasoning process along with the positive reasoning process, whereas our model Quasi-ProtoPNet uses only the positive reasoning process to categorize images.
To avoid the flaws of the ProtoPNet models, especially when the number of classes is small, Quasi-ProtoPNet uses only the positive reasoning process by placing a zero connection between the similarity scores and incorrect classes. Quasi-ProtoPNet suspends the convex optimization of the last layer to keep the connections constant, where suspension of the convex optimization of the last layer means that Quasi-ProtoPNet does not optimize the last layer while freezing all other layers. In addition to the positive reasoning process, Quasi-ProtoPNet uses prototypes of all types of spatial dimensions, that is, rectangular and square spatial dimensions, whereas the ProtoPNet model uses prototypes with only square spatial dimensions 1 × 1. Prototypes with large spatial dimensions help our model to classify the images on the basis of the objects instead of the backgrounds of the objects in the images. However, the optimum spatial dimensions need to be determined to obtain better accuracy.
To identify an image that has not been previously seen, humans can compare patches of the image with patches of images of known objects. This type of reasoning is usually used in difficult identification tasks. For example, radiologists may compare suspicious tumors in an X-ray or a CT-scan image with prototype tumor images to diagnose cancer. This type of human reasoning inspired our model, in which comparison of image parts with learned prototypes is an integral part of the reasoning process. Therefore, our model differentiates between CT-scan images of COVID-19 patients and CT-scan images of pneumonia patients based on the greater similarity between the learned prototypes and the patches of the images.
Dataset
We chose the dataset (Gunraj et al., 2021b) of chest CT-scan images of COVID-19 patients, normal people and pneumonia patients to train and test our model. The dataset consists of 143778 training images and 25658 test images. We crop the images using the bounding box information provided with the dataset. We also use the information provided with the dataset to segregate the cropped images into three classes, Covid, Normal and Pneumonia, which contain the images of COVID-19 patients, normal people and pneumonia patients, respectively. We also call these the first, second and third classes, and denote them by C, N and P, respectively. The classes C, N and P have 35996, 25496 and 82286 training images, and 12245, 7395 and 6018 test images, respectively. All images were resized to the dimensions 224 × 224 as required by the base models.
Contributions
The novelty of our model is that it uses a positive reasoning process together with prototypes that can have any type of spatial dimensions, that is, rectangular or square spatial dimensions. Quasi-ProtoPNet uses an objective function different from the objective function used in the series of ProtoPNet models. The contributions of this paper are summarized below.
• Quasi-ProtoPNet uses only the positive reasoning process by maintaining zero connection between the similarity scores and incorrect classes. Quasi-ProtoPNet suspends the convex optimization of the last layer to keep the connections fixed. The suspension of the convex optimization also reduces the training time considerably.
• The architecture of Quasi-ProtoPNet enabled us to prove a theorem, see Theorem 3.1. The theorem provides theoretical evidence for the improvement in the performance of our model over the other ProtoPNet models. We remark that the theorem is not only true for the distance function that we use for our model, but also for any positive-valued function that satisfies the triangular inequality and has an appropriate domain.
• Quasi-ProtoPNet uses prototypes with both types of spatial dimensions, that is, rectangular spatial dimensions and square spatial dimensions, whereas ProtoPNet model uses prototypes with only square spatial dimensions 1 × 1.
The rest of the paper is organized as follows. In Section 2, we provide detailed information about the architecture of our model, and we explain its training procedure and reasoning process. In Section 3, we provide confusion matrices for our model with different base models, and we compare the performance of our model with that of the ProtoPNet models and the base models. We also show that the improvement in the accuracies given by our model over those given by the other ProtoPNet models is statistically significant. A graphical comparison of the accuracies is provided. In this section, we also prove a theorem that finds the bounds of the changes in logits due to the projection of prototypes onto the training images. In Section 4, we discuss the limitations of our model. In Section 5, a brief discussion of our model and the series of ProtoPNet models is provided. Finally, in Section 6, we conclude our work.
Method
In this section, we introduce and explain the architecture and the training process of our model Quasi-ProtoPNet in the context of CT-scan images.
Quasi-ProtoPNet architecture
Quasi-ProtoPNet can be built on the convolutional layers of a state-of-the-art base model (baseline), such as VGG-19 (Simonyan & Zisserman, 2015), ResNet-34, ResNet-152 (He, Zhang, Ren, & Sun, 2016), DenseNet-121, or DenseNet-161 (Huang, Liu, Van Der Maaten, & Weinberger, 2017). As shown in Fig. 2, Quasi-ProtoPNet consists of the convolutional layers of a base model followed by two additional convolutional layers, 2 × 1 and 1 × 1. These convolutional layers are collectively denoted by L, and they are followed by a generalized convolutional layer (Ghiasi-Shirazi, 2019; Nalaie, Ghiasi-Shirazi, & Akbarzadeh-T, 2017) p_t of prototypical parts. The layer p_t is followed by a dense layer w with no bias. The parameters of L and the weight matrix of the dense layer are denoted by L_conv and w_m, respectively. The activation functions ReLU and Sigmoid are used for the additional second-to-last convolutional layer and the last convolutional layer, respectively. Note that the convolutional layers L form the non-interpretable (black-box) part of our model, whereas the generalized convolutional layer p_t forms the interpretable (transparent) part of our model.
Although the convolutional layers of any of the base models can be used to construct our model, we provide the explanation of Quasi-ProtoPNet when it is constructed over the convolutional layers of VGG-16. Let x be an input image. Since the output of the convolutional layers of VGG-16 has depth 512 and spatial dimensions 7 × 7, L(x) has depth 512 and spatial dimensions 6 × 6. Note that the layer p_t is a vector of prototypical units, and each prototypical unit is a tensor of the shape 512 × h × w, where 1 × 1 < h × w < 6 × 6, that is, h and w together are neither equal to 1 nor to 6. Suppose n and m denote the total number of classes and the number of prototypes for each class, respectively. For our work n = 3, and we randomly set the hyperparameter m = 10. The shapes of L(x) and a prototype are 512 × 6 × 6 and 512 × h × w, where h and w lie between 1 and 6 but together are neither equal to 1 nor to 6. Therefore, each prototype can be thought of as a part of L(x). The model takes into account the spatial relationship between L(x) and the prototypical parts, and upsamples the part of L(x) that is at the smallest distance from a prototypical part to the input image x in order to identify the patch of x that resembles the prototype. The green rectangles in the source images are the parts of the source images from which the prototypes are actually projected. The source images of the prototypes p^1_1, p^2_1 and p^3_10 are also shown in Fig. 2. Similar to ProtoPNet (see Section 1.1), Quasi-ProtoPNet computes the similarity scores between an input image and the prototypes p^1_1-p^1_10, p^2_1-p^2_10 and p^3_1-p^3_10, see Fig. 2. The prototypes p^1_1, p^2_1 and p^3_10 have similarity scores 2.8001, 0.7889 and 1.0233, and the similarity score of p^1_1 is greater than the other two similarity scores. The complete list of similarity scores obtained from our experiments is given in the matrix s_m, see Section 2.3.
In the dense layer w, the matrices w_m and s_m are multiplied to obtain the logits. The logits for the classes C, N and P are 38.0688, 10.1137 and 11.1361, respectively. The interpretability/transparency of our model comes into play when an image is classified into a certain class. Our model is able to state the reason for the classification of the image into that class, namely that the image has some patches that are more similar to certain learned prototypes related to that class, and it shows those learned prototypes. The learned prototypes are projected from the training images, so they are patches of the training images.
Training of Quasi-ProtoPNet
Quasi-ProtoPNet uses a generalized version d of the Euclidean distance function, and in this section we show that d is a generalization of the Euclidean distance function. Consider Quasi-ProtoPNet with base model VGG-16 and let x be an input image; the shape of L(x) is then 512 × 6 × 6, as described in Section 2.1. Let p be any prototype with shape 512 × h × w, where 1 ≤ h, w ≤ 6, and h and w together are neither equal to 1 nor to 6. The output O (= L(x)) of the convolutional layers L has (7 − h)(7 − w) patches of dimensions h × w, and the square of the distance d(P_ij, p) between p and the (i, j) patch P_ij (say) of O is obtained by summing the squared differences between the entries of p and the corresponding entries of P_ij over all 512 channels and all h × w spatial positions. Note that if p has spatial dimensions 1 × 1, that is, h = w = 1, then $d^2(P_{ij}, p) = \sum_{k=1}^{512} \lVert O_{ijk} - p_{11k} \rVert_2^2$, which is the square of the Euclidean distance between p and a patch of O, where $p_{11k} \simeq p_k$. Therefore, the function d is a generalization of the Euclidean distance function. The prototypical unit p_t converts the smallest of these distances into a similarity score (Eq. (2)).
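As an illustration of this computation, the sketch below slides a prototype of shape 512 × h × w over a 512 × 6 × 6 feature map, computes the generalized squared distance for every patch, and inverts the smallest distance into a similarity score. The log-based inversion used here is an assumption standing in for Eq. (2), whose exact form may differ.

import numpy as np

def generalized_distances(feature_map, prototype):
    """feature_map: (512, 6, 6); prototype: (512, h, w). Returns a (7-h, 7-w) grid of squared distances."""
    c, H, W = feature_map.shape
    _, h, w = prototype.shape
    d2 = np.empty((H - h + 1, W - w + 1))
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            patch = feature_map[:, i:i + h, j:j + w]
            d2[i, j] = np.sum((patch - prototype) ** 2)   # sum over channels and the h x w window
    return d2

def similarity_score(feature_map, prototype, eps=1e-4):
    # Smaller minimal distance -> larger similarity (assumed log-based inversion).
    d2_min = generalized_distances(feature_map, prototype).min()
    return np.log((d2_min + 1.0) / (d2_min + eps))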
Eq. (2) shows that a prototype p is more similar to the image x when the distance between p and its closest latent patch of x is smaller. Quasi-ProtoPNet is trained using the following two steps.
Optimization of all layers before the dense layer
Let X = {x_1, ..., x_n} and Y = {y_1, ..., y_n} be sets of images and associated labels, respectively. Our objective function (3) combines the cross-entropy loss with a cluster cost (ClstCost): the reduction in cross entropy leads to better classifications, while Eq. (4), which defines ClstCost, shows that a drop in the cluster cost leads to the clustering of prototypes around their respective classes. The hyperparameter λ is set equal to 0.7. Since w_m is the weight matrix of the dense layer, w_m^(i,j) is the weight assigned to the connection between the logit of the ith class and the similarity score of the jth prototype. Therefore, for a class i, we put w_m^(i,j) = 1 for all j with p^i_j ∈ P^i, and w_m^(c,j) = 0 for all p^c_j ∉ P^i with c ≠ i. The non-negativity of the distance function and the optimization of all the layers before the last layer with the SGD optimizer help Quasi-ProtoPNet to learn important latent spaces.
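The cluster cost is not reproduced here in closed form, so the following sketch assumes the standard ProtoPNet-style cluster cost: for each training image, the smallest distance between any prototype of its own class and any latent patch of the image, averaged over the images. It reuses numpy and the generalized_distances function from the sketch above; the function and variable names are illustrative.

def cluster_cost(feature_maps, labels, prototypes, proto_class):
    """feature_maps: list of (512, 6, 6) arrays; labels: class index per image;
    prototypes: list of (512, h, w) arrays; proto_class: class index per prototype."""
    total = 0.0
    for fmap, y in zip(feature_maps, labels):
        own = [p for p, c in zip(prototypes, proto_class) if c == y]
        # Distance from the image to its closest own-class prototype patch.
        total += min(generalized_distances(fmap, p).min() for p in own)
    return total / len(feature_maps)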
Projection of prototypes
Let x be a training image. In the second step, Quasi-ProtoPNet projects the prototypes onto the patches of x that are most similar to them. That is, a patch of x that is at a smaller distance from a prototype gets projected, and the distance must be at least the 93rd percentile of all the inverted distances of the prototype from all the images. For this purpose, Quasi-ProtoPNet makes the following update: $p^c_j \leftarrow \arg\min_{\{P :\, P \in \mathrm{patches}(L(x_i))\ \forall i\ \text{such that}\ y_i = c\}} d(P, p^c_j)$.
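A minimal sketch of this projection step, again reusing numpy and generalized_distances from above: each prototype is replaced by the closest latent patch drawn from training images of its own class. The percentile filter described above is omitted for brevity, and the names are illustrative.

def project_prototypes(feature_maps, labels, prototypes, proto_class):
    projected = []
    for p, c in zip(prototypes, proto_class):
        best, best_d2 = None, float("inf")
        for fmap, y in zip(feature_maps, labels):
            if y != c:
                continue
            _, h, w = p.shape
            d2 = generalized_distances(fmap, p)
            i, j = np.unravel_index(d2.argmin(), d2.shape)
            if d2[i, j] < best_d2:
                best_d2, best = d2[i, j], fmap[:, i:i + h, j:j + w].copy()
        projected.append(best)   # the prototype becomes an actual patch of a training image
    return projected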
Explanation of Quasi-ProtoPNet
In this section, we explain our model with an example of an input image as given in Fig. 3.
In Fig. 3, the image in the first column belongs to the class Covid. In the second column of the figure, the green rectangle on the image encloses the patches of the image that give the highest similarity scores with the prototypes in the third column. In the fourth column, the rectangles enclose the patches on the source images of the prototypes, that is, they pinpoint the patches of the source images from which the prototypes are projected. In the fifth column, the similarity scores between the prototypes and the patches of the test image are displayed. In the sixth column, the connections between the similarity scores and the logits are given. Since the image belongs to the first class C, the similarity scores of the prototypes of the second and third classes are assigned zero weight. The entries of the seventh column are obtained by multiplying the similarity scores and the class connections, and the logit (38.0688) for the class C is obtained by adding the entries of the seventh column. The logit for the class C can also be computed by multiplying the first row of w_m with the matrix s_m. The logits for the classes N and P are 10.1137 and 11.1361, respectively, and can be computed by multiplying the second and third rows of w_m with the matrix s_m.
The transpose of the weight matrix w m and similarity score matrix s m that we obtain from our experiments are as follows:
Results
In this section, we present the metrics given by our model and compare the performance of our model with the performance of the other models.
The metrics and confusion matrices
Suppose TP, TN, FP and FN denote the true positives, true negatives, false positives and false negatives for the Covid class. The metrics accuracy, precision, recall and F1-score are then given by (Wikipedia contributors, 2021a, 2021b, 2021c): accuracy = (TP + TN)/(TP + TN + FP + FN), precision = TP/(TP + FP), recall = TP/(TP + FN), and F1-score = 2 · (precision · recall)/(precision + recall). In Figs. 4-9, the confusion matrices of Quasi-ProtoPNet with the different base models are given; for example, Fig. 4 provides the TP, TN, FP and FN of the class Covid. Therefore, by these formulas, the accuracy, precision, recall and F1-score of Quasi-ProtoPNet are equal to 99.05, 0.98, 0.99 and 0.98, respectively.
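These definitions translate directly into code; the sketch below computes the four metrics from the per-class counts of a confusion matrix. The counts in the example call are placeholders, not the values reported in Figs. 4-9.

def classification_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Example with placeholder counts for the Covid class:
print(classification_metrics(tp=12000, tn=13000, fp=150, fn=100))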
The performance comparison of the models
The series of ProtoPNet models is constructed over the convolutional layers of the base models. Although the accuracies of the series of ProtoPNet models and the base models stabilize before 35 epochs (see Section 3.4), we trained and tested the models for 100 epochs.
The performance comparison in terms of these metrics is provided in Table 1. We see from the third column of Table 1 that when we build our model on the convolutional layers of VGG-16, the accuracy, precision, recall and F1-score given by Quasi-ProtoPNet are 99.05, 0.98, 0.99 and 0.98, respectively. The accuracy, precision, recall and F1-score given by the models ProtoPNet, NP-ProtoPNet, Gen-ProtoPNet and Ps-ProtoPNet with base model VGG-16, and by the base model itself (Base only), are 90.84, 0.89, 0.91 and 0.90; 98.23, 0.93, 0.95 and 0.94; 95.85, 0.93, 0.95 and 0.94; 98.83, 0.96, 0.98 and 0.97; and 99.03, 0.98, 0.99 and 0.98, respectively. The highest accuracies obtained with the different base models are shown in bold. Moreover, we see from Table 1 that the accuracies given by Quasi-ProtoPNet are even better than the accuracies given by the base models when Quasi-ProtoPNet is constructed over the convolutional layers of VGG-16, VGG-19 and DenseNet-121. Furthermore, the highest accuracy (99.48%) achieved by Quasi-ProtoPNet, with base model DenseNet-121, is equal to the highest accuracy (99.48%) achieved by the non-interpretable model DenseNet-161.
In addition to achieving excellent accuracy, Quasi-ProtoPNet can explain why an input image is classified into a certain class, whereas such explanations are not possible with black-box models. That is, our model exhibits prototypes from the predicted class that are similar to some patches of the classified image. In other words, if an image is classified into a certain class then it must have some patches similar to the prototypes of that class. The model also gives prototypes that can be manually compared with patches of the classified image to understand why a certain class has been assigned to the image.
The test of hypothesis for the accuracies
Since an accuracy is the proportion of correctly classified images among all the test images, the test of hypothesis concerning two proportions can be applied to determine whether the differences between the accuracies are statistically significant. Let n_d be the size of the test dataset, and let x_1 and x_2 be the numbers of images correctly classified by models 1 and 2, respectively, so that the two models give the accuracies (proportions) p_1 = x_1/n_d and p_2 = x_2/n_d. The statistic for the test concerning the difference between two proportions (accuracies) is the standard two-proportion Z statistic computed from x_1, x_2 and n_d (Richard, Miller, & Freund, 2017), see Eq. (7). Our hypothesis is that the two accuracies are equal (null hypothesis), against the two-sided alternative that they differ. Let the significance level (α) be 0.05. Therefore, to reject the null hypothesis, the p-value must be less than 0.025, because we have a two-tailed hypothesis. Suppose p_1 represents the accuracy given by Quasi-ProtoPNet and the accuracies given by the other models are represented by p_2. The values of the test statistic Z are obtained by the above formula, see Eq. (7). We use the standard normal table to obtain the associated p-values, and list them in Table 2.
Table 1. The comparison of the performances of the models over the dataset of CT images (columns: Base, Metric, Quasi-ProtoPNet, Ps-ProtoPNet (Singh & Yow, 2021b), Gen-ProtoPNet (Singh & Yow, 2021a), NP-ProtoPNet (Singh & Yow, 2021c), ProtoPNet (Chen et al., 2018), Base only).
In particular, when the convolutional layers of VGG-16 are used to construct the models, we get the p-values comparing the accuracy given by Quasi-ProtoPNet with the accuracies given by Ps-ProtoPNet, Gen-ProtoPNet, NP-ProtoPNet, ProtoPNet and VGG-16 equal to 0.00755, 0.00002, 0.00002, 0.00002 and 0.40905, respectively. The null hypothesis was rejected for all the p-values that correspond to the series of ProtoPNet models, because these p-values are less than 0.025, see Table 2. Therefore, the accuracies given by Quasi-ProtoPNet with the different base models are statistically significantly (with 95% confidence) better than the accuracies given by the ProtoPNet models. However, the p-values given in the last column of Table 2, corresponding to the base models VGG-16, ResNet-34, ResNet-152, DenseNet-121 and DenseNet-161, are greater than 0.025, so the accuracies given by these base models are not significantly different from the accuracies given by our model.
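For reference, a sketch of the two-proportion test described above; it assumes the common pooled-variance form of the Z statistic and uses SciPy's normal distribution for the two-tailed p-value, so the exact numbers may differ slightly from those reported in Table 2.

from math import sqrt
from scipy.stats import norm

def two_proportion_test(x1, x2, n_d):
    p1, p2 = x1 / n_d, x2 / n_d
    p_pool = (x1 + x2) / (2 * n_d)                      # pooled proportion (equal sample sizes)
    z = (p1 - p2) / sqrt(p_pool * (1 - p_pool) * (2 / n_d))
    p_value = 2 * (1 - norm.cdf(abs(z)))                # two-tailed p-value
    return z, p_value

# Example: two models classifying the same test set (counts are illustrative).
print(two_proportion_test(x1=25525, x2=25400, n_d=25658))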
The graphical comparison of the accuracies
In Figs. 10-15, a graphical comparison of the accuracies given by Quasi-ProtoPNet and the other models is provided. Although the accuracies given by the models become stable before 35 epochs, the models were trained and tested for 100 epochs over the dataset (Gunraj et al., 2021b), and the graphical comparisons of the accuracies are provided over 50 epochs. Fig. 10 provides a comparison of the accuracies given by the models when they are constructed over the convolutional layers of VGG-16. Although it is difficult to see the difference between the accuracies in Figs. 10-15, the difference is clear before the models stabilize.
The effect of the projection of prototypes
In this section, we prove a theorem similar to Chen et al. (2018, Theorem 2.1). The theorem (Chen et al., 2018, Theorem 2.1) assumes that the negative connections between similarity scores and incorrect classes can be made equal to zero during the training process. As mentioned in Section 1.1, our experiments show that it is hardly possible to make the negative connections zero during the training process. However, we do not need to make this assumption, because our model uses only the positive reasoning process, and the suspension of the convex optimization of the last layer of our model keeps the connections between similarity scores and incorrect classes at zero. Furthermore, Chen et al. (2018, Theorem 2.1) is proved with the Euclidean distance function, whereas our theorem is restricted neither to the Euclidean distance function nor to its generalized version d; the distance function can be replaced with any positive-valued function that satisfies the triangular inequality and has an appropriate domain. Indeed, we present the theorem with a hemimetric, a distance function more general than the distance function d.
Proof. For any class
denote the prototypes of class c. The connections between the similarity scores and incorrect classes are zero, and the suspension of the convex optimization of the dense layer keeps these connections fixed. Therefore, let ∆_c be the difference between the output logit of class c after the projection and before the projection of prototypes.
) denote the logits after the projection and before the projection, respectively. Therefore, we have Assume, Therefore, First, to prove 1, that is, to find the lower bound of ∆ k , assume c = k in Eqs. (9) and (10), where k is the correct class of x.
From the inequality given in assumption 2, we have Using the triangular inequality, we have By assumption 2, we have Square inequality (13) and add ϵ to the result, we obtain On rearranging inequality (14), we have By inequalities (12) and (15), we have Therefore, by Eqs. (11) and (16), we have Hence, by Eqs. (8) and (17), we have Second, to prove 2, that is, to find the upper bound of ∆ k ′ , assume c = k ′ in the above Eqs. (9) and (10), where k ′ is the incorrect class of x.
By the triangle inequality, The assumption 1 gives: By the inequality (19), we have The inequality (20) gives: From the inequalities (18) and (21), we have Again, by the triangle inequality, we have The assumption 1 implies Therefore, by the inequality (23), we have Again, by assumption 1, we have On simplifying the above inequality, we obtain By the inequality (25), we have On combining the inequalities (24) and (26), we obtain On combining the inequalities (23) and (27), we have Therefore, by Eq. (10), and inequality (28), we have Hence, ∆ k ′ ≤ m log(1 + δ)(2 − δ). □
Limitations
As mentioned in Section 1.1, Quasi-ProtoPNet gives better performance than the series of ProtoPNet models when the classification is to be made over only a few classes. As the number of classes grows, our model may not outperform ProtoPNet and Ps-ProtoPNet. However, there are many cases, similar to the case of the CT-scan images discussed in this paper, in which images need to be classified over only a few classes. Therefore, our model can be particularly useful in such situations.
Discussion
The Quasi-ProtoPNet model suspends the convex optimization of the last layer to keep the connections constant, and it uses an objective function that accommodates only the positive reasoning process. The suspension also reduces the training time of our model. Quasi-ProtoPNet is closely related to the series of other ProtoPNet models, but strikingly different from them in its reasoning process for classification. Quasi-ProtoPNet uses the positive reasoning process, whereas the other ProtoPNet models use the negative reasoning process along with the positive reasoning process, which leads to a decrease in their accuracy, especially when the number of classes is small. In particular, our model can be useful during this pandemic, when deadly mutants of the coronavirus (e.g., the omicron variant) are being identified.
Conclusions
The use of the positive reasoning process, together with prototypes of rectangular and square spatial dimensions, helped our model to improve its performance over the series of other ProtoPNet models. Moreover, as observed in Section 3.2, Quasi-ProtoPNet gives the highest accuracy (99.48%) when DenseNet-121 is used as the base model, and this accuracy is equal to the highest accuracy (99.48%) given by the non-interpretable model DenseNet-161.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2022-04-05T13:11:36.397Z | 2022-04-01T00:00:00.000 | {
"year": 2022,
"sha1": "4027d04cd1b4746688ce5a78d378aa753e91467c",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.neunet.2022.03.034",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "6d08a72537850ab46e68c0615814a7501984c504",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
62536182 | pes2o/s2orc | v3-fos-license | Building a scalable event-level metadata service for ATLAS
The ATLAS TAG Database is a multi-terabyte event-level metadata selection system, intended to allow discovery, selection of and navigation to events of interest to an analysis. The TAG Database encompasses file- and relational-database-resident event-level metadata, distributed across all ATLAS Tiers. A global TAG relational database containing all ATLAS events, implemented in Oracle, will exist at Tier 0. Implementing a system that is both performant and manageable at this scale is a challenge. A 1 TB relational TAG Database has been deployed at Tier 0 using simulated tag data. The database contains one billion events, each described by two hundred event metadata attributes, and is currently undergoing extensive testing in terms of queries, population and manageability. These 1 TB tests aim to demonstrate and optimise the performance and scalability of an Oracle TAG Database on a global scale. Partitioning and indexing strategies are crucial to well-performing queries and to the manageability of the database, and they have implications for database population and distribution, so these are investigated. Physics query patterns are anticipated, but a crucial feature of the system must be to support a broad range of queries across all attributes. Concurrently, event tags from ATLAS Computing System Commissioning distributed simulations are accumulated in an Oracle-hosted database at CERN, providing an event-level selection service valuable for user experience and for gathering information about physics query patterns. In this paper we describe the status of the global TAG relational database scalability work and highlight areas of future direction.
Introduction
This paper deals with the scalability challenges of the relational TAG Database, a unique and demanding challenge within ATLAS due to ATLAS' unprecedented data rate and volume and the high-performance query demands of ATLAS users. The ATLAS TAG Database and the challenging environment in which it must operate are introduced, the terabyte-scale relational database scalability tests performed in early 2007 are described, the experience and learning from the scalability tests are shared, and performance results are presented.
The ATLAS TAG Database
The ATLAS Computing Model [1] describes an Event Level Metadata system, or TAG Database [2]. The role of the TAG Database is to support seamless discovery, identification, selection of and navigation to events of interest to an analysis.
ATLAS TAG Data Rate and Storage Requirements
Data Volume
ATLAS has 200 days of data taking per year and 50000 active seconds per day (58% efficiency per day); ATLAS expects an event rate from the HLT of 200 Hz, or 10^7 events per day. As the current budget for tags is 1 kB per event, the TAG Database is a terabyte-scale system in volume.
Anticipated TAG Database storage requirements [4] are shown in Table 2. The scale is small compared to the Event Store and other event data types, but unlike other event data types, the tags must be readily queryable, to provide both statistical information about events and event collections for analysis.
Data Rate
As well as supporting the terabytes of data volume and allowing reading of the data, the TAG Database must also allow writing of events on a large scale. Data is produced by the detector at a rate of 200 Hz; therefore, during active data taking the database must accept on average 200 new entries per second. In order to avoid contention between read and write operations in the database, the files into which event tags are first loaded are later used to populate the relational database in a controlled and managed way. In this way, tag files introduce latency to the system.
The 1TB Performance and Scalability Tests
In early 2007, it was decided that a large-scale realistic test of a terabyte-scale TAG Database was needed, to uncover the challenges brought with scale. The scalability and performance tests are an opportunity to optimise and measure performance. The tests began with the creation of a 1 TB TAG Database, hosted on a development Oracle server at CERN. A set of realistic and useful test queries was developed. Indexing, partitioning strategies, Oracle Optimiser behaviour, query processing strategies, Oracle Hints, parallel processing and multi-client environments were explored. The data was queried and performance assessed for a series of schema iterations, each development in schema influenced by the learning and knowledge gained from previous iterations.
Test data
One million unique simulated event tags were created, using a set of realistic and varied data types and value distributions. Flat, exponential, normal, continuous and discrete distributions, as well as uncorrelated random numbers, were combined to create 1 kB tags. Using multiplication and replication of data, the one million event tags were used to create one billion realistic event tags, each with unique identifiers. Btree and Bitmap indexes were assigned on a subset of attributes based on the distribution and cardinality of attribute values; others were left unindexed so that the usefulness of unindexed attributes could be assessed.
A number of globally identifiable variables are created throughout the billion rows, so each row is unique. An attribute with 10 distinct values is included to represent ten potential ATLAS physics streams by which AOD data are grouped.
Test Architecture
The test architecture used for the tests is an Oracle development server, INT8R, at CERN, with two Oracle instances, each with 2 CPUs and 2GB memory, and 2TB shared storage.
Challenges of 1TB data
The 1 TB scale was selected for testing as it is a realistic order of data for a TAG Database (Table 2), and as we expect that important phase transitions in performance, behaviour and management demands are crossed as we scale the number of events from millions to billions.
Query processing, in terms of memory, CPU and disk, has four possibilities; when the data and intermediate results do not fit in memory, disk must be used for these operations. The O(TB) of data we have will not fit into memory, and each index is O(10 GB/TB), so these will not all easily fit in memory either. Queries are open-ended and will select on a variable set of attributes, so index cache turnover will be high, limiting any caching advantage. Queries are potentially unselective, returning O(10%) of the data. This means there may not be enough memory to store intermediate results, and processing multiple queries in parallel could be difficult.
The 1 TB data scale therefore presents challenges beyond those of a smaller relational database. Strategies must be adopted to optimise and facilitate administration and query performance in this challenging environment.
Partitioning
Partitioning is a strategy used in relational databases to divide data from larger whole units into smaller ones. Partitioning is often referred to as a 'divide and conquer' strategy: splitting data into smaller composite parts to improve the way it can be managed and used. For the 1 TB tests, both horizontal and vertical partitioning are considered. The motivation for partitioning the TAG Database is described in terms of two mutually important dimensions: query performance and database manageability.
Horizontal partitioning involves subdividing data by rows, into a set of smaller tables with a subset of the event tags in each. Any query that uses the attribute by which data is partitioned (the partition key) in its predicate will benefit in performance, as only the partitions of interest to the query are then considered. This is known as partition elimination or pruning. There is no performance overhead, meanwhile, for queries that do not specify the partition key, or that require data from multiple horizontal partitions. Oracle allows partitioning by Range, List and Hash, as well as composite Range-List and Range-Hash; all were considered at small scale to understand the functionality and the potential benefits. Performance was seen to improve directly with the amount of data removed from consideration, and Range-List and Range were identified as the optimal schema choices (a sketch of both options is given after this paragraph). Tag attributes suitable for partitioning by were considered. It is important to select a partition key which users can often and easily define in their queries, or which we can reasonably expect them to be required to define, so that partition elimination will improve performance. If the partition key does not appear in the query, no partition elimination will take place and no performance benefit is made.
Vertical partitioning involves dividing data along vertical lines, so each event tag would be split across vertical partitions. Such partitioning of data can improve query performance by removing data irrelevant to the query from consideration. There is, however, a management overhead for this schema, and a potential performance disadvantage: should a query require data from many partitions, joins become necessary.
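To illustrate the difference between the Range and Range-List options discussed above, the following sketch builds the corresponding Oracle DDL as Python strings; the table and column names (EVENT_TAG, RUN_NUMBER, STREAM, and so on) are hypothetical and the statements are simplified relative to a production schema.

def range_partition_ddl():
    # One table, range-partitioned by run number only.
    return """
    CREATE TABLE EVENT_TAG_RANGE (
        RUN_NUMBER NUMBER, STREAM VARCHAR2(16), NELECTRON NUMBER, MISSING_ET NUMBER
    )
    PARTITION BY RANGE (RUN_NUMBER) (
        PARTITION RUNS_0    VALUES LESS THAN (1000),
        PARTITION RUNS_1000 VALUES LESS THAN (2000)
    )"""

def range_list_partition_ddl():
    # Composite schema: range on run number, sub-partitioned by physics stream.
    return """
    CREATE TABLE EVENT_TAG_RANGE_LIST (
        RUN_NUMBER NUMBER, STREAM VARCHAR2(16), NELECTRON NUMBER, MISSING_ET NUMBER
    )
    PARTITION BY RANGE (RUN_NUMBER)
    SUBPARTITION BY LIST (STREAM)
    SUBPARTITION TEMPLATE (
        SUBPARTITION EGAMMA VALUES ('EGAMMA'),
        SUBPARTITION MUON   VALUES ('MUON'),
        SUBPARTITION OTHER  VALUES (DEFAULT)
    ) (
        PARTITION RUNS_0    VALUES LESS THAN (1000),
        PARTITION RUNS_1000 VALUES LESS THAN (2000)
    )"""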
Horizontal Partitioning Solution for 1TB
A useful candidate for the tag partition key is Run number, as it is the unit of Tier 0 data production. Data can be written to the database in units of runs, grouped to create reasonably sized partitions. Once a run is complete we can be certain that no more write operations will be needed; the partition can then be declared complete and read-only. Read and write operations can thus be separated. Equally, it is thought reasonable to ask a user to define some temporal quality in their query. Physics Stream is also a candidate, as an event attribute by which physicists are anticipated to define queries.
Using Run and Stream, Range and Range-List partitioning schemas were tested. Performance benefits were seen with both, but Range-List was found at this scale to increase the management overhead as the schema becomes more complex. As a result, an alternative means of composite partitioning was developed. Stream is used to create ten separate tables, and each stream table is partitioned by run. Query performance is enhanced when one or both partition keys are included in the query; ten independent stream tables allow a significant improvement in query performance with or without the run number specified in the query. When a query involves more than one stream, queries can be easily divided in a preprocessing step and performed in parallel.
The schema does not add any significantly increased management overhead; in fact, administration tasks are simplified as the ten Stream tables can be managed independently. Each partition is 1 GB, and we have 100 partitions per run in the ten Stream tables. The load method for the tables is to put data into a WRITER table; once the run is complete we copy the partition into memory, the indexes are rebuilt, and the data is copied into the READER table.
Vertical Partitioning of 1TB database
At the scale of the smaller tests, the performance impact of joins across vertical partitions was low and was only a disadvantage when querying from all partitions. At terabyte scale, using larger realistic tag queries, join operations are costly and potentially require the use of disk. The POOL Collection Infrastructure does not currently support disjoint import of data, so developments in the import system would be needed. As it is not clear how attributes should be grouped in vertical partitions for effective data elimination, as vertical partitions increase management overhead, and as joins across partitions are potentially costly, vertical partitioning was not used in the terabyte-scale database. Vertical partitioning of a TAG Database at scale will be restudied in future, once a better understanding of query patterns has been gathered from the deployment of TAG Databases for use by ATLAS physicists. It may be that attributes can be classified as Hot or Cold, depending on whether they are often or seldom queried. Study of query patterns is therefore an ongoing project.
Complete Partitioning Solution for 1TB
The partitioning strategy adopted, and found to be optimal in terms of performance and manageability, is a separation of data into ten Physics Stream tables using the Stream attribute; each table is horizontally range-partitioned by Run number.
Indexing
Indexing attributes can potentially improve query performance by avoiding the need for table reads and therefore speeding up queries. Indexes allow table lookups by rowid, a fast operation in which the index is used to identify the rows satisfying the query, and the data is then taken directly from those rows, without needing to scan the table. Indexes, however, require storage space and carry a creation overhead. We use Btree indexes, Bitmap indexes and non-indexed attributes to assess query performance and optimal query paths. Btree indexes are suited to attributes with many distinct values, bitmap indexes to those with fewer. Btree indexes are more costly in maintenance and storage than bitmap indexes, due to their larger size.
Indexing solutions and experience for 1TB
Initially some attributes were not indexed, to study the usefulness of unindexed attributes in a table of this scale. It was seen that without indexes on the attributes in the query predicate, a query was forced into a full table scan, and this is much more expensive in terms of performance than a query in which all attributes in the WHERE clause are indexed. Indexing an attribute which appears only in a SELECT clause does not impact performance, as the table lookup mechanism is performed on the attributes in the WHERE clause and not the SELECT.
As indexing has such a drastic performance effect, and as it is difficult to say which attributes are more likely to appear in a predicate, it was decided that indexing all attributes should be attempted. This is feasible when considered in combination with the partitioning strategy adopted for the table, in which events are partitioned horizontally by Run; without this, indexing all attributes would be impossible. Indexes are partitioned with the table, so for building indexes we can force Oracle to hold the index in memory, meaning the time to build indexes is seconds rather than hours. Assuming partitioning of the table is done by Runs, indexes are rebuilt only when we finish loading the data of the run into the WRITER table; we then load the partition into memory, rebuild all indexes, and put this into the READER table.
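A sketch of the per-partition index maintenance implied here, again expressed as Python-generated Oracle statements; the index, table and partition names (TAG_MET_IDX, EVENT_TAG_EGAMMA, and so on) are hypothetical, and a production system would derive them from the run bookkeeping.

def local_index_ddl():
    # LOCAL indexes are partitioned with the table, so each run's partition has its own index segment.
    return [
        "CREATE INDEX TAG_MET_IDX ON EVENT_TAG_EGAMMA (MISSING_ET) LOCAL",
        "CREATE BITMAP INDEX TAG_FLAG_IDX ON EVENT_TAG_EGAMMA (GOOD_FOR_PHYSICS) LOCAL",
    ]

def rebuild_after_run_load(partition_name):
    # Rebuild only the freshly loaded partition's index segments once the run is complete.
    return [
        f"ALTER INDEX TAG_MET_IDX REBUILD PARTITION {partition_name}",
        f"ALTER INDEX TAG_FLAG_IDX REBUILD PARTITION {partition_name}",
    ]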
Btree and bitmap indexes were tested, to understand the optimal query plans and the management overhead of each index type at this scale. Bitmap indexes have an average size of 2 MB per partition, while for b-tree indexes 20 MB is the average (each partition is 1 GB, and we have 100 partitions per Stream table). After extensive testing we see that Btree and Bitmap indexes perform optimally under distinct operations, and a strategy for addressing query processing in terms of these index types must be developed; this is achieved by studying the behaviour of the Oracle Optimiser.
The Oracle Optimiser
Oracle has an Optimiser which evaluates each SQL query, assesses the possible execution plans and selects the most efficient based on a number of criteria. We use the Cost Based Optimiser, which selects an execution plan based on estimated lowest Cost.
Optimising the Oracle Optimiser
In query testing and execution plan comparison of queries on 1 TB scale data, it was seen that often the optimiser would select a non-optimal query plan. Often an index would not be used, a full table scan would be selected when a better choice existed, partitions would not be used to the fullest, or parallel processing would not be used with indexes. A method was developed over the course of thorough query testing to implement SQL queries using Oracle Hints, so that the Optimiser is guided into adopting an optimal query plan.
Optimiser Hints
Queries were divided into sets based on their features and optimal query plans, and Oracle Hints were applied accordingly. This optimising preprocess is a necessary feature of the TAG Database at scale and also demands monitoring: as the system extends and more is understood about usage and query patterns, it may be necessary to adapt the hint strategy in response. All of the hints adopted were seen to improve query performance for the 1 TB scale data by influencing the query processing plan across various query types. After extensive testing we saw that parallel processing is desirable, as processing time is reduced when a query can be processed in parallel. Btree indexes perform optimally when an INDEX JOIN operation is performed, and bitmaps when INDEX COMBINE influences processing. As user queries may filter on both types of indexes, we develop a query processing hint strategy where the SQL query is reduced into two separate SQLs for btree and bitmap indexes. The queries are processed separately, allowing Oracle to implement an optimal processing plan for each.
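A sketch of how such a split might look for a predicate mixing a b-tree-indexed attribute with a bitmap-indexed attribute; the hint names INDEX_JOIN, INDEX_COMBINE and PARALLEL are the standard Oracle hints corresponding to the operations named above, while the table and column names are hypothetical.

def split_hinted_queries(run_lo, run_hi):
    # Sub-query 1: b-tree attributes, steered towards an index join.
    btree_sql = f"""
    SELECT /*+ INDEX_JOIN(t) PARALLEL(t, 4) */ EVENT_ID FROM EVENT_TAG_EGAMMA t
    WHERE RUN_NUMBER BETWEEN {run_lo} AND {run_hi} AND MISSING_ET > 10"""
    # Sub-query 2: bitmap attributes, steered towards bitmap combination.
    bitmap_sql = f"""
    SELECT /*+ INDEX_COMBINE(t) PARALLEL(t, 4) */ EVENT_ID FROM EVENT_TAG_EGAMMA t
    WHERE RUN_NUMBER BETWEEN {run_lo} AND {run_hi} AND GOOD_FOR_PHYSICS = 1"""
    # The two result sets are intersected to apply the full predicate.
    return f"{btree_sql}\nINTERSECT\n{bitmap_sql}"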
Assessment of 1TB Performance
To assess performance, two general queries are used:
• Count the events with at least two electrons and missing ET greater than 10 GeV that are good for physics - a SUMMARY query.
• Give me all the events with at least two electrons and missing ET greater than 10 GeV that are good for physics - a CONTENT query.
Queries are optimised and performed on the partitioning and indexing schema discussed. Query predicates are based on both index types; we use INDEX JOIN for btrees and INDEX COMBINE for bitmaps, then INTERSECT the results. The buffer cache is flushed between queries, so no cache advantage is allowed. We increase the number of partitions involved as we increase the number of rows returned, holding a consistent percentage of rows from each partition, to allow comparison.
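In SQL terms the two query shapes differ only in their select list; a sketch using the same hypothetical table and column names as above could look like the following, the first returning a single count and the second returning the event-level attributes needed to build a collection.

summary_sql = """
SELECT COUNT(*) FROM EVENT_TAG_EGAMMA
WHERE NELECTRON >= 2 AND MISSING_ET > 10 AND GOOD_FOR_PHYSICS = 1"""

content_sql = """
SELECT RUN_NUMBER, EVENT_ID, NELECTRON, MISSING_ET FROM EVENT_TAG_EGAMMA
WHERE NELECTRON >= 2 AND MISSING_ET > 10 AND GOOD_FOR_PHYSICS = 1"""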
For SUMMARY queries, we see that time increases with the number of partitions. The increase is linear, so we can predict the time a query will take based on the number of partitions involved. We note that while time is related to the number of partitions, it is not so directly related to the amount of data returned, in that n times the data from a set number of partitions does not take n times as long. We can therefore predict time within some bounds. We observe that times are of the order of seconds.
For CONTENT queries, we see a linear increase in time with the number of partitions, and again the time overhead is in the number of partitions accessed, not the data returned from within them. Times are again of the order of seconds. Without the partitioning, indexing and Oracle Hint strategies developed, these times would be of the order of hours, demonstrating the significant improvements that can be achieved with schema and query performance tuning. Count queries perform much better than queries that select and return attributes from the table; count queries are performed purely on the index, so there is no need to use the table. Select queries perform comparably regardless of whether the query returns a subset of or all attributes, as a query of this type has its overhead in locating and accessing the rows, rather than in reading the selected attributes.
Users will therefore be encouraged to filter their query using counts, performing adapted queries iteratively to see how many events would be returned before actually retrieving them. It is anticipated that this will improve a user session and minimise unnecessary, more costly queries on the database. We extend the test queries to an extreme test case, to understand whether the observed linear relations extrapolate indefinitely.
We note that for select queries, if the observed linear relation were constant and roughly proportional to the number of partitions in the query, then a query from all (100) partitions would take 20 minutes, but this is not the case in practice. In reality we see a threshold case where the sorts required for the query move from memory to disk, resulting in a higher performance overhead. The same query plan is used, but with use of disk. We also note that in all extreme cases, optimised performance is still faster than a full table scan.
Figure 6. Select queries - an extreme case.
Stress tests
Stress tests were performed to assess the performance of the database in a multi-client environment. Expected user query patterns were simulated by creating a sample job of nine optimised queries, with a selection of count and retrieve queries across a selection of attributes. Each query scans 1 GB of data; at a 220 Hz event rate this corresponds to one hour thirty minutes in logical units. The session runs on one node of the INT8R cluster, with two CPUs and 2 GB memory.
On average, a job running alone in a single-session environment would take ten minutes. The stress tests increase the number of concurrent sessions to see the impact on performance and to determine the level of multiple clients running optimised tag queries that can be supported on the system. Saturation of the machine was seen at one job per minute. Each job divides its time between CPU and I/O, with some cluster time while saturation is not yet reached. One job per 90 seconds is the equivalent of approximately 9000 queries a day. Each Tier 1 site will have 2 nodes and, although upgrades are expected, this rate would occupy one node for tag queries. The Tier 0 production database has 6 nodes, and tag queries at these rates could be supported on one. An upgrade is planned for April 2008, so increased performance is expected; once new machines are available, tag queries will again be stress tested. It is clear from these results that in tag query processing there is a need to manage and limit concurrent client sessions, as the application is resource intensive. Careful planning of resource consumption and distribution is necessary.
Experience of Streams Test Data TAG Database
In May 2007 a TAG Database was created using data from the ATLAS Streams Tests. An interface was created to allow users to compose SQL queries and query the database, and both the database and the interface were made available to users. During this time a system for monitoring user queries was established in order to learn about query patterns, as these may then influence the development of the relational database schema. Some monitoring system will be necessary as the database expands to full scale, so that performance can be tracked and optimising strategies can be developed in response to query patterns and increased data volume.
Conclusions and Future Work
The 1 TB Performance and Scalability Tests for the ATLAS TAG Database have successfully investigated and implemented database performance optimising strategies, in terms of partitioning, indexing, and query processing techniques, for a terabyte-scale TAG Database. An assessment of optimised query performance based on a database schema developed for the ATLAS TAG Database is made, demonstrating encouraging results for the potential of a performant, queryable, terabyte-scale relational TAG Database useful to ATLAS physicists.
Future areas of work include assessment of performance on new server hardware and implementation of a second user TAG Database for CSC data, to follow from the Streams Test TAG Database. In doing so we will continue gathering data about likely query patterns. We also aim to continue development of an implementation of time-varying trigger menus and decisions in tags. We plan to assess the performance of file-based tags for queries in comparison to relational tags at realistic scale. Work to integrate the TAG Database with ATLAS Distributed Analysis components is ongoing.
Table 2 .
Stress Test Results | 2019-02-14T14:02:49.314Z | 2008-07-01T00:00:00.000 | {
"year": 2008,
"sha1": "7414497bb032574d8b3cf588870291bcf5d63e08",
"oa_license": null,
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/119/7/072012/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "fd9e27e9796f3ae5fda2bba6d520cb9aad91bf8a",
"s2fieldsofstudy": [
"Computer Science",
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
268577773 | pes2o/s2orc | v3-fos-license | Marine Bioluminescence: Simulation of Dynamics within a Pump-Through Bathyphotometer
Bioluminescence is light produced by organisms through chemical reactions. In most cases, bioluminescent organisms produce light in response to mechanical stimulation, including from shear around objects moving in the water. Many phytoplankton and zooplankton are capable of producing bioluminescence, which is commonly measured as bioluminescence potential, defined as mechanically stimulated light measured inside of a chambered pump-through bathyphotometer. We have developed a numerical model of a pump-through bathyphotometer and simulated flow using Lagrangian particles as an approximation for bioluminescent marine plankton taxa. The results indicate that all particles remain in the detection chamber for a residence time of at least 0.25 s. This suggests that the total first flash of bioluminescent autotrophic and heterotrophic dinoflagellates will be measured based on the existing literature regarding their flash duration. We have found low sensitivity of particle residence time to variations in particle size, density, or measurement depth. In addition, the results show that a high percentage of organisms may experience stimulation well before the detection chamber, or even multiple stimulations within the detection chamber. The results of this work serve to inform the processing of current bioluminescent potential data and assist in the development of future instruments.
Introduction
Bioluminescence (BL) is light produced by organisms through chemical reactions in response to mechanical, chemical, and optical changes within their environment, as well as an indicator for predator-prey interactions and mating [1,2]. In this work, we only consider bioluminescent organisms, specifically those that produce light in response to mechanical stimulation, including from shear around moving objects in the water. Many species of phytoplankton (primarily autotrophic and mixotrophic dinoflagellates) and zooplankton (including heterotrophic dinoflagellates, copepods, euphausiids, and many gelatinous organisms) are capable of producing mechanically stimulated bioluminescence, which is commonly measured as BL potential, defined as mechanically stimulated light measured inside of a chambered pump-through bathyphotometer. Most pump-through bathyphotometers pull ocean water into a closed chamber, where the marine organisms are mechanically stimulated to produce light upon entry into the chamber [3][4][5]. The stimulation is achieved either through a pump, rotating impellers, or through the introduction of grid-generated turbulence.
The Underwater Bioluminescence Assessment Tool (UBAT) is the only currently existing and commercially available pump-through bathyphotometer [6]. We note the importance of the bathyphotometer systems that preceded the UBAT, many of which have extensive data repositories and complementary experimental data [4]. However, none are commercially available, and most have not been in use for many years. In the UBAT, oceanic water is entrained into an S-shaped intake that is designed to prevent pre-stimulation of organisms as they travel to the detection chamber. The inlet also acts as a light baffle to minimize ambient light collected by the instrument. To enter the detection chamber, particles contained in the water sample pass through a high-speed impeller that produces mechanical stimulation. The UBAT collects data on BL potential in units of photons/s. The BL potential measured by the bathyphotometer thus represents the sum of light emitted by different organisms in the detection chamber. Usually, zooplankton emit bright flashes (larger than 10¹⁰ photons/s), while most dinoflagellate species emit flashes that produce less than 10⁹ photons/s. However, several factors have an effect on how well the BL potential recorded in pump-through bathyphotometers correlates to the total light output of a given organism.
There are several known challenges that affect the interpretation of data collected with pump-through bathyphotometers (including the UBAT), as listed below [7]:
1. The intake of the bathyphotometer can be avoided by fast-swimming organisms.
2. The residence time of the organisms in the detection chamber might be inappropriate. The Total Mechanically Stimulated Light (TMSL) of an organism is a measure of its bioluminescent capacity, defined by the number of flashes produced by the organism, the duration of the flash, and the maximum intensity of the flash [8]. If residence time is low, some of this TMSL may not be recorded in the detection chamber.
3. Some organisms can be pre-stimulated prior to reaching the detection chamber; therefore, some light will not be recorded in the detection chamber.
4. Large volumes of seawater should be sampled to obtain statistically significant estimates of BL potential, so bathyphotometers should pump through large volumes of water.
These challenges create uncertainties in understanding what fraction of an organism's TMSL is actually measured by pump-through bathyphotometers. In the present paper, we address the following questions:
1. What is the distribution of residence time for the organisms in the detection chamber of a pump-through bathyphotometer?
2. What is the rate of strain distribution recorded at the inlet, and does it facilitate the possibility of pre-stimulation?
3. What rate of strain do organisms experience in the detection chamber, and can it cause multiple stimulations for some organisms?
To address the above questions, we developed a numerical model of a pump-through bathyphotometer, using the UBAT as a reference. For the remainder of this paper, we refer to the numerical model of the bathyphotometer as the SIM-BATH. We conducted Computational Fluid Dynamics (CFD) simulations of flow through the SIM-BATH, using Lagrangian particles as an approximation for bioluminescent marine taxa. From these simulations, we estimated the distribution of residence times for organisms in the detection chamber of the SIM-BATH, and we provide a statistical analysis of the rate of strain experienced by particles passing through the inlet and the detection chamber. Furthermore, we assess the sensitivity of results to changes in the density and diameter of particles, as well as to the instrument depth during deployment.
SIM-BATH Geometry
The UBAT bathyphotometer has two high-speed rotating impellers: a pump impeller to mechanically stimulate marine organisms and a flow impeller to maintain a specific flow rate through the instrument. Oceanic water enters the UBAT through the inlet into the S-shaped baffle that ends at the first impeller. The first impeller, called the pump impeller, spins at 1200 rpm and forces fluid into the detection chamber, which has a volume of 440 cm³. The flow impeller rotates at 600 rpm and redirects particles through the outlet. Measurements of the UBAT's S-shaped inlet, impellers, and detection chamber were used to create the CAD geometry for a numerical model approximating the UBAT, which we refer to as the SIM-BATH. The SIM-BATH has all elements of a pump-through bathyphotometer, including an S-shaped inlet, two pumps for stimulation and flow control, a detection chamber, and an outlet. The resulting geometry, which comprises the internal fluid domain of the SIM-BATH, is shown in Figure 1.
Computational Methods
We used a finite-volume Navier-Stokes solver (STAR-CCM+ 2021.2) for the modeling of fluid flow inside the SIM-BATH. The numerical model solves the unsteady Navier-Stokes equations given in Equations (1) and (2), using the finite volume method (FVM) with an implicit scheme. Here, ρ denotes density, v the fluid velocity vector, σ the symmetric stress tensor, and F_b the body force. Flow is considered incompressible. Turbulence in the SIM-BATH is modeled with a Reynolds-Averaged Navier-Stokes (RANS) approach and the κ-ω SST model [9]. We used structured hexahedral cells to improve orthogonality in the volume mesh. The specifics of the turbulence model, grid design, and residuals are presented in Appendices A.1-A.4.
The FVM solution of the flow field is subject to boundary conditions and initial conditions. At the inlet and outlet, the boundary is defined with a pressure condition. The pressure is specified and kept the same on both boundaries, and all other properties are extrapolated from interior cells. All other boundaries, those delineating the SIM-BATH surface, are defined as no-slip walls. This selection of boundary conditions means that the volumetric flow rate is not explicitly defined, and it is instead allowed to adjust freely based on flow impeller motion. The model is validated by evaluating the convergence of the volumetric flow rate as a function of grid size. The initial condition for the fluid velocity field is v_f = 0 throughout the SIM-BATH. The dynamics of the bioluminescent organisms throughout the detection chamber are modeled with a particle tracking routine. A Lagrangian multiphase model was used for particle tracking. In this model, the Lagrangian particles are unidirectionally coupled to the RANS simulation, meaning the flow dynamics drive the particle motion but not vice versa. The equation of motion for the Lagrangian particles is given in Equation (3):
m_p dv_p/dt = F_D + F_P + F_G,    (3)
where F_D and F_P are surface force vectors corresponding to the effects of drag and pressure, F_G is a body force vector representing the force of gravity, m_p is the mass of the particle, and dv_p/dt is the time rate of change of the particle's velocity vector. The surface and body force vectors are described in more detail in Appendix A.5.
The sum of these forces at each time step is substituted into the equation of motion, from which an acceleration can be calculated. From the particle velocity calculated in Equation (5), we can extrapolate the particle displacement over the current time step. We assigned initial conditions to the Lagrangian particles at the time step corresponding to one second of model time (at which point the SIM-BATH had reached its operating flow rate) by seeding 1000 particles in a uniform distribution on the inlet boundary. Each particle was given an initial velocity in the direction of flow in the chamber, as given by Equation (4), to account for acceleration prior to entering the SIM-BATH, where A_i is the inward-pointing area vector of the inlet and dm/dt is the average mass flow rate.
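The following Python sketch illustrates the kind of one-way-coupled particle update described above. It is a deliberately simplified stand-in for the solver's Lagrangian model rather than the actual implementation: it uses Stokes drag with a buoyancy-corrected gravity term, omits the pressure force F_P, and all parameter values are assumed for illustration.

```python
import numpy as np

RHO_F = 1025.0                       # fluid density, kg/m^3 (assumed seawater value)
MU    = 1.0e-3                       # dynamic viscosity, Pa*s (assumed)
G     = np.array([0.0, 0.0, -9.81])  # gravitational acceleration, m/s^2

def step_particle(x_p, v_p, v_f, d_p, rho_p, dt):
    """Advance one particle by one time step (semi-implicit in the drag term).
    x_p, v_p : particle position and velocity (m, m/s)
    v_f      : local fluid velocity interpolated from the CFD field (m/s)
    d_p      : particle diameter (m);  rho_p : particle density (kg/m^3)
    """
    tau = rho_p * d_p**2 / (18.0 * MU)   # Stokes relaxation time of the particle
    g_eff = G * (1.0 - RHO_F / rho_p)    # gravity corrected for buoyancy
    # Backward-Euler treatment of the Stokes drag keeps the update stable even
    # when dt is much larger than tau, as it is for micron-sized particles.
    v_new = (v_p + dt * (v_f / tau + g_eff)) / (1.0 + dt / tau)
    x_new = x_p + v_new * dt
    return x_new, v_new

# Example: a 20 micron, 1050 kg/m^3 particle carried by a 0.1 m/s flow,
# advanced with the solver's 0.25 ms time step.
x, v = np.zeros(3), np.zeros(3)
for _ in range(1000):
    x, v = step_particle(x, v, np.array([0.1, 0.0, 0.0]), 20e-6, 1050.0, 2.5e-4)
print(x, v)  # the particle relaxes almost immediately onto the local fluid velocity
```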
Residence Time
The residence time of a particle in the SIM-BATH is defined as the time during which that particle remains in the detection chamber. As a result, the distribution of the residence time for the ensemble of particles entering the SIM-BATH was estimated. The UBAT evolved from the Multipurpose Bioluminescence Bathyphotometer (MBBP) developed at UCSB [3]. An analytical equation for the percent of particles remaining in the detection chamber of the MBBP was proposed under the assumption of an already well-mixed detection chamber,
n/n_0 = exp[-(1/ρ)(dm/dt) t / V],    (5)
where n_0 is the initial number of particles, n is the number of particles remaining at time t from the initial time, (1/ρ)(dm/dt) is the volumetric flow rate, and V is the volume of the detection chamber. We compared the residence time distribution given in Equation (5), using the SIM-BATH flow rate and the volume of the detection chamber, with the corresponding distribution based on the CFD modeling.
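The well-mixed decay law can be evaluated directly, as in the short sketch below. It uses the chamber volume and the converged flow rate reported in the appendix, with evaluation times chosen to match the flash durations discussed later; it is illustrative rather than part of the analysis code.

```python
import numpy as np

# Well-mixed residence-time estimate, n/n0 = exp(-Q*t/V)  (Equation (5)).
V = 0.440      # detection-chamber volume in litres (440 cm^3)
Q = 0.345      # volumetric flow rate in litres/second (from the appendix)

def fraction_remaining(t):
    """Fraction of particles still in the chamber after t seconds,
    under the well-mixed assumption."""
    return np.exp(-Q * t / V)

tau = V / Q                                  # e-folding time, about 1.28 s
for t in (0.25, 0.5, 1.0, 1.4):              # representative flash durations
    print(f"t = {t:4.2f} s : {100 * fraction_remaining(t):5.1f} % remaining")
```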
Estimation of Rate of Strain
The rate of strain tensor, denoted as L, is expressed as the gradient of the velocity vector v in Equation (6), L = ∇v, and |E| denotes the magnitude of its symmetric part, E = (L + Lᵀ)/2. The rate of strain tensor L is recorded for each region as shown in Figure 2, from which |E| is derived. Regions of interest are the S-shaped inlet, the area around the pump impeller, and the detection chamber. The size of the impeller region is extended upwards and leftwards of the impeller itself to capture the rates of strain immediately before and after the impeller.
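As a concrete illustration, the magnitude of the symmetric rate of strain tensor can be computed from a sampled velocity gradient as in the sketch below. The sqrt(2 E:E) normalisation used here is a common CFD convention assumed for illustration, since the exact expression does not survive in the extracted text.

```python
import numpy as np

def strain_rate_magnitude(L):
    """Magnitude of the symmetric rate-of-strain tensor from a 3x3 velocity
    gradient L = grad(v). The sqrt(2*E:E) normalisation is one common CFD
    convention and is assumed here for illustration."""
    E = 0.5 * (L + L.T)                   # symmetric part of the velocity gradient
    return np.sqrt(2.0 * np.sum(E * E))   # double contraction E:E

# Example: simple shear with du/dy = 100 1/s.
L = np.array([[0.0, 100.0, 0.0],
              [0.0,   0.0, 0.0],
              [0.0,   0.0, 0.0]])
print(strain_rate_magnitude(L))           # 100 s^-1 with this convention
```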
Design of Model Runs
As stated in the Introduction, the objective of this paper is to estimate the distributions of the following quantities: the residence time for the plankton in the detection chamber, the rate of strain experienced by plankton in the S-shaped inlet, and the rate of strain experienced by plankton in the detection chamber. Lagrangian particles were placed in the flow to track the trajectory of simulated organisms within the SIM-BATH. In oceanographic applications, Lagrangian particle tracing is a standard approach for simulating the dynamics of marine organisms in oceanic flow. The Lagrangian particle model does not include particle-to-particle or particle-to-fluid interactions. For this reason, the particle tracking method is unidirectional, and the path of a given particle will be the same if multiple sizes and densities are combined in the same run, or if multiple runs are used with uniform parameters for each run, as was performed in this work. Table 1 provides a summary of the model runs, which assess sensitivity to diameter, density, and depth of deployment (pressure). For Run 1, the baseline run, the intention was to approximate the flow of massless particles through the SIM-BATH. For this reason, we used a small particle diameter D_p = 2 µm with a density ρ_p = 1000 kg/m³ to mitigate buoyant forces. For a particle with these properties, the expected mass is on the order of 10⁻¹⁰ grams.
In coastal regions, the primary source of mechanically stimulated bioluminescence is dinoflagellates, which generally range in size from about 15 µm to 100 µm but can reach sizes approaching 1 mm [10]. With many small-volume bathyphotometers like the UBAT, large organisms are likely to avoid the inlet [7]. As a result, we considered particles with diameters closer to the mean. Runs 2 and 3 were replicas of Run 1 but with particle diameters defined as D_p = 20 µm and D_p = 200 µm, respectively. Comparisons of Runs 1-3 highlight the impact of particle size on the residence time and the rate of strain experienced while passing through the SIM-BATH.
The densities of phytoplankton depend on their life stage and nutritional state. Vegetative cells of phytoplankton occupy a broad range of densities from 1030 to 1200 kg/m³ [10], with most species that are not heavily silicified or calcareous having densities near 1050 kg/m³ [11]. Run 4 used the same particle diameter as Run 3 but with a particle density of ρ_p = 950 kg/m³, and Run 5 used the same particle diameter as Run 3 but with a particle density of ρ_p = 1050 kg/m³. Comparisons of Runs 3-5 highlight the impact of particle density on the residence time and rate of strain.
The numerical model requires the specification of the pressure as a boundary condition at the inlet and outlet of the modeling domain. These boundary conditions can be interpreted as a specification of the bathyphotometer deployment depth, and varying these boundary conditions will highlight the sensitivity of the model results to the depth of the SIM-BATH in the field. Runs 1-5 were conducted with a pressure of p = 101.3 kPa, which corresponds to deployment at sea level. Run 6 used the same particle diameter and density as in Run 1, but the pressure at the inlet and outlet was set to p = 199.1 kPa, corresponding to a deployment depth of 10 m based on the hydrostatic assumption. Run 7 was set up in the same way as Run 1 and Run 6, but the inlet and outlet pressures were defined as p = 1081.3 kPa, corresponding to a deployment depth of 100 m.
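The quoted boundary-condition pressures follow from the hydrostatic relation, as the sketch below shows approximately; the water density used is an assumption, and small differences from the quoted values reflect whatever density was actually adopted.

```python
# Hydrostatic pressure at deployment depth: p = p_atm + rho * g * h.
P_ATM = 101.3e3       # Pa, surface pressure
RHO_W = 998.0         # kg/m^3, assumed water density (the exact value used
                      # for the boundary conditions is not stated)
G = 9.81              # m/s^2

def boundary_pressure_kpa(depth_m):
    """Inlet/outlet pressure boundary condition for a given deployment depth."""
    return (P_ATM + RHO_W * G * depth_m) / 1e3

for depth in (0.0, 10.0, 100.0):
    print(f"{depth:5.0f} m -> {boundary_pressure_kpa(depth):7.1f} kPa")
# Gives roughly 101.3, 199.2 and 1080.3 kPa, close to the 101.3, 199.1 and
# 1081.3 kPa used for Runs 1-5, 6 and 7 respectively.
```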
Residence Time Analysis
To quantify the residence time of particles in the detection chamber, the percentage of particles remaining in the detection chamber as a function of time was calculated for the seven model runs listed in Table 2 and shown in Figure 3. The figure also shows the percentage of particles remaining in the detection chamber using the analytical function in Equation (5). The percentage of particles remaining for specific times corresponding to the flash durations of certain species is also shown in Table 2. As shown in Figure 3 and Table 2, all particles remain in the detection chamber until at least t = 0.25 s in all seven runs. After that time, the number of particles remaining in the detection chamber decays exponentially. All particles leave the detection chamber by about t = 8-10 s. There is very low sensitivity of particle residence time to the variations in sizes, density of particles, or the depth of the instrument deployment. Bioluminescent flash durations for most autotrophic and heterotrophic dinoflagellates range from around 0.1 s to 0.25 s, and 0.2 s is the average flash duration for the heterotrophic dinoflagellate Gonyaulax polyedra [8,12,13]. This suggests that, in general, the total first flash of most dinoflagellates will be measured by the bathyphotometer. As shown in Figure 3 and Table 2, around 60% of particles remain in the detection chamber at t = 0.5 s, which suggests that approximately 60% of dinoflagellates remain in the detection chamber long enough to produce a secondary flash, which can occur if the rate of strain experienced by the organism again exceeds the threshold rate of strain. As a result, the measured BL potential might include multiple flashes from about 60% of candidate dinoflagellates. At the same time, around 0.5 s represents an average flash duration for the heterotrophic dinoflagellate Noctiluca scintillans [14], as well as the flash duration of the copepod Metridia longa [15,16].
Therefore, the bathyphotometer will measure the Total Mechanically Stimulated Light from around 60% of those plankton. At t = 1 s, only 40% of particles remain in the detection chamber, which means that around 40% of Noctiluca scintillans and Metridia longa will be able to flash twice, depending on the possibility of re-stimulation in the detection chamber. For the ctenophore Beroe cucumis, which has a flash duration reaching 1.4 s [16] or even 2.2 s [15], the TMSL of only around 20-25% of organisms will be represented in the BL potential measured by the bathyphotometer. For the other 75-80% of Beroe, only part of their flashes will be measured by the SIM-BATH.
The residence time curve corresponding to Equation (5) shows a lower percentage of particles remaining compared to the CFD results until around t = 0.4 s, which is likely due to the assumption of well-mixed particles used in the Herren report. The curves for the seven CFD runs and the curve for Equation (5) are in very good agreement after 1.3 s, which corresponds to the e-folding time scale in Equation (5): ρV (dm/dt)⁻¹ = 1.28 s. After that time, the particles recirculate and mix more thoroughly, more closely following the well-mixed assumption.
Rate of Strain Analysis
Numerous studies have been conducted to determine the rate of strain required to stimulate bioluminescence in different taxa [12,17,18]. These studies found that the rate of strain required for stimulation in steady laminar flow varies between 20 and 300 s⁻¹. For the runs in Table 1, Figure 4 plots the percent of particles stimulated in the S-shaped inlet as a function of the threshold rate of strain required to cause that stimulation. Figures 5 and 6 present similar estimations for the detection chamber and the pump impeller, respectively. Table 3 presents the percent of particles experiencing rates of strain exceeding 50, 100, and 200 s⁻¹ in the inlet and in the detection chamber. Comparisons of Figures 4 and 5 show that a higher percentage of particles experienced a given rate of strain in the inlet compared to the detection chamber for all considered runs. This is also supported by Table 3, where 100% of particles experienced a rate of strain exceeding 50 s⁻¹ in the inlet area. In the detection chamber, this percentage was approximately 93%. On average, approximately 90% of particles in the inlet experienced a rate of strain exceeding 100 s⁻¹, compared to about 80% of particles in the detection chamber. The S-shaped inlet was designed with the objective of minimizing marine organisms' exposure to high rates of strain in order to avoid pre-stimulation prior to reaching the impeller. A rate of strain of 100 s⁻¹ corresponds to a shear stress of 1 dyn/cm² at sea level dynamic viscosity, and it is a well-known threshold for the stimulation of the dinoflagellate Gonyaulax polyedra [17]. Our results demonstrate that 90% of the dinoflagellate Gonyaulax polyedra will receive sufficient mechanical stimulation in the inlet to initiate mechanically stimulated bioluminescence prior to passing the impeller, and as a result some portion of their flash will not be recorded inside of the detection chamber. At the same time, based on the results of the previous section, around 60% of autotrophic dinoflagellates remain in the detection chamber for at least 0.5 s, and 80% of them experience a rate of strain above the threshold of 100 s⁻¹ (Table 3). As shown in Figure 6, 100% of particles experienced a rate of strain exceeding 300 s⁻¹ in the pump impeller region, which is the maximum rate of strain considered in the text for the mechanical stimulation of bioluminescence.
For low rates of strain (below 50 s⁻¹), Run 3 and Run 5 deviated from the baseline run. Analysis of particle paths (not shown here) demonstrated that this discrepancy can be attributed to particles that settle in the detection chamber, with the percent of particles settling proportional to the mass of the particles. Rate of strain plots in all regions show that some particles experience very high rates of strain, in some cases well over 1000 s⁻¹. This can be attributed to the interaction of a small percentage of the particles with the boundary layer in each region, where strain is very high compared to the surrounding flow due to the considerable velocity gradient near the wall.
Discussion
We developed a numerical model of a pump-through bathyphotometer. The dimensions of the UBAT instrument were used to create the domain for the numerical model, called the SIM-BATH. The SIM-BATH has all elements of a pump-through bathyphotometer, including an S-shaped inlet, two pumps for mechanical stimulation and flow rate control, a detection chamber, and an outlet. We conducted CFD simulations of flow through the SIM-BATH, using Lagrangian particles as an approximation of marine taxa. From these simulations, we presented a distribution of the residence times of particles in the detection chamber of the SIM-BATH, as well as a statistical analysis of the rate of strain experienced by particles passing through the inlet and the detection chamber. Our modeling results demonstrate a very low sensitivity of particle residence time and rate of strain in the detection chamber to the variations in their sizes, density, or the depth of the instrument deployment.
We found that all particles remain in the detection chamber for at least 0.25 s. This suggests that most autotrophic and heterotrophic dinoflagellates, including C. horrida, G. polyedra, L. polyedra, P. fusiformis, P. lunula, and T. fusus, will have their total first flash measured by the bathyphotometer because their commonly accepted flash durations are less than 0.25 s [19]. One notable exception is the large heterotrophic dinoflagellate P. noctiluca, which has a flash duration of about 0.5 s. Only about 60% of P. noctiluca passing through the detection chamber will remain inside long enough for their total first flash to be recorded. Concerning other bioluminescent taxa, our results demonstrate that the bathyphotometer will measure the total first flash from around 60% of the copepod M. longa, based on their flash duration from the literature [19]. Our simulations have also shown that for the ctenophore B. cucumis, the total first flash will be recorded for only around 25% of organisms, and for the remaining 75%, only part of their flashes will be measured inside of the detection chamber.
We also found that the rate of strain within the S-shaped inlet is sufficient to produce pre-stimulation of many dinoflagellates. While passing through the inlet, 90% of particles experience a rate of strain exceeding 100 s⁻¹. C. horrida, G. polyedra, P. fusiformis, and T. fusus are highly likely to experience pre-stimulation, as their commonly accepted threshold rates of strain are below this value [19]. P. lunula and L. polyedra have threshold rates of strain of 200 s⁻¹ and 320 s⁻¹, respectively [19]. As a result, about 40% of P. lunula and 25% of L. polyedra may experience pre-stimulation. The copepod M. longa, with a threshold rate of strain of 510 s⁻¹, has a very low likelihood of pre-stimulation [19].
Finally, we find that the long residence time of many particles coupled with the high rate of strain in some areas of the detection chamber may produce re-stimulation of certain taxa as they continue to circulate.For dinoflagellates with a short flash duration and low rate of strain threshold like C. horrida, G. polyedra, P. fusiformis, and T. fusus, 50% or more may undergo at least one additional stimulation while in the detection chamber [19].
Our results lend themselves to a discussion of some issues with the UBAT. First, we observe high rates of strain in the instrument prior to the detection chamber. While the inlet is likely effective as a light baffle, the two elbows in the S-shaped inlet create pockets for recirculation, and the narrow inlet diameter produces a high-shear boundary layer that extends well into the interior of the pipe. In addition, we observe that the detection chamber does not produce consistent residence times. Half of the particles are quickly directed through the outlet in under a second, while the rest remain within the chamber for as long as ten seconds. From this, some organisms' first flashes are not fully recorded, while others may be stimulated to exhaustion as they recirculate. Consistent BL potential data collection could benefit from more uniform residence times for all organisms. Finally, the distance particles must travel after being stimulated by the pump impeller but prior to the start of the detection chamber may result in some light emission from the stimulation not being recorded.
Data Availability Statement:
The raw output files for the model runs analyzed in this paper, as well as data sufficient to regenerate the figures, tables, and other results in this paper, are stored on US Naval Research Laboratory computers and will be made available to members of the scientific community upon request. To obtain the data, please contact the corresponding author.
The optimal grid size was selected by recording the change in volumetric flow rate from the previous grid to the next. We defined the optimal grid as the smallest grid such that there is less than a 1% change in the flow rate from the previous grid. Between 1.6 million and 2.7 million cells, the change in average flow rate is 0.003 L/s, less than 1% of the total flow rate and well within the standard deviation of 0.009 L/s. This standard deviation represents the temporal variability of the volumetric flow rate due to the unsteady pumping effect of the impellers. As a result, the final grid used to produce the following results has a cell count of 2.7 million, with a flow rate of (1/ρ)(dm/dt) = 0.345 L/s. To establish an appropriate time step for the simulation, we referred to the rotation rates of the impellers. For impeller rotation, one to five degrees of rotation per time step is generally sufficient [22]. In our model, we used a time step size of ∆t = 0.25 ms, where each time step corresponds to approximately two degrees of rotation on the pump impeller and one degree of rotation on the flow impeller.
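The time-step choice can be checked directly against the impeller rotation rates, as in this small sketch:

```python
# Degrees of impeller rotation per solver time step (dt = 0.25 ms).
DT = 0.25e-3                      # s
for name, rpm in (("pump impeller", 1200), ("flow impeller", 600)):
    deg_per_step = rpm * 360.0 / 60.0 * DT
    print(f"{name}: {deg_per_step:.1f} deg per time step")
# Gives 1.8 deg and 0.9 deg, i.e. roughly the two degrees and one degree
# per step quoted above, within the one-to-five-degree guideline.
```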
Figure 1 .
Figure 1. View of the SIM-BATH model domain, including hidden faces.
Figure 2 .
Figure 2. Defined regions for rate of strain analysis.
Figure 3 .
Figure 3. Distribution of particles remaining in the detection chamber over time for the runs in Table 1 and from Equation (5). The x-axis represents the time particles have spent in the detection chamber, (a) on a logarithmic scale and (b) on a linear scale. The y-axis represents the percent of particles remaining in the detection chamber.
Figure 4 .
Figure 4. Percent of particles stimulated in the inlet as a function of the threshold rate of strain. Three vertical dashed black lines correspond to threshold rates of strain of 50, 100, and 200 s⁻¹.
Figure 5 .
Figure 5. Percent of particles stimulated in the detection chamber as a function of the threshold rate of strain. Three vertical dashed black lines correspond to threshold rates of strain of 50, 100, and 200 s⁻¹.
Figure 6 .
Figure 6. Percent of particles stimulated in the pump impeller region as a function of the threshold rate of strain. Three vertical dashed black lines correspond to threshold rates of strain of 50, 100, and 200 s⁻¹.
Figure A2 .
Figure A2. Mesh refinement from 0.25 million cells to 2.7 million cells. Visible are the orthogonal hexahedral volume cells, as well as the high aspect ratio boundary layer cells at the walls.
Table 1 .
List of the model runs.
Table 2 .
List of the model runs with the percent of particles remaining in the detection chamber at the durations of interest.
Table 3 .
Percent of particles experiencing a threshold rate of strain of 50, 100, and 200 s −1 in the inlet and detection chamber.
Table A3 .
Convergence of the time-averaged volumetric flow rate from 0.5 s to 1.0 s of model time (final grid: 2.7 × 10⁶ cells, flow rate 0.345 ± 0.009 L/s). Residuals converge at each time step after five internal iterations; time-step residuals for each variable are provided in Table A4.
Table A4 .
Residuals for each variable in the SIM-BATH model at a time of 1.0 s. | 2024-03-22T15:05:55.852Z | 2024-03-01T00:00:00.000 | {
"year": 2024,
"sha1": "38c71833a10bc3d49e0b3c8665cb5db5aa03e4e5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/24/6/1958/pdf?version=1710923673",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9979b40c4fce30a0ca68b696f1920c60e3115534",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258499741 | pes2o/s2orc | v3-fos-license | Ultrasound Assisted Endoscopic Gastric Bypass (USA-EGB): a potential endoscopic alternative to surgical bypass, a pre-clinical proof of concept study
Background and study aims Endoscopic weight loss procedures have gained traction as minimally invasive options for the primary treatment of obesity. Thus far, we have developed endoscopic procedures that reliably address gastric restriction but result in significantly less weight loss than surgical gastrointestinal bypass. The goal of this nonsurvival study was to assess the technical feasibility of an endoscopic procedure that incorporates both gastric restriction and potentially reversible gastrointestinal bypass. Methods Ultrasound-assisted endoscopic gastric bypass (USA-EGB) was performed in three consecutive live swine, followed by euthanasia and necropsy. Procedure steps were: 1) balloon-assisted enteroscopy that determines the length of the bypassed limb; 2) endoscopic ultrasound-guided gastroenterostomy that creates a gastrointestinal anastomosis using a lumen apposing metal stent; 3) endoscopic pyloric exclusion that disrupts transpyloric continuity resulting in complete gastrointestinal bypass; and 4) gastric restriction that reduces gastric volume. Results Complete gastrointestinal bypass and gastric restriction were achieved in all three swine. The mean total procedure time was 131 minutes (range 113–143), and the mean length of the bypassed limb was 92.5 cm and 180 cm using short and long overtubes, respectively. There were no significant complications. Conclusions We successfully described USA-EGB in three consecutive live swine. Further studies are needed to assess the procedure's safety, efficacy, and clinical use.
facilitated transoral incisionless procedures that alter gastric anatomy and induce weight loss. When compared to surgery, endoscopic procedures are perceived to be less invasive, have shorter recovery periods, lower morbidity, are cost-effective, and have gained traction as primary treatment options for obesity [10,11].
Endoscopic procedures that restrict gastric volume have been developed for clinical use in humans. Examples of such procedures are endoscopic balloon therapy and endoscopic sleeve gastroplasty. Procedures that simultaneously bypass proximal small bowel and restrict gastric volume are more effective at inducing weight loss and have superior metabolic effects than procedures that restrict gastric volume alone [12,13]. Several promising animal studies have evaluated the utility of alternatives such as natural orifice transluminal endoscopic surgery and self-assembling magnets to create a gastroenteric or entero-enteric bypass. An endoscopic procedure that simultaneously bypasses the proximal small bowel and restricts gastric volume has not yet been developed for clinical use in humans [14,15,16].
The goal of this nonsurvival animal study was to assess the technical feasibility of an endoscopic ultrasound (EUS)-assisted bypass procedure, that incorporates both gastric restriction and small intestinal bypass.
Materials and methods
The study was conducted at a United States Department of Agriculture-licensed facility, under an active protocol that was approved by our Institutional Review Board. Animal Care and Use Committee (IACUC) guidelines were strictly adhered to. Three large Yorkshire pigs that weighed 170 to 175 lb were used (▶ Table 1). On the day of the procedure, the animals were sedated and placed on a ventilator. Vitals including noninvasive blood pressure (NIBP), pulse, peripheral oxygen saturation, end tidal carbon dioxide levels and respiratory rate were measured throughout. After completion of the procedure, all three animals were euthanized using weight-based doses of sodium pentobarbital and necropsy was performed.
Procedure success was defined as the ability to accomplish all required steps of the procedure in a live swine without significant hemodynamic change or respiratory distress. At the conclusion of the procedure, water-soluble contrast was injected at the esophagogastric junction and the following was ensured on fluoroscopy: 1) absence of transpyloric contrast passage; 2) reduction in gastric volume by at least 50%; 3) presence of a common conduit that connects the gastric inlet to both the pylorus and the lumen apposing metal stent (LAMS); and 4) complete diversion of contrast through the gastroenteric anastomosis with no fluoroscopic evidence of a leak. Procedure-related complications were defined as perforation, significant bleeding that required intervention or any gross deviation from the expected procedure outcome.
Procedure description
The ultrasound-assisted endoscopic gastric bypass (USA-EGB) procedure was performed using the following steps, in three consecutive live swine, by a team of one physician with training in advanced endoscopy, one endoscopy technician, and one fluoroscopy technician (▶ Fig. 1).
Step 1: Balloon-assisted enteroscopy This step determines the length of small bowel that is bypassed. Balloon-assisted enteroscopy (BAE) is utilized to explore the small bowel to a pre-determined depth from the pylorus. The enteroscope is also used to manipulate the small bowel and create a window for EUS-guided gastroenteric anastomosis (EUS-GEA). A balloon-assisted enteroscope was advanced into the small bowel and, using standard technique, either 15 anterograde push-and-pull cycles were completed or 60 minutes elapsed (▶ Fig. 2a). Following this, the enteroscope and overtube were manipulated until the loop of bowel that is located immediately downstream from the overtube balloon, overlaps the gastric silhouette on fluoroscopy (▶ Fig. 3). Within this loop, the segment of bowel that simultaneously overlaps a hypothetical 3to 5-cm tubular zone that starts at the gastric inlet, runs along the lesser curvature, and ends at the pylorus, is ideally suited for LAMS placement (▶ Fig. 2b). This segment of small bowel will be referred to as the "target segment" for simplicity. After this position was achieved, the small bowel was infused with a solution of water, radio contrast, and methylene blue. The enteroscope was then withdrawn leaving the overtube with an inflated balloon in situ.
Using a commercially available LAMS, this step creates an EUS-guided anastomosis between the stomach and small bowel.
A curvilinear echoendoscope was advanced alongside the flexible overtube, and using fluoroscopy and EUS, the "target segment" of small bowel was identified. After obtaining an avascular window on Doppler, a 15 × 10 mm electrocautery-enhanced LAMS (Hot Axios stent, Boston Scientific, Marlborough, Massachusetts, United States) was deployed freehand using standard technique and a gastroenteric anastomosis was created (▶ Fig. 2c). The overtube balloon was then deflated and the echoendoscope and overtube were withdrawn. ▶ Fig. 2 Descriptive illustrations USA-EGB procedure. a A balloonassisted enteroscope is advanced into the small bowel. b The enteroscope and overtube are manipulated until the loop of small bowel located immediately downstream from the overtube balloon, overlaps the gastric silhouette on fluoroscopy. The "target segment" (meshed lines) within this loop of small bowel that is ideally suited for LAMS placement, is the segment of bowel that also overlaps a hypothetical 3-to 5-cm tubular zone that starts at the gastric inlet, runs along the lesser curvature, and ends at the pylorus (orange). The small bowel is then infused with a solution of water, radio contrast and a blue dye. The enteroscope is then withdrawn, leaving the overtube with an inflated balloon in situ. c A curvilinear echoendoscope is inserted alongside the overtube and the LAMS is deployed within the "target segment" (meshed lines) of small bowel. d Closure of the pylorus using a single continuous polypropylene suture. e Reduction of gastric volume by application of transmural sutures. f Completed USA-EGB procedure depicting complete gastrointestinal bypass and gastric restriction. Source: Image courtesy of Elena S. Kakoshina.
Step 3: Endoscopic pyloric exclusion This step closes the pylorus and completely excludes the proximal small bowel from the nutrient stream. After this step is completed, the gastroenteric anastomosis serves as the sole conduit for the nutrient stream, therefore completing the gastrointestinal bypass.
An endoscope mounted with a suturing device (OverStitch Endoscopic Suturing System, Apollo Endosurgery, Austin, Texas, United States) was used to close the pylorus with a single continuous 2.0 polypropylene suture (▶ Fig. 2d). The endoscope was kept in a short position to prevent excess linear force on the greater curvature of the stomach or the recently placed LAMS. To allow for reversibility, the pylorus is not de-epithelialized prior to suturing, and a single continuous suture is used that can be easily cut if necessary.
Step 4: Gastric restriction This step reduces gastric volume by using a plication or suturing device.
The stomach was reduced in volume by the application of transmural sutures (OverStitch Endoscopic Suturing System, Apollo Endosurgery, Austin, Texas, United States). We adhered to the following principles while suturing the stomach: 1) preservation of a hypothetical 3- to 5-cm tubular zone that starts at the gastric inlet, runs along the lesser curvature, and connects the gastric inlet to both the gastroenteric anastomosis and the pylorus; 2) no suturing of the gastric fundus or antrum; and 3) no suturing within 2 to 3 cm of the LAMS, as that might compromise the gastric outlet or result in undue pressure on the LAMS (▶ Fig. 2e). After completion of the suturing portion of the procedure, the LAMS was dilated to 10 mm using a wire-guided dilating balloon. At the conclusion of the procedure, contrast was injected at the esophagogastric junction to fill the stomach. Under fluoroscopy, reduction of gastric volume combined with complete diversion of the enteric stream into the small bowel via the LAMS was ensured (▶ Fig. 2f). Following this, the endoscope and all accessories were withdrawn (▶ Video 1).
Results
We were able to successfully reduce gastric volume and achieve complete gastrointestinal bypass in all three consecutive live swine (▶ Fig. 4).
All three swine had stable vital signs throughout the procedure and there were no significant complications. The mean total intervention time was 131 minutes (range 113-143). The mean time for BAE was 44.3 minutes (range 40-52). The mean time for EUS-guided gastroenterostomy was 27 minutes (range 23-34), mean time for pyloric exclusion was 15.6 minutes (range 14-17), mean time for gastric restriction was 44.3 minutes (range 32-52).
On necropsy, there was no significant intraperitoneal bleeding or enteric contamination in any of the swine. The stomachs were thoroughly examined and revealed topographic changes consistent with transmural suturing. There were no gross defects, misplaced sutures, or tethering to adjacent abdominal structures. The gastroenterostomy was inspected and found to be intact to palpation and visual inspection, with no gross leakage of air or gastric content. All three LAMS were located on the mid to distal posterior gastric wall. Using a flexible measuring tape, the bowel was measured three times from the duodenal-jejunal junction to the site of the enterotomy. The average length of the bypassed limb was 92.5 cm using the short overtube and 180 cm using the long overtube (▶ Table 2).
Discussion
In this preclinical proof-of-concept study, we have successfully demonstrated the use of a novel endoscopic technique that results in both gastroenteric bypass and gastric restriction. The procedure is arguably similar to a surgical single-anastomosis gastric bypass, one of the most commonly performed bypass operations in some countries [17].
Drawbacks of the study are as follows. First, this was a nonsurvival animal study. Swine have larger stomachs than humans and variant small bowel anatomy. Considering this, procedural dynamics and success in a swine model may not fully apply to humans. Weight loss trajectories and metabolic benefits remain unknown given the nonsurvival design of the study. Second, there is a relative paucity of literature supporting the safety of long-term LAMS use for gastroenteric anastomosis. This may not be clinically relevant if the LAMS is removed and the bypass reversed within 6 months. Third, EUS-guided gastroenteric anastomosis is usually fashioned through the posterior wall of the stomach. The LAMS must therefore traverse the transverse mesocolon before reaching the small bowel. The mesocolon is likely to be thicker in an overweight human than it is in swine. Based on our experience with EUS-guided gastrojejunostomy, we feel that this may add some technical difficulty, but do not consider it to be prohibitive.
The bypass in USA-EGB may be potentially reversed while leaving gastric restriction intact (▶ Fig. 5). This is accomplished via endoscopic removal of the LAMS and cutting of the polypropylene suture at the pylorus. Once the LAMS is removed, the gastroenteric anastomosis is likely to spontaneously close within a few days [18,19]. If persistent, the fistula can be closed endoscopically via suturing. In addition, the pylorus was not de-epithelialized, and a single continuous suture was used during pyloric closure. Cutting the pyloric suture at a single location would open the pylorus and restore transpyloric continuity, hence reversing the gastroenteric bypass [20].
▶ Video 1 Description of the ultrasound-assisted endoscopic gastric bypass (USA-EGB) procedure in a live swine model. Step 1: X-ray after enteroscope withdrawal showing the overtube with balloon inflated and optimal for GJ placement. Step 2: EUS-guided gastroenterostomy creation. Step 3: Endoscopic image showing pyloric suturing and exclusion. Step 4: Endoscopic image of gastric reduction.
Conclusions
To our knowledge, this is the first successful description of a partially reversible ultrasound-guided endoscopic bypass procedure. The gastrointestinal bypass may be reversed, leaving gastric restriction intact. The procedure can be performed by a single endoscopist using equipment and accessories that are widely available. Further studies are needed to evaluate safety, efficacy, and clinical use. | 2023-05-05T15:04:32.966Z | 2023-02-12T00:00:00.000 | {
"year": 2023,
"sha1": "e4ca3ad7ae7f86f6b38f2e7f205a4e21306be8da",
"oa_license": "CCBYNCND",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/a-2085-3866.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "feda52d858ad3425bd410bf46ef30f844f0c5668",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
15449816 | pes2o/s2orc | v3-fos-license | What broad emission lines tell us about how active galactic nuclei work
I review progress made in understanding the nature of the broad-line region (BLR) of active galactic nuclei (AGNs) and the role BLRs play in the AGN phenomenon. The high equivalent widths of the lines imply a high BLR covering factor, and the absence of clear evidence for absorption by the BLR means that the BLR has a flattened distribution and that we always view it near pole-on. The BLR gas is strongly self-shielding near the equatorial plane. Velocity-resolved reverberation mapping has long strongly excluded significant outflow of the BLR and shows instead that the predominant motions are Keplerian with large turbulence and a significant net inflow. The rotation and turbulence are consistent with the inferred geometry. The blueshifting of high-ionization lines is a consequence of scattering off inflowing material rather than the result of an outflowing wind. The rate of inflow of the BLR is sufficient to provide the accretion rate needed to power the AGN. Because the motions of the BLR are gravitationally dominated, and the BLR structure is very similar in most AGNs, consistent black hole masses can be determined. The good correlation between these estimates and masses predicted from the bulge luminosities of host galaxies provides strong support for the similarity of AGN continuum shapes and the correctness of the BLR picture presented. It is concluded that although many mysteries remain about the details of how AGNs work, a general overall picture of the torus and BLR is becoming clear.
Introduction
A broad-line region (BLR) is present in all AGNs accreting at moderate-to high-Eddington ratios. BLRs are important both because they are our best probe of how AGNs work and because of their potential for readily providing masses of supermassive black holes (SMBHs) back to the earliest times of galaxy formation. However, in order to be able to use BLRs to reliably estimate the masses of SMBHs it is essential to understand the structure and kinematics of BLRs. Over the last four decades there have been wide-ranging and, not infrequently, mutually contradictory views of the nature of the BLR (see reviews by Mathews & Capriotti 1985, Osterbrock & Mathews 1986, and Sulentic et al. 2000). However, I believe that the situation is improving. I review here what I consider to be the clearest pointers to the underlying structure and kinematics of the BLR and I argue that, while there are certainly many interesting problems remaining, the basic picture is now becoming fairly secure. I furthermore believe that this picture applies to all BLRs because BLR equivalent widths and line ratios are remarkably similar, especially in the ultraviolet.
The structure of the broad-line region and torus
The two most basic questions about the BLR are "what does it look like?" and "how is it moving?" The traditional picture of the BLR of an AGN for over 40 years (and one which is widely depicted in cartoons of AGNs) has been that there is a central source emitting ionizing radiation roughly spherically, and that it is surrounded by a roughly spherical mist of cloudlets. This is depicted in the left-hand-panel of Fig. 1. Each individual cloud, if it is big enough, will have a structure as shown in the right-hand-panel of Fig. 1. It will be highly ionized on the front, and if it has a high-enough column density, it will be mostly neutral on the back. The front emits high-ionization lines such as He II, He I, O VI, N V, and C IV, while the back emits low-ionization lines such as Mg II, Ca II, O I, and Fe II. All these lines are well-known in AGNs. The emissivity of each line as a function of distance from the front of the cloud can be calculated with the photoionization code CLOUDY (Ferland et al., 1998). Fig. 2 shows emissivities for some well-known lines. Baldwin et al. (1995) showed that the sum of contributions from clouds with a distribution of cloud properties (densities and distances from the center) will automatically produce a total spectrum similar to what is observed from AGNs. This is the so-called LOC model. 1 This was important because it showed that no "finetuning" of cloud conditions was needed to explain AGN spectra.
Despite the success of the traditional picture in general, and the LOC model in particular, in explaining the overall spectrum of an AGN, the problem with this picture (see Gaskell, Klimek, & Nazarova 2007) is that to explain the strengths of the BLR lines the covering factor has to be large (50% or so), yet if this is so, and if the cloudlets are covering the central source uniformly, we ought to see Lyman continuum absorption by the BLR clouds. In fact Lyman continuum absorption due to the BLR is never convincingly seen (Antonucci et al. 1989; see discussion in MacAlpine 2003 and Gaskell, Klimek, & Nazarova 2007). We believe, as proposed by S. Phinney (see Antonucci et al. 1989), that the need for a high covering factor plus the lack of Lyman continuum absorption requires the BLR to have a flattened distribution and requires us to be viewing it through a hole. This conclusion is supported by recovery of what is called the "transfer function" of some lines (the transfer function is the temporal response of a line to a delta-function event in the continuum light curve). Transfer functions for low-ionization lines have always implied that there is little or no gas along the line of sight (Krolik et al., 1991; Horne, Welsh, & Peterson, 1991; Mannucci, Salvati, & Stranga, 1992; Pijpers & Wanders, 1994), and thus that at least the low-ionization gas in the BLR has a flattened distribution.
1 Ostensibly from "locally optimally emitting clouds."
Having a high overall covering factor but a flattened distribution means that near the equatorial plane there will be close to a 100% chance that any path will intersect a BLR cloud. The clouds will thus be self-shielding. Radiation from the central source can freely escape near the axis of symmetry, but is strongly diluted in the equatorial plane. This is schematically illustrated in Fig. 3.
It is easy to calculate the average radial dependence of the ionization and the emissivities of all the lines coming from cloudlets with a distribution such as in Fig. 3. The ionization structure of a single cloud in CLOUDY is now spread out in radius as illustrated schematically in Fig. 4. The horizontal axis in Fig. 2 can now be read as distance into the BLR rather than distance into an individual cloud. Our model is in fact very similar to the old "filling factor" model of MacAlpine (1972).
The earliest reverberation mapping of multiple lines (Gaskell & Sparke, 1986) showed that the high-ionization lines were coming from smaller radii than the low-ionization lines. High-ionization lines were also wider (e.g., Shuder 1982; Mathews & Wampler 1985). The radial ionization stratification of the BLR has been well confirmed by later reverberation mapping. The best reverberation-mapped AGN is NGC 5548. The horizontal axis of Fig. 5 (taken from Gaskell, Goosmann, & Klimek 2008) shows the reverberation-mapping time lags (i.e., the effective radii) for lines of a variety of ions from Clavel et al. (1991), Peterson et al. (1991), and Bottorff et al. (2002).
Figure 3: Schematic cross section of the BLR and torus in a plane through the axis of symmetry. The torus is on the right. Ionizing radiation is attenuated in the equatorial plane, but can freely escape near the poles. Figure from Gaskell, Klimek, & Nazarova (2007).
The lines are OLS-bisector fits (Isobe et al., 1990).
(See table in Gaskell, Klimek, & Nazarova 2007 for details). The observed lags cover an order of magnitude in radius. NGC 5548 is not unique in this regard: an identical range of radii has also been found for Mrk 110 by Kollatschny (2003). The vertical axis shows the lags predicted for the same lines by the LOC model (Korista & Goad, 2000; Bottorff et al., 2002). It can be seen that while there is a correlation, these predicted lags cover a much smaller range of radii. The reason for this can be appreciated in Fig. 4. In the LOC model (top part of the figure) every cloud has a highly-ionized front part; the clouds just differ in the degree of ionization. The self-shielding model (Gaskell, Klimek, & Nazarova, 2007) solves the problem of why the ionization stratification is so strong. In the self-shielded model (bottom of Fig. 4) there is a clear spatial separation of the differing ionizations. It can be seen in Fig. 5 that the self-shielding model gives good agreement with the observed lags. Netzer & Laor (1993) made the important suggestion that the outer edge of the BLR coincided with the dust sublimation radius of the torus. Reverberation mapping observations show that the low-ionization gas in the BLR indeed extends out to the dusty torus (Suganuma et al., 2006; Gaskell, Klimek, & Nazarova, 2007). The covering factor of the torus can be calculated statistically from the ratio of type-1 (face-on) to type-2 (edge-on) AGNs, and directly for individual objects from the strength of the thermal emission (e.g., Maiolino et al. 2007).
We argue (Gaskell, Klimek, & Nazarova, 2007) that the covering factors of the BLR and torus have to be the same. This is because if the torus has a lower covering factor than the BLR we would see the BLR in absorption against the central continuum source in some objects near the type-2 viewing position. This is never seen. On the other hand, if the BLR has a lower covering factor, some of the dusty torus will see direct radiation from the central source. This cannot be the case for much of the torus because it would then be unable to exist as close in as is seen.
The overall picture we get of the torus and BLR is indicated schematically in the cartoon in Fig. 6 and in the computer-generated renditions shown in Fig. 7. The best description of the appearance is to say that the BLR and torus look like a bird's nest. This picture is identical to that favored for totally independent reasons in an unfortunately almost totally overlooked paper by Mannucci, Salvati, & Stranga (1992). They inferred a "bird's nest" geometry from a combined analysis of line profiles and transfer functions in NGC 5548. The positions of masers in NGC 1068 also provide support for a thick BLR-torus (Greenhill et al., 1996).
Determining the Direction of Motion
The kinematics of the BLR have been a long-standing problem. It has been known from the earliest days of AGN studies that the lines are very broad (for a review of the earliest literature see Seyfert 1943), but Doppler shifts only tell us the motion of gas along the line of sight. To know whether the gas is inflowing, outflowing, moving in random virialized orbits, or in more planar Keplerian orbits in a disc, we need to know the line-of-sight velocity as a function of position relative to the black hole.
The discovery of narrow intrinsic absorption in NGC 4151 (Mayall, 1934; Anderson & Kraft, 1969) and of broad absorption lines (BALs) in PHL 5200 (Lynds, 1967) proved that some gas was outflowing from AGNs. However, BALs commonly extend to velocities several times higher than those observed for the BLR in the same objects (see, for example, Turnshek et al. 1988), so it is not clear that there is necessarily any connection between BALs and BLRs. The case for an outflowing BLR was strengthened though when Blumenthal & Mathews (1975) and Baldwin (1975) showed that a radiatively-accelerated outflow could reproduce the observed line profiles well in some objects. However, it was subsequently shown that other models could provide comparably good fits to broad-line profiles, which demonstrated that fits to individual line profiles alone cannot uniquely determine the kinematics.
More progress was made by comparing lines of differing ionizations. Gaskell (1982) discovered that high-ionization broad lines were blueshifted with respect to low-ionization lines, and pointed out that this requires there to be radial motions plus some source of opacity. This blueshifting has now been widely confirmed. Gaskell (1982) suggested that the blueshifting could be the result of a "disk-wind" model where the high-ionization lines arise in a wind outflowing above the accretion disc. Wilkes & Carswell (1982) pointed out a problem with any purely radial motion: the profiles of C IV and Lyman α were observed to be very similar, yet, for optically-thick clouds, Lyman α is emitted very anisotropically. To satisfy this constraint the clouds either had to be optically thin, or not moving purely radially.
[Figure 8 caption: Velocity-resolved reverberation mapping. Because of light-travel-time effects, the gas on the near side of the AGN is seen to respond to continuum changes first. For the hypothetical outflowing BLR illustrated here, the blue wing of a line would vary first.]
Obviously the question of the direction of motion could be settled if it could be determined which gas was on which side of the black hole. The best way of doing this is through velocity-resolved reverberation mapping (Gaskell, 1988). How this works is illustrated in Fig. 8. Surprisingly, velocity-resolved reverberation mapping results (Gaskell, 1988; Koratkar & Gaskell, 1989; Crenshaw & Blackwell, 1990; Koratkar & Gaskell, 1991a,b,c; Korista et al., 1995; Done & Krolik, 1996; Ulrich & Horne, 1996; Sergeev et al., 1999) strongly ruled out significant outflow of both high- and low-ionization lines (see example in Fig. 9).
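To make the technique concrete, the short sketch below (plain Python) cross-correlates a blue-wing and a red-wing light curve. The light curves, lags, and noise level are made-up illustrative assumptions, not data from any of the campaigns cited above; with the sign convention used here, a peak at positive lag means the red wing lags the blue wing, while a peak at negative lag (red wing leading) is the signature of net inflow.

import numpy as np

def ccf(a, b, max_lag):
    # Normalized cross-correlation r(k) = <a(t) b(t+k)> for equally sampled series;
    # a peak at positive k means b lags a.
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    lags = np.arange(-max_lag, max_lag + 1)
    r = []
    for k in lags:
        if k >= 0:
            r.append(np.mean(a[:n - k] * b[k:]))
        else:
            r.append(np.mean(a[-k:] * b[:n + k]))
    return lags, np.array(r)

# Toy example: a smooth driving continuum, a blue wing echoing it after 5 samples and a
# red wing echoing it after 2 samples (i.e. the red wing responds first, as for inflow).
rng = np.random.default_rng(0)
cont = np.convolve(rng.normal(size=600), np.ones(20) / 20, mode="same")
blue = np.roll(cont, 5) + 0.05 * rng.normal(size=600)
red = np.roll(cont, 2) + 0.05 * rng.normal(size=600)
lags, r = ccf(blue, red, 30)
print("peak at lag", lags[np.argmax(r)])   # negative: the red wing leads the blue wing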
Ruling out significant outflow of the BLR was important not just because of what it said about how AGNs work, but also because it meant that the BLR motions were gravitationally dominated. The BLR could thus be used for determining the masses of the central black holes. This permitted the first reverberation-mapping determinations of black hole masses and Eddington ratios (Gaskell, 1988; Koratkar & Gaskell, 1989; Crenshaw & Blackwell, 1990; Koratkar & Gaskell, 1991a,b,c). (The BLR was first used to estimate the masses of AGN black holes by Dibai (1977), who estimated BLR sizes using photoionization considerations; at that stage, of course, there was no clear evidence that the BLR was virialized. See Bochkarev & Gaskell (2009).) Note that while gravity dominates BLR motions, the simple fact that radiation pressure was at one time considered to be driving BLR motions (e.g., Blumenthal & Mathews 1975) should warn us that radiation pressure might not be negligible (Marconi et al., 2008).
While the velocity-resolved reverberation mapping results were good news for the new black-hole-mass-determination industry, they created a problem for the generally accepted "disk-wind" explanation of the blueshiftings of high-ionization lines. Disk-wind models are very theoretically appealing, and strong blueshiftings have been taken as signs of strong winds (e.g., Leighly & Moore 2004). However, at the same time, people working on black hole mass determinations were firmly believing that they were using virialized lines! This has almost caused AGN observers to suffer from multiple-personality disorder! It is very difficult to finesse a disk-wind model to fit the velocity-resolved reverberation mapping constraint, all the more so since outflow was first excluded for the high-ionization C IV line (Gaskell, 1988; Koratkar & Gaskell, 1989; Crenshaw & Blackwell, 1990; Koratkar & Gaskell, 1991a,b,c). We believe, however, that there is a simple solution to the problem: the opacity needed to cause the blueshifting is not primarily absorption but scattering (Gaskell & Goosmann, 2008). Electron scattering had in fact been considered in the late 1960s to be a significant source of line broadening in AGNs (Kaneko & Ohtani, 1968; Weymann, 1970; Mathis, 1970), but the idea fell out of favor with the success of the Blumenthal & Mathews (1975) radiative acceleration model in fitting profiles. It has, however, long been well known (e.g., Edmonds 1950; Auer & van Blerkom 1972) that scattering off regions with a net radial motion produces line shifts. For an infalling scattering medium, photons gain energy. This is explained in Fig. 10. The process is similar to the well-known Fermi acceleration process. The effect of scattering off radially moving material in AGNs was considered by Kallman & Krolik (1986) and Ferrara & Pietrini (1993).
[Figure 9 caption: The cross-correlation function for the blue and red wings of the Mg II line in NGC 4151 as a function of time delay. The predicted peak in the correlation function for pure outflow (blue wing varies first) is shown by the arrow. It can be seen instead that the strongest correlation is for near zero delay (what is expected for virialized or Keplerian motion), but with the red wing leading by a small but significant amount, thus implying some net inflow. Figure from Gaskell (1988).]
[Figure 10 caption: Cartoon illustrating why scattered photons are blueshifted when scattered off a reflector which is approaching the source of photons. The person on the right sees her reflection (far left) in the mirror. If the mirror is approaching her, then the image is approaching her twice as quickly.]
As can be seen in Fig. 9, velocity-resolved reverberation mapping not only excludes outflow, but it also shows that there is a slight inflow. Initially this result only had ∼90% confidence for any one line in one object, but it has been found repeatedly for many lines in many objects now and thus, as pointed out by Gaskell & Snedden (1997), the overall significance is high. Early examples included Gaskell (1988); Koratkar & Gaskell (1989); Crenshaw & Blackwell (1990); Koratkar & Gaskell (1991a,b,c); Korista et al. (1995); Done & Krolik (1996); Ulrich & Horne (1996).
More recent examples can be found in Sergeev et al. (1999), Kollatschny (2003), Welsh et al. (2007), Doroshenko et al. (2008), Bentz et al. (2008), Denney et al. (2009a), and Bentz et al. (2009c). Important independent evidence for inflow comes from high-resolution spectropolarimetry (e.g., Smith et al. 2005). The systematic change in polarization as a function of velocity across the Balmer lines requires a net inflow of a scattering region somewhat exterior to the Balmer lines. Polarization reverberation mapping (Gaskell et al., 2008a) can reveal the location of scattering regions.
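The statistical point that many individually marginal detections become jointly significant can be illustrated with a toy binomial calculation (this is an illustration only, not the actual analysis of Gaskell & Snedden 1997, and the numbers of measurements are assumptions):

import math

def p_at_least(k, n, p=0.5):
    # Probability of k or more "successes" in n independent trials (binomial tail).
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Under a null in which each independent measurement is equally likely to give either sign
# of the red-wing/blue-wing lag, getting 9 or more of 10 agreeing on inflow is unlikely:
print(p_at_least(9, 10))   # about 0.011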
We have used the STOKES Monte Carlo radiative transfer code (Goosmann & Gaskell, 2007) to model the effects on line profiles of scattering off an inflowing external medium. The two geometries considered are shown in Fig. 11. One is an infalling spherical distribution of scatterers and the other an infalling cylindrical distribution. In Fig. 12 we show a comparison of observed profiles of two low- and high-ionization lines in PKS 0304-392 with various models. We adopted an infall velocity of ∼ 1000 km s −1 based on velocity-dependent reverberation mapping, spectropolarimetry, and the observed mean blueshift (see Gaskell & Goosmann 2008 for details). It can be seen that both spherically and cylindrically symmetric models readily reproduce the blueshifting. An additional advantage of having significant scattering in the BLR is that it solves the "smoothness problem" for BLR line profiles (Capriotti, Foltz, & Byard, 1981). The intrinsic line broadening in an individual BLR cloudlet is only of the order of the sound speed (∼ 15 km s −1), yet the velocity broadening of the BLR as a whole is hundreds of times greater. This requires the number of clouds to be very high (Capriotti, Foltz, & Byard, 1981; Atwood et al., 1982). The limit on the number of discrete clouds has now been pushed up to 10^8 (Arav et al., 1998; Dietrich et al., 1999). This constraint is relaxed if there is broadening by scattering.
[Figure 11 caption: Cross sections in a plane through the axis of symmetry of the two scattering region geometries modeled in Fig. 12.]
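An order-of-magnitude version of the smoothness argument can be written down directly; the velocities used below are illustrative assumptions, not fits to any particular object:

# A profile ~v_blr wide built from discrete cloudlets, each only ~v_sound wide, contains
# roughly N_clouds * (v_sound / v_blr) cloudlets per resolution element, so its relative
# bumpiness is ~ (N_clouds * v_sound / v_blr) ** -0.5 from counting statistics alone.
v_blr, v_sound = 4000.0, 15.0       # km/s, illustrative
for n_clouds in (1e4, 1e6, 1e8):
    bumpiness = (n_clouds * v_sound / v_blr) ** -0.5
    print(f"N_clouds = {n_clouds:.0e}: expected bumpiness ~ {bumpiness:.2%}")

With these numbers, sub-percent smoothness of the observed profiles indeed pushes the required number of discrete cloudlets toward 10^8, unless some additional broadening (such as scattering) smooths the profile.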
The overall velocity field of the BLR
For a typical AGN, several independent lines of evidence (the blueshifting, velocity-resolved reverberation mapping, and spectropolarimetry) all point to the inflow velocity being of the order of ∼ 1000 km s −1 . As has been mentioned, velocity-resolved reverberation mapping (see, for example, Fig. 9) implies that the dominant motion is not radial, but Keplerian or random. The observed widths of broad lines are indeed several times higher than the inflow velocity, and, of course, the predominant motion for a flattened distribution must be Keplerian.
As is clear from Figs. 6 and 7, when we observe the BLR (i.e., in type-1 objects) we are always seeing it close to face-on. The line-of-sight Keplerian component of velocity is reduced by sin i, where i is the angle between the axis of rotational symmetry and the line of sight. The statistics of line profiles in the SDSS (La Mura et al., 2009) suggest that for the vast majority of objects i < 20 deg.
As was realized by Osterbrock (1978), the statistics of line widths imply that, in addition to Keplerian motion, there has to be a substantial additional component of velocity perpendicular to the orbital plane. Osterbrock appropriately called this "turbulence". The vertical component is also necessary for reconciling the structure of the BLR with its kinematics. As Mannucci, Salvati, & Stranga (1992) showed for NGC 5548, the combined constraints of reverberation mapping and the time-averaged line profile favor the sort of "bird's nest" BLR distribution shown in Figs. 6 and 7.
In summary, I believe that all the evidence points to the BLR having a nest-like appearance and having velocities ordered as v_Kepler > v_turb > v_inflow, where the Keplerian velocity, v_Kepler, of an emission line is a couple of times larger than the turbulent velocity, v_turb, which is in turn somewhat bigger than the inflow velocity, v_inflow. The ratios of BLR height to radius and of v_Kepler to v_turb are similar to those deduced by Osterbrock (1978). The only change to the Osterbrock model is recognizing that there is also a significant inflow.
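A minimal sketch of how such a velocity field translates into an observed line width, assuming (as a simplification, not a fit to any object) that the projected Keplerian component and an isotropic turbulent component add in quadrature:

import numpy as np

def fwhm_obs(v_kep, v_turb, i_deg):
    # Observed FWHM when the Keplerian component is reduced by sin(i) and a turbulent
    # component, independent of viewing angle, adds in quadrature (an assumption).
    i = np.radians(i_deg)
    return 2.0 * np.sqrt((v_kep * np.sin(i)) ** 2 + v_turb ** 2)

# Illustrative numbers: v_kep a couple of times v_turb, as argued in the text.
for i_deg in (5, 10, 20, 40):
    print(i_deg, "deg ->", round(fwhm_obs(3000.0, 1300.0, i_deg)), "km/s")

With these numbers the width changes only modestly between i = 5 and 20 deg, which is why a substantial turbulent component keeps orientation-induced scatter small for nearly face-on type-1 objects.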
Orientation effects
It is well established from radio properties (see Antonucci 1993) that core-dominated AGNs are simply lobe-dominated AGNs viewed from near the jet axis (i.e., near face-on). Gaskell et al. (2004) showed from a comparison of continuum shapes and line ratios that core-dominated and lobe-dominated AGNs have the same underlying optical-to-UV continuum shape and that the SED differences are just due to increased reddening in the lobe-dominated AGNs. We thus have every reason to expect the BLRs of core-dominated and lobe-dominated AGNs to be the same on average. Lobe-dominated radio-loud AGNs should therefore be an excellent laboratory for studying how orientation affects the appearance of the BLR. Miley & Miller (1979) found that lobe-dominated AGNs preferentially had broader and more irregular line profiles. Wills & Browne (1986) discovered that the FWHM of Hβ increases as we see AGNs more edge-on. This provided strong support for a flattened BLR.
AGNs with the peaks of their broad Balmer lines blueshifted or redshifted from the systemic velocity have long been known (Lynds, 1968). It was proposed (Gaskell, 1983) that these peaks might represent separate BLRs, each associated with a member of a supermassive black hole binary, but line profile variability observations on long and short timescales have delivered two fatal blows to this hypothesis. Firstly, although for a while it looked like the expected binary orbital motion was being seen in long-term profile variations in 3C 390.3 (Gaskell, 1996), further observations showed that the radial velocity changes were completely inconsistent with a binary black hole (Eracleous et al., 1997) but were instead consistent with orbital motion of concentrations of BLR gas orbiting in a disk. The second fatal blow was that velocity-resolved reverberation mapping of 3C 390.3 strongly ruled out the binary BLR hypothesis because the redshifted and blueshifted peaks varied simultaneously on a light-crossing timescale (O'Brien et al., 1998; Dietrich et al., 1998). This demonstrated conclusively that the double-peaked profiles arose from an inclined disk, as had been widely proposed (see references and discussion in Gaskell & Snedden 1999). Despite these double fatal blows to the idea that displaced broad-line peaks might be due to supermassive binaries, the topic of what signs there might be of sub-parsec supermassive binaries nonetheless remains one of considerable current interest (see review by Tamara Bogdanović in these proceedings).
[Figure 13 caption: The effect of broadening lines on the appearance of structure in line profiles. The left frame shows a Lorentzian and two Gaussians chosen to approximate the appearance of Hα or Hβ in 3C 390.3 in 1981 or 1988. The right frame has the same line widths and peak intensities as in the left frame, but half the velocity displacements. Figure from Gaskell & Snedden (1997).]
A subsequent comprehensive survey of radio galaxies by Eracleous & Halpern (1994) revealed many disk-like Balmer line profiles. They found the FWHMs of the Balmer lines to be approximately double those of AGNs with single-peaked Balmer lines. As is shown in Fig. 14, a factor of two reduction in line width is sufficient to make displaced peaks disappear. Gaskell & Snedden (1999), Popović et al. (2004), and Bon et al. (2006) have argued that a disk-like emission line contribution is probably present in all BLRs but is simply hard to recognize because, as illustrated in Fig. 13, the classic double peaks become hard to see when the disk is near to face-on.
It is straightforward to estimate the inclinations of the BLRs from broad disk-like line profiles; these can often be estimated to within a few degrees. Eracleous & Halpern (1994) get inclinations which predominantly have i > 25 deg. Their fits to the disk profiles also provide important confirmation that a significant turbulent velocity is needed, and give the turbulent velocity for each object. Without the turbulent velocity component the peaks of the line profiles would be much too sharp. The turbulent velocities are fairly well determined from the line profile fits (to ≈ ±250 km s −1). The average BLR turbulent velocity needed is 1300 km s −1. This is roughly what would be expected from the height of the BLR/torus. The 1-σ scatter in the derived turbulent velocities is only ±400 km s −1, which is only slightly greater than the average formal uncertainty in the estimates. Bon (2008) has estimated inclinations for single-peaked AGNs. For these we mostly see disks with inclinations of i < 25 deg (see also Bon et al. in these proceedings). The difference in sin i between the displaced-BLR-peak AGNs and single-peaked AGNs is thus about a factor of two. This agrees with the ratio of FWHMs for the two samples.
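A quick check of the factor-of-two statement, using representative mid-range inclinations for the two samples (the particular values below are assumptions chosen only for illustration):

import math
i_double_peaked, i_single_peaked = 35.0, 15.0   # degrees, illustrative mid-range values
ratio = math.sin(math.radians(i_double_peaked)) / math.sin(math.radians(i_single_peaked))
print(round(ratio, 2))   # about 2.2, comparable to the ratio of FWHMs of the two samples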
The accuracy of AGN black hole mass determinations
The component of velocity perpendicular to the equatorial plane is vital for AGN black hole mass determinations! Without this strong turbulent velocity component, variations in sin i would introduce substantial scatter into AGN black hole mass estimates, especially since type-1 AGNs are observed close to face-on. There is recent evidence that there is remarkably little scatter in AGN mass estimates. Firstly, it has become apparent (Bochkarev & Gaskell, 2009) that the two main methods of estimating black hole masses from the BLR agree surprisingly well. The Dibai single-epoch-spectrum method (Dibai, 1977) and reverberation mapping methods agree to within the expected errors. Gaskell (2009b) has shown furthermore that a simple refinement of the method produces even better agreement. The agreements mean that such methods are estimating the effective radii of the BLR correctly. As Bochkarev & Gaskell (2009) discuss, the success of the Dibai method means that the inner regions of AGNs are very similar. In particular: 1. The spectral energy distribution (SED) from the optical to the far UV must be very similar in all type-1 AGNs, because the optical region where the flux is measured is far removed in energy from the far UV which is photoionizing the gas. Although AGN SEDs look different, Gaskell et al. (2004) and Gaskell & Benker (2007) have already argued that the apparent variation is not real but is primarily caused by reddening. 2. There is a simple scaling relationship between the luminosity and the effective radius. This is supported by reverberation mapping estimates of the effective radii of BLRs (Koratkar & Gaskell, 1991c; Kaspi et al., 2000, 2005; Bentz et al., 2006, 2009a). Both the Dibai and reverberation-mapping methods of estimating black hole masses depend on observed BLR line widths, so geometric differences and orientation effects will affect both methods. An important external check on the accuracy of AGN black hole mass estimates is provided by the tightness of the relationship between black hole mass, M•, and luminosity, L_host, of the bulge of the host galaxy. Gaskell & Kormendy (2009) have recently shown that estimating M• by the Dibai method and L_host from the fraction of starlight in SDSS spectra gives a scatter of ±0.23 dex in log M• (see Fig. 14). Bentz et al. (2009b) have estimated L_host completely independently for a different set of AGNs using HST photometry and published reverberation-mapping mass estimates. They get a scatter in log M• of ±0.33 dex. Both of these scatters in the M•-log L_host relationships are smaller than the ±0.38 dex scatter Gultekin et al. (2009) and others find when M• is determined by stellar dynamical methods, but they are still greater than the ±0.17 dex scatter in the M•-σ* relationship for pure bulge (i.e., barless) galaxies (Graham, 2008). The Dibai method and the method proposed by Gaskell (2009b) seem to give particularly tight M•-σ* and M•-L_bulge relationships for the most massive elliptical galaxies (Gaskell, 2009a). This is probably because they have the least intrinsic scatter in the M•-σ* relationship. These comparisons with predictions from host galaxy properties imply that black hole mass determinations from the BLR are surprisingly accurate, as accurate as the best stellar-dynamical estimates.
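For concreteness, a minimal sketch of the kind of virial mass estimate discussed above. The virial factor f is an assumed order-unity calibration that depends on BLR geometry and on how the line width is measured, and the input FWHM and lag are illustrative numbers, not measurements quoted in this paper:

G = 6.674e-8          # cgs
c = 2.998e10          # cm/s
M_sun = 1.989e33      # g

def virial_mass(fwhm_kms, lag_days, f=1.0):
    # M = f * R_BLR * (FWHM)^2 / G with R_BLR = c * (time lag).
    R = c * lag_days * 86400.0
    v = fwhm_kms * 1.0e5
    return f * R * v * v / G / M_sun

print("%.1e Msun" % virial_mass(fwhm_kms=6000.0, lag_days=20.0))   # ~1e8 Msun for these inputs

The Dibai-type single-epoch variant replaces the measured lag by an effective radius inferred from the luminosity through the radius-luminosity scaling mentioned in point 2 above.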
This accuracy of black hole mass estimates made using the BLR provides strong support for all type-1 AGNs being very similar as far as the structure and kinematics of the BLR go, and for orientation effects being minimal. The accuracy of AGN black hole mass estimates is thus consistent with there being a substantial turbulent BLR velocity component and type-1 AGNs being seen close to pole-on.
What drives BLR motions?
The circular component of motion of BLR clouds is a simple consequence of gravity, but the turbulent vertical component of motion has to be maintained against dissipation losses, and an outward transfer of angular momentum is necessary to get inflow. We now know that the viscosity needed to drive the outward flow of angular momentum in accretion discs, and hence the inward flow of matter, is provided by the magneto-rotational instability (MRI) (Balbus & Hawley, 1991). Over the last decade increases in computing power and the development of more sophisticated programs by several groups have allowed increasingly detailed magneto-hydrodynamic (MHD) simulations of accretion flows (e.g., Hawley & Krolik 2001; Proga 2003; Anninos, Fragile, & Salmonson 2005; Ohsuga et al. 2009; Shafee et al. 2008, and references therein). In these models much of the attention has been focused on the low-density outflows. Because emissivity goes as the square of density, however, the emission is dominated by the high-density material. To my mind what is impressive about every single one of these models is that, despite the different modeling approaches, the velocity fields of the high-density material all match the velocity field inferred for the BLR: the dominant motion is Keplerian, but there is substantial turbulence and a significant inflow. This is very clear when one watches the movies different groups make of their simulations. I believe that these simulations give us a physical basis for what we have deduced from BLR observations: the BLR is the material accreting onto the black hole. It was indeed noted a long time ago that if the BLR is inflowing it can provide the necessary mass flux for powering the AGN (Padovani & Rafanelli, 1988).
Conclusions and unsolved problems
I think we now have a fairly good emerging picture of what the BLR is like and what role it plays in the life of an AGN. Interstellar material approaching the nucleus settles into a flattened distribution, the thick torus. Material loses angular momentum because of MRI turbulence and gradually spirals inwards. When the material of the torus gets within the dust sublimation radius, the dust evaporates and we have the BLR. The turbulent BLR continues to spiral inwards towards the black hole where it is eventually accreted. The degree of ionization increases as the gas gets closer in. The optically-thick material, which will tend to be concentrated towards the mid-plane, produces continuum emission; the more optically-thin material produces the BLR. Not all of the BLR is accreted. Some of it is driven off the surface of the BLR/torus in a high-velocity, low-density wind, as is found in all the MHD simulations and as is observed.
Although I believe we are getting a clear overall picture of the BLR, there is still plenty to work on both observationally and theoretically! For a start, the picture discussed above needs to be thoroughly tested to verify that it works for all objects and not just a few well-observed objects such as NGC 5548. More work needs to be done to see whether a disk-wind model (the leading rival to the model presented here) could also explain everything. We know that there is outflowing gas as well as gas accreting onto the black hole. The question is: how much of this is also contributing to the broad-line profiles (especially to the high-ionization lines)? Ilić et al. (2008), for example, have shown that outflows can match some observed BLR profiles, so determining the relative contribution of an outflow to broad-line profiles from line-profile fitting alone is difficult. I think that reverberation mapping and spectropolarimetry (see, for example, Axon et al. 2008) are going to provide the best answers. Kollatschny (2003) found marginal evidence for some outflow in Mrk 110, and Denney et al. (2009b) have found a clearer signature of an apparent outflow component in velocity-resolved reverberation mapping of NGC 3227. Since the Denney et al. (2009b) results are from a single short observing campaign, I do not think that NGC 3227 presents a major problem yet for the general picture presented here. (The uncertainties in the red-wing/blue-wing lags can be larger than thought. NGC 5548 provides a good illustration of this: the red-wing/blue-wing lag varies from year to year by more than the formal errors (Welsh et al., 2007), but the NGC 5548 BLR is probably not changing direction at the end of every observing season! A strong reason for believing that the NGC 3227 kinematics are not unusual is that, as Denney et al. (2009b) point out, the mass estimate lies on the M•-σ* relationship.) If follow-up observing campaigns confirm the signature of outflow in NGC 3227 then this would be a significant challenge to the model favored here. Nevertheless, the Denney et al. (2009b) result does caution us that AGNs might not all be identical in the relative dominance of inflow and outflow.
Even if we are right about the basic structure of the BLR and torus of AGNs, there are still a lot of interesting and potentially important details in need of further investigation. Although in this review I have been emphasizing the similarities among AGNs and what they imply, there are some significant differences in the BLRs too (see, for example, Marziani et al. 1996). If our basic framework of how an AGN works is correct, then the differences need to be explicable within the framework too. Space here only permits a brief mention of some of these problems, but fortunately many of them are reviewed and discussed elsewhere in these proceedings (see, for example, the reviews by Mike Eracleous and Jack Sulentic).
It has been known for over three decades now that object-to-object differences are correlated with each other, and one of the main drivers of the correlated differences is the Eddington ratio (see Sulentic et al. 2000 and these proceedings). Since we now have reliable AGN black hole masses, we also have reliable Eddington ratios, so there is a lot that can be done in investigating the dependence of BLR properties on accretion rate. I think there is a lot that needs explaining here.
The biggest object-to-object difference in optical spectra is optical Fe II emission (Osterbrock, 1977). Understanding how the very strong optical Fe II emission seen in AGNs is produced has been a longstanding problem (see Baldwin et al. 2004;Joly et al. 2008;Hu et al. 2008;Kuehn et al. 2008;Verner et al. 2009;Dong et al. 2009 for recent discussions). In the BLR model discussed here, optical Fe II emission arises in the outer part of the BLR just inside the torus (and quite likely overlapping with it), but this does not readily explain why optical Fe II is so much stronger in some objects than others.
Another mystery of the correlated object-to-object differences is the strength of the narrow-line region (NLR) emission. This is the other strong object-to-object difference and it is mysteriously strongly anti-correlated with Fe II emission (Osterbrock, 1977; Steiner, 1981; Boroson & Oke, 1984; Gaskell, 1987; Boroson & Green, 1992). A complete model of AGNs needs to explain why the NLR and BLR know about each other.
Although I have argued that the basic properties of AGNs with broad disk-like Balmer line profiles are consistent with the picture presented here, these objects, and especially the variability of their profiles, present some special challenges, as is discussed in Mike Eracleous's review. There is also a lot more to be learned in connection with orientation effects.
In summary, I think that although our overall picture of the BLR and the role it plays in the AGN phenomenon is becoming clearer, many mysteries remain, there is still a lot to learn, and there are probably surprises in store.
I am grateful to Luka Popović, Milan Dimitrijević, and the other members of the scientific organizing committee of the 7th Serbian Conference on Spectral Line Shapes for inviting me to speak on this topic. I would like to express my appreciation to Dragana Ilić and all the members of the local organizing committee for providing a very pleasant and stimulating experience throughout the conference, both culturally and scientifically. I also have to thank my collaborators and former graduate students for all their contributions and discussions over the years, and the anonymous referee for useful comments. This research has been supported in part by US National Science Foundation grant AST 08-03883. | 2009-10-16T05:09:43.000Z | 2009-07-01T00:00:00.000 | {
"year": 2009,
"sha1": "1c976b1bf6d9daf164c5108707f6eef6ee3b5669",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0908.0386",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1c976b1bf6d9daf164c5108707f6eef6ee3b5669",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
46874956 | pes2o/s2orc | v3-fos-license | On a toy network of neurons interacting through their dendrites
Consider a large number $n$ of neurons, each being connected to approximately $N$ other ones, chosen at random. When a neuron spikes, which occurs randomly at some rate depending on its electric potential, its potential is set to a minimum value $v_{min}$, and this initiates, after a small delay, two fronts on the (linear) dendrites of all the neurons to which it is connected. Fronts move at constant speed. When two fronts (on the dendrite of the same neuron) collide, they annihilate. When a front hits the soma of a neuron, its potential is increased by a small value $w_n$. Between jumps, the potentials of the neurons are assumed to drift in $[v_{min},\infty)$, according to some well-posed ODE. We prove the existence and uniqueness of a heuristically derived mean-field limit of the system when $n,N \to \infty$ with $w_n \simeq N^{-1/2}$. We make use of some recent versions of the results of Deuschel and Zeitouni \cite{dz} concerning the size of the longest increasing subsequence of an i.i.d. collection of points in the plane. We also study, in a very particular case, a slightly different model where the neurons spike when their potential reaches some maximum value $v_{max}$, and find an explicit formula for the (heuristic) mean-field limit.
Introduction and motivation
Our goal is to establish the existence and uniqueness of the heuristically derived mean-field limits of two closely related toy models of neurons interacting through their dendrites.
1.1. Description of the particle systems. We have n neurons, each of which has a linear dendrite with length L > 0 endowed with a soma at one of its two extremities. We have some i.i.d. Bernoulli random variables (ξ ij ) i,j∈{1,...,n} with parameter p n ∈ (0, 1), as well as some i.i.d. [0, L]-valued random variables (X ij ) i,j∈{1,...,n} with probability density H on [0, L]. If ξ ij = 1, then the neuron i influences the neuron j, and the link is located, on the dendrite of the j-th neuron, at distance X ij from its soma.
We have a minimum potential v min ∈ R, an excitation parameter w n > 0, a regular drift function F : [v min , ∞) → R such that F (v min ) ≥ 0, a propagation velocity ρ > 0 and a delay θ ≥ 0.
We denote by V i,n t the electric potential of the i-th neuron at time t ≥ 0. We assume that initially, the random variables (V i,n 0 ) i=1,...,n are i.i.d. with law f 0 ∈ P([v min , ∞)). Between jumps (corresponding to spike or excitation events), the membrane potentials of all the neurons satisfy the ODE d/dt V i,n t = F (V i,n t ). Note that all the membrane potentials remain above v min thanks to the condition F (v min ) ≥ 0.
When a neuron spikes (say, the neuron i, at time τ ), its potential is set to v min (i.e. V i,n τ = v min ) and, for all j such that ξ ij = 1, two fronts start, after some delay θ (i.e. at time τ + θ), on the dendrite of the j-th neuron, at distance X ij of the soma. Both fronts move with constant velocity ρ, one going down to the soma (such a front is called positive front), the other one going away from the soma (such a front is called negative front).
On the dendrite of each neuron, we thus have fronts moving with velocity ρ. When a negative front reaches the extremity of the dendrite, it disappears. When a positive front meets a negative front, they both disappear. When finally a positive front hits the soma (say, of the j-th neuron at time σ), the potential of j is increased by w n , i.e. V j,n σ = V j,n σ− + w n and the positive front disappears. Such an occurrence is called an excitation event.
We assume that at time 0, there is no front on any dendrite. This is not very natural, but considerably simplifies the study.
It remains to describe the spiking events, for which we propose two models.
Soft model. We have an increasing regular rate function λ : [v min , ∞) → R + . Each neuron (say the i-th one) spikes, independently of the others, during [t, t + dt], with probability λ(V i,n t )dt.

Hard model. There is a maximum electric potential v max > v min . In such a case, we naturally assume that f 0 is supported in [v min , v max ]. A neuron spikes each time its potential reaches v max . This can happen for two reasons: either due to the drift (because it continuously drives V i,n t to v max at some time τ , i.e. V i,n τ − = v max ), or due to an excitation event (we have V i,n τ − < v max , a positive front hits the soma of the i-th neuron at time τ and V i,n τ = V i,n τ − + w n ≥ v max ).

Observe that the hard model can be seen as the soft model with the choice λ(v) = ∞1 {v≥vmax} . The soft model is thus a way to regularize the spiking events by randomization. If, as we have in mind, λ looks like λ(v) = (max{v − v 0 , 0}/(v 1 − v 0 )) p for some v 1 > v 0 > v min and some large value of p ≥ 1, a neuron will never spike when its potential is in [v min , v 0 ] and, since λ(v) is very small for v < v 1 and very large for v > v 1 , it will spike with high probability each time its potential is close to v 1 and only in such a situation.
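As an illustration of this soft rate (the numerical values of v 0, v 1 and p below are arbitrary choices, meant only to show the qualitative behaviour):

def spike_rate(v, v0=0.5, v1=1.0, p=20):
    # lambda(v) = (max(v - v0, 0) / (v1 - v0)) ** p: zero below v0, tiny well below v1,
    # of order one near v1 and very large above it, mimicking a hard threshold for large p.
    return (max(v - v0, 0.0) / (v1 - v0)) ** p

for v in (0.4, 0.8, 0.95, 1.0, 1.1):
    print(v, spike_rate(v))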
The neurons transmit information using electrical impulses. When the difference of electrical potential across the membrane of the soma of one neuron is high enough, a sequence of action potentials (also called spikes) is produced at the beginning of the axon, at the axon hillock and the potential of the somatic membrane is reset to an equilibrium value. This sequence of action potentials is then transmitted, without alteration (shape or amplitude), to the axon terminals where the excitatory connections (e.g. synapses) with other (target) neurons are located. We ignore inhibitory synapses in this work. It takes some time for the action potential to reach a synapse and to cross it. The action potential propagates in every branch of the axon. When an action potential reaches a synapse, it triggers a local increase of the membrane potential of the dendrite of the target neuron. This electrical activity then propagates along the dendrite in both ways, (see Figure 3 in Gorski et al. [18] for a simulation of this behaviour) i.e. to the soma and to the other dendrite extremity, interacting with the other electrical activity of the dendrite. The dendritic current reaching the soma increases its potential.
Generation of spikes. We need to introduce a little bit of biophysics, see [23]. Consider a small patch of cellular membrane (somatic, dendritic or axonic) which marks the boundary between the extra-cellular space and the intracellular one. This piece of membrane contains different ion channel types which govern the flow of different ion types through them. These ion channels (partly) affect the flow of charges locally, and thus the membrane potential. The ion channels rates of opening and closing depend on the membrane potential V of the small patch under consideration. Hence the time evolution of V is complicated, one needs to introduce a 4-dimensional ODE system, called the Hodgkin-Huxley equations [20], see also Koch [24], involving other quantities related to ion channels. If V is large enough and if there are enough channels, a specific cascade of opening/closing of ion channels occurs and this produces a spike. In the axon, only one sort of spike is possible. For the dendrite, the situation is more complicated and only some types of neurons have dendrites that are able to produce spikes.
Propagation/annihilation of spikes. The above description is local in space and we considered that the patch of membrane under consideration was isolated. To treat a full membrane, for example a dendrite, a nonlinear PDE is generally used, see e.g. Stuart, Spruston and Haüser [35] or Koch [24], to describe the membrane potential V (t, x) at location x at time t ≥ 0 (and some other quantities related to the ion channels), with some source terms at the positions of the synapses. Fronts are particular localized solutions of the form V (t, x) = ψ(x − ρt), see [24]. For tubular geometries, a spike induced in the middle of the membrane will produce two fronts propagating in opposite directions. In the axon, the fronts are produced only at one extremity (the soma) hence yielding only one propagating front.
Two fronts propagating in opposite directions, in a given dendrite, will cancel out when they collide, because after the initiation of a spike, some ion channels deactivate and switch into a refractory state for a small time. Some consequences of this annihilation effect, yet to be confirmed experimentally, were analyzed in Gorski et al. [18].
Instead of solving a nonlinear PDE for the front propagation/annihilation, we consider an abstract model which captures the basic phenomena. This enables us to have some formulas for the number of fronts reaching the soma even when annihilations are considered. Note that the same rationale was used for the axon where we only retained the propagation delay as meaningful. Our last approximation concerns the dynamics of an isolated neuron, at the soma, between the spikes. We replace the 4-dimensional ODE system for V , e.g. the Hodgkin-Huxley equations, by a simpler scalar piecewise deterministic Markov process where the jumps represent the spiking times and the membrane potential evolves as dV/dt = F (V ) between the spikes.
The toy model. We are now in position to explain our toy model. Each action potential of an afferent neuron produces, after a constant delay θ, two fronts in all the dendrites that are connected to the extremities of its axon. In each dendrite, these fronts propagate and interact (by annihilation), and the ones reaching the soma increase its membrane potential by a given amount w n . When the somatic membrane potential is high enough, an action potential is created. Observe that in nature, several action potentials reaching a single synapse are required to produce fronts.
Let us stress one more time that the model we consider is highly schematic. Actually, dendrites are not linear segments with constant length, but have a dense branching structure; dendritic spikes are not the only carriers of information; inhibition (that we completely neglect) plays an important role; the delay needed for the information to cross the axon and the synapse is far from being constant; the spatial structure of interaction is much more complicated than mean-field, etc. However, it seems this is one of the first attempts to understand the effect of active dendrites in a neural network.
1.3. Heuristic scales and relevant quantities. (a) Roughly, each neuron is influenced by N = np n others and we naturally consider the asymptotic N → ∞.
(b) Using a recent version by Calder, Esedoglu and Hero [6] of some results of Deuschel and Zeitouni [15] concerning the length of the longest increasing subsequence in a cloud of i.i.d. points in [0, 1] 2 , we will deduce the following result. Consider a single linear dendrite with length L, as well as a Poisson point process (T i , X i ) i≥1 on [0, ∞) × [0, L], with intensity measure N g(t)dtH(x)dx, H being the repartition density defined in Subsection 1.1 and g being the spiking rate of one typical neuron in the network. For each i ≥ 1, one positive and one negative front start from X i at time T i . Make the fronts move with velocity ρ > 0, apply the annihilation rules described in Subsection 1.1 and call L N (t) the number of excitation events occurring during [0, t], i.e. the number of fronts hitting the soma before t. Under a few assumptions on H and g, N −1/2 L N (t) converges, in probability, to Γ t (g), where Γ t (g) is deterministic and more or less explicit, see Definition 4. Of course, Γ t (g) also depends on H, but H is fixed in the whole paper so we do not indicate explicitly this dependence. (A simulation sketch illustrating this convergence is given at the end of this subsection.)
(c) We want to consider a regime in which each neuron spikes around once per unit of time. This implies that on each dendrite, there are around N fronts starting per unit of time. Due to point (b), even if we are clearly not in a strict Poissonian case, it seems reasonable to think that there will be around √ N excitation events per unit of time (for each neuron). Consequently, each neuron will see its potential increased by w n √ N per unit of time and we naturally consider the asymptotic w n √ N → w ∈ (0, ∞). Smaller values of w n would make negligible the influence of the excitation events, while higher values of w n would lead to explosion (infinite frequency of spikes).
One could be surprised by this normalization N −1/2 (and not N −1 for example) which is the right scaling for the electric current from the dendrite to the soma to be non trivial as the number of synapses goes to infinity.
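The following sketch simulates the front dynamics of point (b) above and estimates L N (t)/√N by Monte Carlo. It is plain Python; the constant rate g ≡ 1, the uniform density H, and the parameter values are illustrative assumptions, and the event-driven bookkeeping below is only one possible way of implementing the annihilation rules, not the construction used in the proofs.

import numpy as np

def soma_hits(points, L, rho, T):
    # Event-driven simulation of annihilating fronts on a dendrite [0, L] with soma at 0.
    # points: (t, x) pairs; at each of them one front moving toward 0 and one moving toward L
    # are created. Fronts move at speed rho; when a downward-moving front meets an upward-moving
    # front they annihilate; fronts leaving [0, L] disappear; a downward front reaching 0 counts
    # as one excitation event. Returns the number of such events occurring before time T.
    points = sorted(points)
    fronts = []          # sorted by position; each item is [position, direction (+1 up, -1 down)]
    t_now, hits, i = 0.0, 0, 0
    while True:
        events = []      # (event time, kind, index)
        if i < len(points):
            events.append((points[i][0], "birth", None))
        if fronts:
            if fronts[0][1] == -1:
                events.append((t_now + fronts[0][0] / rho, "soma", None))
            if fronts[-1][1] == +1:
                events.append((t_now + (L - fronts[-1][0]) / rho, "exit", None))
            for k in range(len(fronts) - 1):
                if fronts[k][1] == +1 and fronts[k + 1][1] == -1:
                    events.append((t_now + (fronts[k + 1][0] - fronts[k][0]) / (2 * rho), "collide", k))
        if not events:
            break
        t_ev, kind, k = min(events, key=lambda e: e[0])
        if t_ev > T:
            break
        for f in fronts:                        # advance every front to the event time
            f[0] += f[1] * rho * (t_ev - t_now)
        t_now = t_ev
        if kind == "birth":
            x = points[i][1]; i += 1
            j = 0
            while j < len(fronts) and fronts[j][0] < x:
                j += 1
            fronts[j:j] = [[x, -1], [x, +1]]    # downward front below, upward front above
        elif kind == "soma":
            hits += 1; fronts.pop(0)
        elif kind == "exit":
            fronts.pop()
        else:
            del fronts[k:k + 2]
    return hits

rng = np.random.default_rng(1)
N, L, rho, T = 400, 1.0, 1.0, 5.0
n_pts = rng.poisson(N * T)                      # Poisson with intensity N g(t)dt H(x)dx, g = 1, H uniform
pts = list(zip(rng.uniform(0, T, n_pts), rng.uniform(0, L, n_pts)))
print(soma_hits(pts, L, rho, T) / np.sqrt(N))   # should stabilize as N grows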
1.4. Goal of the paper. Of course, the networks presented in Subsection 1.1 are interacting particle systems. However, the influence of a given neuron (say, the one labeled 2) on another one (say, the one labeled 1) being small (because the neuron 2 produces only a proportion 1/N 1 of the fronts influencing the neuron 1), we expect that some asymptotic independence should hold true. Such a phenomenon is usually called propagation of chaos. Our aim is to prove that, assuming propagation of chaos, as well as some conditions on the parameters of the models, there is a unique possible reasonable limit process, for each model, in the regime N = np n → ∞ and w n √ N → w ∈ (0, ∞).
The soft model seems both easier and more realistic from a modeling point of view. However, we keep the hard model because we are able to provide, in a very special case, a rather explicit limit, which is moreover in some sense periodic.
1.5. Informal description of the main result for the soft model. Consider one given neuron in the system (say, the one labeled 1), call V 1,n t its potential at time t and denote by J 1,n t (resp. K 1,n t ) its number of spike events (resp. excitation events) before time t. We hope that, by a law of large numbers, for n very large, κ n t := w n K 1,n t ≃ wN −1/2 K 1,n t , which represents the increase of electric potential before time t due to excitation events, should resemble some deterministic quantity κ t . The map t → κ t should be non-decreasing, continuous (because w n → 0, even if this is of course not a rigorous argument) and starting from 0. We thus should have V 1,n t behaving, for n very large, like the one-neuron process in which the cumulative excitation received from the network is replaced by the deterministic function (κ t ) t≥0 (this is made precise just below). Moreover, κ t should also be obtained as the approximate value of w n K 1,n t , where K 1,n t is the number of excitation events before time t, resulting from the influence of N = np n (informally) almost independent neurons, all behaving like the one under study.
We thus formulate the following nonlinear problem. Fix an initial distribution f 0 ∈ P([v min , ∞)) for V 0 . Can one find a deterministic non-decreasing continuous function (κ t ) t≥0 starting from 0 such that, if considering the corresponding one-neuron process (V t ) t≥0 , with furthermore the counting process (J t ) t≥0 jumping at rate λ(V t ) (all this can be properly written using Poisson measures), if denoting by (T k ) k≥1 its jumping times, if considering an i.i.d. family (X i ) i=1,...,N with density H and an i.i.d. family ((T i k ) k≥1 ) i=1,...,N of copies of (T k ) k≥1 , if making start, on a single linear dendrite with length L, one positive front and one negative front from X i (for all i = 1, . . . , N ) at each instant T i k + θ (for all k ≥ 1), if making the fronts move with velocity ρ, if applying the annihilation procedure described in Subsection 1.1 and, if denoting by K N t the resulting number of excitation events occurring during [0, t], one has lim N →∞ wN −1/2 K N t = κ t for all t ≥ 0?
Under a few conditions on f 0 , F , λ and H, we prove the existence of a unique solution (κ t ) t≥0 to the above problem. Furthermore, the process (V t ) t≥0 solves a nonlinear Poisson-driven stochastic differential equation and κ t = Γ t ((E[λ(V s )]) s≥0 ). Our conditions are very general when the delay θ is positive, and rather restrictive, at least from a mathematical point of view, when θ = 0.

1.6. Informal description of the main result for the hard model. Similarly to the soft model, we formulate the following problem. Fix an initial distribution f 0 ∈ P([v min , v max ]) for V 0 . Can one find a deterministic non-decreasing continuous function (κ t ) t≥0 starting from 0 such that, if considering the corresponding one-neuron process (V t ) t≥0 , if denoting by (T k ) k≥1 its instants of spike, if considering an i.i.d. family (X i ) i=1,...,N with density H and an i.i.d. family ((T i k ) k≥1 ) i=1,...,N of copies of (T k ) k≥1 , if making start, on a single linear dendrite with length L, one positive front and one negative front from X i (for all i = 1, . . . , N ) at each instant T i k + θ (for all k ≥ 1), if making the fronts move with velocity ρ, if applying the annihilation procedure described in Subsection 1.1 and, if denoting by K N t the resulting number of excitation events occurring during [0, t], i.e. the number of fronts hitting the soma before t, one has lim N →∞ wN −1/2 K N t = κ t for all t ≥ 0? As already mentioned, we restrict our study of the hard model to a special case for which we end up with an explicit formula. Namely, we assume that the delay θ = 0, that the continuous repartition density H attains its maximum at 0, that the drift F is constant and positive and that the initial distribution f 0 has a regular density (on [v min , v max ] seen as a torus). We prove that there is a unique C 1 -function (κ t ) t≥0 solving the above problem. Furthermore, (κ t ) t≥0 is explicit, see Theorem 11.
The function (κ t ) t≥0 is periodic. Observe that κ t is proportional to the number of excitation events (concerning a given neuron) during [t, t + dt]. This suggests a synchronization phenomenon, or rather some stability of possible synchronization, which is rather natural, since two neurons having initially the same potential spike simultaneously forever in this model. Observe that such a periodic behavior cannot precisely hold true for the particle system (before taking the limit N → ∞) because the dendrites are assumed to be empty at time 0, so that some time is needed before some (periodic) equilibrium is reached. 1.7. Bibliographical comments. Kac [22] introduced the notion of propagation of chaos as a step toward the mathematical derivation of the Boltzmann equation. Some important steps of the general theory were made by McKean [29] and Sznitman [36], see also Méléard [30]. The main idea is to approximate the time evolution of one particle, interacting with a large number of other particles, by the solution to a nonlinear equation. We mean nonlinear in the sense of McKean, i.e. that the law of the process is involved in its dynamics. Here, our limit process (V t ) t≥0 indeed solves a nonlinear stochastic differential equation, at least concerning the soft model, see Theorem 8. This nonlinear SDE is very original: the nonlinearity is given by the functional Γ(g (Vs) s≥0 ) quickly described in Subsection 1.3, arising as a scaling limit of the longest subsequence in an i.i.d. cloud of points of which the distribution depends on a function g (Vs) s≥0 , which depends itself on the law of (V s ) s≥0 .
The problem of computing the length L N of the longest increasing sequence in a random permutation of {1, . . . , N } was introduced by Ulam [37]. Hammersley [19] understood that a clever way to attack the problem is to note that L N is also the length of the longest increasing sequence of a cloud composed of N i.i.d. points uniformly distributed in the square [0, 1] 2 , for the usual partial order in R 2 . He also proved the existence of a constant c such that L N ∼ c √ N as N → ∞. Versik and Kerov [38] and Logan and Shepp [25] showed that c = 2. Simpler proofs and/or stronger results were then found by Bollobás and Winkler [3], Aldous and Diaconis [1], Cator and Groeneboom [8], etc. Let us also mention the recent work of Basdevant, Gerin, Gouéré and Singh [2].
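A minimal Python sketch of Hammersley's observation and of the constant 2 (the permutation size below is an arbitrary choice; this is only an illustration of the classical result, not of the functional Γ used in this paper):

import bisect, random

def lis_length(seq):
    # Length of the longest increasing subsequence via patience sorting, O(n log n).
    piles = []
    for x in seq:
        j = bisect.bisect_left(piles, x)
        if j == len(piles):
            piles.append(x)
        else:
            piles[j] = x
    return len(piles)

random.seed(0)
N = 100_000
perm = random.sample(range(N), N)
print(lis_length(perm) / N ** 0.5)   # close to 2 for large N (Versik-Kerov, Logan-Shepp)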
As already mentioned in Subsection 1.3, we use the results of Calder, Esedoglu and Hero [6], that generalize those of Deuschel and Zeitouni [15] and that concern the limit behavior of the longest ordered increasing sequence of a cloud composed of N i.i.d. points with general smooth distribution g in the square [0, 1] 2 (or in a compact domain). These results strongly rely on the fact that since g is smooth, it is almost constant on small squares. Hence, on any small square, we can more or less apply the results of [38,25]. Of course, this is technically involved, but the main difficulty in all this work was to understand the constant 2 (note that the value of the corresponding constant is still unknown in higher dimension).
Of course, a little work is needed: we cannot apply directly the results of [6], because we are not in presence of an i.i.d. cloud. However, as we will see, the situation is rather favorable.
The mean-field theory in networks of spiking neurons has been studied in the computational neuroscience community, see e.g. Renart, Brunel and Wang [33], Ostojic, Brunel and Hakim [31] and the references therein. A mathematical approach of mean-field effects in neuronal activity has also been developed. For instance, in Pakdaman, Thieullen and Wainrib [32] and Riedler, Thieullen and Wainrib [34], a class of stochastic hybrid systems is rigorously proved to converge to some fluid limit equations. In [4], Bossy, Faugeras, and Talay prove similar results and the propagation of chaos property for networks of Hodgkin-Huxley type neurons with an additive white noise perturbation. The mean-field limits of networks of spiking neurons modeled by Hawkes processes has been intensively studied recently by Chevallier, Cáceres, Doumic and Reynaud-Bouret [10], Chevallier [9], Chevallier, Duarte, Löcherbach and Ost [11] and Ditlevsen and Löcherbach [16]. Besides, in [26,27,28], Luçon and Stannat obtain asymptotic results for networks of interacting neurons in random environment.
Finally, we conclude this short bibliography of mathematical mean-field models in neuroscience by some papers closer to our setting: models of networks of spiking neurons with soft (see De Masi et al. [14] and Fournier and Löcherbach [17]) or hard (see Cáceres, Carrillo and Perthame [5], Carrillo, Perthame, Salort and Smets [7], Delarue, Inglis, Rubenthaler and Tanré [12,13] and Inglis and Talay [21]) bounds on the membrane potential have also been studied. In particular, in [21] the authors introduced a model of propagation of membrane potentials along the dendrites but it is very different from ours. In particular, it does not model the annihilation of fronts along the dendrites.
1.8. Perspectives. One important question remains open: does propagation of chaos hold true? This seems very difficult to prove rigorously. Indeed, the dynamics of the membrane potential at the soma depends also on the state of its dendrite and on their laws. Thus, the state space of the dendrite is not a classical R d . Informally, the knowledge of the state of the dendrite is equivalent to knowing the history of the membrane potential during time interval [t − L/ρ, t]. Such an intricate dependence is present in many models for which one is able to prove propagation of chaos. However, in the present case, one would have to extend the results of Deuschel and Zeitouni [15] or Calder, Esedoglu and Hero [6] to non-independent (although approximately independent) clouds of random points, in order to understand how many excitation events occur for each neuron, resulting from non-independent stimuli creating fronts on its dendrite. This seems extremely delicate, and we found no notion of approximate independence sufficiently strong so that we can extend the results of [15,6] but weak enough so that we can apply it to our particle system.
1.9. Plan of the paper. In the next section, we precisely state our main results. In Section 3, we relate deterministically the number of fronts hitting a given soma to the length of the longest increasing (for some specific order) subsequence of the points (time and space) from which these fronts start. In Section 4, which is very technical, we adapt to our context the result of Calder, Esedoglu and Hero [6]. The proofs of our main results concerning the hard and soft models are handled in Sections 5 and 6. We informally discuss the existence and uniqueness/non-uniqueness of stationary solutions for the limit soft model in Section 7. Finally, we present simulations, in Section 8, showing that the particle systems described in Subsection 1.1 indeed seem to be wellapproached, when n is large, by the corresponding limiting processes.
Acknowledgment. We warmly thank the referees for their fruitful comments. E. Tanré
Main result
Here we expose our notation, assumptions and results in details. The length L > 0, the speed ρ > 0 and the minimum potential v min are fixed.
2.1. The functional A. We first study the number of fronts hitting the soma of a linear dendrite. We recall that a nonnegative measure ν on [0, ∞) × [0, L] is a Radon point measure if it is a countable sum of Dirac masses giving finite mass to every compact set. Note that (s, x) ≼ (s', x') implies that s ≤ s'. The following fact, crucial to our study, is closely linked with Hammersley's lines, see e.g. Cator and Groeneboom [8].
Proposition 2. Consider a Radon point measure ν = i∈I δ Mi , the set S ν = {M i = (t i , x i ) : i ∈ I} consisting of distinct points of [0, ∞) × [0, L]. Consider a linear dendrite, represented by the segment [0, L], with its soma located at 0. For each i ∈ I, make start two fronts from x i at time t i , one positive front going toward the soma and one negative front going away from the soma. Assume that all the fronts move with velocity ρ. When two fronts meet, they disappear. When a front reaches one of the extremities of the dendrite, it disappears.
We assume a non-degeneracy condition on S ν , which implies that no front may start precisely from some (space/time) position where there is already a front. Hence we do not need to prescribe what to do in such a situation.
The number of fronts hitting the soma is given by A(ν) and the number of fronts hitting the soma before time t is given by A t (ν).
This proposition is proved in Section 3. The following observation is obvious by definition (although not completely obvious from the point of view of fronts).
Remark 3. Consider two Radon point measures ν and ν on
2.2. The functional Γ. The role of Γ was explained roughly in Subsection 1.3, see Section 4 for more details. See Deuschel and Zeitouni [15] for quite similar considerations.
It is important, in the above definition, to require H to be continuous. Modifying the value of H at one single point can change the value of Γ t (g). The following observations are immediate.
Concerning (ii), it suffices to note that one maximizes I t (g, β) with the choice β ≡ 0.
2.3. The soft model. We will impose some of the following conditions. (S1): There are p ≥ 1 and C > 0 such that the initial distribution The repartition density H of the connections is continuous on [0, L].
(S2) The initial distribution f 0 is compactly supported, f 0 ((α, ∞)) > 0, F (α) ≥ 0 and λ is locally Lipschitz continuous on [v min , ∞) and positive and non-decreasing on (α, ∞). Proposition 6. Assume (S1). Consider r : [0, ∞) → R + continuous, non-decreasing and such The process (V r t ) t≥0 represents the time evolution of the potential of one neuron, assuming that the excitation resulting from the interaction with all the other neurons during [0, t] produces an increase of potential equal to r t , and J r t stands for its number of spikes during [0, t]. Indeed, between its spike instants, the electric potential V r t evolves as Proposition 7. Assume (S1) and fix θ ≥ 0. Fix a non-decreasing continuous function r : . Then for any t ≥ 0, lim Let us explain this result. If we have N independent neurons of which the electric potentials evolve as (V r t ) t≥0 , of which (J r t ) t≥0 counts the number of spikes, if all these spikes make start some fronts (after a delay θ) on the dendrite of another neuron and that these fronts evolve and annihilate as described in Proposition 2, then the number of fronts hitting the soma of the neuron under consideration between 0 and t equals A t (ν r N ). If each of these excitation events makes increase the potential of the neuron by w N = wN −1/2 (with w > 0), then, at the limit, the electric potential of the neuron will be increased, due to excitation, by wΓ t (h θ r ) during [0, t]. Theorem 8. Assume (S1) and fix w > 0 and θ ≥ 0.
Observe that if the repartition density H attains its maximum at 0, then (3) has a simpler form, In this case (3) writes: where the second term involves the non locally Lipschitz square root. Consider the n-particle system described in Subsection 1.1 (soft model) and denote by (V 1,n t ) t≥0 the time-evolution of the membrane potential of the first neuron and by (J 1,n t ) t≥0 the process counting its spikes. Theorem 8 tells us that, if propagation of chaos holds true, under our assumptions, (V 1,n t ) t≥0 should tend in law (in the regime N = np n → ∞ and w n = wN −1/2 ) to the unique solution (V t ) t≥0 of (3). See Subsection 1.5 for more explanations.
Assumption (S1) seems rather realistic. Our assumption that λ vanishes in a neighborhood of v min actually implies that a neuron cannot spike again immediately after one spike. Indeed, after being set to v min , we observe a refractory period corresponding to the time the potential needs to exceed α. In addition, it allows us to consider some time intervals [a k , a k+1 ], in our proof of Proposition 7, such that the restriction of ν r N to [a k , a k+1 ] × [0, L] is more or less an i.i.d. cloud of random points. This is crucial in order to use the results of Calder, Esedoglu and Hero [6], who deal with i.i.d. clouds of random points. More precisely, the proof of Proposition 7 (as well as that of Proposition 10 below) relies on Lemma 12, in which we show how to apply [6] (or rather its immediate consequence Lemma 13) to a possibly correlated concatenation of i.i.d. clouds of random points.
The growth condition on F is one-sided and, in our opinion, sufficiently general; it is only here to prevent explosion (that is, an infinite number of jumps during a finite time interval) and it should be possible to replace it by a weaker condition, at the price of more complicated proofs. So we believe that when θ > 0, our assumptions are rather reasonable.
On the contrary, when θ = 0, our conditions are restrictive, at least from a mathematical point of view. This comes from two problems when studying the nonlinear SDE (3). First, the term ds, and the square root is rather unpleasant. To solve this problem, we use that f 0 ((α, ∞)) > 0 and F (α) ≥ 0 imply that s → E[λ(V s )] is a priori bounded from below on each compact time interval. Since α is thought to be rather close to v min , we believe these two conditions are not too restrictive in practice. Second, the coefficients of (3) are only locally Lipschitz continuous, which is always a problem for nonlinear SDEs. Here we roughly solve the problem by assuming that f 0 is compactly supported, which propagates with time. Again, we believe this is not too restrictive in practice, since F (v) should rather tend to −∞ as v → ∞ and in such a case, it should not be difficult to show that any invariant distribution for (3) has a compact support. However, one may use the ideas of [17] to remove this compact support assumption, here again, at the price of a much more complicated proof.
2.4. The hard model. This case is generally difficult, but under the following quite restrictive assumptions and when θ = 0, it has the advantage of being explicitly solvable. (H2): The density f_0 satisfies f_0(v_min) = f_0(v_max), the repartition density H attains its maximum at x = 0 and, setting σ = ρH(0)w^2, the function G_0 is Lipschitz continuous. Note that if the density f_0 is Lipschitz continuous and bounded from below by a positive constant, then G_0 is also Lipschitz continuous.
, written in the chronological order. Consider an i.i.d. family (X i ) i≥1 of random variables with density H, independent of the family with Γ t introduced in Definition 4 and with g r defined on [0, ∞) by with a k uniquely defined by Assume that we have N independent neurons, of which the electric potentials evolve as (V r t ) t≥0 and that spike as (J r t ) t≥0 . If all these spikes make start, without delay, some fronts on the dendrite of another neuron and that these fronts evolve and annihilate as described in Proposition 2, then the number of fronts hitting the soma (of the neuron under consideration) equals A t (ν r N ). If each of these excitation events makes increase the potential of the neuron by w N = wN −1/2 (with w > 0), then, at the limit, the electric potential of the neuron will be increased, due to excitation, by wΓ t (g r ) during [0, t].
Theorem 11. Assume (H1)-(H2) and let w > 0. There exists a unique non-decreasing , and that κ is periodic with period a.
Consider the n-particle system described in Subsection 1.1 (hard model), under the conditions (H1)-(H2) and with θ = 0. Denote by (V 1,n t ) t≥0 the time-evolution of the electric potential of the first neuron and by (J 1,n t ) t≥0 the process counting its spikes. Theorem 11 tells us that, if propagation of chaos holds true, (V 1,n t , J 1,n t ) t≥0 should tend in law (in the regime N = np n → ∞ and w n = wN −1/2 ) to (V κ t , J κ t ) t≥0 as defined in Proposition 9 and with the above explicit κ. See Subsection 1.6 for a discussion, in particular concerning the noticeable fact that κ is periodic.
The assumptions that θ = 0, that F (v) = I and that H(0) = max [0,L] H are crucial, at least to get an explicit formula. It might be possible to study the case where F (v) = I − Av for some A > 0 (maybe with the condition I − Av max > 0), but it does not seem so friendly. On the contrary, we assumed for convenience that f 0 (v min ) = f 0 (v max ), which guarantees that κ is of class C 1 . This assumption seems rather reasonable because the potentials directly jump from v max to v min so are in some sense valued in the torus [v min , v max ). However, it may be possible to relax it.
3. Annihilating fronts and longest subsequences
The goal of this section is to prove Proposition 2. We first introduce some notation. For each point M of [0, ∞) × [0, L], we introduce four sets, see Figure 2. Proof of Proposition 2. Let ν = Σ_{i∈I} δ_{M_i} be Radon, with the set S_ν as in the statement. We assume that ν ≠ 0 (because otherwise the result is obvious) and that (1) holds. We recall that A(ν) ∈ N ∪ {∞} and A_t(ν) ∈ N were introduced in Definition 1. We call B(ν) ∈ N ∪ {∞} the total number of fronts hitting the soma and B_t(ν) ∈ N the number of fronts hitting the soma before t.
or, equivalently, N ∈ M + ) the positive front starting from N meets the negative front starting from M if none of these two fronts have been previously annihilated. More precisely, they meet at Step 1. Here we prove that A(ν) = B(ν).
Step 1.1. We introduce G 1 , the set of all minimal (for ≺) elements of S ν . See Figure 3. This set is non empty because ν = 0. It is also bounded (and thus finite since #(G 1 ) = ν(G 1 ) and since ν is Radon): fix M ∈ G 1 and observe that We thus may write G 1 = {P 1 , . . . , P k }, ordered in such a way that P 1 x < P 2 x < · · · < P k x . We now show that all the fronts starting in G 1 annihilate, except the positive one starting from P 1 (it reaches the soma at time P 1 s + P 1 x /ρ) and the negative one starting from P k (it reaches the other extremity of the dendrite). See Figure 3. • We first verify by contradiction that the positive front starting from P 1 hits the soma. If this is not the case, then, due to the above rules (a)-(b), it has been annihilated by some front starting from some Q ∈ S ν ∩ P 1− . This is not possible, because S ν ∩ P 1− = ∅.
Indeed, assume that S ν ∩ P 1− = ∅ and consider a minimal (for , which is not possible because P 1 is minimal. So Q is minimal in S ν , i.e. Q ∈ G 1 , and we furthermore have Q x < P 1 x . This contradicts the definition of P 1 . • Similarly, one verifies that the negative front starting from P k hits the other extremity (x = L) of the dendrite.
• We finally fix i ∈ {1, . . . , k − 1} and show by contradiction that the negative front starting from P i does meet the positive front starting from P i+1 . Assume for example that the negative front starting at P i is annihilated before it meets the positive front starting from P i+1 . Then there is a point Q ∈ S ν ∩ P i+ ∩ (P i+1− ∪ P i+1↓ ). Indeed, Q has to be in P i+ so that the positive front starting from Q kills the negative front starting from P i , and Q has to be in P i+1− ∪ P i+1↓ so that the killing occurs before the negative front starting from P i meets the positive front starting from P i+1 . But S ν ∩ P i+1↓ = ∅, since P i+1 is minimal in S ν . Hence Q ∈ S ν ∩ P i+ ∩ P i+1− , so that S ν ∩ P i+ ∩ P i+1− is not empty.
We thus may consider a minimal (for ≺) element R ∈ S ν ∩ P i+ ∩ P i+1− . But then R is minimal in S ν because else, we could find , which is not possible because P i and P i+1 are minimal. At the end, we conclude that R is minimal in S ν , i.e. R ∈ G 1 , with furthermore P i x < R x < P i+1 x , which contradicts the definition of P i and P i+1 .
Step 1.2. If S ν \ G 1 = ∅, we go directly to the concluding step. Otherwise, we introduce the (finite) set G 2 of all the minimal elements of S \ G 1 . The fronts starting from a point in G 2 cannot be annihilated by those starting from a point in G 1 (because as seen in Step 1.1, all the fronts in G 1 do annihilate together, except one that does hit the soma and one that does hit the other extremity: the fronts starting in G 1 do not interact with those starting in S ν \ G 1 ). And one can show, exactly as in Step 1.1, that all the fronts starting in G 2 annihilate, except one positive front that hits the soma and one negative front that hits the other extremity.
Step 1.3. If S ν \ (G 1 ∪ G 2 ) = ∅, we go directly to the concluding step. Otherwise, we introduce the (finite) set G 3 of all the minimal elements of S \ (G 1 ∪ G 2 ). As previously, the fronts starting from a point in G 3 cannot be annihilated by those starting from a point in G 1 ∪ G 2 . And one can show, exactly as in Step 1.1, that all the fronts starting in G 3 annihilate, except one positive front that hits the soma and one negative front that hits the other extremity. Step Concluding step. If the procedure stops after a finite number of steps, then there exists n ∈ N * such that S ν = ∪ n k=1 G k , where G 1 is the set of all minimal elements of S ν and, for all k = 2, . . . , n, G k is the set of all minimal elements of S ν \ (∪ k−1 i=1 G i ). We have seen that for each k = 1, . . . , n, exactly one front starting from a point in G k hits the soma, so that B(ν) = n. And we also have A(ν) = n. Indeed, choose Q n ∈ G n , there is necessarily Q n−1 ∈ G n−1 such that Q n−1 ≺ Q n , ..., and there is necessarily Q 1 ∈ G 1 such that Q 1 ≺ Q 2 . We end with an increasing sequence Q 1 ≺ · · · ≺ Q n of points of S ν , whence A(ν) ≥ n. We also have A(ν) ≤ n because otherwise, we could find a sequence R 1 ≺ · · · ≺ R n+1 of points of S ν , and S ν \ (∪ n k=1 G k ) would contain at least R n+1 and thus would not be empty.
Step 2. We now fix t ≥ 0. By Step 1 applied to ν| Dt , we know that B(ν| Dt ) = A(ν| Dt ), which equals A t (ν) by definition. To conclude the proof, it thus only remains to check that • A (positive) front hitting the soma does it before time t if and only if it starts from some point M ∈ S ν ∩ D t (because such a front hits the soma at time M s + M x /ρ, which is smaller than t if and only if M (t, 0)).
• A positive front starting from some M ∈ S ν ∩D t always remains in • A front starting from some M ∈ S ν \ D t always remains outside D t (for e.g. the positive front starting from M ,
4. Number of fronts in the piecewise i.i.d. case
The goal of this section is to check the following result, relying on [6].
-valued random variables with density g k . We assume that for each k ≥ 0, the family (X i ) i≥1 is independent of the family (T i k ) i≥1 (but the families (X i , T i k ) i≥1 and (X i , T i ) i≥1 , with k = , are allowed to be correlated in any possible way). For each N ≥ 1, we set This result will be applied, more or less directly, to prove our two main results, via Propositions 7 and 10. In both cases, we will indeed be able to partition time in a family of intervals [b k , b k+1 ) during which the stimuli arrive in an i.i.d. manner on the dendrite under consideration, even if the whole family of those stimuli is not independent. In the case of the soft model, this uses crucially the fact that Assumption (S1) induces a refractory period: a neuron spiking at time t cannot spike again during (t, t + δ] for some deterministic δ > 0 (depending on t ≥ 0 and on many other parameters).
This section is the most technical of the paper. We have to be very careful, because as already mentioned, Γ t (g) is rather sensitive. For example, modifying the density H at one point does of course not affect the empirical measure ν N , while it may drastically modify the value of Γ t (g) (recall that Γ t (g) depends on H, see Definition 4).
In the whole section, the continuous density H on [0, L] is fixed. We first adapt the result of [6]. Proof. We first recall a 2d version of [6, Theorem 1.2], which concerns the length of the longest increasing subsequence (for the usual partial order of R 2 ) one can find in a cloud of N i.i.d. points with positive continuous density on a regular domain O ⊂ R 2 . In a second step, we easily deduce the behavior of the length of the longest increasing subsequence (for the same random variables and the same order) included in a subset G of O. It only remains to use a diffeomorphism that maps the usual order on R 2 onto our order ≺: we study how the density of the random variables is modified in Step 3, and how this modifies the limit functional in Step 4.
For y = (y_1, y_2) and y' = (y'_1, y'_2) in R^2, we say that y ≤ y' if y_1 ≤ y'_1 and y_2 ≤ y'_2. We say that y < y' if y ≤ y' and y ≠ y'.
Step 2. Consider some bounded open G ⊂ R 2 with Lipschitz boundary. Adopt the same notation and conditions as in Step 1. For each N ≥ 1, set Indeed, if c G = G φ(y)dy = 0, both quantities equal 0 (because φ ≡ 0 on G ∩ O by continuity, and φ1 G = 0 on O c by definition). Else, φ G = c −1 G φ1 G satisfies the assumptions of Step 1. For each N ≥ 1, we set S N = {i ∈ {1, . . . , N } : Y i ∈ G}. Since the law of the sub-sample (Y i ) i∈S N knowing |S N | is that of a family of |S N | i.i.d. random variables with density φ G , we have lim N |S N | −1/2 L N (G) = sup γ∈A 2 1 0 φ G (γ(r))γ 1 (r)γ 2 (r)dr a.s. But lim N N −1 |S N | = c G a.s., whence the conclusion.
Step 3. We now introduce the We next observe that for any (s, x), (s , x ) ∈ R 2 , we have (s, x) ≺ (s , x ) if and only if ψ(s, x) ψ(s , x ). Hence, by Definition 1, we have A(π N | B ) = L N (ψ(B)) (with the notation of Step 2 and the choice Y i = ψ(Z i )). Clearly, ψ(B) is a bounded open domain of R 2 . By Step 2, we thus have lim N N −1/2 A(π N | B ) = sup γ∈A K ψ(B) (γ) a.s.
One easily checks that γ ∈ A if and only if α = ψ −1 • γ ∈ C and that K ψ(B) (γ) = L B (α), where C is the set of all C 1 -maps α : [0, 1] → R 2 such that |α 2 (r)| ≤ ρα 1 (r) for all r ∈ [0, 1] and But sup α∈C L B (α) = sup α∈C L B (α), whereC consists of the elements of C such that |α 2 (r)| < ρα 1 (r) on [0, 1]. Indeed, it suffices to approximate α ∈ C by α n (r) = (α 1 (r) + r/n, α 2 (r)), that belongs toC, and to observe that L B (α) ≤ lim inf n L B (α n ) by the Fatou Lemma and since R(α(r))1 {α(r)∈B} ≤ lim inf n R(α n (r))1 {αn(r)∈B} for each r ∈ We can now give the Proof of Lemma 12. Let us explain the main ideas of the proof. The main tool consists in applying Lemma 13 in any reasonable subset of [b k , b k+1 ] × [0, L], for any k ≥ 0, which we do in Step 1 for a sufficiently large family of such subsets. In Step 4, we prove that ΛD t (g) = lim δ↓0 ΛD t+δ (g) = Γ t (g), which is very natural but tedious. The lowerbound lim inf N N −1/2 A t (ν N ) ≥ Γ t (g) is proved in Step 2: we consider some β ∈ B such that JD t (g, β) ≥ ΛD t (g) − ε, we introduce a tube B β,δ around the path {(s, β(s)) : s ∈ [0, t]} and observe that JD t (g, β) = JD t∩Bβ,δ (g, β). Using Step 1, we deduce that in each B β,δ ∩ ([b k , b k+1 ] × [0, L]), we can find an increasing subsequence of points with the correct length, that is, more or less, . We then concatenate these subsequences (with a small loss to be sure the concatenation is fully increasing) and find that, very roughly, The upperbound is more complicated be uses similar ideas: if one could find an increasing subsequence with length significantly greater than , this would mean that somewhere, in some [b k , b k+1 ] × [0, L], there would be an increasing subsequence with length significantly greater than established in Lemma 13.
Notation. Changing the value of g_k on (b_{k+1}, ∞) clearly does not modify the definitions of g and of ν_N, since T^i_k is not taken into account if it is greater than b_{k+1}. Hence we may (and will) assume that for each k ≥ 0, g_k is a density, continuous on [b_k, b_{k+1} + 1] and vanishing outside [b_k, b_{k+1} + 1].
We fix t > 0 and call k 0 the integer such that t ∈ [b k0 , b k0+1 ). We assume that k 0 ≥ 1, the situation being much easier when k 0 = 0.
We first claim that (4) 0 ≤ k < ≤ k 0 and (s, x) ∈ B k β,a,δ and (s , x ) ∈ B β,a,δ imply that (s, x) ≺ (s , x ). It suffices to check that for any (s, x), (s , x ) ∈ B β,δ with s ≥ s + aδ, we have (s, x) ≺ (s , x ). This follows from the facts that |x − β(s)| < δ, |x − β(s )| < δ and |β(s Hence a.s., A t (ν N ) = A(ν N | Dt ) ≥ k0 k=0 A(ν N | B k β,a,δ ∩Dt ). Indeed, it suffices to recall Definition 1, to call S N the set of points in the support of ν N intersected withD t , and to observe that thanks to (4), the concatenation of the longest increasing (for ≺) subsequence of S N ∩ B 0 β,a,δ with the longest increasing subsequence of S N ∩ B 1 β,a,δ ... with the longest increasing subsequence of S N ∩ B k0 β,a,δ indeed produces an increasing subsequence of S N . Due to Step 1, we conclude that a.s., for all δ > 0, .
Step 3.3. Gathering Steps 3.1. and 3.2, we deduce that for all δ ∈ (0, 1), a.s., The first equality is obvious, because all our random variables have densities and thus a.s. never fall in D t \D t . The second equality uses that the set B δ t is finite.
We thus may write, using that (s, α k (s)) ∈ B k β,0,δ ∩D t for all k = 0, . . . k 0 and all s ∈ We then set which tends to 0 as δ → 0 because H is continuous on [0, L]. Recalling (5), There is a little work to prove the last inequality because γ / ∈ B. Since γ is ρ-Lipschitz continuous, it is easily approximated by a family γ of elements of B (with I γ = [0, t]) in such a way that γ tends to γ uniformly and γ tends to γ a.e. Using that H is continuous, thatD t+cδ is open and the Fatou lemma, we conclude that JD t+cδ (g, γ) ≤ lim inf JD t+cδ (g, γ ) ≤ ΛD t+cδ (g).
5. The hard model
We first give the Proof of Proposition 9. Let f 0 ∈ P([v min , v max )) and r : [0, ∞) → R + , continuous, non-decreasing and such that r 0 = 0. Consider V 0 ∼ f 0 . The process (V r t , J r t ) t≥0 can be built as follows (and is unique because there is no choice in the construction): set Z 0 t = V 0 + It + r t (for all t ≥ 0) and S 0 = inf{t ≥ 0 : Z 0 t = v max }, which is positive and finite, put V r t = Z 0 t and J r t = 0 for t ∈ [0, S 0 ); set Z 1 t = v min + I(t − S 0 ) + (r t − r S0 ) (for all t ≥ S 0 ) and S 1 = inf{t ≥ S 0 : Z 1 t = v max }, put V r t = Z 1 t and J r t = 1 for t ∈ [S 0 , S 1 ); set Z 2 t = v min + I(t − S 1 ) + (r t − r S1 ) (for all t ≥ S 1 ) and for all t ≥ 0, where x and {x} stand for the integer and fractional part of x ∈ [0, ∞).
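The construction in this proof is straightforward to implement. The following sketch (ours, with a hypothetical excitation function r) builds (V^r_t, J^r_t) on a time grid, using the fact that between spikes the potential simply accumulates I dt plus the increments of r and is reset to v_min upon reaching v_max.

```python
import numpy as np

def hard_model_path(V0, I, r, t_grid, v_min=0.0, v_max=1.2):
    """Construction of (V^r_t, J^r_t) from the proof of Proposition 9, on a grid.

    Between spikes the potential accumulates I*dt plus the increments of the
    continuous non-decreasing function r (with r(0) = 0); whenever it reaches
    v_max it is reset to v_min and the spike counter J is incremented.  The
    grid should be fine enough that at most one spike occurs per step."""
    V = np.empty(len(t_grid))
    J = np.zeros(len(t_grid), dtype=int)
    v, j = V0, 0
    V[0] = V0
    for n in range(1, len(t_grid)):
        v += I * (t_grid[n] - t_grid[n - 1]) + (r(t_grid[n]) - r(t_grid[n - 1]))
        if v >= v_max:                  # spike: reset, keeping the (exact) overshoot
            v = v_min + (v - v_max)
            j += 1
        V[n], J[n] = v, j
    return V, J

# hypothetical excitation r_t = 0.2 t; I = 0.5 and v_max = 1.2 as in Subsection 8.5
t = np.linspace(0.0, 5.0, 5001)
V, J = hard_model_path(V0=0.3, I=0.5, r=lambda s: 0.2 * s, t_grid=t)
```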
We next handle the
Proof of Proposition 10. We recall that a non-decreasing C 1 -function r : [0, ∞) → [0, ∞) with r 0 = 0 is fixed, as well as the density f 0 on [v min , v max ] of V 0 , and that V r Step 1. We first observe that for all k ≥ 0, V r a k = V 0 . This is immediate from (6), since Step 2. We denote by 0 ≤ S 0 < S 1 < S 2 < . . . the instants of jump of (J r t ) t≥0 (so that S k is the (k + 1)-th instant of jump). Here we prove by induction that for all k ≥ 1, S k a.s. belongs to [a k , a k+1 ] and that its law has the density To this end, we introduce the function m(t) = It + r t (which increases from [0, ∞) into itself), and its inverse function a.s. (recall that a 0 = 0) and a simple computation shows that its density is given by h 0 (s) = f 0 (v max − m(s))m (s)1 {s∈[a0,a1]} as desired.
We next fix k ≥ 1 and assume that S k−1 ∈ [a k−1 , a k ] a.s. Then we write , and a computation shows that the density of S k is given by Step 3. For each k ≥ 0, h k is continuous on [a k , a k+1 ], since r is of class C 1 by assumption, since k(v max − v min ) + v max − Is − r s takes values, during [a k , a k+1 ], in [v min , v max ] and since f 0 is continuous on [v min , v max ] by (H1).
Step 4. We can apply Lemma 12, of which all the assumptions are satisfied, with b k = a k . Indeed, recalling the statement, ν r N = N i=1 k≥1 δ (T i k ,Xi) , which can be written as ]. Since the density of S i k is nothing but h k by Step 2 and since h k is continuous on [b k , b k+1 ] by Step 3, we conclude that for all t ≥ 0, lim We finally give the Proof of Theorem 11. We fix w > 0. By (H2) and Remark 5, Γ t (g) = 2ρH(0) t 0 g(s)ds. We say that κ : [0, ∞) → [0, ∞) is a solution if it is of class C 1 , non-decreasing, if κ 0 = 0 and if κ t = w 2ρH(0) t 0 g κ (s)ds for all t ≥ 0, g κ being defined in Proposition 10. To be as precise as possible, we indicate in superscript that (a k ) k≥0 depends on κ. For all k ≥ 0, a κ k is thus defined by Ia κ k + κ a κ k = k(v max − v min ). We always have a κ 0 = 0. We recall that σ = ρH(0)w 2 , Step 1. For any solution κ, it holds that a κ 1 = a and κ t = v max − It − ϕ −1 0 (t) on [0, a].
Indeed, we have κ t = w 2ρH(0)g κ (t) = 2σf 0 (v max − It − κ t )(I + κ t ) on [0, a κ 1 ], from which κ t = G 0 (v max −It−κ t ). Thanks to (H2), G 0 is Lipschitz continuous, so that this ODE has a unique solution such that κ 0 = 0, given by Step 2. For any solution κ, a κ 2 = 2a and Since G 0 is Lipschitz continuous by (H2), this ODE has a unique solution such that κ a = v max − Ia − ϕ −1 0 (a) = v max − v min − Ia (we require that κ is continuous and κ a− has been determined in Step 1), which is given by Step 3. Iterating the procedure, we conclude that for any solution, we have a κ k = ka for all k ≥ 0 Thus uniqueness is checked, and we only have to verify that this function is indeed a solution. It is continuous by construction, it is of course C 1 and non-decreasing on each interval (ka, (k + 1)a), because , and the two values coincide because f 0 (v max ) = f 0 (v min ) by (H2).
Finally, we have κ t = w 2ρH(0) t 0 g κ (s)ds for all t ≥ 0, since κ is continuous, starts from 0, since κ t = w 2ρH(0) g κ (t) for all t ∈ R + \ {ka : k ≥ 1} by construction and since both κ and g κ are continuous. Recalling the definition of g κ , this last assertion easily follows from the facts that κ ∈ C 1 ([0, ∞)), that f 0 is continuous on [v min , v max ], that f 0 (v max ) = f 0 (v min ) and that for all k ≥ 1, a k = ka and κ ka+ = κ ka− .
6. The soft model
We start with the Proof of Proposition 6. The existence of a pathwise unique solution (V r t ) t≥0 to (2), with values in [v min , ∞), is classical and relies on the following main arguments (here the continuity of λ is not required, one could assume only that λ : [v min , ∞) → R + is measurable and locally bounded).
• Extend F to a locally Lipschitz continuous function on R and λ to a locally bounded function on R. There is obviously local existence of a pathwise unique solution to (2). The only problem is to check non-explosion (i.e. to check that a.s., sup [0,T ] |V r t | < ∞ for all T > 0). • Any solution remains in [v min , ∞), because (a) r t is non-decreasing, (b) F is locally Lipschitz continuous and F (v min ) ≥ 0 and (c) each jump sends the solution to v min .
• Since F (v) ≤ C(1 + (v − v min )) and since all the jumps are negative, any solution ( • The two previous points prevent us from explosion, so that the pathwise unique solution is global. Furthermore, We next give the Proof of Proposition 7. We recall that a non-decreasing continuous function r : [0, ∞) → 0 with r 0 = 0 is fixed, as well as the initial distribution f 0 on [v min , ∞) of V 0 , that (V r t ) t≥0 is the unique solution to (2) and that J r Step 1. For t 0 ≥ 0 and v 0 ≥ v min we define (z t0,v0 (t)) t≥t0 as the unique solution to z t0,v0 (t) = v 0 + t t0 F (z t0,v0 (s))ds + r t − r t0 . It is valued in [v min , ∞) because F (v min ) ≥ 0 (and r is nondecreasing). For all t 0 < t 1 ≤ t, we have z t1,vmin (t) ≤ z t0,vmin (t). This follows from the comparison theorem, because z t1,vmin (t 1 ) = v min ≤ z t0,vmin (t 1 ) and since (z t1,vmin (t)) t≥t1 and (z t0,vmin (t)) t≥t1 solve the same Volterra equation (with different initial conditions). Also, since Step 2. By (S1), we have λ(v) = 0 on [v min , α], with α > v min . We claim that there is an increasing sequence (a k ) k≥0 such that lim k a k = ∞ and a.s., for all k ≥ 0, J r a k+1 − J r a k ∈ {0, 1}. We introduce the increasing sequence (a k ) k≥0 defined recursively by a 0 = 0 and, for k ≥ 0, a k+1 = inf{t ≥ a k : z a k ,vmin (t) ≥ α} ∧ (a k + 1), with the convention that inf ∅ = ∞.
One easily concludes that lim k a k = ∞: if a ∞ = lim k a k < ∞, then there is k 0 such that a k+1 − a k < η (and thus r a k+1 − r a k ≥ ε) for all k ≥ k 0 , whence r a∞ = k≥0 (r a k+1 − r a k ) = ∞. This is not possible since r is R + -valued.
Step 3. For k ≥ 0, let S k = inf{t ≥ a k : ∆V r t = 0} = inf{t ≥ a k : ∆J r t = 1}. The law of S k has a continuous density g k on [a k , ∞).
(t) during [a k , S k ) and since V r jumps, at time t, at rate λ(V r t− ), we have (t))] < ∞ for all T > a k : by (S1) and Step 1, , which has a finite expectation because E[(V r a k − v min ) p ] < ∞ by Proposition 6. We easily deduce that indeed, S k has the continuous density g k (t) = E[λ(z a k ,V r a k (t)) exp(− t a k λ(z a k ,V r a k (s))ds)] on [a k , ∞).
On the one hand, we have ds. On the other hand, since V r has at most one jump in each time interval [a k , a k+1 ), one easily checks that J r t 0 h r (s)ds for all t ≥ 0, which completes the step.
Step 5. Observe that for T 1 < T 2 < . . . the successive instants of jump of (V r t ) t≥0 , we have because for each k ≥ 1, S k is the first instant of jump of (V r t ) t≥0 after a k and since (V r t ) t≥0 has at most one jump during [a k , a k+1 ).
Hence, coming back to the notation of the statement, with b k = a k + θ. We thus can directly apply Lemma 12 to conclude that indeed, for any Before concluding, we need a few preliminaries on the functional Γ.
This equation has a pathwise unique solution, see Proposition 6, which is furthermore [v min , ∞)valued and we have E[sup [0,θ] for all s ∈ [0, θ], and this quantity is well-defined and bounded, since This equation has a pathwise unique solution, see Proposition 6, which is furthermore [v min , ∞)valued and we have E[sup [θ,2θ] for all s ∈ [θ, 2θ] (and this quantity is well-defined and bounded). First, for any solution (V t ) t≥0 to (3) )ds by (S1). Since V 0 is bounded by (S2), the conclusion follows from the Gronwall Lemma.
Next, we prove that for any solution To this end, we consider M such that a.s., We used that F (v) ≤ C(1 + (v − v min )) and Lemma 14. But, the value of C (not depending on K) being allowed to vary, . For the last inequality, it suffices to note that there is a constant A > 0 (still denoted by C) All in all, we have checked that for all K ≥ 1, all t ∈ [0, T ], In particular, < ∞ by the Gronwall lemma. But then, we use (7) again to write ). Using that V 0 is bounded and the Gronwall lemma, we deduce that there is a deterministic constant M such that a.s., sup K≥1 sup [0,T ] V K t ≤ M as desired. Finally, we conclude the existence proof: for any K > M , we a.s. have λ K (
7. On stationary solutions for the limit soft model
The goal of this section is to show, with the help of some numerical computations, that, depending on the parameters, there may generically be 1 or 3 stationary solutions for the limit soft model (and sometimes 2 in some critical cases). In the whole section, we assume that F (v) = I − v for some I > 0. We also assume for simplicity that θ = 0 (no delay), that v min = 0 and that H(0) = max [0,1] H, so that the nonlinear SDE (3) rewrites where γ = 2ρH(0)w 2 > 0. Finally, although such an explicit form is necessary only at the end of the section, we assume that λ(v) = (v − α) p + for some α > 0 and some p ∈ N * . Assumptions (S1) and (S2) are satisfied (for a large class of initial conditions) if I ≥ α, but we may also study stationary solutions when I ∈ (0, α).
Definition 15. We say that g ∈ P([0, ∞)) is an invariant distribution for (8) if, setting m = ∞ 0 λ(v)g(dv) and a = I + √ γm, the solution (V a t ) t≥0 to starting from some g-distributed V 0 is such that L(V a t ) = g for all t ≥ 0.
The conditions are slightly different from those of [17,Proposition 21] (mainly because α = 0 there), but the extension is straightforward.
Proof. Point (i) follows from Definition 15 and Proposition 16. Concerning point (ii), let us first rewrite, using the substitutions v = au and x = ay, It is then easy to prove that a → K a is continuous (and decreasing) on (α, ∞) and that lim a↓α K a = 1 0 (1 − u) −1 du = ∞, so that a → K −1/2 a (and thus ϕ γ ) is continuous on [0, ∞). We obviously have ϕ γ (0) = 0, while lim ∞ ϕ γ = ∞ follows from the fact that K a ≥ e −λ(1) a −1 for all a ≥ 2. Indeed, since λ is non-decreasing and since 1/a 0 (1 − y) −1 dy ≤ 1/2 0 (1 − y) −1 dy = log 2 ≤ 1, we find Concerning the uniqueness/non-uniqueness of this invariant distribution, the theoretical computations seem quite involved and we did not succeed. We thus decided to compute numerically a → ϕ γ (a) in a few situations.
Let us first compute a little, recalling that λ(v) = (v − α)^p_+ with p ∈ N*. Let us define g_p(x) = x + x^2/2 + · · · + x^p/p and observe that ∫_0^z (1 − x)^{−1} x^p dx = −log(1 − z) − g_p(z) for all z ∈ [0, 1). Separating the cases u ≤ α/a and u > α/a and using, in the latter case, the substitution x = (ay − α)/(a − α), one verifies an explicit formula valid for all u ∈ (0, 1). Then, a few computations (using the substitution z = (au − α)/(a − α)) show that, for all a > α, K_a is given by (10). Naive methods to compute K_a numerically do not work well, because when a is close to α (more precisely, when a − α > 0 is very small), one has to approximate quantities whose computed values are very far from the true one, which is ∞. One possibility is to use the substitution z = 1 − e^{−r/(a−α)^p}, which gives (11). But this expression has other drawbacks. The numerical computations below use a Monte-Carlo method based on (11) (with exponential random variables with parameter 1) when a ∈ (α, α + 1) and based on (10) (with uniform random variables on [0, 1]) when a ≥ α + 1.
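The following sketch illustrates the idea of this exponential change of variables on a toy integrand with the same type of singularity; the actual integrands (10) and (11) for K_a are not reproduced above, so they are not implemented here. The integrand is written as a function of u = 1 − z to avoid cancellation near z = 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# We write the integrand of int_0^1 phi(z) dz as a function psi(u) of u = 1 - z,
# because the interesting integrands blow up near z = 1.

def mc_uniform(psi, n=10**6):
    """Naive Monte Carlo: u = 1 - z is uniform on (0, 1) when z is."""
    return psi(rng.uniform(0.0, 1.0, n)).mean()

def mc_exponential(psi, a, alpha, p, n=10**6):
    """Monte Carlo after the substitution z = 1 - exp(-r/(a-alpha)^p) of the text,
    with R ~ Exp(1); the importance weight c*exp(-(c-1)R), where c = (a-alpha)^(-p),
    is bounded when a lies in (alpha, alpha + 1)."""
    c = (a - alpha) ** (-p)
    r = rng.exponential(1.0, n)
    return np.mean(psi(np.exp(-c * r)) * c * np.exp(-(c - 1.0) * r))

# toy integrand with the same kind of singularity as K_a for a close to alpha:
# psi(u) = c0 * u^(c0 - 1) integrates to 1 over u in (0, 1)
a, alpha, p = 1.0, 0.5, 4
c0 = (a - alpha) ** p
psi = lambda u: c0 * u ** (c0 - 1.0)
print(mc_uniform(psi))                    # typically far below 1: the mass near u = 0 is missed
print(mc_exponential(psi, a, alpha, p))   # ~ 1.0
```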
Let us comment on Figure 4. Recall that for γ > 0 and I > 0, each stationary solution to (8) corresponds to one solution a to ϕ γ (a) = 1.
8. Simulations
In all the simulations below, we have chosen the following values: the minimum potential is v min = 0, the length of the dendrites is L = 1, the repartition density is H(x) = 2(1 − x) on [0, 1], the front velocity is ρ = 1 and the excitation parameter is w = 1. Concerning the particle systems presented in Subsection 1.1, we consider a fully mean-field interaction, i.e. p n = 1 and N = n.
The code we use to simulate the soft particle system presented in Subsection 1.1 relies on a rejection method. The only difficulty concerns the treatment of the dendrites, which we need to update incrementally with new fronts. This is based on the recent algorithm of Yakupov and Buzdalov [39].
8.1. An isolated dendrite with i.i.d. impulses. We will observe in the next subsections a small temporal shift between the particle system and its mean-field limit. To explain this phenomenon, we consider a single dendrite with length 1, on which two fronts start from each X_i at time T_i (for i = 1, . . . , n), where the family (T_i, X_i)_{i≥1} is i.i.d. with density 1_{{t∈[0,1]}} dt H(x) dx. The situation is thus very simple and, as seen in Proposition 2, A_t(Σ_{i=1}^n δ_{(T_i,X_i)}) represents the number of fronts hitting the soma of the dendrite before time t. By Lemma 12, Y^n_t = n^{−1/2} A_t(Σ_{i=1}^n δ_{(T_i,X_i)}) goes to y_t = Γ_t((1_{{s∈[0,1]}})_{s≥0}) as n → ∞. By Remark 5, we have y_t = √(2ρH(0)) ∫_0^t √(1_{{s∈[0,1]}}) ds = 2 min(t, 1).
We want to show that there is a systematic bias. So, we fix K = 10000, we simulate K i.i.d. copies (Y i,n t ) t∈[0,2] of the process (Y n t ) t∈[0,2] , for different values of n, namely n = 10000, n = 40000 and n = 80000. On Figure 5, we plot, as a function of time t ∈ [0, 2], the average values We observe a systematic negative bias, which remains important for large values of n. For example at time 1 we have a bias around −0.14 (i.e. 7%) when n = 10000 and −0.08 (i.e. 4%) when n = 80000.
We see that a few late fronts arrive after time 1 (while the limiting value stops increasing at time 1), and this slightly decreases the bias.
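This experiment is easy to reproduce on a smaller scale. The sketch below is ours; it computes A_t as the length of the longest chain of stimuli for the cone order (s, x) ≺ (s', x') iff ρ(s' − s) > |x' − x|, which is our reading of the order defined earlier in the paper, and it uses that, by Proposition 2 and Step 2 of its proof, only the points of D_t = {(s, x) : s + x/ρ ≤ t} contribute to A_t.

```python
import numpy as np

rng = np.random.default_rng(2)

def longest_chain(pts, rho=1.0):
    """Length of the longest increasing subsequence for the (assumed) cone order;
    by Proposition 2 this equals the number of fronts hitting the soma."""
    pts = pts[np.argsort(pts[:, 0])]
    s, x = pts[:, 0], pts[:, 1]
    best = np.ones(len(pts))
    for i in range(1, len(pts)):
        ok = rho * (s[i] - s[:i]) > np.abs(x[i] - x[:i])
        if ok.any():
            best[i] = 1.0 + best[:i][ok].max()
    return best.max() if len(pts) else 0.0

def Y_n(n, t=1.0, rho=1.0):
    """n^{-1/2} A_t for n i.i.d. stimuli with T ~ U(0,1) and X of density 2(1-x)."""
    T = rng.uniform(0.0, 1.0, n)
    X = 1.0 - np.sqrt(rng.uniform(0.0, 1.0, n))      # density H(x) = 2(1-x) on [0,1]
    pts = np.column_stack([T, X])
    pts = pts[T + X / rho <= t]                      # only points of D_t can reach the soma by t
    return longest_chain(pts, rho) / np.sqrt(n)

n, K = 4000, 100          # much smaller than in the text (n up to 80000, K = 10000), for speed
bias = np.mean([Y_n(n) for _ in range(K)]) - 2.0     # the limit value is y_1 = 2
print(bias)               # negative, illustrating the systematic bias discussed above
```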
8.2. The soft model without delay. Here we consider the soft model with the following parameters: the delay is θ = 0, the rate function is λ(v) = max(v − 0.2, 0) 8 , the drift function is F (v) = 1 − 0.1v and the initial distribution is f 0 (v) = 1 {v∈[0,1]} . On Figure 6.a, we plot on the first picture the maps t → n −1 n i=1 λ(V i,n t ), for the particle system (soft model) described in Subsection 1.1 with n = 40000 particles, as well as t → E[λ(V t )], for (V t ) t≥0 the unique solution to the nonlinear SDE (3). We observe that the two curves are very similar, but there is a small temporal shift. This is related to what we explained in Subsection 8.1. The second picture represents (g(t, v)) t≥0,v≥0 , where g(t, ·) is the density of the law of V t . The third picture represents (g n (t, v)) t≥0,v≥0 , still with n = 40000, where g n (t, ·) is a smooth version of the empirical measure n −1 n i=1 δ V i,n t . Here again, the second and third pictures seem rather close, up to a small temporal shift. On Figure 6.b, the first picture represents v → g(t, v) (with t = 0.5) and v → g n (t, v) (with t = 0.526). So, we took into account the temporal shift to make the histogram fit the continuous curve as well as possible. The second picture is similar, with t = 1 and t = 1.046. Finally, Figure 6.c contains a plot of t → E[λ(V t )] and of t → n −1 n i=1 λ(V i,n t ) for different values of n. We see that the temporal shift decreases as n increases, but the convergence seems to be rather slow.
Let us mention that g(t, v) is computed here by solving numerically the PDE associated with the nonlinear SDE (3), using an Euler scheme relying on finite differences in t and in v, with a regular grid. There is a source term at v_min = 0, involving the integral ∫_0^∞ λ(v)g(t, v)dv, that is incorporated in the spatial finite difference at the extremity v = 0 of the space grid. We take absolute values and normalize at each time step to ensure the positivity of the solution and that its total mass equals 1. All the figures involving this scheme were compared to a simple interacting particle system (see the next subsection) and we found very similar results.
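As an illustration, here is a minimal finite-difference sketch in the case θ = 0 and with the parameters of this subsection. The precise PDE is not written out in the text; the transport, killing and reinjection form used below (mass removed at rate λ(v) and reinjected at v = 0, drift F(v) + (γ E[λ(V_t)])^{1/2} with γ = 2ρH(0)w^2 = 4) is our reading of the description above, and the grid sizes are illustrative.

```python
import numpy as np

def solve_density_pde(T=2.0, v_max_grid=3.0, nv=600, dt=2e-4, gamma=4.0):
    """Upwind Euler scheme for the density g(t, v) of the nonlinear SDE (3), theta = 0:
    transport with velocity F(v) + sqrt(gamma*m(t)), killing at rate lambda(v), and
    reinjection of the killed mass at v = 0, where m(t) = int lambda(v) g(t, v) dv.
    Absolute values and renormalization are applied at each step, as in the text."""
    lam_f = lambda v: np.maximum(v - 0.2, 0.0) ** 8
    F = lambda v: 1.0 - 0.1 * v
    v = np.linspace(0.0, v_max_grid, nv)
    dv = v[1] - v[0]
    g = np.where(v <= 1.0, 1.0, 0.0)
    g = g / (g.sum() * dv)                   # initial density U(0, 1) on the grid
    lam = lam_f(v)
    m_hist = []
    for _ in range(int(T / dt)):
        m = np.sum(lam * g) * dv             # spiking rate E[lambda(V_t)]
        flux = (F(v) + np.sqrt(gamma * m)) * g
        dgdt = np.empty_like(g)
        dgdt[1:] = -(flux[1:] - flux[:-1]) / dv - lam[1:] * g[1:]   # upwind transport + killing
        dgdt[0] = -flux[0] / dv - lam[0] * g[0] + m / dv            # reinjection of spiked mass at v = 0
        g = np.abs(g + dt * dgdt)
        g = g / (g.sum() * dv)               # renormalize the total mass to 1
        m_hist.append(m)
    return v, g, np.array(m_hist)
```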
8.3. The soft model with delay. Here we proceed exactly as in Subsection 8.2, with the same parameters, except that the delay θ = 0.4. The results are presented in Figure 7. The unpleasant temporal shift is slightly smaller.
Let us mention that we use here a different scheme to approximate the law g(t, ·) of V_t, based on a simple interacting particle system (V^{i,K}_t)_{i=1,...,K, t≥0}, with K = 10^6 particles. Indeed, the scheme of the previous subsection was not stable with a nonzero delay. Roughly, each particle solves the same SDE as (3) (with i.i.d. initial conditions and driving Poisson measures), but with the nonlinear term ∫_0^{(t−θ)∨0} (γE[λ(V_s)])^{1/2} ds replaced by its empirical version ∫_0^{(t−θ)∨0} (γK^{−1} Σ_{i=1}^K λ(V^{i,K}_s))^{1/2} ds. Of course, we also have to use a time discretization.
8.4. The soft model with another rate function. Here again, we proceed exactly as in Subsection 8.2, with the same parameters (in particular θ = 0), except that the rate function λ(v) = v^8 does not satisfy our assumptions, since α = inf{v ≥ v_min : λ(v) > 0} = v_min (recall that v_min = 0). The results are presented in Figure 8 and are no less convincing than those of the previous subsections. It thus seems that our assumption that α > v_min is not necessary.
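Returning to the empirical scheme of Subsection 8.3, the following sketch (ours, with a smaller K and a simple Euler/thinning step instead of the exact rejection method mentioned in the text) approximates t → K^{-1} Σ_i λ(V^{i,K}_t) with the parameters of Subsections 8.2-8.3, namely F(v) = 1 − 0.1v, λ(v) = max(v − 0.2, 0)^8, V_0 ~ U(0,1), γ = 2ρH(0)w^2 = 4 and delay θ.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_empirical_soft_model(K=10**5, T=2.0, dt=1e-3, theta=0.4, gamma=4.0):
    """Euler scheme for the auxiliary interacting particle system of Subsection 8.3:
    each particle follows the SDE (3) with E[lambda(V_s)] replaced by the empirical
    mean over the K particles, evaluated at the delayed time s = t - theta.
    Spikes are drawn with probability lambda(V)*dt and reset the particle to 0."""
    lam = lambda v: np.maximum(v - 0.2, 0.0) ** 8
    F = lambda v: 1.0 - 0.1 * v
    n_steps = int(T / dt)
    V = rng.uniform(0.0, 1.0, K)
    mean_lam = np.zeros(n_steps + 1)
    mean_lam[0] = lam(V).mean()
    d = int(round(theta / dt))                        # delay expressed in time steps
    for n in range(n_steps):
        past = mean_lam[n - d] if n >= d else 0.0     # excitation only acts after time theta
        V += (F(V) + np.sqrt(gamma * past)) * dt
        spikes = rng.random(K) < lam(V) * dt          # first-order thinning of the jump times
        V[spikes] = 0.0
        mean_lam[n + 1] = lam(V).mean()
    return np.arange(n_steps + 1) * dt, mean_lam

t, m = simulate_empirical_soft_model(K=20000)         # K = 10^6 in the text; smaller here for speed
```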
8.5. The hard model. Concerning the hard model, we did not code the particle system described in Subsection 1.1. However, we would like to validate numerically the explicit formula of Theorem 11. We consider the following set of parameters: F(v) = I = 0.5, θ = 0, v_min = 0, v_max = 1.2. We compute numerically (κ_t)_{t≥0} by solving the ODE κ'_t = G_0(v_max − It − κ_t) (with κ_0 = 0), using an Euler scheme, until the time a > 0 such that κ_a + Ia = v_max, and by using the periodicity (with period a) established in the proof of Theorem 11. On Figure 9, we plot in red the curve t → κ'_t. Recall that κ'_t represents the excitation rate, i.e. the increase of potential of the neurons during [t, t + dt] due to excitation is approximately κ'_t dt.
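A minimal sketch of this procedure is given below; G_0 is passed as a callable because its explicit expression from (H2) is not reproduced above, and the periodic extension of the excitation rate follows the description in the text (up to discretization effects).

```python
import numpy as np

def hard_model_excitation_rate(G0, I=0.5, v_max=1.2, T=5.0, dt=1e-4):
    """Euler scheme for kappa'_t = G0(v_max - I*t - kappa_t), kappa_0 = 0, solved up to
    the first time a with kappa_a + I*a = v_max; the excitation rate kappa'_t is then
    extended to [0, T] by periodicity with period a."""
    kappa = [0.0]
    t = 0.0
    while kappa[-1] + I * t < v_max:                 # stop at the time a of Theorem 11
        kappa.append(kappa[-1] + dt * G0(v_max - I * t - kappa[-1]))
        t += dt
    a = t
    rate_one_period = np.gradient(np.array(kappa), dt)
    grid = np.arange(0.0, T, dt)
    idx = np.rint(grid / dt).astype(int) % len(rate_one_period)
    return a, grid, rate_one_period[idx]

# example with a hypothetical G0 (the true G0 is built from f_0, sigma and I as in (H2))
a, grid, rate = hard_model_excitation_rate(G0=lambda u: 0.3 + 0.2 * u)
```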
Next, the hard model can be seen as the soft model with the choice λ(v) = ∞·1_{{v>v_max}}, which we approximate by λ(v) = max(v − 0.2, 0)^300. We then use the mean-field particle system introduced in Subsection 8.3 (with K = 200000 particles) to approximate numerically t → E[λ(V_t)], (V_t)_{t≥0} being the solution to the nonlinear SDE (3). And we plot, in blue, the approximation of t → 2√(E[λ(V_t)]), which also represents the excitation rate, since it is the derivative of t → Γ_t((E[λ(V_s)])_{s≥0}) = 2∫_0^t √(E[λ(V_s)]) ds, see Remark 5 and recall that H(0) = 2. The two curves are close to each other and this is rather convincing concerning our explicit formula. However the precision is not high, which is not surprising due to the (numerical) singular behavior of λ around v = v_max.
"year": 2018,
"sha1": "41ba9e3d65c7e9775563ef8c4106213273f5b0b1",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1802.04118",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8d8e2a019f9cbbe47d713b4ff1752299d17b91e9",
"s2fieldsofstudy": [
"Biology",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Biology"
]
} |
Confluent conformal blocks of the second kind

Abstract: We construct confluent conformal blocks of the second kind of the Virasoro algebra. We also construct the Stokes transformations which map such blocks in one Stokes sector to another. In the BPZ limit, we verify explicitly that the constructed blocks and the associated Stokes transformations reduce to solutions of the confluent BPZ equation and its Stokes matrices, respectively. Both the confluent conformal blocks and the Stokes transformations are constructed by taking suitable confluent limits of the crossing transformations of the four-point Virasoro conformal blocks.
1 Introduction
Virasoro conformal blocks [4] are holomorphic building blocks of correlation functions in two-dimensional conformal field theories. The AGT correspondence [2] triggered the study of so-called irregular conformal blocks [3,8,9]. Such blocks arise when singularities of the Virasoro conformal blocks merge, thus giving rise to irregular singularities. The present work studies one of the simplest classes of irregular conformal blocks, namely, the blocks that arise when two regular singularities of the four-point Virasoro conformal block merge into an irregular singularity of rank one.
In the so-called BPZ limit [4], the four-point Virasoro conformal blocks degenerate to solutions of a hypergeometric equation with three regular singular points in the complex plane. Since the equation is second order, the solution space is two-dimensional. For each regular singularity, it is possible to choose a basis for the solution space which diagonalizes the corresponding monodromy matrix. More precisely, assuming without loss of generality that the three regular singularities are located at 0, 1, and ∞, there are vectors F p (z) = (F p + (z), F p − (z)), p = 0, 1, ∞, such that {F p + , F p − } forms a basis of solutions for each p and where the three monodromy matrices M p , p = 0, 1, ∞, are diagonal. The solutions F p ± (z) can be expressed in terms of hypergeometric functions. The bases F p , p = 0, 1, ∞, are known as the s-channel, t-channel, and u-channel degenerate conformal blocks, respectively, see e.g. [22]. In a similar way, the general (i.e. nondegenerate) s-channel, t-channel, and u-channel conformal blocks form infinite-dimensional bases for the space of four-point conformal blocks. The purpose of this paper is to describe what happens to these bases as the regular singular point at 1 tends to infinity and merges with the regular singular point at ∞ to form an irregular singularity. In this limit, the hypergeometric equation degenerates into the confluent hypergeometric equation; we therefore call it the confluent limit and the resulting conformal blocks confluent conformal blocks.
In order to describe our results, it is convenient to first consider the hypergeometric case. The solutions F^p_±(z), p = 0, 1, ∞, of the hypergeometric BPZ equation are most easily constructed by means of the Frobenius method. To apply this method in the case of p = 0 for example, one substitutes the ansatz z^α(1 + a_1 z + a_2 z^2 + · · · ) into the equation and equates coefficients of powers of z. It follows that there are two possible values α_± of α, and that the associated power series coefficients a^±_j can be determined recursively (α_± are the two solutions of the indicial equation and our assumptions on the parameters will be such that α_+ ≠ α_−). This yields two solutions F^0_±(z) = z^{α_±}(1 + Σ_{j=1}^∞ a^±_j z^j), which when combined into the vector F^0 = (F^0_+(z), F^0_−(z)) satisfy the desired relation in (1.1) with the diagonal monodromy matrix M_0 = diag(e^{2iπα_+}, e^{2iπα_−}). We emphasize that this construction relies on the fact that the two power series Σ_{j=1}^∞ a^±_j z^j converge in a neighborhood of z = 0. In the confluent limit, the hypergeometric equation degenerates into the confluent hypergeometric equation, which has a regular singular point at 0 and an irregular singular point at ∞. Since 0 is a regular singular point, a basis of solutions B(t) = (B_+(t), B_−(t)) which diagonalizes the monodromy matrix at 0 can be constructed with the help of the Frobenius method, as in the nonconfluent case. The solutions B_±(t) can be expressed in terms of the confluent hypergeometric function of the first kind (also known as Kummer's function) and we therefore call them degenerate confluent conformal blocks of the first kind.
Figure 1. The Stokes sectors Ω_n, n = 1, 2, 3, in the complex t-plane.
Since the singular point at ∞ is irregular, it is not possible to construct solutions near ∞ in the same way. In fact, if one substitutes the ansatz t α e βt (1 + d 1 t −1 + d 2 t −2 + · · · ) into the confluent equation and equates coefficients of powers of t, one finds that there are two possible choices (α + , β + ) and (α − , β − ) of the parameters (α, β). Given one of these choices, the associated power series coefficients d ± j can be computed recursively. But in contrast to the regular case, the two power series ∞ j=1 d ± j t −j , in general, do not converge anywhere in the complex t-plane. Thus it is not possible to construct solutions in this way. The best one can do is to find solutions whose asymptotic behavior as t → ∞ is given by these power series, see e.g. [21]. These solutions can be expressed in terms of the confluent hypergeometric function of the second kind (also known as Tricomi's function) and we therefore call them degenerate confluent conformal blocks of the second kind. Since the asymptotic expansion of Tricomi's function is only valid in a certain sector a < arg t < b of the complex t-plane, an infinite sequence of bases of solutions D n (t), n ∈ Z, are needed to cover all values of arg t. This is an example of the Stokes phenomenon. Thus each basis D n (t) has the desired asymptotics only in a sector Ω n of the complex t-plane: where D asymp (t) denotes the formal asymptotic series (1.2) With our conventions, the Stokes sectors Ω n are given by (see figure 1) Ω n = − 3π 2 + π(n − 1) < arg t < π 2 + π(n − 1) , n ∈ Z. Of particular interest are the connection matrices C n and the Stokes matrices S n which, by definition, are the unique 2 × 2 matrices such that D n (t) = C n B(t), D n+1 (t) = S n D n (t), t ∈ Ω n , n ∈ Z. (1.4)
These matrices can be obtained by taking appropriate confluent limits of the connection matrices which relate the three bases F p (z) of the hypergeometric BPZ equation. The goal of this paper is to show that the above description of the hypergeometric BPZ limit can be generalized to the setting of nondegenerate conformal blocks. Whereas the space of solutions of the confluent hypergeometric equation is two-dimensional, the space of confluent conformal blocks is infinite-dimensional. The infinite-dimensional analogs of the bases B(t) and D n (t), n ∈ Z, will be denoted by B(t) ≡ B θ * ; σ; θt θ 0 ; t and D n (t) ≡ D n θt θ * ; ν; θ 0 ; t , respectively. Here θ 0 , θ t , θ * are parameters characterizing the conformal dimensions of the fields entering the correlation function, while σ and ν are continuous indices labeling the infinite set of basis elements. Elements of the basis B(t) will be called confluent conformal blocks of the first kind. Up to prefactors of the form t α e βt , these blocks can be represented as power series in t which are conjectured to converge in the whole complex plane [9,12,13]. Elements of the bases D n (t) will be called confluent conformal blocks of the second kind. These blocks are characterized by the fact that they admit a particular asymptotic expansion D asymp θt θ * ; ν; θ 0 ; t ≡ t α e βt ∞ j=1 d j t −j in the Stokes sectors Ω n as t approaches the irregular singularity. The power series part of this expansion is believed to diverge everywhere in the complex t-plane and no closed formula is known for its coefficients. The formalism of irregular vertex operators developed in [13] provides a recursive method to compute the coefficients of the series. A different but equivalent approach which relies on the computation of a term-by-term limit of the u-channel conformal blocks series expansion was proposed in [9,12].
In this paper, we will take a different approach to the construction of the confluent conformal blocks of the second kind D n (t). As mentioned above, the infinite-dimensional analogs of the bases F p (z), p = 0, 1, ∞, are the s-channel, t-channel, and u-channel conformal blocks, which we denote by respectively. These blocks are related by crossing transformations which can be viewed as infinite-dimensional analogs of the connection matrices for the hypergeometric BPZ equation. By taking appropriate confluent limits of these transformations, we will show that the relations in (1.4) admit the following generalizations in the context of nondegenerate conformal blocks: where S n and C n are integral operators. The first of these relations, established in theorem 1, provides a construction of the confluent conformal blocks of the second kind. Abusing notation and denoting the kernel by the same symbol as the operator, this relation can be written in more detail as where the integral kernel C n will be computed explicitly, see equation (5.5 where the kernels S n will be computed explicitly by taking suitable confluent limits of the Virasoro fusion kernel, see equation (5.9). In analogy with the finite-dimensional case, we refer to these transformations as Stokes transformations and to the corresponding kernels S n as Stokes kernels.
We will verify explicitly that the constructions of the connection and the Stokes transformations are consistent with the BPZ limit in the sense that (1.5) reduces to (1.4). In symbols, this consistency takes the form of a commutative diagram: the confluent limit maps the four-point conformal blocks to the confluent blocks B(t) and D_n(t), the BPZ limit maps each of these to its degenerate counterpart, and the two limits commute.
Organization of the paper
The hypergeometric BPZ equation and its confluent limit are studied in section 2. Section 3 reviews some properties of four-point Virasoro conformal blocks. In section 4, we recall previous results on confluent conformal blocks of the first kind. The statements of our main results are gathered in section 5, with proofs postponed to section 6. The BPZ limit is studied in section 7 and conclusions are drawn in section 8. Finally, the appendix collects some results on two special functions which play a prominent role in the paper.
2 Confluence of the hypergeometric BPZ equation
We consider the hypergeometric BPZ equation and its confluent limit. The parameter b will be used to parametrize the central charge c of the conformal field theory according to (2.1) In addition to b, the BPZ equation also depends on four conformal dimensions ∆(θ 0 ), ∆(θ 1 ), ∆(θ ∞ ), and ∆(θ degen ), which we parametrize with the help of three parameters θ 0 , θ 1 , θ ∞ according to To begin with, we allow b, θ 0 , θ 1 , θ ∞ to be any nonzero complex numbers such that θ 1 = θ ∞ ; later, in section 2.3, we will assume that b > 0 for simplicity.
2.1 Hypergeometric BPZ equation
The BPZ equation is given by (2.2) It has three regular singular points at z = 0, 1, ∞. Defining the functions F^p_±, p = 0, 1, ∞, by (2.3), the vector F^p(z) = (F^p_+(z), F^p_−(z)) is a basis of solutions of (2.2) that diagonalizes the monodromy operator at z = p for each p = 0, 1, ∞. In fact, it is easy to see that the monodromy relations (1.1) hold with diagonal monodromy matrices. The bases F^p(z) are related by the connection matrices C_pq defined in (2.4). A computation using the connection formulae for the Gauss hypergeometric function shows that the connection matrices are given explicitly by (2.5a)-(2.5c).
2.2 Confluent BPZ equation
Let us write where Λ and θ * are new parameters and t is a new complex variable. We are interested in the confluent limit Λ → ∞ in which the two nonzero regular singular points of the BPZ equation (2.2) merge at infinity. In this limit, equation (2.2) reduces to the confluent BPZ equation given by This equation has a regular singular point at t = 0 and an irregular singular point of rank one at t = ∞.
Degenerate confluent conformal blocks of the first kind
A basis of solutions that diagonalizes the monodromy operator at t = 0 is given by where M k,µ (t) is the Whittaker function of the first kind; it is defined in terms of the confluent hypergeometric function of the first kind 1 F 1 by (2.9)
Degenerate confluent conformal blocks of the second kind
The procedure described in the introduction (see (1.2)) leads to the following formal asymptotic series which forms a formal basis of solutions of (2.7) diagonalizing the monodromy matrix at infinity: (2.10) Here 2 F 0 denotes the formal power series where the Pochhammer symbol (q) n is defined by (q) n = q(q + 1) · · · (q + n − 1).
Let Ω n be the Stokes sectors defined in (1.3). The basis of solutions D n (t) of (2.7) which asymptotes to D asymp (t) as t → ∞ in the Stokes sector Ω n can be expressed in terms of the Whittaker function of the second kind W k,µ (t) defined by
where U(a, b, z) is the confluent hypergeometric function of the second kind: More precisely, using the asymptotic expansion the bases D_n(t) are found to be where ⌊x⌋ denotes the greatest integer less than or equal to x. It follows from (2.14) that the bases D_n(t) satisfy the periodicity relation where σ_3 = diag(1, −1) denotes the third Pauli matrix.
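The standard identities expressing the Whittaker functions in terms of Kummer's and Tricomi's functions, which we take to be the content of (2.9) and of the displays defining W_{k,µ} and U above, together with the leading asymptotics W_{k,µ}(t) ~ e^{−t/2} t^k as t → +∞, can be checked numerically, for instance with mpmath (assumed here to follow the standard conventions):

```python
import mpmath as mp

mp.mp.dps = 30
k, m, t = mp.mpf('0.3'), mp.mpf('0.7'), mp.mpf(5)

# Whittaker functions of the first and second kind via Kummer's 1F1 and Tricomi's U
M_via_1F1 = mp.e**(-t/2) * t**(m + mp.mpf('0.5')) * mp.hyp1f1(mp.mpf('0.5') + m - k, 1 + 2*m, t)
W_via_U = mp.e**(-t/2) * t**(m + mp.mpf('0.5')) * mp.hyperu(mp.mpf('0.5') + m - k, 1 + 2*m, t)
print(mp.chop(M_via_1F1 - mp.whitm(k, m, t)))   # ~ 0
print(mp.chop(W_via_U - mp.whitw(k, m, t)))     # ~ 0

# leading asymptotics W_{k,m}(t) ~ exp(-t/2) t^k as t -> +infinity
for s in [20, 50, 100]:
    print(mp.whitw(k, m, s) / (mp.e**(-mp.mpf(s)/2) * mp.mpf(s)**k))   # tends to 1
```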
Stokes matrices
The bases D_n(t) are connected by the Stokes matrices S_n which are defined by D_{n+1}(t) = S_n D_n(t), n ∈ Z. (2.16) Because of the periodicity relation (2.15), there are only two independent Stokes matrices S_1 and S_2. In fact, starting from the relation D_{n+3}(te^{2iπ}) = S_{n+2} D_{n+2}(te^{2iπ}) and using (2.15), we obtain S_{n+2} = e^{2πbθ_* σ_3} S_n e^{−2πbθ_* σ_3}. (2.17) Using the following analytic continuation formula for the Whittaker function of the second kind: we infer that the Stokes matrices S_1 and S_2 are given by (2.19).
Connection matrices
The bases B(t) and D n (t) are related by the connection matrices C n which are defined by D n (t) = C n B(t), t ∈ Ω n , n = 1, 2, . . . . (2.20) Here and in what follows we have restricted ourselves to positive values of n ∈ Z for simplicity.
Proposition 2.1. The connection matrices C n are given by .
(2.22)
Proof. It is easy to verify that the first connection matrix is given by S n−k C 1 = S n−1 S n−2 · · · S 1 C 1 , n = 2, 3, . . . , (2.24) it follows from (2.19) and (2.23) that (2.21) holds for n = 1 and n = 2. Moreover, by (2.20), the connection matrices C n satisfy the periodicity relation Proceeding by induction, it is therefore sufficient to prove that and direct computations show that the matrices defined in (2.21) obey these relations.
2.3 Confluence of the solutions
In this subsection, we explain how to obtain the solutions of the confluent BPZ equation (2.7) from the solutions of the hypergeometric BPZ equation (2.2) by taking an appropriate confluent limit. Let us first set θ 1 = Λ+θ * 2 and θ ∞ = Λ−θ * 2 . We introduce renormalized versionsF p (z, Λ) of the bases F p (z) defined in (2.3) as follows: After performing the change of variables z = t ibΛ , equation (2.2) has three regular singular points at t = 0, t = ibΛ, and t = ∞. The next proposition shows that the solution basis B(t) of the confluent BPZ equation defined in (2.8) is the confluent limit of the renormalized basis of solutionsF 0 ( t ibΛ , Λ) of the hypergeometric BPZ equation.
We next explain how to recover the solution bases D n of the confluent BPZ equation adapted to the irregular singular point at ∞. Actually, the D n can be obtained as the confluent limit in two different ways: either starting fromF ∞ or fromF 1 . This is consistent with the fact that the two basesF ∞ andF 1 diagonalize the monodromy matrices at the two singular points that merge at ∞ in the confluent limit.
For the rest of this section we assume that b, Λ > 0 for simplicity. Defining the renormalized connection matricesC pq by the relationF p =C pqF q , we havẽ where C ∞0 , C 10 , and C ∞1 are the connection matrices defined in (2.4). The crucial point is that (2.4) holds for 0 < arg z < π. To access all the Stokes sectors Ω n , it is therefore convenient to write where j ≥ 1 is an integer and = ±1. It is straightforward to show that 0 < arg z < π if and only if Decomposing the Stokes sector Ω n into the two halves Ω − n and Ω + n defined by , (2.34) Our next proposition utilizes (2.31) and (2.34) to construct the solution basis D n (t) everywhere in Ω n from the renormalized solution basesF ∞ andF 1 .
Proposition 2.3. The following limits hold for any integer j ≥ 1: Proof. Let us first prove (2.35a). We start from the connection formulã where the renormalized connection matrixC ∞0 is given by (2.30). The renormalized basis
Thus the connection formula (2.36) can be rewritten as Letting Λ → +∞ in this equation and using the limit (2.29) ofF 0 ( t ib Λ , Λ), we find (2.37) It remains to compute the limit ofC ∞0 ( Λ). This matrix can be explicitly written as Using the asymptotic formula . (2.41) Gathering the previous computations, it is straightforward to obtain the two limits for any integer j ≥ 1. Hence we have shown that (2.43) The proof of the second limit (2.35b) involves a similar computation. Indeed, we have (2.45) The renormalized connection matrix takes the form Using the asymptotics (2.38), a direct computation shows that Finally, observing that for j = 1, 2, . . . , equation (2.35b) follows.
Proposition 2.3 can be used to determine the Stokes matrices S n of the confluent BPZ equation defined by (2.16). In fact, consider the connection formula between the renormalized solution basesF ∞ andF 1 : Introducing the prefactors appearing in (2.35a) and (2.35b), this relation can be rewritten as Proposition 2.3 implies that the confluent limit of the relation (2.47) must lead to the formulas D 2j+1 (t) = S 2j D 2j (t) and D 2j (t) = S 2j−1 D 2j−1 (t) for = +1 and = −1, respectively. Let us verify this explicitly. As Λ → +∞, a direct computation utilizing (2.38) yields (2.48) In each of the two cases = ±1, one of the off-diagonal entries on the right-hand side of (2.48) has exponential decay, and it is straightforward to obtain where S 1 and S 2 are given by (2.19). Moreover, thanks to the periodicity relation (2.17), for any integer j ≥ 1. Recalling propositions 2.3 and noting that it follows that the two relations are recovered by taking the limit Λ → +∞ of (2.46) for = +1 and = −1, respectively.
Four-point Virasoro conformal blocks
The remainder of this article will be devoted to generalizing the picture developed in section 2 to the case of generic four-point Virasoro conformal blocks. We start by recalling their main properties. The (regular) 4-point conformal block is often represented in the literature by a trivalent graph encoding the expectation value of a composition of two primary vertex operators as follows: It depends on the Virasoro central charge c, five conformal dimensions ∆ (x) = c−1 24 + x 2 attached to the edges labeling highest weight modules, and the anharmonic ratio t of four points on CP 1 . The vertices of the graph represent chiral vertex operators [18]. The series representation for conformal blocks was made explicit by the discovery of the AGT relation [2] between two-dimensional conformal field theories and four-dimensional supersymmetric gauge theories. Denoting by Y the set of Young diagrams, the 4-point conformal block is expressed as [1,14] F θ 1 θt σs θ∞ θ 0 where |λ| denotes the number of boxes in the diagram λ ∈ Y. In order to write the coefficients F λ,µ explicitly, let a λ ( ) and l λ ( ) denote the arm-length and leg-length of the box in λ. Moreover, for θ ∈ C and λ, µ ∈ Y, introduce the Nekrasov functions Z λ,µ (θ) by The expansion coefficients F λ,µ can then be expressed as The series in (3.1) is believed to be convergent inside the unit disk |z| < 1. Another hypothesis is that the only singularities of the conformal blocks as a function of z are branch points at 0, 1, ∞ [11,24]. Under this assumption, conformal blocks are naturally defined for z ∈ C \ ((−∞, 0] ∪ [1, ∞)). Moreover, the conformal blocks are believed to be analytic in the external dimensions θ p , p = 0, t, 1, ∞, and meromorphic in the internal momentum σ, with the only possible poles located at ±σ
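As a concrete illustration of the combinatorial data entering the Nekrasov functions, recall the standard conventions for arm- and leg-lengths (these are the textbook definitions; the paper's conventions could differ by a transposition of the diagrams):
\[
a_\lambda(s) = \lambda_i - j, \qquad l_\lambda(s) = \lambda'_j - i, \qquad s = (i, j) \in \lambda ,
\]
where λ' denotes the transposed diagram. For example, for λ = (3, 1), so that |λ| = 4, the box s = (1, 1) has a_λ(s) = 2 and l_λ(s) = 1, while the corner box s = (1, 3) has a_λ(s) = l_λ(s) = 0.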
Crossing transformations
The linear span of conformal blocks forms an infinite-dimensional representation of Γ(Σ 0,4 ) = PSL 2 (Z), the mapping class group of the four-puncture Riemann sphere. It is generated by the braiding B and fusion F moves, such that F 2 = (BF ) 3 = 1. The three ways of splitting four points on CP 1 into two pairs define the s-, t-, and u-channel bases for the space of conformal blocks. The cross-ratio argument of conformal blocks in these channels are chosen from {z, z z−1 },{1 − z, z−1 z }, and { 1 z , 1 1−z }, respectively. The braiding move B acts on the s-channel conformal blocks as follows: On the other hand, the fusion move F is represented by the integral transform The kernel of this transformation, called the Virasoro fusion kernel, has been related to Racah-Wigner coefficients for a continuous series of representations of U q (sl 2 ), and can be expressed as [16,17] F θ 1 θt θ∞ θ 0 where the special functions g b (x) and s b (x) are defined in the appendix. For 0 < b ≤ 1, the integrand in (3.5) has eight vertical semi-infinite sequences of poles, four of them increasing and the other four decreasing; the contour of integration F runs from −∞ to +∞, separating the upper and lower sequences of poles. When the conformal dimensions are real and positive (which is Liouville's spectrum), the contour of integration lies in the strip Im x ∈] − iQ 2 , 0[. More generally, the fusion kernel (3.5) can be extended to a meromorphic function of all of its parameters provided that c ∈ C \ R ≤1 , which corresponds to b / ∈ iR. Finally, further crossing transformations can be obtained by composing braiding and fusion moves. Let us fix 0 < arg z < π. The first two crossing transformations of interest are
Such transformations can be seen as infinite-dimensional analogs of the connection formulas for the BPZ equation given by the connection matrices (2.5a) and (2.5b), respectively. In section 2, a suitable confluent limit of these matrices allowed us to recover the solutions of the confluent BPZ equation normalized at t = ∞ in any Stokes sector. We will adopt a similar approach to construct the confluent conformal blocks of the second kind in any Stokes sector. The last crossing transformation that we will use is This is the analog of the connection formula for the BPZ equation given by the connection matrix (2.5c). In section 2, we recovered the Stokes matrices of the confluent BPZ equation by taking appropriate confluent limits of this connection matrix. We will use a similar approach to find the Stokes transformations acting on the confluent conformal blocks of the second kind.
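Returning to the channel cross-ratios listed earlier in this subsection, it may help to note (a standard fact recalled only for orientation) that the three pairs are permuted by the Möbius transformations of the puncture positions; for instance, the exchange z ↦ 1 − z maps the s-channel pair into the t-channel pair:
\[
\Bigl\{ z, \tfrac{z}{z-1} \Bigr\} \;\longmapsto\; \Bigl\{ 1-z, \tfrac{1-z}{(1-z)-1} \Bigr\} = \Bigl\{ 1-z, \tfrac{z-1}{z} \Bigr\} .
\]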
Main results
Before stating our two main results, theorem 1 and theorem 2, we need to make some assumptions and define the confluent conformal blocks of the second kind.
Assumptions
In the remainder of this article, we make the following two assumptions.
Assumption 5.1 (Analyticity of B(t)). We assume that the sum over λ, µ ∈ Y in (4.3) converges and defines an entire function of t, which is furthermore analytic in the parameters θ 0 , θ t , θ * , and meromorphic in σ s except for possible poles located at σ = ±σ
Assumption 5.2 (Restrictions on the parameters). We assume that
Assumption 5.1 is believed to be true [12]. Assumption 5.2 is made primarily for simplicity; we expect all our results to admit analytic continuations to more general values of the parameters, such as b ∈ C \ iR and (θ 0 , θ * ) ∈ C 2 .
Confluent conformal blocks of the second kind
Recall that proposition 2.3 provides a construction of the solution bases D n (t) of the confluent BPZ equation in any Stokes sector t ∈ Ω n from suitable confluent limits of the renormalized solutions basesF ∞ (t) andF 1 (t) of the hypergeometric BPZ equation. We will define the confluent conformal blocks of the second kind by generalizing proposition 2.3 to the nondegenerate case. First, we define the renormalized four-point conformal blocks where the normalization factor N 0 is given by (4.2) and the normalization factor N ∞ is defined by Second, recall that the singular point at t = ∞ of the degenerate conformal blocks of the second kind is irregular. Therefore, as it was shown in (2.35), an infinite sequence of confluent limits is needed to cover all values of arg t. The nondegenerate analog of (2.35) is constrained by the fact that it must reduce to (2.35) in the BPZ limit. Hence we define the confluent conformal blocks of the second kind as follows.
Definition 5.3 (Confluent conformal blocks of the second kind). The confluent conformal blocks of the second kind D n θt θ * ; ν; θ 0 ; t are defined by the following confluent limits for any integer j ≥ 1: where the "half " Stokes sectors Ω ± n are defined in (2.33) and the renormalized conformal blocksF ∞ andF 1 are defined in (5.2).
It is verified in Remark 7.4 that the above definition indeed reduces to (2.35) in the BPZ limit.
First main result
Our first main result provides an explicit integral representation for the confluent conformal blocks of the second kind D n in terms of B.
Let g b (z) and s b (z) be the special functions defined in the appendix. We define the kernel C n for any integer n ≥ 1 by (5.5), where the prefactor P (n) is defined by (5.6) and the integrand I (n) is given by (5.7). The integration contour C in (5.5) is defined as follows. In view of (A.9), the numerator in (5.7) has three decreasing semi-infinite sequences of poles, while the denominator has three increasing semi-infinite sequences of zeros. The contour C in (5.5) is any curve from
−∞ and +∞ which separates the increasing from the decreasing sequences. For example, if (θ t , σ s , ν) ∈ R 3 , assumption 5.2 implies that the decreasing sequences start at points on the horizontal line Im x = − iQ 2 , whereas the increasing sequences start at points on the real axis Im x = 0. Thus, in this case the contour C can be any horizontal line in the strip Im x ∈] − iQ 2 , 0[. More generally, the function (5.5) extends to a meromorphic function of (θ 0 , θ t , θ * , ν, σ s ) provided that b / ∈ iR. The following theorem is our first main result. It describes how the confluent conformal blocks of the second kind D n can be constructed from B.
Second main result
We define the Stokes kernel S n for any integer n ≥ 1 by (5.9), where the prefactor P n and the integrand I n are specified by the accompanying formulas, the integrand I n being given by (5.11). The integration contour S is such that it separates the two increasing sequences of poles of the integrand from the two decreasing ones. If (ν n , ν n+1 , θ t ) ∈ R 3 , then assumption 5.2 implies that the integrand I n has two increasing sequences of poles starting from points on the line Im x = 0, and two decreasing sequences of poles starting from points on the line Im x = −iQ/2. Therefore, in this case the contour of integration S can be chosen to be any curve in the strip Im x ∈ ]−iQ/2, 0[ going from −∞ to +∞. More generally, the Stokes kernel S n can be extended to a meromorphic function of θ * , θ t , θ 0 , ν n+1 , ν n provided that b ∉ iR. The next theorem, which is our second main result, describes how the confluent conformal blocks in different Stokes sectors are related.
Theorem 2 (Stokes transformations). For any integer n ≥ 1, the confluent conformal blocks of the second kind in the two overlapping Stokes sectors Ω n and Ω n+1 are related by The proof of theorem 2 consists of taking suitable confluent limits of the crossing transformation (3.7) and is presented in section 6.2. The Stokes transformations (5.12) are infinite-dimensional analogs of the equations (2.16) that relate the solutions D n (t) of the confluent BPZ equation in different Stokes sectors.
Remarks
We conclude this section with some remarks on the definition (5.4) of D n . We observed in section 2 that the solution basis D n (t) given in (2.14) of the confluent BPZ equation asymptotes to the series D asymp (t) given in (2.10) as t approaches ∞ in the Stokes sector Ω n . Similarly, the confluent conformal blocks of the second kind D n (t) are expected to admit a particular asymptotic expansion D asymp (t) in Ω n . We believe that the expansion D asymp coincides with the one given in [12, eq. where the first two coefficients are given bŷ The leading asymptotics of the series in (5.13) was found in [9] by computing a confluent limit of the first terms in the series expansion of the u-channel four-point Virasoro conformal blocks. This recipe was extended to higher orders in [12]. The left-hand side of (5.4a) for j = 1 and = +1 is similar to the right-hand side of [12, eq. (1.7)]. However, here we take the confluent limit of the crossing transformation (3.6a) rather than of the series expansion of the u-channel conformal blocks.
The framework of irregular vertex operators developed in [13] provides a different but equivalent approach to the construction of D asymp (t). The series in (5.13) is expected to diverge everywhere in the complex plane of t and no closed formula is known for its coefficients.
Two observations suggest that the confluent conformal blocks of the second kind D n (t) defined in (5.4) asymptote to D asymp (t) as t → ∞ in the Stokes sector Ω n . First, using (4.4), we observe that It follows from this relation that the D n (t) satisfy the periodicity relation Second, comparison of the first few terms of D asymp given by (5.14) with the ones of D asymp (t) given by (2.10) suggests that the following BPZ limit holds: Moreover, we will show in proposition 7.3 that D n (t) tends to the solution D n (t) of the confluent BPZ equation in the BPZ limit. Also, we know from section 2 that D n (t) asymptotes to D asymp (t) as t → ∞ in Ω n . We can summarize these observations as follows: D n (t) D asymp (t)
(In this diagram, the top row contains the confluent conformal block D n (t) and the asymptotic series D asymp (t), and the bottom row contains the BPZ solution D n (t) and the series D asymp (t); the vertical arrows denote the BPZ limit, and the bottom horizontal arrow denotes the asymptotic expansion as t → ∞ in Ω n .)
This diagram suggests that the confluent conformal blocks of the second kind D n (t) asymptote to D asymp as t approaches ∞ in the Stokes sector Ω n .
Proofs
We will establish theorem 1 and theorem 2 by computing suitable confluent limits of the crossing transformations (3.6) and (3.7), respectively.
Proof of theorem 1
The proof of theorem 1 is achieved by computing confluent limits of the crossing transformations (3.6a) and (3.6b). Let us first consider (3.6a). Introducing appropriate normalization JHEP06(2020)133 factors and recalling the definitions (5.2) ofF ∞ andF 0 , we can write (3.6a) as for = ±1, the factors in the first line of the integrand take the explicit form Moreover, thanks to (3.3), the renormalized conformal blocksF 0 satisfies On the other hand, using the confluent limit (4.1) of the s-channel conformal blocks, we obtain 3) The limit of the Virasoro fusion kernel F remains to be computed. The conformal blocks are symmetric under any sign changes of the parameters, so the Virasoro fusion kernel also has this symmetry. Thus, replacing θ t by −θ t in (3.5) and shifting the contour by
Using (A.7), it is straightforward to compute the asymptotics of the first line as Λ → +∞: Moreover, the asymptotic formula (A.11) for s b yields, as Λ → +∞, Multiplication of the preceding two equations produces a factor The first part of this factor cancels part of the integrand in (6.3), and the second part yields the phase in the integrand of the confluent fusion kernel (5.5). Finally, the two families of confluent fusion kernels C 2j+1 and C 2j are obtained after gathering the phases and taking = +1 and = −1, respectively. To summarize, we have shown that e 2iπ(j−1) Recalling the definition (5.4) of D n , this concludes the proof of theorem 1 in the case when t ∈ ∪ ∞ n=1 Ω − n . The proof when t ∈ ∪ ∞ n=1 Ω + n is rather similar and consists of computing the confluent limit of the crossing transformation (3.6b). Introducing the relevant normalization factors, (3.6b) becomes
Using the analytic continuation (3.3) and the confluent limit (4.1), we obtain e 2iπ(j−1) Performing a contour shift x → x + ν − θ * 2 and using the even symmetry of the bottom left parameter of the Virasoro fusion kernel, we find from (3.5) that Using (A.7), the first line has the following asymptotics as Λ → +∞: and using (A.11) we find, as Λ → +∞, The multiplication of the preceding two equations produces a factor Substitution into (6.7) leads to the family of confluent fusion kernels C 2j and C 2j−1 for = +1 and = −1, respectively. This proves (5.8) also for t ∈ ∪ ∞ n=1 Ω + n and concludes the proof of theorem 1.
6.2 Proof of theorem 2
Theorem 2 will be established by computing an appropriate confluent limit of the crossing transformation (3.7) relating the u- and t-channel conformal blocks. The cases of odd and even n will be considered separately.
Derivation of the Stokes transformations for n = 2j − 1
The integrand in (3.7) is an even function of σ t and the Virasoro fusion kernel F is an even function of θ t . Thus, performing the change of variables and using the symmetry θ t → −θ t of the Virasoro fusion kernel, we can write (3.7) as Introducing appropriate normalization factors, letting recalling the definitions (5.2) ofF ∞ andF 0 , and taking the limit Λ → +∞, we can write (6.11) as Using the limits (5.4a) and (5.4b) for = −1, we obtain Deforming the contour of integration F in the expression (3.5) for the Virasoro fusion kernel by shifting x → x − Λ+θ * 2 , we obtain
Derivation of the Stokes transformations for n = 2j
We perform the following change of variables in the integrand of (3.7): The confluent limit of the crossing transformation (3.7) can then be written as Using the limits (5.4a) and (5.4b) for = +1, we obtain The evaluation of the confluent limit of the Virasoro fusion kernel F is similar to the evaluation presented in subsection 6.2.1. In the end, one arrives at the Stokes kernel S 2j , which proves theorem 2 also for even values of n.
The BPZ limit
In this section, we verify explicitly that the connection formula (5.8) of theorem 1 reduces to the connection formula (2.20) of the confluent BPZ equation in the BPZ limit. We also verify that the Stokes transformation (5.12) of theorem 2 reduces to the Stokes formula (2.16). We will make the following assumption on the BPZ limit of the confluent conformal blocks of the first kind B(t).
Assumption 7.1 (BPZ limit of B(t)). We assume that the following BPZ limit holds: where B(t) is the degenerate confluent block of the first kind defined in (2.8).
The limits in (7.1) can be verified numerically to high order by expanding both sides in power series of t, but we are not aware of an analytic proof.
BPZ limit of C n
We first compute the BPZ limit of the right-hand side of equation (5.8).
Proposition 7.2 (BPZ limit of C n ). Define ν ± = − θ * 2 ± ib 2 . The following limit holds: Proof. The function P (n) defined in (5.6) can be split into two parts as follows: and JHEP06(2020)133 Using this notation, recalling the definition (5.5) of C n , and adopting the short-hand notation B(t) ≡ B θ * ; σ s ; θt θ 0 ; t , we can write the integral on the left-hand side of (7.2) as follows: (7.5) Before presenting a detailed evaluation of the BPZ limit of (7.5), we briefly describe the main idea of the argument. Recall that 0 < b < 1. The special functions s b (z) and g b (z) defined in (A.1) and (A.2) possess semi-infinite sequences of poles, whose locations are given by (A.9) and (A.5), respectively. The prefactor P 1 (ν, θ t , θ * ) in (7.5) satisfies thus, by (A.5), P 1 (ν ± , θ t , θ * ) has a double zero at θ t = iQ 2 + ib 2 . In the BPZ limit, the contours of integration C and R + in (7.5) are pinched between pairs of moving poles; a similar mechanism was described in [10,11,15]. For example, the pinching of the integration contour R + is due to the factor g b (σ s − θ 0 − θ t ) g b (−σ s + θ 0 − θ t ) in (7.4). Indeed, this factor has two poles which cross R + in the limit θ t → iQ . Therefore, before taking the limit, we deform the contour R + to a new contour R + as shown in figure 2b and pick up two residue contributions from the two poles; these contributions are easily computed with the help of (A.6). A similar mechanism occurs for the integral over x. In the end, after performing the deformations of both C and R + , we are able to express the double integral on the right-hand side of (7.5) as a sum of three types of terms: terms which are regular at θ t = iQ 2 + ib 2 , terms which have a simple pole at θ t = iQ 2 + ib 2 , and terms which have a double pole at θ t = iQ 2 + ib 2 . Since the prefactor P 1 has a double zero at θ t = iQ 2 + ib 2 , only those terms that have a double pole will yield a nonzero contribution to (7.5) in the limit θ t → iQ 2 + ib 2 . Computing this contribution explicitly and using that B(t) → B(t) in the BPZ limit by assumption 5.1, the proposition will follow.
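The pinching mechanism invoked in this proof can be illustrated by a schematic one-variable model (a toy example of ours, not the actual integrand of (7.5)). Suppose an integrand has simple poles at x = a(θ) just above and x = b(θ) just below the contour R, with a(θ), b(θ) → x_0 as θ → θ_0. Pushing the contour above the pole at a(θ) gives
\[
\int_{\mathbb{R}} \frac{g(x)\, dx}{(x - a(\theta))(x - b(\theta))}
= \frac{2\pi i\, g(a(\theta))}{a(\theta) - b(\theta)}
+ \int_{\mathbb{R}'} \frac{g(x)\, dx}{(x - a(\theta))(x - b(\theta))} ,
\]
where the deformed integral over R' remains regular at θ = θ_0, while the residue term develops a pole proportional to 1/(a(θ) − b(θ)). Higher-order poles, such as the double poles appearing above, can arise when several such collisions happen at once.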
The case ν → ν + . We first deform the contour C in (7.5) into a suitable contour C such that no pole in x crosses C in the limit ν → ν + . For ν = ν + , we have In the limit θ t → iQ/2 + ib/2, the pole of s b (x + ib/2 − θ t ) located at x = −iQ/2 − ib/2 + θ t crosses the contour of integration C and collides with the pole of s b (x + iQ/2) −1 located at x = 0. Therefore, we choose to deform the contour C as shown in figure 2a, so that the integral over C becomes an integral over C plus a residue at x = θ t − iQ/2 − ib/2. Moreover, as described above, we deform the integration contour R + as in figure 2b.
Figure 2. Schematic illustration of the deformations of the contours C and R + .
−2iπ
Res where C is regular in the limit θ t → iQ 2 + ib 2 . Consequently, the pole of the right-hand side of (7.8) at θ t = iQ 2 + ib 2 is only simple, so again, thanks to the double zero of P 1 (ν + , θ t , θ * ),
we have T 2 = 0. The term T 3 also vanishes because of a similar argument. Indeed, a calculation yields −2iπ Res , and the pole of this expression at θ t = iQ 2 + ib 2 is only simple. It only remains to compute T 4 . Let us write where X is defined for = ±1 by A tedious but straightforward computation shows that Due to the first two factors on the right-hand side of (7.9), X has a double pole at θ t = iQ 2 + ib 2 . This pole is canceled by the double zero of P 1 (ν + , θ t , θ * ) and a computation yields, for = ±1, Using the identity (A.4) satisfied by the function g b (z), we find We conclude that −+ (θ * , θ 0 ), (7.11) JHEP06(2020)133 where K (n) −+ (θ * , θ 0 ) is defined in (2.22). Using the BPZ limit B(t) → B(t) = (B + (t), B − (t)) given in (7.1), it is concluded that (7.12) In view of the expression (2.21) for C n , the right-hand side equals the second component of C n B(t). This completes the proof of the second component of (7.2).
The case ν → ν − . A similar mechanism occurs in this case. For ν = ν − , we have In the limit θ t → iQ 2 + ib 2 , the poles of s b x − ib 2 − θ t located at x 1 := θ t + ib 2 − iQ 2 and x 2 := θ t + ib 2 − iQ 2 − ib cross the contour of integration and collide with the poles of s b x + iQ 2 −1 located at x = +ib and x = 0, respectively. Deforming the contour C into C in a similar manner as before, and noting that the analogs of T 1 , T 2 , and T 3 vanish also in this case, we arrive at The residues at x 1 and x 2 can be computed using the property (A.8) of the function s b (z). We obtain , Defining R j, for j = 1, 2 and = ±1 by
BPZ limit of the Stokes transformations
Our next proposition provides the BPZ limit of the right-hand side of (5.12) in any odd Stokes sector.
The case ν 2j → ν + . The contour S in (5.9) can be deformed into a suitable contour S such that no pole in x crosses S in the limit ν 2j → ν + . For ν 2j = ν + , we have In the limit θ t → iQ 2 + ib 2 , the pole of s b (x + θ 0 − θ t ) at x = − iQ 2 +θ t −θ 0 crosses the contour of integration S and collides with the pole of s b x + θ 0 − ib 2 + iQ 2 −1 at x = ib 2 − θ 0 . Hence, similarly to what has been described in figure 2, we deform the contour S downward past the moving pole so that the integral over S turns into a sum of an integral over S and a residue contribution from x = − iQ 2 + θ t − θ 0 . We find
Summary
The results of this section can be summarized as follows.
Corollary 7.7. In the BPZ limit, the connection formula (5.8) of theorem 1 and the Stokes transformation (5.12) of theorem 2 reduce to the connection formula (2.20) and the Stokes formula (2.16) for the confluent BPZ equation, respectively. Schematically, this can be expressed as
(confluent conformal blocks)   D n (t) = C n B(t),   D n+1 (t) = S n D n (t)
        ↓ BPZ limit                         ↓ BPZ limit
(confluent BPZ equation)       D n (t) = C n B(t),   D n+1 (t) = S n D n (t)
where the upper row involves the confluent fusion and Stokes kernels and the lower row the connection and Stokes matrices.
Conclusions and perspectives
In this article we have constructed the confluent conformal blocks of the second kind. We also constructed the Stokes transformations which map such blocks in one Stokes sector to another. Both the confluent conformal blocks and the Stokes transformations were found by taking suitable confluent limits of the crossing transformations of the four-point Virasoro conformal blocks. We explicitly verified that in the BPZ limit the constructed blocks and the associated Stokes transformations reduce to solutions of the confluent BPZ equation and its Stokes matrices, respectively. An interesting problem is to combine the holomorphic and anti-holomorphic confluent conformal blocks with an integration measure to construct confluent Liouville correlation functions. Such a construction should be possible by applying the confluence procedure that we have described to the Liouville correlations functions built from the s-, t-, and u-channel conformal blocks. The result is expected to be invariant under the generalized fusion and Stokes transformations that we have constructed in this article. Hence this generalized crossing symmetry would be translated into orthogonality relations satisfied by the confluent fusion and Stokes kernels. Moreover, there exist close connections between quantum Teichmuller theory and Liouville theory [19,20]. The collision of holes in Teichmuller theory was studied in [6,7]. It would be interesting to understand the connections between the above two subjects after taking the confluent limit.
As mentioned in section 3, there exists a relation between two-dimensional conformal field theories and four-dimensional supersymmetric gauge theories [2] (see [9] for the case of irregular singularities). It would be interesting to interpret our definition of confluent conformal blocks of the second kind (see Definition 5.3) and the associated Stokes transformations (5.12) in a gauge theory framework using this relation.
Finally, the approach that we have developed can in principle be adopted to construct confluent conformal blocks with irregular singularities of rank r > 1 as well.
"year": 2020,
"sha1": "6f31855508a405d5a11536768a68203c252b7c13",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP06(2020)133.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "6f31855508a405d5a11536768a68203c252b7c13",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
} |
The composition of the surface layer of 13 low-cost jewelry samples with a high Cd content was analyzed using an energy-dispersive X-ray fluorescence spectrometer (ED XRF). The analyzed jewels were obtained in cooperation with the Czech Environmental Inspectorate. The jewels were leached in two types of artificial sweat (acidic and alkaline) for 7 days. Twenty microliters of the resulting solution was subsequently placed on a paper carrier and analyzed by an LIBS (Laser-Induced Breakdown Spectrometry) spectrometer after drying. The Cd content in the jewelry surface layer detected by using ED XRF ranged from 13.4% to 44.6% (weight per weight—w/w). The samples were subsequently leached in artificial alkaline, and the acidic sweat and leachates were analyzed using laser-induced breakdown spectrometry (LIBS). The amount of released Cd into alkaline sweat ranged from 24.0 to 370 µg Cd per week, respectively 3.23–61.7 µg/cm2/week. The amount of released Cd into acidic sweat ranged from 16.4 to 1517 µg Cd per week, respectively 3.53–253 µg/cm2/week. The limit of Cd for dermal exposure is not unequivocally determined in the countries of the EU (European Union) or in the U.S. Based on the US EPA (United States Environmental Protection Agency) approach used to establish the reference dose (RfD) for Cd contained in food and information about the bioavailability of Cd after dermal exposure, we assessed our own value of dermal RfD. The value was compared with the theoretical amount of Cd, which can be absorbed into the organism from jewelry in contact with the skin. The calculation was based on the amount of Cd that was released into acidic and alkaline sweat. The highest amount of Cd was released into acidic sweat, which represents 0.1% of dermal RfD and into alkaline sweat, 0.5% of dermal RfD. These results indicate that the analyzed jewelry contains Cd over the limit for composition of jewelry available within the territory of the EU. The determined amount of Cd in analyzed jewelry does not, however, pose a threat in terms of non-carcinogenic toxic effects.
Introduction
Cadmium is a toxic metal that is easily accumulated in the human body. Even low exposure levels can cause accumulation in human tissues, especially in the kidneys [1]. Chronic cadmium poisoning can inflict renal dysfunction or emphysema, among other afflictions [2]. A number of epidemiological studies have suggested that cadmium is a human carcinogen. According to the EU (European Union) regulation (No. 1272/2008) on classification, labeling, and packaging of substances and mixtures, cadmium is classified as carcinogenic (cat. 1B), mutagenic (cat. 2), toxic for reproduction (cat. 2), acutely toxic (cat. 2), and toxic for the aquatic environment.
The European Commission issued a document on the results of a risk evaluation and on risk reduction strategies for cadmium and cadmium oxide (2008/C 149/03) in 2008. The commission concluded that risk management measures were needed for the protection of consumers because of concerns for genotoxicity and carcinogenicity irrespective of the route of exposure, as the substance was considered a non-threshold carcinogen, arising from wearing (imported) jewelry. As of 2011, cadmium has been restricted in jewelry in EU/EEA (European Union/European Economic Area) countries by the REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) regulation (No. 1907, Annex XVII, entry 23 (10). This restriction was implemented by Commission Regulation No. 494/2011 and limits the concentration of cadmium in the metal parts of jewelry and imitation jewelry articles and hair accessories to a maximum of 0.01% by weight. Jewelry and jewelry-like articles containing cadmium over the limit cannot be placed on the market in EU/EEA countries as of January 2012.
The number of non-compliant jewelry articles on the market is deemed to be extremely high. The Czech Environmental Inspectorate tested, for example, 105 random pieces of jewelry placed on the market in the Czech Republic in 2015. Twenty-three articles contained cadmium levels over the limit. The average cadmium content of those non-compliant articles was 35% w/w (weight per weight). Non-compliant articles were found in 2016 as well, the maximum amount of cadmium found in an article was 91% w/w. It can be concluded that cadmium in jewelry is not present as an unwanted contaminant but is rather deliberately used during the production of jewelry articles. Cadmium is used in all probability in the production of such articles due to its favorable properties. It is easy to utilize, resistant to rust, and relatively cheap. Jewelry articles with cadmium are also abundant in other EU/EEA states. The database of the Rapid Exchange of Information System RAPEX (The Rapid Alert System for non-food dangerous products); EU alert system for unsafe consumer products) listed, for example, 157 notifications from member states regarding cadmium jewelry over the 2012-2016 period [3]. Notifications involving cadmium jewelry amounted to 6.5% of all RAPEX notifications concerning chemical risk in 2016.
Cd content is not only limited in jewelry, but also in other objects of human daily use, such as cosmetics, [4] articles for contact with food [5,6], or children's toys [7]. Cd content is monitored in textile or plastic in EU countries [8], with the Cd limit for both types of articles being 0.01 wt %.
In case of skin contact with objects containing Cd, the possibility of dermal exposure and the emergence of various types of irritation have been discussed. Assays for assessing the dermal toxicity of Cd have been described in several publications [9][10][11]. The test of percutaneous absorption of Cd was also performed on human skin samples [12]. The aim was to determine the absorption of Cd as a chloride salt from the aqueous solution through human skin into the plasma. Only 0.5 or 0.6% of the total amount of Cd, which was contained in the aqueous solution, was then absorbed through the skin into the blood plasma. According to the authors, the surface Cd concentration has an influence on the amount of Cd diffused into the skin, but Cd transfer into the plasma is independent from the concentration of Cd applied to the skin.
To determine the amount of released analyte from a solid sample, it is advisable to perform a leaching test. An artificial human sweat was used as a leaching agent for objects that may come into contact with the skin. A number of model solutions with defined content elements, organic compounds, pH, etc. were discussed. Artificial human sweat has been used to dissolve the chemical components of jewelry, textiles, cosmetics, pharmaceuticals, industrial chemicals, and others [13]. Tests of leaching are often used for determining the amount of released Ni from objects that are in contact with the skin. Ni is a significant allergen, which can cause dermatitis and other allergic reactions amongst sensitive individuals. Determination of Ni in sweat extracts of the analyzed objects has been described in several publications [14][15][16][17][18][19].
Non-destructive methods such as X-ray fluorescence spectrometry (XRF) can be used for analysis of the surface composition of the jewelry [20]. In order to assess the amount of Cd released from jewelry into human perspiration and for the assessment of dermal exposure, a leaching test with simulated human sweat can be performed. The most commonly used methods for water solution analysis are AAS (Atomic Absorption Spectrometry), ICP OES (Inductively Coupled Plasma Optical Emission Spectrometry), and ICP MS (Inductively Coupled Plasma Mass Spectrometry). Here, we provide a new perspective on the possibility of leachate analysis and the determination of toxic elements in said leachates using Laser-Induced Breakdown Spectrometry (LIBS). Depositing a small solution volume on a solid support can be a suitable alternative method of analysis of leachates and a new option for the assessment of non-compliant subjects. This method could bring several advantages, such as speed of analysis, the minimization of the consumption of the sample, and the possibility of storing the dried samples and adequate detection limits.
Samples
Samples of cheap Cd containing jewelry were obtained from the Czech Environmental Inspectorate. The subject of our interest was a total of 13 pieces of jewelry (3 sets of earrings, 6 pendants, and 1 ring) originating from inspections of three e-shops trading in cheap Chinese goods. Illustrative photos of the analyzed jewelry are shown in Table 1.
Table 1. Surface composition of jewelry from an Energy-Dispersive X-ray Fluorescence Spectrometer (ED-XRF) and parameters of the analyzed samples.
Analysis of Surface Composition
The surface composition of the samples was analyzed using an Elva X energy-dispersive X-ray fluorescence spectrometer (Elvatech Ltd., Kiev, Ukraine) equipped with a Pd X-ray tube and a thermoelectrically cooled Si-pin detector, PF 550 (MOXTEC, Orem, UT, USA). The power supply of the X-ray tube was operated at 40 kV, and the current was set via the auto-optimization procedure taking into account the optimal loading of the detector in a range of 6000-6500 counts per second (cps). The spectra were integrated for 90 s. Each sample was analyzed at five measuring points evenly spaced on that part of the sample that was supposed to be in contact with the skin. Parts of the sample that usually do not come in direct long-term contact with the skin, such as the solder holding the stone, were omitted from the analysis. Concentrations of elements detected in the samples were calculated by the standard-less module based on the fundamental parameters method.
Leaching the Samples in an Artificial Sweat
The leaching of jewelry samples was performed with two types of artificial sweat, acidic and alkaline [21]. Acidic sweat was prepared by dissolving 0.5 g of L-histidine monochloride monohydrate (C6H9O2N3·HCl·H2O), 5 g of NaCl, and 2.2 g of NaH2PO4·2H2O in 1 L of demineralized water. The pH value was adjusted to 5.5 with 0.1 mol·L−1 NaOH. To prepare 1 L of alkaline sweat, 0.5 g of C6H9O2N3·HCl·H2O, 5 g of NaCl, and 5 g of Na2HPO4·12H2O were dissolved in demineralized water, and a 0.1 mol·L−1 NaOH solution was added to adjust the pH to 8. The volume of artificial sweat used for leaching each jewelry piece was chosen based on the sample surface area, so that 1 mL of the reagent was used for each 1 cm² of surface. Sample parts that were not supposed to come into direct contact with the skin (e.g., stones) were not included in the calculated surface area; these parts were covered with resistant adhesive tape during leaching to prevent the release of Cd. The leaching procedure lasted 7 days and was performed at 37 °C. The pieces of jewelry were then removed from the leachate, and 20 µL of the solution was spotted onto a circular piece of paper with a diameter of 17 mm and dried under an infrared lamp.
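Because the protocol uses exactly 1 mL of artificial sweat per 1 cm² of exposed surface, a measured leachate concentration translates directly into the release figures reported below. The following short Python sketch (our own helper with illustrative numbers, not part of the original analysis) makes the conversion explicit.

def released_cd(concentration_mg_per_l, surface_cm2):
    """Convert a leachate Cd concentration into total and area-normalized weekly release."""
    volume_ml = surface_cm2 * 1.0                             # leaching protocol: 1 mL of sweat per cm^2
    total_ug_per_week = concentration_mg_per_l * volume_ml    # 1 mg/L equals 1 ug/mL
    per_cm2_per_week = total_ug_per_week / surface_cm2        # numerically equals the mg/L value here
    return total_ug_per_week, per_cm2_per_week

# Illustrative call: a piece with 6 cm^2 of exposed surface and a leachate of 61.7 mg Cd/L
# released_cd(61.7, 6.0)  ->  (370.2 ug/week, 61.7 ug/cm^2/week)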
Leachate samples deposited on the paper carrier were analyzed using the commercially available compact LIBS spectrometer (LEA S500, Solar TII Ltd., Minsk, Belarus). The system consists of a dual pulse Q-switched Nd:YAG (Neodymium-doped Yttrium Aluminum Garnet) laser operating at 1064 nm. A nanosecond laser emitting two collinear pulses of 12 ns was operated at double-pulse mode with an inter-pulse delay of 7 µs. A laser beam with an energy of 110 mJ was focused on the sample surface, where the analytical point with a diameter of 200 µm was ablated. Each sample was analyzed at nine independent analytical points, while every point was ablated by one laser shot. Radiation emitted by arising plasma was led through the entrance slit of 25 µm into a Czerny-Turner monochromator and the spectral window in a range from 205 to 235 nm was recorded by a back-thinned and front-illuminated CCD (Charge-Coupled Device) camera (2048 × 14 pixels). Quantitative analysis of Cd was carried out on an analytical line of 214.441 nm. Separate calibration curves were constructed for acidic and alkaline sweat samples in a range from 0 to 40 mg·L −1 . Solutions of artificial sweat, spiked with Cd, were used as calibration standards. The precision and accuracy of the LIBS methods was validated comparing LIBS and ICP OES for a set of 5 samples of each sweat type. The mean concentration values measured by both methods were equal, although the LIBS results suffered from a higher standard deviation. The sample volume needed for the analysis was substantially lower, by contrast, in the case of LIBS, and this method also offered the possibility of the long-term storage of liquid samples deposited on the solid carrier.
Surface Analysis
The analyzed jewelry samples revealed a truly variable surface composition. As can be seen in Table 1
Leaching in Artificial Sweat
After measuring calibration standards, the limit of detection (LOD) for both acid and alkaline artificial sweat solutions was determined. The LOD was determined according to definition 3σ/s, where σ is the standard deviation of intensity calculated from 36 repeated measurements of the lowest calibration standard (blank) performed under optimal conditions, and s is the slope of the calibration curve. The calculated LOD has a value of 0.08 mg·L −1 and 0.06 mg·L −1 for acidic and alkaline artificial sweat, respectively. The relative standard deviation (RSD; %) calculated for measuring data of jewelry leachates was in a range from 4.79% to 22.6%.
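The 3σ/s definition quoted above amounts to a one-line computation; the sketch below (our own code with hypothetical variable names) assumes the 36 repeated blank intensities and the calibration slope are available.

import numpy as np

def limit_of_detection(blank_intensities, calibration_slope):
    """LOD = 3 * (standard deviation of repeated blank measurements) / calibration slope."""
    sigma = np.std(blank_intensities, ddof=1)   # sigma from the repeated blank readings
    return 3.0 * sigma / calibration_slope      # in mg/L if the slope is in counts per (mg/L)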
One piece of each pair of earrings (1B, 2B, and 3B) was leached in acidic and one piece in alkaline artificial sweat. Data presented in Table 2 reveal that the total amount of cadmium, which was released into the alkaline sweat, constitutes in all cases about 20% of the amount released into the acidic sweat. Apart from the earrings, four pieces of pendants (4A, 6A, 7A, and 9A) were leached in alkaline sweat and two pieces of pendants (5A and 10A) together with one piece of ring in acidic sweat. The total amounts of Cd released into the particular types of artificial sweat are not clearly correlated with the Cd content in the sample surface layer. The correlation coefficients calculated for Cd content in jewelry and in alkaline (r = 0.786, p = 0.04) or in acidic sweat (r = 0.830, p = 0.04) were quite high, but these results were strongly biased by the influential point of 1B. After exclusion of the influential 1B point, the correlation coefficients significantly decreased (alkaline sweat r = 0.614, p = 0.19; acidic sweat r = 0.607, p = 0.28). The same relations were observed when the correlation coefficients were calculated for Cd content measured by ED XRF and for Cd released from one square centimeter of sample over one week (% Cd w/w vs. µg Cd/cm 2 /week). The correlation coefficients for alkaline (r = 0.763, p = 0.05) and acidic (r = 0.741, p = 0.09) sweat were also quite high and similar, but these results were also biased by the influential point of 1B. In this case, the sharp decline in the correlation coefficient was also observed after exclusion of the influential 1B point (alkaline sweat r = 0.312, p = 0.55; acidic sweat r = 0.56, p = 0.33). It is apparent that the available set of samples is too small in number to be able to clearly describe the relationship between the surface composition of the sample and the amount of leached cadmium. Streicher porte et al. in their work from 2008 analyzed 21 samples of low cost jewelry containing 1.4-43.9% of Cd on the surface layer. Migration of the toxic metals was tested after sample submersion in 0.07 M HCl (Hydrochloric acid-simulation of gastric acid) for 7 days at 30 • C [20]. The observed correlation between the released Cd in µg/cm 2 /week was low (r = 0.49), which is in agreement with our results, if we evaluate the correlation after exclusion of the influential points.
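The effect of an influential sample such as 1B is easy to reproduce; the snippet below uses made-up numbers purely for illustration (they are not the measured data).

import numpy as np
from scipy import stats

surface_cd_percent = np.array([13.4, 18.0, 22.5, 27.0, 33.0, 44.6])  # illustrative % w/w values
released_cd_ug = np.array([24.0, 40.0, 52.0, 61.0, 70.0, 370.0])     # illustrative ug Cd per week

r_all, _ = stats.pearsonr(surface_cd_percent, released_cd_ug)
r_trimmed, _ = stats.pearsonr(surface_cd_percent[:-1], released_cd_ug[:-1])  # drop the influential point
print(f"r with the influential point: {r_all:.2f}; without it: {r_trimmed:.2f}")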
The obtained sample set contained 3 pairs of earrings. One earring out of the pair was leached in acidic and the other in alkaline artificial sweat. When comparing the results from leachate extracts for pairs of earrings, alkaline artificial sweat provided higher results for dissolved Cd. This is surprising because it is generally assumed that metals are better dissolved in an acidic environment.
Systemic Non-Carcinogenic Health Risk of Released Cd
A reference dose (RfD) is the regulatory limit established by the United States Environmental Protection Agency (US EPA) representing the maximum oral dose of a toxic substance below which no adverse non-carcinogenic health effects should result from a lifetime of exposure. According to the Integrated Risk Information System [22], the calculation of the RfD for chronic oral exposure was based on the assumption of increased proteinuria occurring when the Cd content in the renal cortex exceeds the value of 200 µg Cd/g wet tissue. A Cd daily intake of 0.352 mg (or 0.005 mg/kg/day for a 70-kg adult), which is necessary for the concentration of this element in the renal cortex to reach the critical value, was estimated based on the work of Friberg et al. [23]. This work assumed a Cd biological half-life (t1/2) of 19 years, an exposure duration of 50 years, and an absorption of 4.5% of the Cd contained in food. The US EPA postulated only 2.5% absorption of Cd from food and consequently established the NOAEL (No-Observed-Adverse-Effect Level) value of 0.01 mg of Cd/kg/day. An RfD of 0.001 mg Cd/kg/day was then obtained by dividing the NOAEL by the uncertainty factor (UF) of 10. The US EPA document available in the IRIS (Integrated Risk Information System) database unfortunately does not provide any further information regarding the toxicokinetic model used. When the one-compartment standard first-order elimination model with bolus administration described by Amzal et al. [24] was used, the same value of the NOAEL (0.01 mg Cd/kg/day) was obtained for the following set of parameters: a Cd gastrointestinal absorption index equal to 2.5%; a fraction of absorbed Cd transported to the kidney equal to 33%; a ratio of the Cd content in the renal cortex to that in the entire kidney of 1.25; a kidney weight equal to 300 g; and a Cd biological half-life t1/2 of 18.3 years.
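The back-calculation behind the NOAEL can be illustrated with a minimal one-compartment, first-order elimination sketch using the parameter set quoted above (2.5% absorption, 33% of absorbed Cd routed to the kidney, cortex-to-whole-kidney ratio of 1.25, 300 g kidney, t1/2 = 18.3 years, 50 years of exposure). This is only an approximation of the model of Amzal et al.; the exact formulation in [24] may differ.

import math

def cortex_conc_ug_per_g(daily_dose_mg_per_kg,
                         body_weight_kg=70.0,
                         absorption=0.025,
                         frac_to_kidney=0.33,
                         cortex_ratio=1.25,
                         kidney_weight_g=300.0,
                         half_life_years=18.3,
                         exposure_years=50.0):
    k = math.log(2) / (half_life_years * 365.0)              # elimination rate constant, 1/day
    intake_to_kidney = (daily_dose_mg_per_kg * body_weight_kg
                        * absorption * frac_to_kidney)        # mg/day reaching the kidney
    days = exposure_years * 365.0
    kidney_burden_mg = intake_to_kidney / k * (1.0 - math.exp(-k * days))  # accumulation with first-order loss
    cortex_mg_per_g = kidney_burden_mg * cortex_ratio / kidney_weight_g
    return cortex_mg_per_g * 1000.0                           # convert mg/g to ug/g

# A daily dose equal to the NOAEL of 0.01 mg Cd/kg/day brings the renal cortex
# to roughly the critical 200 ug/g concentration after 50 years of exposure.
print(cortex_conc_ug_per_g(0.01))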
The EPA did not establish a limit analogous to the RfD for dermal exposure. To be able to assess the health risk of Cd released from low-cost jewelry, we performed our own approximation of a dermal RfD based on the same toxicokinetic model. The parameters mentioned above were used, except for the absorption index, which was set to 0.6%. The calculated value of the NOAEL for dermal exposure was 0.042 mg/kg/day. The resulting dermal RfD was again obtained by dividing the NOAEL by the uncertainty factor UF = 10, giving a value of 0.004 mg/kg/day. The dermal RfD estimated in such a way represents a kind of worst-case scenario, inasmuch as the actual Cd absorption into the plasma could be lower than the value assumed here, with published data for dermal absorption into the plasma varying between 0.1% and 0.6% [11].
The total amounts of Cd in µg released from a particular piece of jewelry (TRA, Total Released Amount) into acidic or alkaline artificial sweat over one week of leaching are summarized in Table 1. The maximum absorbable daily dose (MADD) of Cd from a particular piece of jewelry was calculated based on the assumed Cd bioavailability of 0.6% and an average human body weight of 70 kg according to the following equation: MADD = 0.006 × TRA/(70 × 7). The factor of 7 in the denominator converts the amount of Cd released over one week of leaching to a daily exposure. The risk characterization ratio (RCR) was then calculated as the hundredfold ratio of MADD and RfD. This factor serves as an estimate of what percentage of the safe daily dose can be covered by Cd released from the jewelry. The amount of Cd leached into alkaline artificial sweat typically represents about 0.01-0.02% of the safe daily dose, while the maximal RCR, for Sample 1B, was 0.1%. In the case of acidic artificial sweat, the RCR values are higher and more variable (ranging from 0.05% to 0.46%). Although the process of health risk estimation used here is extremely simplified, it can be concluded that the evaluated set of Cd-containing jewelry does not pose any serious health risk in terms of systemic non-carcinogenic effects.
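A worked sketch of the MADD and RCR arithmetic defined above is given below. The TRA values are hypothetical placeholders rather than entries of Table 1; only the formulas and constants (0.6% bioavailability, 70 kg body weight, 7 days, dermal RfD of 0.004 mg/kg/day) follow the text.

DERMAL_RFD = 0.004          # mg Cd per kg body weight per day
BODY_WEIGHT = 70.0          # kg
DAYS = 7.0                  # one week of leaching
BIOAVAILABILITY = 0.006     # 0.6% dermal absorption

def rcr_percent(tra_ug):
    """Risk characterization ratio (% of the safe daily dose) from the total
    released amount TRA in micrograms per week."""
    tra_mg = tra_ug / 1000.0
    madd = BIOAVAILABILITY * tra_mg / (BODY_WEIGHT * DAYS)   # mg/kg/day
    return 100.0 * madd / DERMAL_RFD

# Hypothetical TRA values for one alkaline and one acidic leachate
for sample, tra in {"sample A (alkaline)": 60.0, "sample B (acidic)": 250.0}.items():
    print(sample, f"RCR = {rcr_percent(tra):.3f} %")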
Conclusions
The composition of the surface layer of 13 low-cost jewelry samples with a high content of Cd was analyzed by ED XRF. These samples were subsequently leached in artificial acidic and alkaline sweat, and the resulting digests were applied onto solid carriers and analyzed by LIBS. The content of Cd in the jewelry surface layer ranged from 13.40% to 44.64% (w/w), with the measured values significantly exceeding the permissible limits in the EU or U.S. The results of the analysis suggest that this jewelry should not be available on the market in EU countries. The analysis of the leachates indicates that acidic artificial sweat released an amount of Cd roughly 5-fold higher than that released into alkaline artificial sweat. The relationship between the surface composition of the samples and the amount of Cd released into artificial sweat was not clearly demonstrated. The low bioavailability of Cd for dermal exposure, along with the small amounts of Cd released from the surface layer of the jewelry, leads to the conclusion that even long-term use of this jewelry does not constitute a major health risk in terms of the biological and toxic effects of Cd. The maximum amount of Cd released from the analyzed jewelry makes up about 0.5% of a safe dose. | 2017-07-26T20:24:58.343Z | 2017-05-01T00:00:00.000 | {
"year": 2017,
"sha1": "1f74cb7aded197cdca1b1e711250c01b7cfacf0c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/14/5/520/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1f74cb7aded197cdca1b1e711250c01b7cfacf0c",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
238583582 | pes2o/s2orc | v3-fos-license | A Theory of Tournament Representations
Real world tournaments are almost always intransitive. Recent works have noted that parametric models which assume $d$ dimensional node representations can effectively model intransitive tournaments. However, nothing is known about the structure of the class of tournaments that arise out of any fixed $d$ dimensional representations. In this work, we develop a novel theory for understanding parametric tournament representations. Our first contribution is to structurally characterize the class of tournaments that arise out of $d$ dimensional representations. We do this by showing that these tournament classes have forbidden configurations which must necessarily be union of flip classes, a novel way to partition the set of all tournaments. We further characterise rank $2$ tournaments completely by showing that the associated forbidden flip class contains just $2$ tournaments. Specifically, we show that the rank $2$ tournaments are equivalent to locally-transitive tournaments. This insight allows us to show that the minimum feedback arc set problem on this tournament class can be solved using the standard Quicksort procedure. For a general rank $d$ tournament class, we show that the flip class associated with a coned-doubly regular tournament of size $\mathcal{O}(\sqrt{d})$ must be a forbidden configuration. To answer a dual question, using a celebrated result of \cite{forster}, we show a lower bound of $\mathcal{O}(\sqrt{n})$ on the minimum dimension needed to represent all tournaments on $n$ nodes. For any given tournament, we show a novel upper bound on the smallest representation dimension that depends on the least size of the number of unique nodes in any feedback arc set of the flip class associated with a tournament. We show how our results also shed light on upper bound of sign-rank of matrices.
Introduction
In this work, we lay the foundations for a theory of tournament representations. A tournament is a complete directed graph and arises naturally in several applications including ranking from pairwise preferences, sports modeling, social choice, etc. We say that a tournament T on n nodes can be represented in d dimensions if there exists a skew symmetric matrix M ∈ R^{n×n} of rank d such that a directed edge from i to j is present in T if and only if M_{ij} > 0. Real-world tournaments are almost always intransitive ([16, 11]), and it is not known which tournaments can be represented in a given number of dimensions. This is important to understand for the following reason: as a modeler of preference relations using tournaments, it is often more natural to have structural domain knowledge such as 'The tournaments under consideration do not have long cycles' as opposed to algebraic domain knowledge such as 'The rank of the skew symmetric matrix associated with the tournaments of interest is at most k'. However, algorithms that learn rankings from pairwise comparison data typically need as input the algebraic quantity, namely the rank of the skew symmetric matrices associated with tournaments or, equivalently, the dimension in which they are represented ([14]). To bridge the gap between the structural and the algebraic world, we ask and answer two fundamental questions regarding the representations of tournaments.
1) What structurally characterizes the class of tournaments that can be represented in d dimensions?
2) Given a tournament T on n nodes, what is the minimum dimension d needed to represent it? Figure 1: (Left) Partitions of the set of all tournaments on n nodes using flip classes. Every shaded region is a flip class partition and every circle indicates a tournament. The flip class that contains the transitive tournament (Flip class 1) is precisely the set of all locally transitive tournaments. This is also the set of all tournaments that can be represented in 2 dimensions (Section 5). Every flip class contains a canonical representative termed the R-cone (Section 4), indicated using the larger circle inside each flip class. The tournaments that cannot be represented using d dimensions appear as unions of forbidden flip classes (Flip class k in the figure) (Section 4). (Right) Explicit flip class partition of the 4 possible non-isomorphic tournaments on 4 nodes. Tournaments in flip class 1 can be represented using 2 dimensions whereas tournaments in flip class 2 cannot (see Section 5).
We answer the first question by investigating the intricate structure of the rank d tournament class via the notion of forbidden configurations. Specifically, we show that the set of forbidden configurations for the rank d tournament class must necessarily be a union of flip classes, a novel way to partition the set of all tournaments into equivalence classes. We explicitly characterize the forbidden configurations for the rank 2 tournament class and exhibit a forbidden configuration for the general rank d tournament class. Specifically, we show that the rank 2 tournaments are equivalent to locally transitive tournaments, a previously studied class of tournaments ([7]). Our results throw light on the connections between transitive and locally transitive tournaments and also let us develop a classic Quicksort-based algorithm that solves the minimum feedback arc set problem on rank 2 tournaments with O(n^2) time complexity. Our results for the general rank d tournament class have connections to the classic, long-standing Hadamard conjecture, and we discuss this as well. Figure 1 gives a glimpse of some of the main results.
We answer the second question by proving lower and upper bounds on the smallest dimension needed to represent a tournament on n nodes. We exhibit a lower bound of O(√n) using a variation of the celebrated dimension complexity result of [10] for sign matrices. To show upper bounds, we introduce a novel parameter associated with a tournament called the Flip Feedback Node set of a tournament. This quantity depends on the least number of unique nodes in any feedback arc set of an associated tournament class for the tournament of interest and linearly upper-bounds the representation dimension of any tournament. We show how our results can be used to provide upper bounds on the classic notion of the sign-rank of a matrix. Previously known upper bounds for sign rank depended on the VC dimension of the associated binary function class ([1]). Our upper bounds, on the other hand, have a graph-theoretic flavour.
Organization of the Paper: We discuss briefly in Section 2 the foundational works this paper builds upon. We introduce necessary preliminaries in Section 3. The answer to the first question, about the structural characterization of d dimensional tournament classes, spans Sections 4, 5 and 6. We devote Section 7 to answering the second question about the number of dimensions needed to represent a tournament. Section 7.1 explores connections of our results to upper bounds on the sign rank of a sign matrix. Finally, we conclude in Section 8.
Related Work
The work in this paper builds on several pieces of work across different domains. We summarize below the most important related works under different categories. Intransitive Pairwise Preference Models: One of the main reasons to study representations of tournaments is to model pairwise preferences. Parametric pairwise preference models that can model intransitivity have gained recent interest. [14] develop a low-rank pairwise ranking model that can model intransitivity; however, their study and results were restricted to just the transitive tournaments in these classes. A generalization of the classical Bradley-Terry-Luce model ([3], [12]) was studied in [4]. However, no structural characterization is known. The same holds for the more recent models studied in [6], [13] and [2].
Flip Classes: The notion of flip classes, a novel way to partition the set of all tournaments on n nodes, was first introduced in [8]. The goal there, however, was completely different and focused on studying equilibria of certain generalized rock-paper-scissors games on tournaments. Interestingly, and perhaps surprisingly, the notion of flip classes turns out to be fundamental to our study of understanding forbidden configurations of d dimensional tournament classes.
Dimension Complexity and Sign Rank: Dimension complexity and sign rank of sign pattern matrices were studied in [9]. These results have found significant applications in learning theory and lower bounds in computational complexity ( [10]). More recently, [1] study the sign rank for function classes with fixed Vapnik-Chervonenkis (VC) dimension and show upper bounds. Our upper bounds however depend on certain graph theoretic properties.
Preliminaries
Tournaments: A tournament T is a complete directed graph. The number of nodes n in T will usually be clear from the context or will be explicitly specified. For nodes i and j in T, we say i ≻_T j if there is a directed edge from i to j. Given a node i, we define the out and in neighbours of i as T_i^+ = {j : i ≻_T j} and T_i^- = {j : j ≻_T i} respectively. Given a set of nodes S, we denote by T(S) the induced sub-tournament of T on the nodes in S.
Feedback Arc Set and Pairwise Disagreement Error: Given a permutation σ on n nodes, the feedback arc set of σ w.r.t. the tournament T is defined as the set of edges of T oriented against σ, i.e., E_σ(T) = {(i, j) : i ≻_T j and σ(i) > σ(j)}; the pairwise disagreement error of σ is |E_σ(T)|. It is known that finding the σ that minimizes the pairwise disagreement error w.r.t. a general tournament T is an NP-hard problem ([5]).
Skew Symmetric Tournament Classes: In this paper, whenever we refer to a skew symmetric matrix M ∈ R^{n×n}, we always assume M_{ij} ≠ 0 for all i ≠ j and M_{ii} = 0 for all i. Given such an M, we denote by T{M} the tournament on n nodes induced by M, where i ≻_T j ⇐⇒ M_{ij} > 0. We refer to the class of tournaments induced by rank d skew symmetric matrices as the rank d tournament class.
Forbidden Configurations: A tournament class 𝒯 is a collection of tournaments. 𝒯 is said to forbid a tournament T if no tournament in 𝒯 has a sub-tournament that is isomorphic to T. We call T a forbidden configuration for 𝒯 if 𝒯 forbids T but does not forbid any sub-tournament of T. For example, the class of all acyclic/transitive tournaments has the 3-cycle as a forbidden configuration, i.e., the tournament T on three nodes i, j, k where i ≻_T j ≻_T k ≻_T i. Positive Spans: A set of vectors H = {h_1, . . . , h_n} ⊂ R^d is said to positively span R^d if for any w ∈ R^d there exist non-negative constants c_1, . . . , c_n ≥ 0 such that Σ_i c_i h_i = w. If H positively spans R^d, then there does not exist a w ∈ R^d such that w^T h_i > 0 for all i. This is an easy consequence of Farkas' Lemma.
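The Farkas-type condition above suggests a simple numerical test: search for a single direction w with w^T h_i > 0 for all i by maximizing the worst-case margin with a small linear program. The sketch below is an illustrative check of this necessary condition only (it does not verify linear spanning) and is not a procedure taken from the paper.

import numpy as np
from scipy.optimize import linprog

def has_positive_direction(H, tol=1e-9):
    """Return True if some w satisfies w . h_i > 0 for every row h_i of H.
    If H positively spans R^d this must return False; the converse additionally
    requires H to linearly span R^d, which is not checked here."""
    n, d = H.shape
    # variables x = (w_1, ..., w_d, t): maximize t subject to h_i . w >= t, -1 <= w_j <= 1
    c = np.zeros(d + 1)
    c[-1] = -1.0                                    # linprog minimizes, so minimize -t
    A_ub = np.hstack([-H, np.ones((n, 1))])         # encodes t - h_i . w <= 0 for every i
    b_ub = np.zeros(n)
    bounds = [(-1.0, 1.0)] * d + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return -res.fun > tol                           # a strictly positive margin was found

# Three unit vectors 120 degrees apart (a canonical 3-cycle representation) positively
# span R^2, so no strictly positive direction exists; a set confined to a half-plane has one.
angles = np.deg2rad([0.0, 120.0, 240.0])
H_cycle = np.stack([np.cos(angles), np.sin(angles)], axis=1)
print(has_positive_direction(H_cycle))                                        # False
print(has_positive_direction(np.array([[1.0, 0.1], [1.0, -0.1], [0.9, 0.0]])))  # True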
Remark on Notation:
We reiterate that we use T(·), T{·} and T[·] to mean different objects -the tournament induced by a subset of nodes, the tournament induced by a skew symmetric matrix and the tournament induced by a representation of a set of vectors respectively. These will be usually clear from the context.
Flip Classes, Forbidden Configurations and Positive Spans
The main purpose of this section is to understand the space of forbidden configurations of rank d tournaments. The main result of this section shows that the forbidden configurations for rank d tournament classes occur as unions of certain carefully defined equivalence classes of non-isomorphic tournaments. Towards this, we define the notion of flip classes, which was first introduced in [8], although in a different context: Definition 1. Given a tournament T on n nodes and a set S ⊆ [n], define φ_S(T) to be the tournament obtained from T by reversing the orientation of all edges (i, j) such that i ∈ S, j ∉ S.
In other words, φ_S(T) is obtained from T by reversing the edges across the cut (S, S̄) = {(i, j) : i ∈ S, j ∉ S}, where S̄ denotes the complement of S.
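For concreteness, the cut-flip operation φ_S can be sketched on an adjacency-matrix representation as below; this is only an illustrative helper, not code from the paper.

import numpy as np

def flip_cut(A, S):
    """Return phi_S(A): reverse every edge with exactly one endpoint in S.
    A is a boolean adjacency matrix where A[i, j] == True means i beats j."""
    A = A.copy()
    n = A.shape[0]
    in_S = np.zeros(n, dtype=bool)
    in_S[list(S)] = True
    for i in range(n):
        for j in range(n):
            if i != j and in_S[i] != in_S[j]:   # edge crosses the cut (S, complement of S)
                A[i, j] = not A[i, j]
    return A

# Example: the 3-cycle 0 -> 1 -> 2 -> 0; flipping the cut around node 0 turns it
# into a transitive tournament (1 beats 0 and 2, 0 beats 2).
A = np.array([[False, True, False],
              [False, False, True],
              [True, False, False]])
print(flip_cut(A, {0}))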
Definition 2.
A class of tournaments on n nodes is called cut-equivalent if for every pair of tournaments T, T′ in the class, there exists an S ⊆ [n] such that T′ is isomorphic to φ_S(T). It is easy to show that cut-equivalence forms an equivalence relation over the set of all tournaments on n nodes [8]. The corresponding equivalence classes are called flip classes. We denote by F(T) the flip class of T, i.e., the equivalence class of all tournaments cut-equivalent to T. In the following theorem we show the fundamental relation between flip classes and forbidden configurations. Theorem 1. Let T, a tournament on k nodes, be a forbidden configuration for some rank d tournament class. Then every tournament in the flip class of T is also a forbidden configuration for the rank d tournament class.
Proof. Assume there is a T′ ∈ F(T) which is not forbidden for the rank d tournament class, so that some tournament representable in d dimensions contains T′ as a sub-tournament. Since T′ = φ_S(T) for some S, negating the representation vectors of the nodes in S yields a representation of the same rank that realizes T as a sub-tournament, which is a contradiction to the assumption that T is a forbidden configuration for rank d tournaments. Corollary 1. The set of forbidden configurations of a rank d tournament class is a union of flip classes. Proof. This follows directly from Theorem 1.
Thus, to characterize the forbidden configurations of rank d tournament classes, we need to understand the flip classes. We begin with the following simple but useful definition.
An R-cone is the tournament R together with one additional node, the cone node, that either beats every node of R or loses to every node of R. R-cones are useful as they can be viewed, in some sense, as canonical tournaments in flip classes. This is justified by the following observation. Proposition 1. Every flip class contains an R-cone for some tournament R.
Proof. Consider any tournament T. Let T′ = φ_{{i} ∪ T_i^-}(T) for an arbitrary node i. By definition T′ ∈ F(T). Also, T′ is an R-cone, coned by i.
The above observation says that to identify the forbidden configurations for a given tournament class, it suffices to identify all forbidden R-cones. Then by Corollary 1, the associated flip classes will be the set of all forbidden configurations. However, it does not throw light on what property the tournament R must satisfy. The following lemma establishes this. Lemma 2. Let R be a tournament with the property that H positively spans R^d whenever R = T[H] for some representation H = {h_1, . . . , h_n} ⊂ R^d. Then the rank d tournament class forbids R-cones.
Proof. Consider any representation H that realizes R. By the assumption of the lemma, H positively spans R^d. Suppose, for contradiction, that some rank d representation realizes an R-cone. Then, writing the pairing in the skew-symmetric form, there would exist a vector w ∈ R^d with w^T h_i > 0 for all i (or < 0 for all i, which is handled symmetrically). But this contradicts the conclusion drawn earlier from Farkas' lemma, namely that no such w can exist when H positively spans R^d.
The above lemma is extremely helpful in the sense that it reduces the study of forbidden configurations to the study of finding tournaments such that any representation that realizes them must necessarily positively span the entire Euclidean space. Note that there may be several representations which positively span the entire space; this alone does not mean that the associated coned tournaments are forbidden configurations. Instead, we start with an R-cone and conclude that it is forbidden if every representation that realizes R necessarily positively spans the entire space. It is a non-trivial problem to identify such tournaments R for an arbitrary dimension d.
In the following sections, we explicitly identify the only forbidden flip class for rank 2 tournaments, one forbidden flip class for rank 4 and then (a potentially weaker) forbidden flip class for the general rank d case.
Rank 2 Tournaments ⇐⇒ Locally Transitive Tournaments
The goal of this section is to characterize the forbidden configurations of rank 2 tournaments. Thanks to Lemma 2, this reduces to the problem of identifying a tournament whose representations necessarily span the entire space. The following lemma exhibits this tournament. Lemma 3. Let H = {h_1, h_2, h_3} ⊂ R^2 be a two-dimensional representation of 3 nodes which induces a 3-cycle tournament. Then the set H positively spans R^2. Furthermore, the 3-cycle is the only such tournament on 3 nodes.
The above lemma immediately implies that the coned 3-cycle is a forbidden configuration for the rank 2 tournament class; it is in fact the only one. However, for the purposes of generalizing our result (which, as we will see, will be useful when discussing the higher dimensional case), we will view the 3-cycle as a special case of a doubly regular tournament (defined next).
A tournament on n nodes is doubly regular if every pair of nodes jointly beats the same number of common nodes, namely (n − 3)/4; such tournaments can exist only when n ≡ 3 (mod 4). Trivially, the 3-cycle is the only 3-doubly-regular tournament. The following proposition establishes that the flip class of the coned 3-doubly-regular tournament contains only itself. Proposition 2. The flip class of the 3-doubly-regular-cone does not contain any other tournament.
Proof. This is easily verified by checking all tournaments in F(T), where T is the coned 3-cycle. We have thus far established that any rank 2 tournament on n nodes forbids the 3-doubly-regular-cone. The advantage of this result is that we can go one step further and explicitly characterize the rank 2 tournament class. To do this, we need the definition of a previously studied tournament class [7]: a tournament T is locally transitive if, for every node i, both induced sub-tournaments T(T_i^+) and T(T_i^-) are transitive.
Before seeing why locally transitive tournaments are relevant to our study, we first show that they are intimately connected to transitive tournaments via the following characterization. Theorem 5. (Connection between Transitive and Locally Transitive Tournaments) The set of all non isomorphic locally transitive tournaments on n nodes is equivalent to the flip class of the transitive tournament on n nodes.
We will see next that this key result allows us to immediately characterize the rank 2 tournament class. We will also see later (Section 7) that this result is crucial in determining an upper bound on the dimension needed to represent a given tournament. Theorem 6. (Characterization of rank 2 tournaments) A tournament T on n nodes is locally transitive if and only if there exists a skew symmetric matrix M ∈ R^{n×n} with rank(M) = 2 such that T{M} = T.
It is perhaps surprising that a purely structural description of a tournament class, namely that of local transitivity, turns out to exactly characterize the rank 2 tournament class. To the best of our knowledge, this characterization appears novel and has not been previously noticed. One of the interesting consequences of the above characterization is that the minimum feedback arc set problem on rank 2 tournaments can be solved using a standard Quicksort procedure. This is formalized below. Theorem 7. (Minimum Feedback Arc Set is Poly-time Solvable for Rank 2 Tournaments) Let T be a locally transitive tournament on n nodes and let σ_1 be the permutation returned by running a standard Quicksort algorithm choosing 1 as the initial pivot node, where the outcome of a comparison between items i and j is given by the orientation of the edge between i and j in T. Let σ_k be obtained from σ_1 by k − 1 clockwise cyclic shifts, for k ∈ [n]. Let E_k be the feedback arc set of σ_k w.r.t. T.
Then min k |E k | achieves the minimum size of the feedback arc set for T.
Proof. (Sketch) The proof involves two steps: first arguing that by fixing any pivot, quick sort would return a ranking that is a cyclic shift of σ 1 . The second step involves inductively arguing that one of the cyclic shift must necessarily minimize the feedback arc set.
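The procedure of Theorem 7 can be sketched as follows: quicksort the nodes using the tournament as comparator, then evaluate every cyclic shift of the resulting order and keep the best one. The implementation below is an illustrative reading of the theorem; conventions such as which side of the pivot the winners are placed on, and which pairs are counted as feedback arcs, are spelled out in the comments and may differ in detail from the paper.

def quicksort_order(nodes, beats):
    # beats[i][j] == True means i beats j; nodes that lose to the pivot go left,
    # nodes that beat the pivot go right, then recurse (one possible convention).
    if len(nodes) <= 1:
        return list(nodes)
    pivot, rest = nodes[0], nodes[1:]
    left = [v for v in rest if beats[pivot][v]]
    right = [v for v in rest if beats[v][pivot]]
    return quicksort_order(left, beats) + [pivot] + quicksort_order(right, beats)

def feedback_arcs(order, beats):
    # With losers placed earlier, an edge u -> v with u appearing before v disagrees
    # with the order, so those pairs are counted as the feedback arc set.
    return sum(1 for p, u in enumerate(order)
                 for v in order[p + 1:] if beats[u][v])

def min_feedback_over_shifts(beats):
    n = len(beats)
    order = quicksort_order(list(range(n)), beats)
    shifts = [order[k:] + order[:k] for k in range(n)]
    return min(feedback_arcs(s, beats) for s in shifts)

# Example: a 4-node locally transitive tournament (0 > 1 > 2 > 3 plus the edge 3 -> 0).
beats = [[False, True, True, False],
         [False, False, True, True],
         [False, False, False, True],
         [True, False, False, False]]
print(min_feedback_over_shifts(beats))   # 1 back-edge is the best achievable here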
Rank 4 Tournaments
We now turn to rank 4 tournaments. We could have directly considered rank d tournaments, but it turns out that what we can show is a slightly stronger result for rank 4 tournaments than the general case and so we focus on them separately.
While it is arguably simple in the rank 2 case to identify the tournament that necessitates the positive spanning property, it is not immediately clear in the rank 4 case. A first guess would be to consider the regular tournaments (as the 3-cycle for rank 2 is also a regular tournament) on 5 or 7 nodes. However, these turn out to be insufficient, as one can construct counterexamples of regular tournaments on up to 7 nodes with representations that do not span the entire R^4. In fact, as we defined earlier, the right way to generalize to higher dimensions turns out to be via doubly regular tournaments. Theorem 8. The 11-doubly-regular-cone is a forbidden configuration for the rank 4 tournament class.
Note that while for the rank 2 case we were able to prove that the only forbidden flip class is the one that contains the coned 3-cycle, we have not shown that the only forbidden configuration for the rank 4 class is the 11-doubly-regular-cone. In fact, we believe that the smallest forbidden tournament for the rank 4 class is the 7-doubly-regular-cone; however, we have not been able to prove this. On the other hand, as we will see in the next section, the result in Theorem 8 is still stronger than the result for general rank d tournaments.
Having discussed rank 2 and rank 4 cases separately, we next turn our attention to the general rank d tournament class.
Rank d tournaments and the Hadamard Conjecture
From the understanding of rank 2 and rank 4 tournament classes in the previous sections, and noting that the corresponding forbidden configurations are intimately related to doubly regular tournaments, it is tempting to conjecture that this is true in general. Conjecture 1. The rank 2d tournament class forbids (4d − 1)-doubly-regular-cones.
Ideally, the conjecture above should carry the qualifier 'if they exist' for the (4d − 1)-doubly-regular-cone. This is because of the equivalence between doubly regular tournaments on 4d − 1 nodes and Hadamard matrices in {+1, −1}^{4d×4d} ([15]). A matrix H ∈ {+1, −1}^{n×n} is called Hadamard if H^T H = nI, where I is the identity matrix. It is known that there is a bijection between skew Hadamard matrices and doubly regular tournaments [15]. A long-standing unsolved conjecture about Hadamard matrices is the following: Conjecture 2. (Hadamard) There exists a Hadamard matrix of order 4d for every d > 0.
If Conjecture 1 were true, then it would imply the existence of 4d − 1-doubly regular tournament for every d and thus would imply the Hadamard conjecture is true. In fact, Conjecture 1 being true would say more which we state below: Conjecture 3. There exists a skew symmetric Hadamard matrix of order 4d for every d > 0.
The main result of this section is a weaker form of the conjecture: Theorem 9. The rank 2(d − 1) tournament class forbids (12d^2 − 1)-doubly-regular-cones, if they exist.
Proof. The result follows the arguments in [10]. We note that the arguments in [10] work only for non-zero sign matrices. Overloading T to denote the signed adjacency matrix of the corresponding tournament, we consider the non-zero sign matrix G = T + diag(b), where b ∈ {1, −1}^n. By Gershgorin's circle theorem, and exploiting the fact that any two rows of a doubly regular tournament are orthogonal, one can show that ρ_i(G)^2 ≤ ρ_i(T)^2 + 2n − 2, where ρ_i denotes the i-th largest singular value of the corresponding matrix. It then follows from [10] that if G has a representation in dim dimensions, then it must be the case that

dim ≥ n / ρ_1(G).    (1)

Noting that for a doubly regular cone T on n nodes ρ_i(T)^2 = n − 1 for all i, we get ρ_1(G)^2 ≤ 3n − 3 < 3n. Thus, to get a matrix M such that sign(M) = G, one needs at least √(n/3) dimensions, i.e., rank(M) ≥ √(n/3). Now we show that this is also a lower bound for representing T. To see this, assume for the sake of contradiction that T has a representation H in d dimensions where d < √(n/3). The entries of the perturbation matrix E ∈ R^{d×n} can be chosen to be small enough such that the sign of the off-diagonal entries of H^T A_rot H is the same as that of (H + E)^T A_rot H. However, the diagonal entries of (H + E)^T A_rot H can get an arbitrary sign pattern, say b. Let G be the sign pattern matrix corresponding to (H + E)^T A_rot H. By definition, G has a representation in dimension d < √(n/3). But this contradicts inequality (1). Thus, it must be the case that T has a representation in dimension at least √(n/3). Finally, setting n = 12d^2, the number of nodes in the doubly regular cone as given in the theorem, we get that at least 2d dimensions are needed to embed such a tournament. The result follows.
How Many Dimensions are needed to Represent A Tournament?
The previous sections considered a specific rank d tournament class and tried to characterize it using forbidden configurations. In this section, we turn to the dual question of understanding the minimum number of dimensions needed to embed a tournament. We start by considering not a single tournament T but the set of all tournaments on n nodes. We show below a general result which provides a lower bound on the minimum dimension needed to embed any tournament on n nodes, namely that in the worst case at least O(√n) dimensions are necessary. Proof. The proof uses the same ideas as the first part of the proof of Theorem 9.
The result follows the arguments in the celebrated work of [10]. As [1] point out, Forster's technique cannot be stretched further in obvious ways to get upper bounds.
The above theorem tells us that in the worst case at least O(√n) dimensions are necessary to represent all tournaments on n nodes. In fact, if Conjecture 1 were true, the minimum dimension would be O(n). However, in practice one might not encounter tournaments with such extremal/worst-case properties. Typically, a smaller number of dimensions might be enough to represent tournaments of practical interest. Our goal below is to upper bound the number of dimensions needed to embed a given tournament T.
Recall that E_σ(T) denotes the feedback arc set of a permutation σ w.r.t. a tournament T. We define the number of nodes involved in the feedback arc set as follows: θ(σ, T) = |{i : ∃ j such that σ(i) > σ(j) and (i, j) ∈ E_σ(T)}|. We next define a crucial quantity µ(T), which we term the Flip Feedback Node set size. This quantity will determine an upper bound on the dimension in which a tournament can be represented: µ(T) = min_{T′ ∈ F(T)} min_σ θ(σ, T′). In words, given a tournament T, the quantity µ(T) captures the minimum number of nodes involved in any feedback arc set among all tournaments in the flip class of T. For instance, if T is a locally transitive tournament then µ(T) is 0, as T is necessarily in the flip class of a transitive tournament and so the E_σ corresponding to the topological ordering of the transitive tournament is empty. As another example, consider T to be the coned 3-cycle. The flip class of this tournament contains only itself and the best permutation has one edge in the feedback arc set; thus µ(T) = 1. In general, it is trivially true that µ(T) is upper bounded by n, the number of nodes. However, µ(T) could be much smaller than n depending on T. The main result of this section is the theorem below, which shows that µ(T) gives an upper bound on the number of dimensions needed to represent any tournament. Theorem 11. Any tournament T can be represented in at most 2(µ(T) + 1) dimensions. Proof. Given a tournament T, we first show that we can start with an arbitrary transitive tournament and add enough rank 2 corrections to obtain a representation for T. Every addition of a rank 2 matrix will increase the representation dimension by at most 2. The result will then follow by noting that for the choice of the transitive tournament which minimizes the number of corrections needed, one needs at most 2(µ(T) + 1) representation dimensions.
Let T′ be an arbitrary transitive tournament which has a 2 dimensional representation and let M ∈ R^{n×n} be the associated skew symmetric matrix that represents T′. W.l.o.g., assume that M_{ij} > 0 if and only if i < j. Define E_k(T) = {i : i < k and k ≻_T i}. By definition, the feedback arc set E_σ(T) = ∪_{k=1}^{n} {(k, i) : i ∈ E_k(T)}, where σ = [1, . . . , n] is the topological order corresponding to the transitive tournament T′. We start with k = n and correct the feedback arc errors arising from E_k iteratively. Define ∆_n = min_{i<n}(M_{in} + ε) for some small enough ε > 0. Let u ∈ R^n be such that u_i = ∆_n for all i ∈ E_n and u_i = 0 otherwise. Let v ∈ R^n be such that v_n = −∆_n and v_i = 0 for all i ≠ n. It is easy to verify that M + uv^T − vu^T represents a tournament that has the feedback arc set ∪_{k=1}^{n−1} {(k, i) : i ∈ E_k(T)}, i.e., the errors in E_n have been corrected. The cost of correcting the errors is adding a rank 2 skew symmetric matrix, which increases the representation dimension by at most 2. One can repeat the same procedure for n − 1, n − 2, . . . until all errors are corrected.
The upper bound in the theorem follows noting that the above argument works for any transitive tournament T and so we can start with the one which has the least number of nodes involved in the feedback arc set to minimize the number of extra dimensions needed to represent T.
Remark 1:
The above theorem says that one can always obtain a representation H in dimension d = O(µ(T)) that realizes T. The bound gets tighter for tournaments with smaller feedback arc sets, which is what one might typically expect in practice. Note that even for some tournaments that may have a large feedback arc set, the associated flip class might contain a tournament with a smaller feedback arc set.
Remark 2:
We note that in general µ(T) is not necessarily the cardinality of the minimum feedback arc set among all tournaments in the flip class of T. Instead µ(T) captures the cardinality of the set of nodes involved in any feedback arc set. To see why these two could be different, consider the tournament in Figure 2. Here, σ a = [6 1 2 3 4 5] is the permutation minimizing the feedback arc set. Let σ b = [1 2 3 4 5 6]. Note that |E σa | = 2, θ(σ a , T) = |{1, 2}| = 2. However |E σ b | = 3, yet θ(σ b , T) = |{6}| = 1. Thus, σ b gives a tighter upper bound on the dimension needed to represent T.
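For very small tournaments, θ(σ, T) and µ(T) can be computed by brute force directly from the definitions above, as in the sketch below; the enumeration over all cuts and permutations is exponential and is meant only to make the definitions concrete.

from itertools import permutations, combinations

def flip_cut(beats, S):
    n = len(beats)
    out = [row[:] for row in beats]
    for i in range(n):
        for j in range(n):
            if i != j and ((i in S) != (j in S)):
                out[i][j] = not beats[i][j]
    return out

def theta(order, beats):
    pos = {v: p for p, v in enumerate(order)}
    bad_nodes = {i for i in order for j in order
                 if beats[i][j] and pos[i] > pos[j]}   # i beats j but is ranked after j
    return len(bad_nodes)

def mu(beats):
    n = len(beats)
    best = n
    for r in range(n + 1):
        for S in combinations(range(n), r):
            flipped = flip_cut(beats, set(S))
            best = min(best, min(theta(p, flipped) for p in permutations(range(n))))
    return best

# The coned 3-cycle (node 3 beats every node of the cycle 0 -> 1 -> 2 -> 0) has mu = 1.
beats = [[False, True, False, False],
         [False, False, True, False],
         [True, False, False, False],
         [True, True, True, False]]
print(mu(beats))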
Connections to Sign Rank
The sign rank of a matrix G ∈ {+1, −1}^{m×n} is defined as the smallest integer d such that there exists a matrix M ∈ R^{m×n} of rank d that satisfies sign(M) = G. Here sign(z) = 1 if z > 0 and −1 otherwise. A breakthrough result on the lower bound on the sign rank was given by [10]. However, good upper bounds have been harder to obtain. We show below how Theorem 11 also translates to an upper bound on the sign rank of any sign matrix G, via a tournament T associated with G. Proof. From Theorem 11, T can be represented using at most 2(µ(T) + 1) dimensions. By construction, any representation of T must also represent G. The result follows.
As far as we know, the previously known upper bounds for sign rank depended either on the VC-dimension of the natural binary function class associated with G or assumed extra regularity conditions (for instance, see [1] for upper bounds for ∆-regular sign pattern matrices). Our bound reduces the study of sign rank to a more graph-theoretic study of the feedback arc set problem. It is not clear if the upper bound of 2(µ(T) + 1) can be improved and we leave this to future work.
Real World Tournament Experiments
We conducted simple experiments on real world data sets. Specifically, we considered 114 real world tournaments that arise in several applications including election candidate preferences, Sushi preferences, cars preferences, etc (source: www.preflib.org). The number of nodes in these tournaments varied from 5 to 23. Out of the 114 tournaments considered, 76(66.67%) were in fact locally transitive. For these tournaments, the upper bounds and lower bounds given by our theorems matched and was equal to 2. Interestingly, even for the non-locally transitive tournaments, the lower bound still turned out to be 2. We computed the upper bound for tournaments of size at most 9 (we did not do it for larger tournaments as this involves a brute force search) and found the value to be either 4 or 6. This shows that the upper bounds are usually non-trivial and efficiently approximating it is an interesting direction for future work.
Conclusion
In this work, we develop a theory of tournament representations. We show how fixing the representation dimension enforces, via forbidden configurations, restrictions on the type of tournaments that can be represented. We study and characterize rank 2 tournaments and show forbidden sub-tournaments for the rank d tournament class. We develop upper and lower bounds for minimum dimension needed to represent a tournament. Future work includes attempting to look deeper into some of the conjectures presented and possible strengthening of some of the bounds presented.
Proof of Lemma 3
Proof. W.l.o.g., assume that 1 ≻_T 2 ≻_T 3 ≻_T 1. Then it must be the case that the counterclockwise angle between the representations of consecutive items in this cycle is at most 180 degrees. However, if the representations {h_1, h_2, h_3} did not positively span R^2, then by Farkas' Lemma there would be some supporting hyperplane for the representations. This would imply that at least one of the node pairs {(1, 2), (2, 3), (3, 1)} must make an angle of at least 180 degrees, which contradicts the assumption that the nodes form a 3-cycle.
Proof of Theorem 5
Proof. Let T be a transitive tournament on n nodes and let T′ ∈ F(T). Then there exists some S ⊆ [n] such that T′ = φ_S(T). Consider any node i ∈ [n]. Define the following four subsets of nodes associated with i: S_1 = T_i^+ ∩ S, S_2 = T_i^+ ∩ S̄, S_3 = T_i^- ∩ S, and S_4 = T_i^- ∩ S̄. Note that each T(S_k), for k = 1 to 4, is a transitive sub-tournament, and for any two of these sets, one completely beats the other. Note also that in φ_S(T) the orientation of the edges across these sets is either flipped as a whole or not flipped at all. Thus, exactly two of these sets form the out-neighbourhood of i in T′ and the other two its in-neighbourhood (which sets exactly depends on whether i ∈ S or not), preserving the local transitivity property. Thus T′ is locally transitive.
To prove the opposite direction, let T be a locally transitive tournament. Let i ∈ [n] be an arbitrary node. We argue that T′ := φ_{T_i^+}(T) is a transitive tournament. We will show this by arguing that there does not exist a 3-cycle in T′. Consider any 3-cycle a ≻_T b ≻_T c ≻_T a. As T is locally transitive, not all of {a, b, c} can be in T_i^+. Also, not all of {a, b, c} can avoid T_i^+, as [n] \ T_i^+ induces a transitive tournament. Thus at least one and at most two of {a, b, c} belong to T_i^+. This means that the 3-cycle becomes a transitive tournament in T′ := φ_{T_i^+}(T). Thus every 3-cycle in T becomes transitive in T′. Now consider any 3 nodes which form a transitive tournament a ≻_T b ≻_T c and involve at least one node and at most two nodes of T_i^+. Then there are only two cases to consider: (1) a ∈ [n] \ T_i^+ and {b, c} ⊆ T_i^+, or (2) {a, b} ⊆ [n] \ T_i^+ and c ∈ T_i^+. In both these cases, it is easy to verify that the corresponding tournament in T′ is either the transitive tournament b ≻_{T′} c ≻_{T′} a or the transitive tournament c ≻_{T′} a ≻_{T′} b. As these are the only possibilities, the result follows by noting that T′ = φ_{T_i^+}(T) implies T = φ_{T_i^+}(T′), and so T ∈ F(T′).
Proof of Theorem 6
Proof. Assume T is locally transitive. Then by Theorem 5 it must be in the flip class of some transitive tournament T′, i.e., T = φ_S(T′) for some S ⊆ [n]. It is easy to represent a transitive tournament using a rank 2 skew symmetric matrix. Indeed, pick any vector u ∈ R^n which is sorted according to the topological ordering of T′, and let v ∈ R^n be the all-ones vector. Then M′ = uv^T − vu^T represents T′. Write M′ = (H′)^T A_rot H′ for some H′ ∈ R^{2×n}. Then the columns of H′ represent T′. Now consider the representation H obtained from H′ where the columns indexed by S are multiplied by −1. This does not change the rank of H, and it can be verified that T = T[H].
To prove the other direction, consider any rank 2 skew symmetric matrix M ∈ R^{n×n}. Then there must exist u, v ∈ R^n such that M = uv^T − vu^T. Consider any node i ∈ [n] and three nodes a, b, c ∈ T_i^+ with a ≻_T b and b ≻_T c, i.e., u_a v_b > v_a u_b and u_b v_c > v_b u_c. Combining these inequalities with the sign constraints on (u_a, v_a), (u_b, v_b), (u_c, v_c) imposed by membership in T_i^+, one can conclude that it must be the case that u_a v_c > v_a u_c, i.e., a ≻_T c. This shows that T_i^+ is transitive. Analogously, one can show that T_i^- is also transitive. As i was arbitrary, the result follows.
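The 'only if' direction can also be checked numerically: every tournament induced by a random rank 2 skew symmetric matrix M = uv^T − vu^T should come out locally transitive. The sketch below performs this sanity check; it is an illustration of the statement of Theorem 6, not part of its proof.

import numpy as np

def induced_tournament(M):
    n = M.shape[0]
    return [[M[i, j] > 0 for j in range(n)] for i in range(n)]

def is_transitive(beats, nodes):
    # A tournament is transitive exactly when it contains no directed 3-cycle.
    return all(not (beats[a][b] and beats[b][c] and beats[c][a])
               for a in nodes for b in nodes for c in nodes)

def is_locally_transitive(beats):
    n = len(beats)
    for i in range(n):
        outs = [j for j in range(n) if beats[i][j]]
        ins = [j for j in range(n) if beats[j][i]]
        if not (is_transitive(beats, outs) and is_transitive(beats, ins)):
            return False
    return True

rng = np.random.default_rng(0)
for _ in range(100):
    u, v = rng.normal(size=8), rng.normal(size=8)
    M = np.outer(u, v) - np.outer(v, u)        # rank-2 skew-symmetric matrix
    assert is_locally_transitive(induced_tournament(M))
print("all rank-2 samples induced locally transitive tournaments")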
Proof of Theorem 7
Proof. Recall that the classic Quicksort algorithm picks a pivot node (say 1), places all nodes that beat the pivot to the right in the ranking and those that lose to the left, and then recurses on the left and right subsets. As T is locally transitive, choosing any pivot i corresponds to fixing the pivot's position and simply returning the ranking corresponding to the topological orderings of the transitive tournaments T(N_i^+) and T(N_i^-) respectively. We first argue that changing the pivot only cyclically shifts the final ranking. To see why this is true, consider two pivots i and j and their corresponding rankings σ_i and σ_j. Without loss of generality, assume that the ranking σ_i = [1, . . . , n] and i ≻_T j. We argue that there exists an integer k such that N_j^+ = {(j + 1) mod n, (j + 2) mod n, . . . , (j + k) mod n} (where by convention n mod n = n). As N_i^+ is transitive and j is part of it, it must be the case that all the nodes {j + 1, . . . , n} belong to N_j^+. Then, to prove the claim, it remains to be shown that the set N_i^- ∩ N_j^+ is either empty or consists of the nodes {1, . . . , ℓ} for some ℓ < i. If it is empty, we are done. If not, assume for the sake of contradiction that there exist three successive integers ℓ_a, ℓ_b, ℓ_c < i such that ℓ_a, ℓ_c ≺_T j but ℓ_b ≻_T j. It is easy to verify that this cannot happen, as it would make T({ℓ_a, ℓ_b, ℓ_c, j}) a forbidden configuration.
Notice that, as every locally transitive tournament is in the flip class of a transitive tournament, we have T = φ_S(T′) for some transitive T′ and some S. One can divide the set of all nodes into 2k + 1 groups as follows. Let [1, . . . , n] be, w.l.o.g., the ordering corresponding to the transitive tournament. Starting from 1, add as many nodes to a group as possible such that all the elements belong to either S or S̄. Once the condition is violated, create a new group and continue the same process. It is not hard to verify that 2k + 1 groups will be formed in this process for some k ≥ 0. Moreover, each group separates two other groups by construction.
We can first show that the items of a single group must appear in consecutive positions in one of the optimal rankings. This is proven as follows.
Suppose there exists an optimal ranking in which items belonging to the same group do not occur consecutively. Consider two items of the same group, a_1 and a_2, with items from other groups between them in the ranking, and with a_1 placed above a_2. Let u_1 and u_2 be the numbers of upsets the two items are involved in. If u_1 ≤ u_2, then a_2 can be placed right after a_1 in the ranking, creating a better or equivalent ranking in terms of upsets. Similarly, if u_1 ≥ u_2, then a_1 can be placed directly above a_2 to create an equivalent or better ranking. Therefore there exists an optimal ranking in which all items of the same group appear consecutively.
The theorem is thus reduced to finding an optimal ranking of the groups, which is proven using induction on k.
Base Case
Consider the base case with k = 1, so that there are 3 groups, C, A_1, B_1. The optimal ranking cannot be any of the three orderings obtained by reversing the cyclic order of these groups, since each of those can be improved by swapping the second- and third-ranked groups. Therefore the possible optimal rankings are CA_1B_1 and its cyclic shifts.
Inductive Step
One property of rankings which is useful for the inductive step is as follows. Let there be 2k + 1 groups G = {g_1, g_2, . . . , g_{2k+1}}. Label the optimal ranking with the condition that g_i be placed first in the ranking as R_i. The ranking R_i with g_i removed must be the optimal ranking for G \ {g_i}. This can be shown by contradiction: if there were a better ranking for G \ {g_i}, that ranking with g_i appended to the front would be better than R_i.
We now assume the theorem is true for size 2k − 1 instances and aim to prove it for size 2k + 1 instances. Consider g_1 as the first group in the ranking. This creates a certain number of upsets; for the purposes of ranking the remaining groups, two of the remaining groups can be merged into a single group. This follows from the earlier observation that each group also 'separates' two groups. This can be considered an instance of the size 2k − 1 problem. Therefore the set of optimal rankings with g_1 as the first group consists of g_1 followed by a cyclic sweep of the remaining groups. Therefore the optimal permutation must be among the rankings obtained by considering each of the 2k + 1 groups as the first group. Let R_{i,j} represent the ranking which has group g_i as the first group and the remaining groups present as a cyclic sweep from g_j. Consider the case of R_{1,k}, and let x_i represent the number of items in group g_i. If R_{1,k} were a better ranking than R_{k,k+1}, this would imply an inequality on the group sizes, obtained by considering the shift of g_1 between the two rankings. The difference in the number of upsets between R_{1,2} and R_{1,k} can likewise be written as a sum of terms in the group sizes x_1, . . . , x_{2n+1}.
Using Equation 2, it can be seen that each of these terms is negative for any j ≤ n + 1, making R_{1,2} the better ranking. Any j > n + 1 cannot be an optimal ranking, since the first group in the ranking must precede the second (otherwise switching them would decrease the upsets). Since either R_{1,2} or R_{k,k+1} (both counterclockwise orderings) is better than R_{1,k} whenever k ≤ n + 1 (and R_{1,k} cannot be the optimal ranking when k > n + 1), and since this argument can be generalised for any R_{i,j}, it is shown that one of the counterclockwise orderings of the items is the optimal ranking.
A.0.1 Finding Forbidden Configurations
Given a tournament and a representation dimension, there is no known method to check whether the tournament can be represented by vectors in the given dimension. The forbidden configurations presented above were found by carefully creating an exhaustive set of cases, and showing each one causes a contradiction. The techniques used are presented below.
Proof of Theorem 8
Consider the representation of tournaments as given in Section 3. Since adding minor noise to each h_i will not change the tournament, we can restrict attention to cases in which every size-4 subset of h_1, h_2, . . . , h_12 consists of linearly independent vectors. Using this property, and without loss of generality, each vector h_j with j ≥ 5 can be written as a linear combination of four of the others, say h_j = c_1 h_1 + c_2 h_2 + c_3 h_3 + c_4 h_4. We can construct cases based on the signs of the coefficients c_i in such an equation. There are 2^4 = 16 sign patterns/cases for any equation. These cases are filtered by multiplying the entire expression with expressions of the form A_rot h_i.
In the above equation, the signs of all expressions of the form h_i^T A_rot h_j are known from the tournament configuration. Therefore, simply comparing the signs of the LHS and RHS for all possible values of i rules out many cases.
We now use multiple equations together in a bid to further filter the remaining cases. Consider, without loss of generality, three such equations expressing h_5 and h_6 in terms of the other vectors.
h 5 can be eliminated from the first two equations, leaving two expressions of h 6 in terms of h 1 , . . . h 4 . The two sets of coefficients of h 1 , . . . h 4 can be equated, and sign based arguments can eliminate a few more cases. Also, a set of 3 tuples can be constructed, with each item representing possible sign patterns for the 3 equations. Note that this set of 3 tuples does not contain entries which cannot apply simultaneously on the 3 equations.
The above elimination step can be considered a filtration procedure, given a size-6 tuple (h_1, h_2, h_3, h_4, h_5, h_6), using 3 equations. We can perform a similar filtration step for any other size-6 tuple, where the equivalence relation represents that two such equations are identical. Since identical equations must have identical sign patterns, the sign patterns not present in both sets of possible sign patterns can be filtered out. For the 11-DRT cone, this leaves us with the empty set, proving that it is a forbidden configuration. | 2021-10-12T01:34:09.941Z | 2021-10-06T00:00:00.000 | {
"year": 2021,
"sha1": "c8008a051073285a8ee8463c1eaf69c5f2d801a3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c8008a051073285a8ee8463c1eaf69c5f2d801a3",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
13706474 | pes2o/s2orc | v3-fos-license | Surface Modification of Nanoclays with Styrene-Maleic Anhydride Copolymers
This study presents the modification of surfaces of nanoclays, halloysite nanotubes (HNT) and sepiolite (SEP), with styrene-maleic anhydride copolymers (SMA) via esterification reaction between hydroxyl groups of the nanoclays and anhydride groups of SMA. The structural, thermal, and morphological analyses of the modified nanoclays were performed by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction analysis (XRD), thermal gravimetric analysis (TGA), and field emission scanning electron microscopy (FESEM). All of these results suggested that the expected modification of HNT and SEP surfaces were performed. Although XRD patterns of HNT containing samples showed that the basal spacing shifted to higher distances, it was found that those of the crystalline structure of SEP remained unchanged. Thermal gravimetric analysis exhibited that SMA copolymers were grafted onto the surfaces of nanoclays varying amounts between 15 and 43 wt. % depending on the types of nanoclays and SMA copolymers. This modification indicates that these nanoclays can be added to the polystyrene matrix without any compatibilizers.
Halloysite nanotubes (HNT), owing to their hollow tubular structure, large aspect ratio, natural availability, rich functionality, good biocompatibility, and high mechanical strength, have been used in the production of polymer/clay nanocomposites in recent years [15]. HNT consists of aluminosilicate nanotubes similar in structure to kaolinite, with the molecular formula Al2Si2O5(OH)4·nH2O. The internal diameter of the hollow nanotubes is in the range of 10-70 nm, and the length of the nanotubes varies from 0.2 to 1.5 μm [16]. The HNT body presents two different surfaces: the internal surface containing aluminol (Al-OH) groups and the external surface covered by siloxane (Si-O-Si) groups. Further, some silanol (Si-OH) and aluminol groups exist at the edges of the tubes [17] [18] [19]. The polarity imparted to the HNT surface by these hydroxyl groups offers an opportunity to obtain efficient dispersion of HNT in polar polymeric matrices [20] [21]. Sepiolite (SEP), a natural hydrated magnesium silicate with the theoretical formula Si12Mg8O30(OH)4(OH2)4·8H2O, is used as a reinforcing nanofiller in various polymer matrices [22] [23] [24]. The discontinuity of the silica sheets in sepiolite leads to its characteristic structural tunnels, which possess silanol groups (Si-OH) at their edges [25] [26]. In a polar polymeric matrix, these silanol groups not only improve the interactions between SEP and the host polymer, but also enhance the dispersion of SEP without any kind of modification [27] [28] [29] [30].
The surface modification of inorganic nanoclays with organic compounds is widely applied to improve the dispersion and compatibility of polymer/nanoclay composites. Joo et al. (2012) converted the functional groups of HNT from hydroxyl groups (HNT-OH) to carboxylic acids (HNT-COOH) to demonstrate the reactivity of these hydroxyl groups [31]. Du et al. (2006) modified the HNT surface by chemically grafting polypropylene (PP) in a two-step method and investigated the compatibility of the modified HNT in a PP matrix. In the first step, the HNT surface was functionalized with γ-aminopropyltriethoxysilane, and then PP chains were grafted onto the surface of HNT via commercial maleic anhydride grafted PP (PP-g-MAH). They found that the modified HNT showed much lower polarity compared to pristine HNT and improved the mechanical properties compared with neat PP and PP/pristine HNT nanocomposites [32]. Pasbakhsh et al. (2010) modified the surface of HNT with γ-methacryloxypropyltrimethoxysilane to improve its dispersion in ethylene propylene diene monomer [19]. Garcia et al. (2011) chemically modified the SEP surface with trimethoxysilane in an aqueous gel procedure to improve its compatibility with PP, LDPE and PS matrices [33]. Garcia-Lopez et al. (2010) modified the SEP surface with trimethyl hydrogenated tallow quaternary ammonium (3MTH), and PA6/organomodified SEP nanocomposites were obtained using different amounts of modified SEP. They observed that the nanocomposites with the highest amount of modifier displayed the best mechanical properties [34]. Di et al. (2004) also studied the grafting reaction of methyltriethoxysilane on SEP surfaces to investigate the reactivity of the silanol groups [35].
In this paper, we studied the surface modification of HNT and SEP with two different commercial styrene-maleic anhydride copolymers (SMA), which differ in molecular weight and styrene/maleic anhydride molar ratio, through an esterification reaction in THF medium. The modified nanoclays were characterized by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction analysis (XRD), thermal gravimetric analysis (TGA), and field emission scanning electron microscopy (FESEM).
Modification of Nanoclays
Modification of the nanoclays was carried out through an esterification reaction using PTSA as the catalyst in THF medium. The modification reaction is detailed in Figure 1.
Characterization of the Modified Nanoclays
The FTIR measurements were carried out on a Perkin Elmer Spectrum 100 FTIR spectrometer with an attenuated total reflection (ATR) accessory. Infrared spectra were collected between 4000 and 650 cm−1 at a resolution of 4 cm−1.
The TG analysis (TGA) was performed using a SEIKO Exstar 6200 TG/DTA instrument. The samples were heated from room temperature to 700 °C at a heating rate of 10 °C/min under a nitrogen atmosphere with a purging rate of 150 mL/min. The nanoclay samples for field emission scanning electron microscopy (FESEM) were prepared by coating a thin conductive layer of gold onto the samples. FESEM images of the samples were acquired under high vacuum with an FEI Quanta FEG 450 at 30 kV.

The FTIR spectrum of HNT-1000 displays new absorption bands at 1707 and 1218 cm−1, which can be attributed to the formation of ester bonds between anhydride and -OH groups [36] [37]. The ester bond formation indicates that the SMA-1000 copolymer is grafted to the HNT surface more efficiently than SMA-EF40, owing to its higher maleic anhydride content and lower molecular weight. Figure 3 gives the FTIR spectra of the pristine SEP and of the SEP samples modified with SMA-1000 (SEP-1000) and with SMA-EF40 (SEP-EF40). The bands at 3655 and 3564 cm−1 are assigned to the Mg-OH groups [38]; the broad band at 3404 and the peak at 1464 cm−1 correspond to O-H stretching vibrations of zeolitic water; and the peak at 1648 cm−1 is due to the stretching vibration of coordinated water in SEP [35]. The bands appearing at 1209 and 975 cm−1 are associated with Si-O bending and Si-O-Si stretching vibrations, respectively [39]. Some new peaks are observed in the spectra of SEP-1000 and SEP-EF40, as in the FTIR spectra of the modified HNT samples. The weak bands at 3028 and 2919 cm−1 are assigned to =C-H stretching vibrations. The peaks at 1853 and 1776 cm−1 are attributed to C=O stretching vibrations, and the bands at 1493 and 1453 cm−1 are ascribed to C=C stretching of the aromatic benzene ring. Moreover, the broadness of the peaks at 1706 and 1656 cm−1 is due to overlapping of the carbonyl (C=O) stretching vibration with that of the coordinated water [25]. These new bands show that both kinds of SMA are chemically grafted onto the surface of SEP. As can be seen, the SMA-1000 copolymer, possessing a higher maleic anhydride content and lower molecular weight than SMA-EF40, is also grafted to the SEP surface more efficiently, as it is to HNT.
Structural Analysis of the Modified Nanoclays
Figure 4(a) shows the XRD patterns of the pristine HNT and modified HNT-1000 samples. The characteristic d 001 diffraction peak of pure HNT is located at 2θ = 12.0˚, corresponding to a basal spacing of 7.57 Å. After the modification, the characteristic diffraction pattern of HNT is retained, indicating that the tubular structure of HNT remains stable [17]. However, the modification shifts the d 001 diffraction peak of HNT-1000 to lower angles, with an increase in the basal spacing from 7.57 to 9.84 Å. These results confirm that the interlayer distance of HNT increases upon modification.
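The reported basal spacings and diffraction angles are linked through Bragg's law, and the short sketch below shows that conversion. The radiation wavelength is not stated in this section, so Cu Kα (about 1.5406 Å) is assumed; the resulting angles are therefore only indicative (about 11.7˚ for 7.57 Å, close to the reported 12.0˚).

```python
# Minimal sketch relating d001 basal spacings to 2-theta angles via Bragg's law
# (n*lambda = 2*d*sin(theta)).  The wavelength is an assumption (Cu K-alpha),
# so the printed angles are indicative only.
import math

WAVELENGTH = 1.5406  # Angstrom, assumed Cu K-alpha radiation

def two_theta_from_d(d_spacing: float, n: int = 1) -> float:
    """Return the 2-theta angle (degrees) for a given d-spacing (Angstrom)."""
    return 2.0 * math.degrees(math.asin(n * WAVELENGTH / (2.0 * d_spacing)))

def d_from_two_theta(two_theta_deg: float, n: int = 1) -> float:
    """Return the d-spacing (Angstrom) for a given 2-theta angle (degrees)."""
    return n * WAVELENGTH / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))

# d001 values reported for pristine and modified HNT
for label, d in [("pristine HNT", 7.57), ("HNT-1000", 9.84)]:
    print(f"{label}: d001 = {d:.2f} A  ->  2-theta ~ {two_theta_from_d(d):.2f} deg")
```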
Thermal Analysis of the Modified Nanoclays
Figure 5 displays the weight loss (TGA) curves of the pristine HNT, the modified HNT samples, and the commercial SMA copolymers. The pristine HNT exhibits two thermal decompositions in its TGA curve. The first weight loss, which starts at 30˚C and ends at 80˚C, can be attributed to the loss of physically adsorbed water from the surface and internal channels of the tubes [41] [42]. The second weight loss occurs between 430˚C and 500˚C and is assigned to dehydroxylation of the structural Al-OH [18] and Si-OH [43] groups of HNT. The char residue of the pristine HNT is 74 wt.% at 700˚C. The commercial SMA-1000 and SMA-EF40 show a one-step thermal degradation process between 280˚C - 450˚C and 320˚C - 430˚C, respectively. The char residues of SMA-1000 and SMA-EF40 are 6 and 0 wt.% at 700˚C, respectively.
For the modified HNT samples, HNT-1000 and HNT-EF40, the first weight losses, which come from the water molecules adsorbed on their surfaces, occur between 30˚C - 200˚C and 30˚C - 120˚C, respectively. Compared to the pristine HNT, the modified HNT samples show a reduced ... These results confirm that the HNT surface is efficiently modified with the SMA-1000 copolymer, as demonstrated by the FTIR and XRD analyses.
Thermal analysis was performed up to 900˚C for the samples based on SEP.
As can be seen from Figure 6, the pure SEP decomposes in a four-step weight loss: the first step, between 30˚C - 90˚C, is attributed to the removal of surface-adsorbed water, and the second step, in the range of 230˚C - 300˚C, corresponds to the loss of zeolitic water. The other steps, occurring between 600˚C - 670˚C and 770˚C - 820˚C, are related to the loss of coordinated water and hydroxyl groups, respectively [44]. The char residue of SEP is 76 wt.% at 900˚C.
The modified SEP samples, SEP-1000 and SEP-EF40, decompose in a two-step weight loss instead of the four steps observed for pristine SEP. The first weight losses, between 30˚C - 120˚C for SEP-1000 and 30˚C - 100˚C for SEP-EF40, can be ascribed to the elimination of water molecules adsorbed on the surfaces. The second weight losses of the modified SEP samples, between 300˚C - 700˚C, are attributed to thermal degradation of the SMA copolymers grafted onto the SEP surface. The char residue values of SEP-1000 and SEP-EF40 at 900˚C are 43 and 58 wt.%, respectively. According to these char residue values, it can be estimated that the SMA-1000 and SMA-EF40 copolymers are grafted on the surface at about 43 and 15 wt.%, respectively. This result shows that SMA-1000 is grafted to SEP surfaces with higher efficiency than SMA-EF40, as it is to HNT surfaces.
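The reasoning behind such estimates can be sketched as a simple two-component mass balance on the char residues, as shown below. This sketch neglects adsorbed and zeolitic water and the exact temperature windows used by the authors, so it only roughly approximates the reported 43 and 15 wt.% values; it is meant to illustrate the idea, not to reproduce the published numbers.

```python
# Rough sketch: back-estimate the grafted-SMA weight fraction w from TGA char
# residues using residue(modified) ~= (1 - w)*residue(clay) + w*residue(SMA).
# Water losses and the authors' exact evaluation windows are ignored, so the
# printed values only approximate the grafting levels quoted in the text.
def grafted_fraction(residue_modified, residue_clay, residue_sma):
    """Estimate the SMA weight fraction w from char residues (all in wt.%)."""
    return (residue_clay - residue_modified) / (residue_clay - residue_sma)

residue_sep = 76.0  # char residue of pristine SEP at 900 C
for label, residue_mod, residue_sma in [("SEP-1000", 43.0, 6.0),
                                        ("SEP-EF40", 58.0, 0.0)]:
    w = grafted_fraction(residue_mod, residue_sep, residue_sma)
    print(f"{label}: estimated grafted SMA ~ {100 * w:.0f} wt.%")
```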
Morphological Properties of the Modified Nanoclays
FESEM images of the pristine HNT and the HNT modified with SMA-1000 (HNT-1000) are presented in Figure 7(a) and Figure 7(b), respectively. Typical cylindrical and some irregularly shaped HNT nanotubes of varying lengths can be seen in individual form in Figure 7(a). After the surface modification, agglomeration occurs among the modified nanotubes and the singular form of the tubes disappears (Figure 7(b)) due to the organic structures on the surfaces.
Figure 7(c) and Figure 7(d) give the FESEM micrographs of pristine SEP and SEP modified with SMA-1000 (SEP-1000), respectively. The pristine SEP shows uniform and smooth surfaces in the low magnification image. However, the modification leads to needle-type aggregates on the surfaces, and hence the smoothness of the SEP surfaces is reduced owing to the assembly of organic molecules (Figure 7(d)).
Conclusion
In this study, halloysite and sepiolite were modified with styrene-maleic anhydride copolymers via a chemical method. The modified nanoclays were characterized by FTIR spectroscopy, X-ray diffraction, thermal gravimetric analysis, and scanning electron microscopy. The characterization results showed that the SMA copolymers were grafted onto the nanoclay surfaces at different grafting ratios depending on their molecular weights and styrene/maleic anhydride molar ratios. The differences between the nanoclays also affected the grafting ratios. It can be concluded that SMA-1000, having a lower molecular weight and higher maleic anhydride content than SMA-EF40, displays a higher efficiency in grafting onto both nanoclay surfaces.
Figure 1.The modification reaction scheme of the nanoclays with SMA copolymers.
FTIR spectra of the pristine HNT and modified HNT with SMA-1000 and SMA-EF40 (represented by HNT-1000 and HNT-EF40, respectively) are presented in Figure 2. The characteristic absorption peaks of the pristine HNT, such as the O-H stretching of inner hydroxyl groups at 3625 cm −1 , O-H deformation of water at 1631 cm −1 , O-H bending of inner hydroxyl groups at 906 cm −1 and Si-O stretching at 1004 cm −1 [18] [19], can be seen. New vibration bands are observed in the FTIR spectra of HNT-1000 and HNT-EF40. The new bands at 1852 and 1776 cm −1 are assigned to the C=O stretching vibration. Further, the FTIR bands at 3028 - 2919 cm −1 and 1495 - 1455 cm −1 are assigned to =C-H and C=C stretching of the aromatic benzene ring, respectively.
Figure 5. Thermal analysis results of HNT, commercial SMAs and modified HNT samples.
Figure 6.Thermal analysis results of SEP, commercial SMAs and modified SEP samples.
Table 1. The trade names, molecular weights and styrene/maleic anhydride molar ratios of the SMA copolymers. | 2018-05-11T00:43:38.643Z | 2017-03-24T00:00:00.000 | {
"year": 2017,
"sha1": "32031ac2731d239f2084310f5879486c83dc1aa3",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=75109",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "32031ac2731d239f2084310f5879486c83dc1aa3",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
265414980 | pes2o/s2orc | v3-fos-license | Clusterization-Triggered Emission Enhanced by Energy Transfer in the Layered Metal Halide Hybrid Material (TET)2[Pb4Cl16]
Clusterization-triggered emission (CTE) of nonconventional chromophores has recently attracted increased interest for its synergetic photophysical properties and promising applications, such as optical anticounterfeiting, white-light emission devices, or molecular sensing. Many studies have been conducted on pure organic clusteroluminogens (CLgens), but very few have explored organic–inorganic hybrid (OIH) materials. This work deals with optical properties of a new OIH compound (TET)2[Pb4Cl16] (TET = N,N′-bis(2-aminoethyl)-1,3-propanediamine (C7N4H24)), which presents an unprecedented two-dimensional perovskitoid structure formed by strongly distorted [Pb4Cl16] layers of corner and edge-sharing [PbCl6] octahedra, separated by TET tetra-ammonium cations. Under UV–visible excitations, (TET)2[Pb4Cl16] exhibits white-light emission, tunable excitation-wavelength-dependent emission, and green afterglow room temperature phosphorescence (RTP) lasting for more than 0.63 s, all of which are direct signatures of CTE. The optical interpretations are supported by density functional theory (DFT) calculations of the band structure. Two mechanisms are involved in the emission process: resonant energy transfer (RET) between the organic and the inorganic components, and clusteroluminescence (CL) governed by a rigid conformation of the organic cations and extended electron delocalization over supramolecular organic clusters confined within the interlayer spacing. The different features of CLgens in (TET)2[Pb4Cl16] are discussed, and the role of the organic nonconjugated molecule in the emission process is emphasized.
■ INTRODUCTION
Alternatively, white-light (broad-band) emission has been reported for OIH materials, as first evidenced in 2014 by Dohner et al. for the layered hybrid perovskite material (EDBE)[PbBr 4 ], 15,16 followed by Yangui et al., 17 and an increasing variety of OIH materials since then. 18−21 Such single-source white-light emitters have become a hot topic, motivated by their inherent ability to overcome the drawbacks associated with conventional multiple-component white-light phosphors, such as self-absorption. 22 Over the past few years, many studies have been conducted to improve the emission performance and better understand the correlation between structural parameters and optical emission characteristics of OIH white-light emitters. 18,19,21,23 The exact origin of broadband emission, especially for materials with low dimensional connectivity of the inorganic framework, remains a matter of debate due to the variety of processes that can occur in such complex systems. Three common mechanisms have been identified: (i) Self-trapped excitonic states can be formed and confined within the highly distorted [MX 4 ] 2− infinite inorganic layers. 24,25 The organic molecules play the role of insulating barriers, generating dielectric confinement in addition to quantum confinement. 26,27 (ii) In the case of conjugated organic cations containing aromatic rings, both the inorganic and organic components are possibly luminescent. This process may further involve energy/charge transfer between a donor inorganic and an acceptor organic sublattice, or vice versa, assisted by a specific band alignment at the interface. 28 Many studies have also found that hybrid materials with distorted inorganic networks and luminescent organic molecules can involve a competition between the two aforementioned mechanisms. 29 (iii) Finally, lattice defects are widely considered to generate broadband emission through the formation of in-gap self-localized emissive states. 30−32 It is generally accepted that the luminescence of purely organic compounds arises from typical chromophores containing conjugated π-electrons and aromatic rings through allowed π−π* electronic transitions. However, emission has also been observed from nonconjugated compounds lacking such conventional chromophores, arising from the clustering of electron-rich groups. This photophysical phenomenon is termed clusterization-triggered emission (CTE), 34,37,38 the emission is known as clusteroluminescence (CL), and the associated nonconventional chromophores are classified as clusteroluminogens (CLgens).
Several common features characterize CLgens, although they may not all be applicable simultaneously. (i) The chemical structure is based on nonconjugated molecules in which the luminogen functional groups are separated by a saturated alkyl backbone. 39,40 (ii) CLgens show concentration-dependent luminescence in solution upon the formation of aggregates. 41 (iii) The excitation spectra of CLgens are red-shifted with respect to their absorption spectra. 42 (iv) CLgens exhibit excitation-dependent luminescence properties, with red-shifted emission when excited at longer wavelengths. 45 For such compounds, the accepted mechanism for luminescence may rely on the restriction of intramolecular motions (RIM model 46,47 ) that hinders nonradiative decay (and thus enhances luminescence). Short intra- or intermolecular contacts between the luminogen groups in the aggregated state enhance so-called through-space electron conjugation (TSC). 48 These outstanding features make CLgens highly promising for potential applications, including optoelectronics, 49,50 anticounterfeiting, 51 and sensing. 45 Extensive studies on pure organic molecules (e.g., dendrimers or hyperbranched polymers) have been reported to explore these features, 52−57 but only a few deal with OIH materials. 58−60 Given the specific structural architecture of crystalline hybrid materials, in which organic molecular entities are densely packed and confined within an inorganic framework, one might expect that the supramolecular connection between the organic groups can form clusters in the crystal packing (called "intermolecular clusters" hereafter), hosting electron conjugation while restricting intramolecular motions. All of the ingredients at the origin of CTE are therefore present in OIH crystals. The inorganic moiety can also contribute to the TSC process by providing additional free electrons, and can improve the crystallinity as well as the broadness and efficiency of the emission compared to pure organic systems. 37,61,62 Herein, we report theoretical and experimental investigations of the structural and optoelectronic properties of a novel OIH material (C 7 N 4 H 24 ) 2 [Pb 4 Cl 16 ], abbreviated as (TET) 2 [Pb 4 Cl 16 ], with an unprecedented 2D layered structure. The investigation of the original optical behavior, including white-light emission, tunable luminescence, and green afterglow phosphorescence, is based on a comparison between the hybrid and its corresponding organic salt TET•Cl 4 . Surprisingly, both compounds exhibit similar optical behavior and fulfill almost all of the characteristics of CLgens. These features will be discussed, highlighting the crucial role of the organic molecules in the emission process despite their being nonconjugated. CL is enhanced by resonant energy transfer (RET) between the organic and inorganic sublattices, hydrogen-bonding interactions, and other weak interactions.
■ EXPERIMENTAL SECTION Single Crystal Synthesis. All reagents and solvents were purchased from commercial sources (Sigma-Aldrich) and used without further treatment. Single crystals of the OIH compound (TET) 2 [Pb 4 Cl 16 ] were obtained at room temperature by the slow solvent evaporation method. 63 Equimolar amounts of N,N′-bis(2-aminoethyl)-1,3-propanediamine (C 7 N 4 H 24 ), abbreviated as TET, and PbCl 2 were dissolved in an aqueous solution of hydrochloric acid HCl (37%) under heating and stirring. The obtained solution was then kept in the dark at room temperature. Golden brown single crystals of the title compound were harvested after 24 h and filtered. The ammonium organic salt, abbreviated TET•Cl 4 , was prepared following the same method by dissolving TET in hydrochloric acid, HCl. In order to verify the purity of the synthesis, a powder X-ray diffraction pattern was collected on ground single crystals using a Panalytical X'Pert Pro diffractometer with monochromatized Cu Kα1 radiation and an XCelerator detector. The corresponding pattern was compared to a pattern simulated from the single crystal structure; the good agreement confirms the purity of the crystals (see Supporting Information).
Crystallographic Data Collection. A crystallographic study of (TET) 2 [Pb 4 Cl 16 ] was performed at 293 K. A single crystal of 0.19 × 0.09 × 0.02 mm 3 was selected for the X-ray diffraction (XRD) investigations. Data collection was carried out on a Supernova diffractometer equipped with an Atlas CCD detector and using graphite monochromatized Mo Kα radiation (λ = 0.71073 Å). The crystal structure was solved by direct methods and successive Fourier difference syntheses with the SHELXS-2014/7 program and refined by the full matrix least-squares method using SHELXL-2014/7. The TET2 cation exhibits static disorder, which has been taken into account during the refinement using split atomic positions for several carbon atoms. The detailed crystallographic data are given in Table S1, atomic coordinates and U iso* /U eq are given in Table S2, bond lengths and angles are given in Table S3, and octahedron distortion parameters are provided in Table S4.
Electronic Band Structure Calculation. The first-principles electronic band structure calculations based on density functional theory (DFT) were carried out for the title compound. The Kohn−Sham equations were solved using the full-potential linearized augmented plane wave method FP-LAPW, 64,65 as implemented in the Wien2k package. 66 The exchange and correlation effects of interacting electrons were treated by the generalized gradient approximation (GGA) within the Perdew−Burke−Ernzerhof (PBE) scheme. 67 To ensure the energy eigenvalue convergence, the number of FP-LAPW basis functions in the interstitial regions (IR) was expanded to a cutoff R MT × K max = 5.0, where R MT is the minimum radius of the Muffin-Tin (MT) spheres, and K max is the magnitude of the largest k-vector in the plane-wave bases. We have picked 1.05, R MT (C) = 1.15, and R MT (H) = 0.57 as the MT radii in bohr units. The valence wave functions inside the MT spheres were expanded up to l max = 10. The dependence of the total energy on the number of k-points in the irreducible Brillouin zone (IBZ) wedge was checked, and the mesh size was set to 100 k-points. The self-consistent calculation convergence criterion was set at 1 × 10 −4 Ryd per formula unit.
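For orientation, the stated cutoff can be translated into an equivalent plane-wave cutoff, as in the short sketch below. It assumes that R MT (H) = 0.57 bohr is the smallest muffin-tin radius entering the R MT × K max = 5.0 product; the numbers are illustrative only.

```python
# Sketch: convert the quoted FP-LAPW cutoff R_MT * K_max = 5.0 into a
# plane-wave cutoff, assuming the smallest muffin-tin sphere is hydrogen
# (R_MT = 0.57 bohr).  E = K_max**2 in Rydberg atomic units (k in bohr^-1).
RMT_KMAX = 5.0      # dimensionless product quoted in the text
RMT_MIN = 0.57      # bohr, assumed smallest muffin-tin radius (hydrogen)
RY_TO_EV = 13.6057  # conversion factor

k_max = RMT_KMAX / RMT_MIN        # bohr^-1
e_cut_ry = k_max ** 2             # Rydberg
print(f"K_max ~ {k_max:.2f} bohr^-1, "
      f"basis cutoff ~ {e_cut_ry:.0f} Ry ({e_cut_ry * RY_TO_EV:.0f} eV)")
```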
Optical Measurements. Optical measurements were performed on the OIH compound (TET) 2 [Pb 4 Cl 16 ] and its corresponding organic salt, TET•Cl 4 . Room temperature optical absorption (OA) measurements were performed on an aqueous solution and in the solid state using a conventional UV−vis spectrophotometer (HITACHI, U-3300). Photoluminescence (PL) measurements were recorded using a JOBIN YVON HR 320 spectrophotometer under a 375 nm laser excitation wavelength. A helium closed-cycle cryostat was used for temperature-dependent studies. Fluorescence and excitation photoluminescence (PLE) spectra of the aqueous solution and the solid state were recorded at room temperature on a HORIBA Fluoromax-4 spectrofluorimeter equipped with a xenon lamp as the excitation source. Time-resolved fluorescence (ns scale) was performed using a HORIBA EasyLife instrument with a laser at a 375 nm excitation wavelength. The delayed lifetime of the PL emission was estimated from the analysis of a smartphone video. At room temperature, the sample is illuminated with a UV laser (375 nm) in a dark room. The emission was then video captured using a smartphone camera (Galaxy A52 (2021), Samsung Electronics) with the frame rate set to 30 fps and a 1980 × 1080 pixel spatial resolution. Matlab was then used to analyze the video in the region of interest in which the luminescence area of the crystals is observed. 68,69 Optical Imaging. The optical microscope EUROMEX-ImageFocus Alphab (10× objective) with a CMEX-18PRO camera was used to capture an optical micrograph.
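The video analysis described above was performed in Matlab; the sketch below is a hedged Python/OpenCV equivalent of the same idea, averaging the pixel intensity inside a region of interest (ROI) covering the luminescent crystals, frame by frame, to build an intensity-versus-time decay curve. The file name and ROI coordinates are placeholders, not values from the paper.

```python
# Sketch of a frame-by-frame ROI intensity extraction from an afterglow video.
# Path, ROI and frame rate are placeholders; the authors used Matlab instead.
import cv2
import numpy as np

VIDEO_PATH = "afterglow.mp4"       # placeholder file name
ROI = (400, 300, 200, 200)         # placeholder (x, y, width, height) in pixels
FPS = 30.0                         # frame rate stated in the text

def roi_decay_curve(path, roi, fps):
    """Return (time_s, mean_intensity) arrays for the ROI of each video frame."""
    x, y, w, h = roi
    cap = cv2.VideoCapture(path)
    intensities = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        intensities.append(gray[y:y + h, x:x + w].mean())
    cap.release()
    t = np.arange(len(intensities)) / fps
    return t, np.asarray(intensities)

t, intensity = roi_decay_curve(VIDEO_PATH, ROI, FPS)
```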
■ RESULTS AND DISCUSSION
Structure Description and Analysis. The crystal structure of (TET) 2 [Pb 4 Cl 16 ] has been determined at room temperature by single-crystal X-ray diffraction; detailed crystallographic data are collected in Table S1, while relevant structural parameters are given in Tables S2-S4. The compound crystallizes at room temperature in a structure that exhibits a 2D nonperovskite layered topology. As shown in Figure 1, the asymmetric unit contains four lead atoms (Pb1−Pb4), 17 chlorine atoms (Cl1−Cl17), and two symmetry-independent TET cations (noted TET1 and TET2 hereafter). Each Pb atom adopts a distorted octahedral geometry, with Pb−Cl bond distances ranging from 2.654(4) to 3.114(5) Å, and average values of 2.881, 2.895, 2.881, and 2.908 Å for Pb01−Pb04, respectively, corresponding to mostly ionic Pb−Cl bonds (Shannon ionic radii: r Pb 2+ = 1.19 Å and r Cl − = 1.81 Å). 70 The octahedral deformations can be quantitatively assessed by the Δd, λ oct , and σ oct 2 distortion parameters, 71,72 which are calculated and reported in Table S4 and Figure S1. The obtained values show very strong intraoctahedral distortion compared to those reported for other corrugated 2D and 1D hybrids. 18,21,73 In such OIH materials, the strong octahedral distortion results from the interaction between the inorganic framework and the organic cations, and is also a stereochemical expression of the Pb(6s 2 ) electron pairs. 74,75 The TET cation is a tetra-ammonium nonconjugated molecule, potentially appropriate for CTE, with the four ammonium groups being the CLgens. The two TET cations adopt different conformations, as shown in Figure S2. TET1 exhibits a nearly planar geometry, while TET2, which is observed as disordered over two orientations in the crystal structure, deviates significantly from planarity. In order to maintain overall crystal neutrality, the TET cations are fully protonated on the four ammonium groups (two terminal primary ammonium and two secondary ammonium), leaving four strong potential hydrogen bond donor sites on each cation.
These octahedra can be considered as the building elements (secondary building units, SBU) of the inorganic framework. Within the infinite layer, the Pb01 and Pb04•••Pb04 octahedra are connected by edges, while all the other interoctahedral connections proceed through corners. The inorganic framework is furthermore strongly distorted, corrugated, and noncontinuous.
In (TET) 2 [Pb 4 Cl 16 ], the two TET cations are located in the interlayer spacing. TET1 is oriented almost parallel to the layers, while TET2 is strongly inclined with respect to the layers. TET1 and TET2 form strong N−H•••Cl hydrogen bonds through their two terminal protonated primary ammonium groups and their secondary ammonium groups (Figure 3 and Table S5). As shown in Figures S3 and S4, this leads to a very dense hydrogen bond network that interconnects the two TET cations and the inorganic layers.
The strongly distorted inorganic framework described here for (TET) 2 [Pb 4 Cl 16 ] is very unusual and results from the templating ability of the TET cation. It is known that the use of monoammonium (RNH 3 + ) or diammonium cations ( + H 3 N−R 1 −NH 3 + ) containing only primary ammonium groups leads to conventional 2D hybrid materials with a perovskite structural topology, with the cations being sandwiched between the corner-sharing inorganic octahedral layers. These structures can be further classified into (100), (110), and (111) oriented structures derived by slicing the 3D perovskite along specific (hkl) crystallographic planes. 8−80 Following this, the 2D layers of (TET) 2 [Pb 4 Cl 16 ] result from a delicate balance between N−H•••Cl hydrogen bonds and steric interactions, and originate from the ability of the TET dication to form hydrogen bonds through the terminal primary ammonium groups as well as the secondary ammonium groups. −85 Band Structure Calculations. In order to connect the specific crystal structure of (TET) 2 [Pb 4 Cl 16 ] to the resulting optical properties, an analysis of the electronic band structure and an evaluation of the contribution of both the organic and inorganic components are required. We have performed an electronic band structure simulation of (TET) 2 [Pb 4 Cl 16 ] with and without including spin−orbit coupling (SOC) effects based on DFT methods using the full-potential linearized augmented wave method (FP-LAPW) implemented in the Wien2k package (Figures 4a and S5a). Without SOC (woSOC), the valence band (VB) maximum and the conduction band (CB) minimum lie along the M and N points of the first Brillouin zone, respectively, corresponding to a wide band gap of 3.5 eV. To obtain a more accurate electronic structure, the SOC effect was taken into account (wSOC), considering the presence of heavy atoms and heteroatoms. A splitting of the CB and a shifting of both the
CB minima and the VB maxima away from the symmetry points in the Brillouin zone are noted, resulting in a narrowed band gap of 3.1 eV. The band structure reveals band edges with low dispersion, indicating relatively localized states on the inorganic framework. The calculated value of the band gap is quite large, resulting from the discontinuity of the layered structure and the combination of corner- and edge-sharing connectivity, which prevents an efficient long-range coupling among the Pb-6p and halogen p orbitals. −88 The low dispersion is attributed first to the layered structural topology, with larger effective masses for electrons and holes in the layer stacking direction, and to the edge-connectivity, which reduces the Pb−Cl orbital overlap, leading to flatter valence and conduction bands. 71 The projected partial densities of states (P-DOS) of the inorganic and organic atoms are plotted in Figure 4b (Figure S5b for the P-DOS woSOC). The VB is composed mostly of Pb-6s and Cl-3p with a small but non-negligible contribution from the TET atomic orbitals, whereas the CB is mainly made up of Pb-6p with a weak contribution from Cl-3p and TET atomic orbitals. This indicates a direct contribution of the organic TET cations to the electronic states constituting the band edges, in contrast to similar OIH materials based on saturated organic cations, where the inorganic component is primarily accountable for the band gap. This situation is similar to the band alignment characterized for organic−inorganic two-dimensional hybrid perovskites with polycyclic π-conjugated cations, 89−91 leading to a charge transfer from the inorganic semiconducting layer to the organic cation and phosphorescence emission. Furthermore, due to the strong interactions through the N−H•••Cl bonding, these states are superposed in terms of energy, allowing orbital overlap (hybridization) between the organic and inorganic components. 92 These results indicate that resonant energy transfer (RET) and delocalization of the exciton may be operative from the [Pb 4 Cl 16 ] to the TET moieties under optical excitation. 28,93 Optical Study. As discussed in the previous section, both the organic and inorganic components of (TET) 2 [Pb 4 Cl 16 ] contribute to the electronic band gap and, consequently, to the optical properties. To identify the contribution of each component, room temperature optical absorption (OA) and photoluminescence (PL) measurements have been performed on single crystals of the hybrid compound (TET) 2 [Pb 4 Cl 16 ], as well as on the precursor organic salt TET•Cl 4 and TET organic molecules in solution for comparison and reference purposes; the results are plotted in Figure 5a. The OA spectrum of (TET) 2 [Pb 4 Cl 16 ] shows a strong band at 303 nm (4 eV) with a wide tail on the low energy side, which extends up to 800 nm. Based on the Tauc-plot method, 94 the direct and indirect band gaps were estimated at 3.42 and 3.09 eV, respectively (Figure S6), in good agreement with the DFT calculation. Similarly, the OA of the organic salt is characterized by a sharp absorption band located at 298 nm with a wide tail on the low energy side. The presence of such a tail in the absorption spectrum is usually attributed to below-gap defect trap states in metal halide hybrid perovskites. 95−98
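A minimal sketch of the Tauc-plot analysis mentioned above is given below: (αhν)^(1/n) is plotted against photon energy and the linear rise is extrapolated to zero (n = 1/2 for a direct allowed transition, n = 2 for an indirect one). The spectrum arrays and the fitting window are placeholders; the actual spectra are shown in Figure S6.

```python
# Hedged sketch of a Tauc-plot band-gap estimate from an absorbance spectrum.
# Absorbance is taken as proportional to alpha; the linear-fit window is
# chosen by eye and passed in as a placeholder argument.
import numpy as np

def tauc_gap(wavelength_nm, absorbance, n_exponent, fit_window_eV):
    """Return the Tauc band gap (eV): extrapolate (alpha*h*nu)**(1/n) to zero."""
    h_nu = 1239.84 / np.asarray(wavelength_nm)        # photon energy in eV
    alpha = np.asarray(absorbance)                    # proportional to alpha
    tauc_y = (alpha * h_nu) ** (1.0 / n_exponent)
    lo, hi = fit_window_eV
    mask = (h_nu >= lo) & (h_nu <= hi)                # linear region of the plot
    slope, intercept = np.polyfit(h_nu[mask], tauc_y[mask], 1)
    return -intercept / slope                         # x-intercept = band gap

# Usage with placeholder data: n = 1/2 for the direct gap, n = 2 for the indirect gap
# eg_direct = tauc_gap(wl_nm, absorbance, 0.5, (3.5, 4.0))
# eg_indirect = tauc_gap(wl_nm, absorbance, 2.0, (3.2, 3.6))
```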
Upon 375 nm laser excitation, the hybrid compound and the salt exhibit intense white and yellowish-white light emission, respectively (Figure 5b). The PL spectrum of the hybrid compound consists of a broad-band emission spanning almost the entire visible range with a fwhm of 161 nm, formed by the overlap of four peaks located at 445, 480, 525, and 600 nm (Figure S7a). We have gathered in Table S7 the spectral characteristics of the broadbands emitted by a few homologous hybrid materials based on similar saturated organic cations. The emission of white light and broadbands from this family of materials has been attributed to self-trapped excitons (STEs), 16,99 and in all these studies, the role of the organic cation has been omitted by considering that only conjugated molecules with aromatic rings could contribute to the emission process. Following these arguments, the observed broadband in (TET) 2 [Pb 4 Cl 16 ] would seem to be associated with the STEs formed within the distorted inorganic moiety [Pb 4 Cl 16 ], particularly the central intense band around 533 nm. However, the PL spectrum of the salt, surprisingly, consists of a 171 nm fwhm broad emission band having the same shape and covering almost the same spectral range as the hybrid compound (Figure S7b). Additionally, the PL spectra of the organic molecule TET present a broad band with two intense peaks located at 450 and 470 nm and an extended tail. Based
on this comparison, we can state that the emission process originates from recombination located on the organic molecule.
The temperature dependence of the PL emission was comparatively investigated for the hybrid compound and the salt (Figure S8), and a parameter R was introduced as the thermal ratio between the PL intensity at 40 K and the PL intensity at room temperature (R = I 40 /I 280 ). We observe that the thermal ratio of the hybrid compound (R = 1.40) is of the same order as that of the salt (R = 1.83). Moreover, when compared with previously reported hybrid compounds exhibiting white-light emission based on STEs, 19,23,100,101 the thermal ratio of the current hybrid compound is significantly lower, suggesting that STEs are not the primary element responsible for the emission process. At this point, it is of utmost importance to highlight that in OIH perovskite materials the organic cation can play a major role in the electronic and optical properties. −114 On the contrary, deprived of any kind of conjugation, saturated molecules are expected to be optically inactive in the visible range, owing to their unmatched band gap. 40,53,115 Recent studies on similar molecules have clearly shown that CTE is the working mechanism that enables the luminescence of unconventional chromophores. Relying on n•••n and n•••π through-space interactions and through-space conjugation (TSC), 116,117 the formation of supramolecular clusters of such luminogens, with extended electron delocalization and a rigid conformation provided by the confinement within the crystal lattice, generates CL as the emission process. In (TET) 2 [Pb 4 Cl 16 ], the organic cation TET carries four amino groups and involves only saturated C−C and C−N bonds without any π electrons, and as such cannot be considered a conventional luminophore. On the contrary, the amino groups have the potential for CTE. To support this assumption, photoluminescence excitation (PLE) at different emission wavelengths λ em , PL under different excitation wavelengths λ ex , and time-resolved PL measurements were carried out for (TET) 2 [Pb 4 Cl 16 ] and its corresponding salt TET•Cl 4 in the solid state.
Evidence for Clusteroluminescence (CL). The mechanism for aggregation induced emission (AIE) or CL relies on a restriction of molecular motions (vibration and rotation) in the crystalline state or the formation of cluster excitons spanning several molecules with strong intermolecular through-space interactions. 43,48,118 CL is evidenced by a concentration-dependent luminescence, an excitation wavelength dependence of the emission spectrum with a systematic red-shift of the emission with increasing excitation wavelength, a broad emission originating from the superposition of several emitting clusters (with possibly several cluster sizes), and room temperature phosphorescence. −125 We explore hereafter the different characteristics of the optical absorption and photoluminescence of (TET) 2 [Pb 4 Cl 16 ] with respect to possible CL effects.
Concentration-Dependent Luminescence. According to the CTE mechanism, CLgens are anticipated to exhibit weak or no emission in dilute solutions but stronger emission in concentrated solutions, in contrast to the aggregation caused quenching (ACQ) effect observed in traditional luminogens. 33 The optical properties of (TET) 2 [Pb 4 Cl 16 ] were first investigated in the solution state as a function of concentration. The OA spectra depicted in Figure 6a show an initially weak absorption at low concentrations, which is progressively enhanced with increasing concentration, accompanied by the emergence of new shoulders with an edge extending into the visible range. Additionally, the PLE measurements at different λ em (Figure S9) indicate that the excitation spectra are enhanced and the peaks grow more prominent with increasing concentration. Consistently, the PL spectra (Figures 6b and S10) show that as the concentration of (TET) 2 [Pb 4 Cl 16 ] increases, the PL intensity increases accordingly, accompanied by a red-shift of the λ em maximum. For instance, under excitation at 370 nm, the λ em maximum is red-shifted from 430 to 454 nm, accompanied by a remarkable 21-fold enhancement in intensity, as the concentration is raised from 0.6 to 6.6 mg/mL. This concentration-dependent behavior is attributed to clustering of the TET organic molecules as a function of concentration, with shortened intermolecular distances among luminogens, which facilitates the formation of new clusters. The
clustering effect promotes conformational rigidity and restricts molecular motions and vibrations, thus boosting luminescence. 44,126,127 Unmatched Absorption and Excitation. The PLE spectra of (TET) 2 [Pb 4 Cl 16 ] and its corresponding salt TET•Cl 4 , shown in Figure 7a,c, are characterized by a sharp peak centered at around 370 nm and a broad contribution at lower energy extending up to 700 nm, which implies the coexistence of multiple excitonic states. The excitation peak around 370 nm is present for both compounds and could be assigned to the HOMO−LUMO transition of the organic cation, overlapped with the VB-CB transition of the inorganic framework. This is consistent with the energy gap derived from our DFT calculations for the organic component. This peak does not shift in energy as a function of the emission wavelength. On the contrary, the broad structure at lower energy is continuously red-shifted when measured at increasing emission wavelengths. In both cases, the PLE spectrum is strongly different from the corresponding absorption spectrum (Figure 5a), with a characteristic systematic red shift. The discrepancy in the longer wavelength range is regarded as a significant feature of CL. 42,48 Excitation-Dependent Luminescence. Figure 7b,d shows the PL spectra measured on single crystals of both compounds as a function of the excitation wavelength. For λ ex = 340 nm, the emission spectrum of (TET) 2 [Pb 4 Cl 16 ] consists of several peaks (with two main large contributions), corresponding to several emissive centers in the crystal. As λ ex increases up to 430 nm, the two main contributions change in intensity, with the peak located at 533 nm reaching its maximum intensity at 370 nm excitation and then decreasing in intensity before disappearing completely. From 430 nm, a continuous and monotonic red-shift of the emission band is observed; this excitation-dependent emission typically represents a key feature of CL. 128,129 Indeed, based on molecular photophysical theories, for a molecule with a fixed structure, the PL intensity of an emission band may vary, but not its energetic position (λ em is constant). 130 For CL and aggregation induced emission, the excitation-dependent emission originates from emissive clusters in the crystal or
aggregates of various sizes, leading in parallel to different extents of electron delocalization. 96,131 We therefore presume that, in the case of (TET) 2 [Pb 4 Cl 16 ], the wavelength-dependent emission originates from the formation of supramolecular clusters in the crystalline state via a favorable network of TET•••Cl intermolecular interactions. A similar argument was used to explain the wavelength-dependent luminescence of heteroatom-containing molecules without aromatic rings in the crystalline state. 118 A very similar effect is also characterized for the TET•Cl 4 salt (Figure 7d). A comparison between the hybrid and the salt compounds shows a strongly enhanced luminescence in the former, almost 5-fold.
The occurrence of clusteroluminescence is further evidenced by the continuous decrease in emission fwhm, from more than 200 nm to about 100 nm, as the excitation wavelength λ ex varies from 340 to 540 nm (Figure S11). At low λ ex (high energy), small to large clusters can be excited and contribute to the luminescence, while high λ ex (low energy) can only excite large clusters with a reduced energy gap. 73 The presence of multiple emission maxima in the PL spectra, with an impressive fwhm that remains broad even at a fixed high excitation wavelength, can only indicate the existence of multiple species with color-tunable emission. 43 As illustrated in the CIE coordinate diagram (Figure 8), the emission of both compounds, according to the variation of λ ex , covers a wide range of colors. In particular, the significant white color emission, with CIE coordinates of (0.26, 0.33) and (0.30, 0.36) at 340 nm excitation for the hybrid and salt compounds, respectively, contains multiple emission peaks and spans different emissive centers.
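CIE coordinates such as those quoted above are obtained by integrating an emission spectrum against the CIE 1931 colour-matching functions. The sketch below assumes the 2° observer CMFs are available as a plain-text table; the spectrum arrays are placeholders for the PL data of Figure 7.

```python
# Hedged sketch: compute CIE 1931 (x, y) chromaticity from an emission spectrum.
# The CMF table file and the spectrum arrays are placeholders (assumptions).
import numpy as np

def emission_to_xy(wl_nm, intensity, cmf_path="cie1931_cmf_2deg.txt"):
    """Integrate an emission spectrum against the CIE 1931 CMFs; return (x, y)."""
    cmf = np.loadtxt(cmf_path)                      # columns: wl, xbar, ybar, zbar
    xbar = np.interp(wl_nm, cmf[:, 0], cmf[:, 1])
    ybar = np.interp(wl_nm, cmf[:, 0], cmf[:, 2])
    zbar = np.interp(wl_nm, cmf[:, 0], cmf[:, 3])
    X = np.trapz(intensity * xbar, wl_nm)
    Y = np.trapz(intensity * ybar, wl_nm)
    Z = np.trapz(intensity * zbar, wl_nm)
    total = X + Y + Z
    return X / total, Y / total

# e.g. x, y = emission_to_xy(pl_wavelengths_nm, pl_intensity)
# coordinates near (0.33, 0.33) correspond to white emission on the CIE diagram
```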
Multiple Emission Peaks. As characterized for (TET) 2 [Pb 4 Cl 16 ] and TET•Cl 4 , CL is known to generate multiple emission peaks, 129,132 bolstered by their distinct PL lifetimes τ. The time-resolved luminescence signal of (TET) 2 [Pb 4 Cl 16 ] and TET•Cl 4 has been monitored at different emission wavelengths λ em using the same excitation wavelength λ ex of 375 nm (Figure 9). For the two compounds, a similar and complex decay is observed, characterized by several components which have been fitted with a biexponential function (results are collected in Table S8). The resulting time constants indicate a fast component (a few nanoseconds) and a second component one order of magnitude slower. This confirms the heterogeneous population of emissive species in both solids. Owing to the very different time scale, the emission of the hybrid compound cannot be attributed to free Wannier excitons of the PbCl inorganic framework, whose lifetime was estimated at 0.6 ns. 133 Furthermore, the average lifetime τ of the hybrid compound (Figure 9a) is comparable to that of the organic salt (Figure 9b). The presence of multiple peaks in the PL behavior, with a fast and a slow component, may also indicate the coexistence of fluorescence and phosphorescence processes. 134,135 Room Temperature Phosphorescence RTP. In addition to the photoluminescence described in the previous sections, both the hybrid and salt crystals exhibit a bright green afterglow that persists for 0.633 and 0.566 s, respectively, and can be seen with the naked eye at room temperature after ceasing the UV irradiation (Figure 10a and Videos S1 and S2, captured with a smartphone camera (Galaxy A52 (2021)). This is ascribed to RTP, which is another feature of CL. As a matter of fact, RTP is an optical process that can be promoted in a crystal with well-packed structures which restrict the molecular motions and suppress the nonradiative decay from the triplet state. The delayed PL lifetime of (TET) 2 [Pb 4 Cl 16 ] and TET•Cl 4 was determined from the analysis of a region of interest of the video recording. 68,69 The hybrid and salt delayed decay lifetimes were estimated by fitting a single exponential equation I(t) = I 0 exp(−t/τ) in the selected time window t 1 −t 2 , as described in Figure 10b. Similar values of the RTP lifetimes are estimated for the hybrid compound and the salt (τ H = 196 ms, τ S = 200 ms), which suggests that the phosphorescence emission could occur from triplet states of the organic cations, possibly after a RET from the inorganic framework and intersystem crossing (ISC) in the case of (TET) 2 [Pb 4 Cl 16 ]. The occurrence of RTP in both compounds corresponds to a population of triplet states and could be attributed to the following considerations: (i) The presence of heavy atoms (Pb, Cl) promotes spin−orbit coupling (SOC) and heavy atom effects, favoring ISC. (iii) The strong interfacial interaction between the organic cations and the inorganic framework might result in a remarkable RET, reducing the energy gap between the excited singlet and triplet states. −140 For instance, benzoquinolinium (BZQ) metal halides (such as (BZQ)-Pb 2 X 5 ) exhibit RTP from the BZQ + cation when Cl is substituted by Br, owing to a heavy atom effect. 141,142
A similar mechanism may be effective for the present compound, whose photoluminescence properties, including fluorescence and phosphorescence, are thought to be highly dependent on the chemical structure and on the formation of a sufficiently rigid supramolecular cluster within the crystal.
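A minimal sketch of the single-exponential fit used above for the delayed (afterglow) lifetime is given below; it can be applied to the decay curve extracted from the video (see the ROI sketch in the Optical Measurements section). scipy is assumed to be available, and the t 1 /t 2 values are placeholders rather than the authors' exact settings.

```python
# Sketch of the fit I(t) = I0*exp(-t/tau) applied only inside the [t1, t2]
# window defined in Figure 10b.  t1/t2 and the initial guess are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def fit_delayed_lifetime(t, intensity, t1, t2):
    """Fit I(t) = I0*exp(-t/tau) over t1 <= t <= t2 and return (I0, tau)."""
    mask = (t >= t1) & (t <= t2)
    model = lambda tt, i0, tau: i0 * np.exp(-tt / tau)
    popt, _ = curve_fit(model, t[mask], intensity[mask],
                        p0=(intensity[mask][0], 0.2))  # tau guess ~0.2 s
    return popt  # (I0, tau)

# e.g. i0, tau = fit_delayed_lifetime(t, intensity, t1=0.05, t2=0.7)
# tau ~ 0.196 s (hybrid) and ~0.200 s (salt) were reported in the text
```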
Through-Space Conjugation TSC. In order to gain more insight into TSC, which is the working mechanism for CL, the crystal structure of (TET) 2 [Pb 4 Cl 16 ] was further examined. As reported in many recent studies, clustering in both the solid and solution states can involve various forms of through-space interactions that induce TSC. 43,44,131 Even in the absence of π-electrons, (TET) 2 [Pb 4 Cl 16 ] contains lone pair electrons from nitrogen (from the ammonium groups) and chlorine atoms, which may contribute to through-space n•••n interactions. It is noteworthy that many hyperbranched and even linear polymeric molecules containing imine and amine functional groups have been characterized as producing strong photoluminescence favored by short N•••N through-space (not directly covalently bonded) distances. 38,138,143,144 As shown in Figure S2a, the TET molecule presents intramolecular short contacts between nitrogen and adjacent carbon atoms, whose distances are much shorter than the sum of their vdW radii (3.25 Å). These C•••N intramolecular interactions intertwine and rigidify the TET molecule, which in turn induces strong and stable through-space N•••N interactions. 48 The intramolecular N•••N interactions (such as N 5 •••N 6 = 3.196 Å and N 2 •••N 1 = 3.724 Å) that occur between the nitrogen lone pairs within the crystal structure constitute the basis for the n•••n through-space electronic communication network. It is worth noting that TD-DFT simulations have revealed that through-space electron delocalization among atoms can still occur even when the distance is greater than the sum of the vdW radii, which is most likely due to the shortened distances in the excited states. 38,43,145,146 Furthermore, it has been reported that halogens promote effective halogen-induced intermolecular interactions, leading to the restriction of molecular motions and molecular vibrations. The dense network of intermolecular interactions (Table S6) promotes self-assembly, creating a 3D supramolecular network and extending the through-space electronic communication network. It is imperative to consider that the hydrogen bonds present in this system will not only stiffen the conformations, but also stabilize and facilitate the communication among organic groups and between organic and inorganic groups by bringing them into close proximity. 150,151 The different interactions mentioned above contribute synergistically to promoting the TSC, therefore enhancing CL. Note that the design of donor−acceptor interactions between halogen groups and organic luminogens and between organic and inorganic groups, in terms of energy/charge transfer, may result in abundant luminescence phenomena. 96,140,152
Energy Transfer RET. The photoluminescence results suggest that triplet excitons are formed in (TET) 2 [Pb 4 Cl 16 ], following efficient ISC facilitated by SOC and heavy atom effects, and that the emission originates from the organic cation through energy transfer at the organic−inorganic interface. Along with the results of the DOS calculation presented above, the excitation dependence of the PL has been well recognized as a powerful tool to highlight RET processes, especially in similar hybrid materials where the absorption and emission of the two constituting entities (viz., organic and inorganic) overlap in the spectral domain. −155 This behavior corresponds exactly to the situation we observe for our compound, which is characterized by an energy diagram favorable to this mechanism, where the inorganic anion [Pb 4 Cl 16 ] acts as a donor and the organic cations TET act as an acceptor (Figure 11a). Under excitation above the inorganic gap, estimated at 3.1 eV, both the organic and inorganic components are excited. The photoluminescence spectrum therefore includes components from both sublattices. When the excitation is below the inorganic gap, only the organic cations are excited, and the PL spectrum includes only recombination localized on the organic components. For instance, the emission band of both compounds at 470 nm shows similar behavior for excitation below and above the donor band gap (Figure 11b), which is reasonable given that the TET molecule is the source of this emission. However, a distinct pattern is noted for the 530 nm emission band. In contrast to the salt, the PL intensity of the hybrid is significantly higher for excitations above the donor gap than below it, highlighting the role of the energy/charge transfer from [Pb 4 Cl 16 ] to the TET supramolecular clusters in the recombination process (Figure 10a). It is worth noting that the total PL intensity of the hybrid is almost five times greater than that of the organic salt, owing to the contribution of the inorganic moiety besides the organic TET molecules. The inorganic clusters serve as a platform for the clusterization of the organic functional groups, which not only enhances the rigidity of the whole structure but also involves different kinds of communication, including the RET mechanism and TSC, broadening the white luminescence and enhancing the emission efficiency. The transfer of the exciton from the inorganic framework to the organic component is expected to be ultrafast 109 and, therefore, not observable in our time-resolved photoluminescence experiments described above.
Figure 3. N−H•••Cl hydrogen bonds connecting the two TET cations with the inorganic framework.
Figure 6. (a) OA spectra and (b) PL spectra under 370 nm excitation of aqueous solutions with different concentrations, obtained by dissolving corresponding weighed amounts of (TET) 2 [Pb 4 Cl 16 ] in pure water.
Figure 7. Photoluminescence excitation spectra (PLE) of (TET) 2 [Pb 4 Cl 16 ] (a) and TET•Cl 4 (c) recorded at different emission wavelengths, λ em .Emission spectra (PL) of (TET) 2 [Pb 4 Cl 16 ] (b) and TET•Cl 4 (d) as a function of excitation wavelengths, λ ex .The PL curves have been shifted along the PL intensity axis to better highlight the red-shift of λ em as λ ex increases.The excitation source effect is suppressed and is represented by dashed lines.
Figure 8. CIE coordinate diagram showing the trajectory of the color-tunable emission recorded under different λ ex from 340 to 540 nm for the hybrid compound (a) and for the salt compound (b).
Figure 9. Photoluminescence decay profiles for the hybrid (a) and the salt crystals (b) recorded under excitation at 375 nm and monitored at different emission wavelengths (λ em ).
Figure 10. (a) Video frames of the hybrid compound and salt luminescence taken under 375 nm UV light and after ceasing the UV irradiation. (b) The delayed luminescence lifetime of the hybrid compound and salt at room temperature extracted from a smartphone video. The initial time t 0 = 0 corresponds to the first frame after turning off the UV excitation source, with a maximum of PL intensity. t 1 is a constant offset from t 0 to allow the settling of the excitation source response after the quick decline in prompt luminescence. Finally, t 2 is the point in time when 99.52% and 99.54% of the total delayed emission of the hybrid and salt compounds, respectively, have been collected.
Figure 11. (a) Schematic illustration of the energy diagram of the electronic structure of the hybrid compound. (b) Excitation spectra of the hybrid and the salt at 470 and 530 nm emission wavelengths, showing the PL behavior below and above the inorganic band gap. | 2023-11-25T16:06:54.138Z | 2023-11-23T00:00:00.000 | {
"year": 2023,
"sha1": "944d61a406fd61597fde309625dfc4833da15fd0",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1021/acs.jpcc.3c04647",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "49b17ad5be60e60bfda92734e5dce6f9d2c1abd3",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268287619 | pes2o/s2orc | v3-fos-license | Review and update on the management of triangular fibrocartilage complex injuries in professional athletes
Triangular fibrocartilage complex injuries are common in amateur and professional sports. These injuries are mainly caused by acute or chronic repetitive axial loads on the wrist, particularly on the ulnar side and in association with rotations or radial/ulnar deviations. In order to treat professional athletes, a detailed specific knowledge of the pathology is needed. Moreover, the clinician should fully understand the specific and unique environment and needs of the athletes, their priorities and goals, the type of sport, the time of the season, and the position played. An early diagnosis and appropriate management with the quickest possible recovery time are the uppermost goals for both the athlete and the surgeon. A compromise between conservative vs surgical indications, athletes’ needs and expectations, and financial implications should be achieved. Arthroscopic procedures should be timely planned when indicated as they could allow early diagnosis and treatment at the same time. Conservative measures are often used as first line treatment when possible. Peripheral lesions are treated by arthroscopic repair, whilst central lesions are treated by arthroscopic debridement. Further procedures (such as the Wafer procedure, ulnar osteotomies, etc.) have specific indications and great implications with regard to rehabilitation.
INTRODUCTION
The triangular fibrocartilage is a small load-bearing disc-shaped anatomical structure located at the level of the distal part of the ulna and distal radio-ulnar joint, in close relation to the ulnar styloid and the ulnar margin of the distal radius. Together with the dorsal and volar distal radio-ulnar ligaments, the meniscal homolog, the ulno-carpal ligaments and the extensor carpi ulnaris tendon sheath, it forms the "triangular fibrocartilage complex" (TFCC). Vascularization is provided by dorsal and palmar branches of the ulnar artery, and palmar branches of the anterior interosseous artery [1][2][3]. The central portion has reduced regenerative capacities, whilst the peripheral portion is more prone to reparative processes, being provided with a better blood supply compared to the central portion.
The function of the TFCC is to act as a stabilizer for the ulnar aspect of the wrist. It can resist both loading and tensile forces. The TFCC is at risk of either acute traumatic or chronic degenerative injury [1][2][3]. One can intuitively understand that athletes are at greater risk of acute traumatic injuries to the TFCC rather than degenerative problems.
The diagnostic and therapeutic process can vary between the general population and professional athletes. Several factors must be taken into account in order to choose the most appropriate course of action [1,2].
LITERATURE SEARCH
A narrative review of published papers on the diagnosis and management of TFCC injuries in professional athletes was performed. PubMed, MEDLINE, Cochrane and EMBASE databases were searched. The search keywords were: TFCC injury; triangular fibrocartilage complex injury; TFCC athletes; triangular fibrocartilage complex athletes; TFCC sport; triangular fibrocartilage complex sport; TFCC professional athletes; triangular fibrocartilage complex professional athletes; TFCC treatments; triangular fibrocartilage complex treatments.
All Authors independently performed the review and all included articles were scrutinized. The included articles and results were merged after common agreement among the Authors. Only articles specifically based on the diagnosis and management of TFCC injuries in professional athletes published in the last 20 years were included. In addition, only articles with level of evidence I-II-III-IV were included. Informed consents and ethical approval were not necessary (narrative review).
EPIDEMIOLOGY
Whilst epidemiologic data in the general population have been reported, there is still uncertainty and a lack of high level scientific evidence with regard to the epidemiology of TFCC lesions related to sports. It is reported that these lesions represent between 3% and 9% of the hand-wrist injuries in athletes. It is also reported that the prevalence increases with age [1,2,4,5].
TFCC injuries are common in both amateur and professional sports.These injuries are mainly caused by acute or chronic repetitive axial loads on the wrist, particularly on the ulnar side and in association with rotations or radial/ulnar deviations [1,2,[4][5][6][7][8][9].
Sports that more commonly result in such injuries have been reported to be: Tennis, padel, table tennis, golf, and baseball.Sports that less commonly result in these injuries (but still with a significant number of reported cases) are: Volleyball, basketball, water board sports, and gymnastics [1,2,[6][7][8][9].
CLASSIFICATIONS
Classifications of TFCC injuries are based on anatomy (central vs peripheral) or, more commonly, on the etiology. In fact, the classification most frequently used is Palmer's classification, which divides TFCC lesions into 2 groups: Type 1 (acute traumatic injuries) and type 2 (degenerative lesions) [10][11][12]. Type 1 lesions are further divided into subgroups on the basis of the anatomical location of the lesion: Type 1A (isolated central TFCC articular disc perforation); type 1B (peripheral ulnar sided TFCC tear, with or without ulnar styloid fracture); type 1C (distal TFCC disruption, from the distal ulno-carpal ligament); type 1D (radial TFCC disruption, with or without sigmoid notch fracture) [10][11][12].
DIAGNOSIS
The most common symptoms of a TFCC injury are: Ulnar-sided wrist pain with tenderness mainly found at the level of the fovea; minimal ulnar-sided oedema; pain against resistance; reduced range of motion (ROM) of the wrist; joint sagging during rotations or load-bearing activities; an audible and palpable "click" from the ulnar side of the wrist during forearm rotations (prono-supination) and sometimes radial and/or ulnar deviation. Pain and reduced function on radio-ulnar deviation can be characteristic of an ulnar impaction syndrome. Injuries can be asymptomatic or pauci-symptomatic [13][14][15][16][17].
Athletes often report the sudden occurrence of pain during forearm rotations and axial-loading of the wrist. Different combinations of axial-loading, rotation and radial or ulnar deviations have been reported. Direct trauma on the ulnar side of the wrist is a rare but existing occurrence, especially with the wrist in radial deviation. Another possible finding is a chronic lesion caused by repetitive movements of the wrist and general mechanical stress on the distal radio-ulnar joint [13,14,18].
An association between TFCC injuries and other major musculoskeletal lesions (such as Colles' fractures, Galeazzi's injury, Essex-Lopresti injury, etc.) in athletes competing in sports involving strong body contact such as football, rugby, etc. has rarely been reported [1,2,4,5].
If the TFCC injury is significant, the athlete is rarely able to continue to the end of the training sessions or games/ competitions given the significant pain and wrist motion impairment.
Specific tests introduced in order to evaluate wrist stability are as follows: Ulnar fovea sign, piano key test, ulnar grinding test, compression test, and the ballottement test [1,2,19,20] (Table 2).
The next step is radiology. A plain radiograph is a must (at least the common antero-posterior (AP) and lateral views) and is the simplest and quickest radiological exam performed. It is of absolute relevance to identify any fractures (especially affecting the ulnar styloid), measure the standard radiological parameters of the wrist (ulnar variance in particular on the AP view), exclude a subluxation of the distal radio-ulnar joint (by examining the lateral view) and evaluate the potential presence of an ulnar impaction syndrome (especially in degenerative lesions) [1,2,16,19,20] (Table 2).
A computed tomography scan is rarely recommended, and is indicated only in cases of intra-articular fractures of the wrist. A wrist magnetic resonance imaging (MRI) scan is often indicated for TFCC injuries. Its sensitivity and specificity vary depending on the level of resolution of the scanning machine. High resolution MRI scans provide a level of accuracy up to 97% [1,2,16,19,20] (Table 2).
A wrist arthrogram is an option very commonly used in some units as this exam allows visualization of the lesion location and defines the characteristics of the TFCC lesion[1,2,16] (Table 2).
However, the literature indicates that wrist arthroscopy is the gold standard for TFCC injury diagnosis and to potentially allow appropriate treatment simultaneously.It is considered the test with the highest sensitivity and specificity, provides the opportunity to directly visualize the anatomical structures and make a specific diagnosis, and allows arthroscopic treatment at the same time (debridement or repair).Several arthroscopy portals and repair techniques have been reported in the literature, which are chosen specifically depending on the type of patient and lesion [1,2,21,22].Specific aspects related to the choice and timing of radiological examinations will be discussed in the following sections.
Conservative treatments
Athletes with a TFCC injury, after the appropriate diagnostic process, are very often prone to attempt conservative measures first, before more invasive options. However, this will be further developed in this article as many specific factors must be considered when making such decisions [1,2,20,21].
The following conservative measures are applied in different combinations: No sport activity for 3-6 wk (depending on the severity of the injury); immobilization with a splint for 2-4 wk; utilization of non-steroidal anti-inflammatory agents (oral and/or topical); one or more steroid injections and/or hyaluronic acid injections; physical therapy; occupational therapy. If it is necessary for the athlete to carry on to the end of the competitive season or a specific game/competition (weeks or months), the clinician should postpone the surgical treatment until the annual break and carry out the abovementioned conservative strategies, if the severity of the injury allows and bearing in mind that the athlete must perform at a certain high level. However, if the TFCC injury is very severe and strong surgical indications are defined, especially in the absence of imminent relevant competitions, the orthopaedic surgeon could recommend surgical repair straight away, with the aim of achieving the quickest possible recovery and return to sport activity while avoiding long-term degenerative complications [1,2,22-24].
It is suggested that an experienced multidisciplinary team centered on an orthopaedic surgeon with strong experience in hand and wrist surgery should be involved in the treatment of elite athletes with TFCC injuries. In fact, injury management errors would be particularly evident when dealing with professional athletes, and significant consequences could be caused by little mistakes. We stress again the importance of the initial decision on the timing of surgical repair: The athlete's entire career is at stake and this "crossroad" is the key management decision that could positively or negatively affect the outcome of treatment [1,2].
Surgical treatments
Type 1A lesions: Isolated central TFCC articular disk perforation. These lesions are avascularized and cannot be arthroscopically repaired. Therefore arthroscopic debridement is the surgical treatment of choice. It is reported in biomechanics studies that up to 80% of the disc could be debrided/removed without causing any significant wrist instability. A standard arthroscopic set up is usually required with the use of 2 portals (rarely 3 portals) [1,2,18-24].
If a type 1A lesion is associated with neutral or positive ulnar variance, it may be necessary to perform a shortening ulnar osteotomy or a Wafer procedure after debridement. The osteotomy could be postponed to the end of the season with the aim of allowing the injured athlete to return to high-level performance in a few weeks and manage the remaining problem at the end of the season. The rehabilitation program after an osteotomy could last up to 3-4 mo (it requires immobilization for 6-8 wk) and the entire season could be at risk [1,2,18-24].
Type 1B lesions: Peripheral ulnar-sided TFCC tear with or without ulnar styloid fracture. These lesions are vascularized and can potentially be repaired. Using an inside-out, outside-in or all-inside technique, the lesion is arthroscopically repaired [1,2].
Type 1C lesions: Distal TFCC disruption (disruption of the distal ulno-carpal ligaments). These lesions are often diagnosed without arthroscopy and mainly require an open surgery repair. An incision on the ulnar side of the wrist just volar to the extensor carpi ulnaris should be made and the neurovascular structures carefully protected throughout the entire procedure. Following appropriate exposure, the lesion can be directly repaired; several techniques have been described [1,2,18-24].
Type 2A lesions: TFCC wear. Symptoms related to these lesions may be insidious. Radiographs are necessary in order to rule out and diagnose degenerative changes in the distal radio-ulnar joint and evaluate the ulnar variance. Elite athletes are often offered surgical management, including a shortening ulnar osteotomy for those with neutral or positive ulnar variance. However, the latter is contraindicated in the presence of radio-ulnar joint arthritis. In this case a distal ulnar resection is proposed [1,2,18-24].
Type 2D lesions: TFCC perforation with lunate and/or ulnar chondromalacia and with lunotriquetral ligament perforation. This type of lesion rarely affects elite athletes as significant degenerative processes do not generally occur before 30-40 years of age. The treatment does not differ from that of 2C lesions, apart from the necessity to evaluate the stability of the lunotriquetral ligament. Consequently, a Wafer procedure is contraindicated in the case of instability, and accurate debridement of the ligament is also necessary. If instability persists even after the osteotomy procedure, a lunotriquetral arthrodesis should be considered as a second-stage treatment option in those whose symptoms do not improve after osteotomy [1,2,18-24].
Type 2E lesions: TFCC perforation with lunate and/or ulnar chondromalacia, lunotriquetral ligament perforation and ulno-carpal arthritis. In the presence of degenerative changes, Wafer procedures and osteotomies are not indicated in these patients. The well-studied salvage procedure called Sauve-Kapandji (or hemiresection arthroplasty) is a suitable option for these cases. A Sauve-Kapandji procedure is the treatment of choice for elite athletes as it offers the lowest risk of radio-ulnar impingement during sport activities [1,2,18-24]. The rehabilitation protocol of type 2 lesions does not differ from those described above.
Sport-specific treatments
The treatment of choice may depend on and vary with several factors. The clinician should take into account the level of pain and movement limitations, the type of lesion, the severity of the injury, the level of competition, the timing of the injury in relation to the stage of the agonistic season, the sport, and the position played by the athlete [1,2,22-27].
Very early diagnosis is the key step for elite athletes as this allows early identification of the problem and subsequent early treatment planning. If temporary immobilization is advocated, professional athletes are often very reluctant to be compliant with this treatment strategy. Moreover, athletes participating in sports involving repetitive pronation/supination and radial/ulnar deviation do not significantly benefit from this type of conservative measure and very often immobilization is avoided [5,16,17].
On the other hand, steroid injections (with or without the use of hyaluronic acid) seem to be a quite common temporary or definitive option for professional athletes. These injections are indicated for injuries that do not have surgical indications or for injuries with surgical indications in athletes who are willing to end the season before undergoing surgery. However, there are exceptions: In the presence of radio-ulnar instability, central lesions (an early arthroscopic procedure for central lesions allows both an early and accurate diagnosis and appropriate surgical management at the same time, assuring the shortest rehabilitation time), and lesions associated with neutral or positive ulnar variance (debridement is initially needed, after which further surgical treatments are considered and evaluated in the following months), a prompt surgical plan is warranted [1,2,18-24].
In contrast to the surgical timing for central lesions, peripheral lesions with a surgical indication do not need immediate surgical planning. Careful discussion (with evaluation of pros and cons) between the athlete and the surgeon should take place, with a shared final decision on whether to choose a conservative or surgical option. As mentioned previously, several factors should be taken into account and the decision should be athlete-centred and specific. If the type of injury allows, many opt for temporary measures (prevalently steroid injections) in order to complete at least the season. In fact, a surgical option implies the need for at least 3 mo of rehabilitation. More invasive treatments such as Wafer procedures are very commonly delayed until the end of the season, whilst ulnar osteotomies are widely postponed at least until the end of the season, if not to the end of the professional career [1,2,18-24].
The major issue for all athletes whose injuries have a surgical indication but are treated temporarily with conservative measures until the end of the season is the possibility of compromising the final surgical results and increasing the risk of medium- and long-term consequences (such as degenerative changes) [1,2,18-24].
SPORT-SPECIFIC REHABILITATION INSIGHT
It is known that central lesions can be treated arthroscopically. Patients are required to use a splint for 1 to 2 wk after the surgical procedure and then start passive and active ROM exercises. Athletes playing sports such as golf are usually able to start their routine training (including ball-contact) after about 3 wk. Full return to competitive sport activities can be achieved in 4-6 wk in these patients. On the contrary, for sports involving significant axial loading forces onto the wrist (such as boxing and gymnastics) full return to competition level may take up to 8-12 wk. Athletes playing sports involving frequent and intense radial-ulnar deviation and rotations of the wrist (such as tennis and padel) may return to competitions in approximately 6-8 wk [1,2,22-27].
The rehabilitation protocol after arthroscopic debridement for peripheral lesions is longer than that described above. In fact, patients require a period of immobilization with a splint or cast for 2-6 wk. This should be followed by a further 6-8 wk of passive and active wrist ROM exercises and strengthening exercises. The return to competitive sport activity may be achieved after 2-3 mo, independent of the type of sport [1,2,22-27].
The return to sport might take longer following surgical treatment of type 2 lesions. A Wafer procedure requires 1-2 wk of immobilization in a splint or cast, which should be followed by early active exercises first and strengthening exercises in the second stage. Full return to competition is authorized after 6-8 wk minimum depending on symptoms: The athlete can compete at the end of the rehabilitation protocol when pain free. No differences among the types of sport have been reported [1,2,18-27].
A shortening osteotomy requires 5-6 wk of immobilization, preferably with a cast. This is followed initially by passive and active ROM exercises of the wrist and then by strengthening exercises. Full return to competitive sport may be achieved after 10-12 wk at the earliest. Full bone healing (with radiological evidence from plain radiographs) is necessary in order to allow the athlete to compete again [1,2,18-27].
Elite athletes playing certain sports can be aided by the use of taping, splinting or padded casts, especially after Wafer or osteotomy procedures, in order to reduce stress. However, not all sports allow the use of these aids. In fact, athletes playing ball contact sports (such as rugby, football, baseball and tennis) cannot fully compete if they require these aids, and their rehabilitation protocol might take longer than expected as a consequence. Therefore, protective equipment varies between non-contact and contact sports, and its use may vary even among different roles played by the athletes within the same sport activity [1,2,18-27].
In general, a professional athlete with key roles within the team or with very high potential and expectations can wait until the end of the season to undergo a surgical procedure after sustaining a TFCC injury (and utilize temporary measures such as steroid injections); moreover, the rehabilitation protocols tend to be more intense as the quickest possible return to sport is attempted. On the other hand, professional athletes with lower expectations decide on surgical treatment at an earlier stage and a slightly more prudent rehabilitation protocol is adopted [1,2,18-27].
CONCLUSION
TFCC injuries are common in amateur and professional sports. These are mainly caused by acute or chronic repetitive loads on the wrist, particularly on the ulnar side. This is even worse if axial loads are associated with rotations or radial/ulnar deviations.
In order to treat professional athletes who sustain TFCC injuries, a detailed specific knowledge of the pathology is needed. Moreover, the clinician should fully understand the specific and unique environment and the needs of the athletes, their priorities and goals, the type of sport, the time of the season, and the position played.
An early diagnosis and appropriate management with the quickest possible recovery time are the foremost goals for both the athlete and the surgeon. A compromise between conservative vs surgical indications, the athlete's needs and expectations, and financial implications should be achieved. Arthroscopic procedures should be planned in a timely manner when indicated, as they may allow early diagnosis and treatment at the same time.
Conservative measures are often used as first-line treatment when possible. Peripheral lesions are treated by arthroscopic repair, whilst central lesions are treated by arthroscopic debridement. Further procedures (such as the Wafer procedure, ulnar osteotomies, etc.) have specific indications and great implications with regard to the rehabilitation time and long-term consequences for the athletes.
Competitive levels are very often regained by athletes whose TFCC injuries are treated surgically. Only a small percentage do not reach satisfactory levels, and this is more common in those undergoing repair procedures or procedures related to radial/ulnar instability and neutral or positive ulnar variance. | 2024-02-08T16:16:21.329Z | 2024-02-18T00:00:00.000 | {
"year": 2024,
"sha1": "f3ec44b9c457dabc93eafa4c0d58ba4b93ea5da3",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5312/wjo.v15.i2.110",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f3ec44b9c457dabc93eafa4c0d58ba4b93ea5da3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
11708785 | pes2o/s2orc | v3-fos-license | Effect of copper and zinc status on susceptibility to cadmium intoxication.
The effects of dietary cadmium on copper and zinc metabolism in animals are described. Emphasis is given to situations involving chronic exposure to low levels of cadmium, to the identification of population groups most at risk, and to the protective effect of dietary supplementation with copper and zinc. The mechanism of the interaction between the metals and the involvement of metallothionein are discussed.
It is now widely recognized that the toxicity of metals cannot be considered without due regard being given to dietary composition and the nutritional state of the animal. Increased cadmium intake can cause alterations in the metabolism of copper and zinc in experimental animals and, conversely, the development of certain symptoms of cadmium toxicity may, on occasion, be prevented by dietary copper and zinc supplementation. For example, elevated cadmium intakes in rats, chicks and mice have resulted in increased mortality, poor growth, and anemia (1, 2). The growth rate was restored by zinc supplementation and the mortality and anemia reduced by increasing the copper intake. Supplementation with copper also prevented the degeneration of aortic elastin, presumably by restoring the activity of the copper-dependent enzyme lysyl oxidase. Clinical signs of zinc deficiency have been reported in poultry (3) fed on high cadmium diets, but were absent in animals in which the zinc intake had been increased.
The widespread occurrence of competitive antagonisms between metals has been explained in terms of their isomorphous replacement at particular sites in biological systems. This concept was of considerable value in furthering our understanding of the multiplicity of trace metal interactions, but it did not explain all facets of trace metal imbalance.
One difficulty was that the antagonistic effect of a metal was, on occasion, associated with increased concentrations of the agonist in tissues exhibiting signs of pathological change, rather than with the decreased concentrations that might be expected. For example, the testicular atrophy occurring in rats after cadmium injection was associated with a massive increase in zinc concentrations in the testes and yet was preventable by prior zinc injection (4).
Although many reports have been published illustrating the complexity of the interactions among cadmium, copper, and zinc, these have usually left unanswered the fundamental question of whether copper and zinc metabolism is likely to be disturbed at the level of exposure encountered by human and animal populations. Typical concentrations of cadmium in the human diet are about 0.05 mg/kg and may increase to about 1 mg/kg in contaminated areas. In many experimental studies to date, the concentrations of cadmium used have been greater than this by several orders of magnitude. Furthermore, the diets used have often been frankly deficient in both copper and zinc, although deficiency states of comparable severity are almost unknown in human populations. There is, however, an increasing realization that dietary zinc intakes may be only marginally adequate for normal demands and that copper deficiency may occur in special circumstances, as in cases of malnourishment in infants. In view of the occurrence of skeletal defects and cardiovascular lesions in both chronic cadmium toxicity and copper deficiency, there is a need for detailed assessment of the importance of the interactions between these metals. This can only be done by direct experimentation in animals, using realistic concentrations of cadmium, zinc, and copper. It must also be recognized that the effects of cadmium on copper and zinc metabolism may be more evident in certain population groups, because of increased demands on body reserves of essential trace elements during periods of rapid growth, pregnancy, and lactation.
Concern has been expressed about the hazards of cadmium intoxication in animals grazing near industrial complexes where pastures may contain about 10 mg cadmium/kg. In studies designed to assess this danger, Mills and Dalgarno (5) found that copper metabolism was seriously disturbed in pregnant ewes, and more especially in their lambs, when they were fed diets containing only 3.5-12 mg cadmium/kg, even though their copper intake, 5 mg/kg, was apparently sufficient to meet normal requirements. Both liver and plasma copper concentrations were markedly reduced but the only clinical manifestation of copper deficiency was a deterioration in wool quality. Zinc metabolism was not seriously affected, despite a decrease in liver zinc contents. In agreement with the observations of Anke et al. (6), there appeared to be an efficient placental block against cadmium transport to the fetus, but there was nevertheless still a trend towards reduced copper accumulation in the newborn lamb. Little or no cadmium was accumulated by suckling animals, but newly weaned animals appeared particularly susceptible to cadmium following their introduction to solid food.
In contrast, no disturbances in copper metabolism were detected in pregnant ewes receiving 3 mg cadmium/kg and only 2.6 mg copper/kg, which was still sufficient to maintain normal growth and blood indices in the control animals (Campbell and Mills, unpublished data). However, the offspring of the cadmium-treated animals had lower birth weights, and there were signs of skeletal rarefaction and significant reductions in growth, plasma copper concentrations, and cytochrome oxidase activities of liver and duodenal mucosa when they were fed these diets (with 2.5 or 4.5 mg copper/kg) for several months thereafter. These effects were abolished by increasing the dietary zinc intake from 30 to 150 mg/kg. Surprisingly, this was associated with an increase in hepatic copper concentration, in contrast to the decrease noted by Bremner, Young, and Mills (7) in demonstrating a protective effect of zinc against copper toxicity in sheep. The effects of both cadmium and zinc on the growth of the lambs were abolished by increasing the dietary copper content to 15 mg/kg.
The inclusion of a range of dietary zinc concentrations in these experiments was inspired by the common occurrence of both cadmium and zinc in excessive amounts in contaminated pastures (5). A dietary zinc content of 750 mg/kg severely reduced food intake, growth, and copper status of the pregnant ewes. It also caused a high incidence of abortions and reduced the viability of the lambs. Increasing the dietary copper content prevented the decrease in plasma copper concentrations but had no effect on growth or on lamb survival, which indicates that these effects did not result from a conditioned copper deficiency. Instead it is probable that they resulted from the low food intake of the ewes or the accumulation of zinc in fetal kidneys and other tissues, with consequent renal damage similar to that reported in lambs on liquid diets with a high zinc content (8). It is noteworthy that adult sheep can tolerate zinc intakes of 400-1000 mg/kg with at most only slight effects on growth and food intake. It is apparent therefore that the pregnant or young liquid-fed animal is more susceptible to excessive zinc intakes. However, in the latter case, this may be a reflection of a more general phenomenon since the young of many species show increased absorption of several heavy metals compared with older animals.
Similar findings of decreased plasma ceruloplasmin and kidney copper concentrations and reduced cortical bone index have been made in rats fed a diet with as little as 1.5 mg cadmium (9, Campbell and Mills, unpublished data). These effects were exacerbated by cadmium intakes of up to 18 mg/kg, which reduced liver copper concentrations by about 50%. Plasma zinc concentrations were reduced, and liver and kidney zinc concentrations were increased. However, the changes in zinc content were relatively minor and, unlike the situation in sheep, there was no beneficial effect of increasing the zinc content of the diet. Instead, this increased the severity of the copper deficiency state, although it did reduce kidney cadmium concentrations. Indeed, dietary intakes of 300 and 1000 mg zinc/kg, even in the absence of cadmium supplements, had severe effects on copper metabolism.
The reductions in plasma and tissue copper concentrations caused by a cadmium intake of 6 mg/kg were prevented by increasing the copper intake, indicating that there was a direct effect of cadmium on copper metabolism. This could arise from a reduction in copper absorption or alternatively a change in copper distribution within tissues and its displacement from functional sites. Van Campen (10) and Starcher (11) have claimed that cadmium inhibits 64Cu absorption in rats and chicks, possibly by inhibition of copper-binding to a low molecular weight protein in the mucosal cytosol, which may be involved in the copper absorption process (11,12). Davies and Campbell (13) confirmed that cadmium inhibited copper absorption at a molar cadmium:copper ratio as low as 4:1 which was similar to that found sufficient to induce a copper deficiency state in rats (9).
However, in contrast with the earlier reports, they found that binding of 64Cu to the intestinal mucosa was increased, even at a cadmium:copper ratio of 1.: 1. This was associated in part with the low molecular weight copper protein, the binding of 64Cu to the protein being inversely proportional to the cadmium intake. It appears, therefore, that cadmium may block the exit of copper from mucosal cells, while exerting little inhibitory effect on its mucosal uptake. The lack of agreement as to the effects of cadmium on mucosal binding of copper may derive from the abnormally high cadmium:copper ratios used in the earlier studies (10,11) or from the fact that Davies and Campbell (13) maintained their rats on the cadmium-supplemented diet for one week prior to dosing with 64Cu, as this increased the concentrations of cadmium and zinc in the low molecular weight fraction.
The nature of the intestinal metal-binding proteins has yet to be established. It has been assumed by several groups that the cadmium protein is metallothionein, but, according to Evans and LeBlanc (14), the copper protein may have a different amino acid composition. However, similar claims have been made as to the identity of the analogous copper-protein in liver (15), yet Bremner and Young (16) successfully isolated (copper, zinc)-thioneins from the livers of copper-injected rats.
Many of the cadmium-induced changes in tissue copper and zinc distribution appear to result from increased incorporation of the other metals into metallothionein. For example, cadmium administration causes an increase in the amount of zinc in hepatic metallothionein and of copper in renal metallothionein (17). Although cadmium, zinc, and copper can all apparently induce synthesis of metallothionein, displacement of one metal by another and competition for binding sites on the protein may also occur. For example, cadmium has a greater binding affinity than zinc for binding sites, and high copper concentrations in ovine liver have resulted in displacement of both zinc and cadmium from metallothionein (Bremner, unpublished data). Furthermore, the zinc status of an animal has an important influence on the accumulation of copper-thioneins in liver and kidney (18,19), possibly because of the decreased biological stability of the zinc-free copper-thioneins (20).
It is not known whether the accumulation of cadmium-thionein after chronic exposure to cadmium is influenced by zinc status, but it has been shown that the lethality (21) and development of testicular atrophy (4) and alterations in hepatic and pancreatic function (22) in acute cadmium toxicity are diminished by zinc administration. This has been attributed (21) to increased and more rapid incorporation of cadmium into metallothionein as a result of its prior induction by zinc (23). However, the importance of preinduced metallothioneins in the protection against acute cadmium toxicity has recently been disputed (24).
These conclusions do not necessarily apply to animals chronically exposed to cadmium where it can be argued that any displacement of cadmium from metallothionein will lead to greater expression of the toxicity of cadmium. For example, the conversion of vitamin D into the active 1,25-dihydroxycholecalciferol derivative in the kidney is inhibited by cadmium, but not by cadmium-thionein (25). It is possible that cadmium bound to other proteins might still inhibit the activation of vitamin D, with eventual development of skeletal lesions typical of cadmium toxicity.
It is interesting that the hydroxylation of vitamin D is due to a mixed function oxidase reaction and may therefore be a copper-dependent process (26). The occurrence of disturbances in vitamin D metabolism in cadmium-treated animals could therefore result from direct inhibition by cadmium of the hydroxylation or indirectly by induction of a copper deficiency state. These postulated effects on hydroxylations may have wider implications in view of the reduction in cytochrome P-450 contents and inhibition of drug metabolizing enzymes in both cadmium-treated (27) and copper-deficient rats (28). | 2017-10-03T14:09:42.239Z | 1978-08-01T00:00:00.000 | {
"year": 1978,
"sha1": "76e6ad625783fb10d95d4b296251c866ca24ef46",
"oa_license": "pd",
"oa_url": "https://ehp.niehs.nih.gov/doi/pdf/10.1289/ehp.7825125",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "76e6ad625783fb10d95d4b296251c866ca24ef46",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
148574821 | pes2o/s2orc | v3-fos-license | Twin gestation in a uterus didelphys with only one functional cervix: A case report
Introduction Twin gestation in a uterus didelphys with one fetus in each uterine cavity is rare and presents unique challenges in antepartum and intrapartum care. Case Presentation A 35-year-old woman with a uterus didelphys became pregnant with twins, with one fetus in each uterus, after intrauterine insemination of a single visible cervix. Multiplanar ultrasonography showed the presence of one complete cervix and a second hypoplastic cervix; it was unclear whether she could deliver both twins vaginally. Her pregnancy was complicated by fetal growth restriction of twin B. At 38 weeks, the patient underwent scheduled cesarean section and delivered two viable twins. Conclusion Determining the precise anatomy of Mullerian duct anomalies, including the cervix and vagina, is important for obstetrical management.
Introduction
Failure of the Mullerian ducts to fuse during embryonic development can result in a wide spectrum of female reproductive tract malformations, ranging from a minor variation in uterine shape to complete absence or duplication of the uterus, cervix, and vagina. Mullerian duct anomalies have been associated with increased risk of infertility, pregnancy loss, and adverse pregnancy outcomes depending on the extent of abnormality [1]. The American Society for Reproductive Medicine (ASRM) created the most widely known classification system for Mullerian duct anomalies in 1988; however, it is almost entirely limited to describing uterine abnormalities. Thus, the European Society of Human Reproduction and Embryology and the European Society for Gynaecological Endoscopy (ESHRE/ESGE) proposed a new classification in 2013, which accounts for variations of cervical and vaginal anatomy [2]. Identifying the exact nature of a patient's anomaly is crucial to understanding the potential impact on reproductive health.
The uterus didelphys is one of the least common anomalies, estimated to occur in 1 in 1000 women, and is classically defined as the presence of two uteri, two cervixes, with or without a longitudinal vaginal septum, though other variations exist [1,3]. Twin gestation with a fetus occupying each uterus has been reported in a small number of case studies. Simultaneous pregnancies may pose some unique obstetrical issues when two complete uteri can contract independently and can have different rates of cervical dilation [4]. Management decisions such as determining feasibility of vaginal delivery or the surgical approach to cesarean delivery is unclear and left to the discretion of the obstetrician.
This case report discusses a patient with an exceedingly rare situationa twin pregnancy occurring in a uterus didelphys, with one twin in each uterine cavity. However, this case is further unique in that only one cervix was visible on pelvic exam. Whether both twins could deliver vaginally or how a pregnancy occurred in the left uterus without an identifiable cervix was initially unclear. The issues surrounding the evaluation of Mullerian duct anomalies in pregnancy and their implications during labor and delivery are discussed.
Case Presentation
A 35-year-old woman was found to have a Mullerian duct anomaly during investigation of her primary infertility. Magnetic resonance imaging showed she had a uterus didelphys with two uteri and two cervixes, as well as an absent left kidney. However only one cervix was identified on pelvic exam. Hysterosalpingogram (HSG) was performed by injecting dye into the single cervix, resulting in filling of the right uterine cavity and spillage from the right fallopian tube (see Fig. 1). With an uncertain diagnosis of her Mullerian duct anomaly, the patient underwent several cycles of controlled ovarian hyperstimulation and intrauterine insemination (IUI) through the single identifiable cervix. The patient achieved pregnancy after the fourth IUI cycle using a regimen of 5 days of clomiphene (at 100 mg/day) followed by 9 days of human menopausal gonadotropin (8 days of 2 ampules and 1 day of 3
ampules) while monitoring the number and development of ovarian follicles using serial transvaginal ultrasound scans. Once 1-2 potential mature follicles were identified in the right ovary, ovulation was triggered with human chorionic gonadotropin injection, and IUI was subsequently performed. Ultrasound at 8 weeks of gestation revealed the presence of dicavitary twins, with one fetus situated in each uterus.
The patient was followed by the perinatology team and was monitored throughout the pregnancy with serial growth ultrasound scans and non-stress testing due to her unusual uterine anatomy. The right twin developed growth restriction at 37 weeks, with an estimated fetal weight at the fifth percentile for gestational age but with normal fetal Doppler velocimetry. Both twins remained in cephalic presentation at term (see Fig. 2). Delivery was recommended around 38-39 weeks, given twin gestation with fetal growth restriction. Several discussions were had with the patient regarding route of delivery since it was unclear whether the left uterus had a cervix due to the discrepancy between her prior imaging and pelvic exam. Detailed three-dimensional multiplanar ultrasound was performed and confirmed the presence of two cervixes; however, the left cervical canal appeared to be incomplete without any communication to the vagina (see Fig. 3a-f). A small communication was identified between the right and left cervical canals, as shown in Fig. 3f. Thus, it was uncertain whether the left twin would be able to be delivered vaginally. The patient elected to proceed with cesarean delivery.
At 38 and 6/7 weeks of gestation, the patient underwent primary cesarean section with a low midline vertical skin incision to maximize exposure. The left uterus was found to be more anterior, and so the left twin was delivered first (twin A), followed by the twin in the right uterus (twin B) via bilateral low transverse uterine incisions. An intraoperative exam revealed a complete uterus didelphys (see Fig. 4). The right uterus was confirmed to have a complete cervical canal while the cervix of the left uterus was found to end in a blind pouch. Thus, the patient's uterine anomaly was confirmed to be ESHRE/ESGE Class U3b-C3-V0. Estimated blood loss was 1.6 L, mostly attributed to bleeding from the bilateral hysterotomies, but she did not require blood transfusion. Twin A weighed 3195 g with Apgar scores of 8 and 9, and twin B weighed 2705 g (sixth percentile for gestational age) with Apgars of 8 and 9. The postoperative course was uncomplicated, and both twins were discharged with their mother two days after delivery.
Discussion
This case involved a rare scenario of twin gestation in a didelphys uterus with each fetus occupying a separate uterine cavity and illustrates the challenges of diagnosing and managing pregnancies with uterine anomalies. Mullerian duct anomalies include a variety of anatomical defects, from a minor groove in the uterine fundus to complete duplication of the female genital tract. Yet, most of the literature examining pregnancy outcomes with uterine anomalies group all of the different classes together and compare them to pregnancies with a normal uterus. Few studies have investigated the clinical significance of each specific Mullerian duct anatomic defect due to their low prevalence [5]. One of the largest studies examining patients with uterus didelphys specifically was a retrospective cohort study following 49 patients over an average of 9.1 years; 94% of these women had at least one pregnancy (all singleton gestations). The complications reported were spontaneous abortion (21%), pregnancy-induced hypertension (13%), preterm delivery (24%), breech presentation (51%), fetal growth restriction (11%), and cesarean delivery (84% due to malpresentation or labor dystocia) [3].
Multifetal gestation in a uterus didelphys has been reported in only a small number of case studies. Most of these women were delivered by cesarean section either electively or due to anatomical obstruction or fetal distress; however, successful spontaneous vaginal deliveries of both twins have also been reported [6,7]. Thus, the presence of a uterus didelphys is not an indication itself for cesarean delivery but carries a higher likelihood of cesarean delivery for malpresentation or for labor dystocia reasons. Dicavitary twin gestations are further complicated since the two uteri and two cervixes can function independently. Maki et al. reported a case of twins in a double uterus and demonstrated independent contractions in the two uteri for approximately 90% of labor on tocometry [4]. Nohara et al. described a patient with uterus didelphys and dicavitary twins, complicated by preterm rupture of membranes in the left uterus with subsequent contractions. Due to fetal distress, the left twin was delivered by emergent cesarean section at 25 weeks gestation. The right uterus remained quiescent until 35 weeks and delivered the second twin via spontaneous vaginal delivery [8].
The diagnosis of Mullerian duct anomalies is not always easily discernible and may require utilizing multiple imaging modalities or more invasive procedures such as hysterosalpingography, hysteroscopy, laparoscopy, or laparotomy. Furthermore, uterine anomalies have been reported that do not belong to any of the traditional ASRM classifications, such as a uterus didelphys with one cervix. More recently, three-dimensional multiplanar ultrasonography has been utilized as a noninvasive method to determine uterine anatomy with accurate results [1]. Multiplanar ultrasound proved useful for our patient case, as it not only confirmed the presence of two cervixes, but also revealed the second cervix to be incomplete with no apparent communication to the vagina. Based on this information, cesarean delivery was recommended since it was unlikely the left twin could be delivered vaginally with a hypoplastic cervix.
How the patient became pregnant in both uteri when intrauterine insemination was only performed through the right cervix was initially a mystery. Given only the right ovary had 1-2 potentially mature follicles seen on transvaginal ultrasound, the patient most likely ovulated twice from the right ovary, with one oocyte entering the right fallopian tube, and the other oocyte entering the left fallopian tube. But this does not explain how semen was able to enter the left uterus. The HSG suggested there was no communication between the uteri, as injection of the dye through the right cervix resulted in filling of the right uterus only. However, multiplanar ultrasound identified a communication between the right and left cervical canals, which could provide a route for the semen injected into the right uterus to reach the left uterus. If the HSG catheter was placed above the level of this cervical connection, it is possible it could have been missed on HSG imaging. The presence of the communication between the right and left cervical canals would also explain how this patient did not develop hematometra during menses.
This patient case involved an unusual situation of two fetuses occupying two separate uterine cavities with only one functional cervix, demonstrating the diversity of Mullerian duct anomalies. Confirming the precise anatomy of the entire female genital tract, including the uterus, cervix, and vagina, is essential to predicting potential obstetrical outcomes. Multiple imaging modalities may be required for diagnosis when maternal anatomy is ambiguous. Three-dimensional multiplanar ultrasound was particularly useful in our case. Further studies are needed to investigate the clinical significance of each anatomical variation. | 2019-05-12T13:27:49.304Z | 2019-04-01T00:00:00.000 | {
"year": 2019,
"sha1": "93c7ca970afec0255ea0c9b1a43cdf7a5dbb5406",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.crwh.2019.e00118",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "68cd64452a571341e3477dd891c38a9570f95654",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
213311950 | pes2o/s2orc | v3-fos-license | Using Blockchain-based approach for building the system events logging service
The article deals with Blockchain technology, namely with its aspect of forming distributed registries. It serves as a platform for creating and storing corporate information resources that need higher protection. A lot of marketing uproar around cryptocurrency is feeding the interest in employing this technology to enable cryptocurrency operation. But this factor alone does not explain the technology's success. Blockchain technology also compares favorably to classic centralized registries and databases. The article describes a journaling service for computing processes and system events based on Blockchain technology. We show the advantages of the existing Blockchain types for developing corporate journaling systems. We also show the use of such a system for medium and large organizations, allowing for their computational capabilities and the potential for scaling and developing such a system.
Introduction
The digitalization process in various sectors of the economy is rapidly increasing corporate data volumes [1; 2]. To ensure competitiveness, organizations have to increasingly use not only classic information systems but also various modern methodologies, models and services [5]. Due to this, supporting and developing a modern corporate IT infrastructure becomes more and more difficult over time [6; 11].
One of the most important tools used for administrating and supporting information systems is the subsystem for journaling system events and collisions. In case of technical failures or incidents - system errors, failures, unauthorized access to information system resources, data leaks, virus or DDOS attacks - the IT infrastructure administrators or DevOps engineers can use a system log to quickly identify the fact, time and essence of the incident. Such information in most cases allows them to identify the cause of the events, to eliminate the resulting problem or to assess the level of adverse external influence (risk level) with minimal damage.
However, journaling systems mostly function at specific service levels and in various formats, e.g. server solutions from Canonical, Docker, Microsoft, IBM, etc.
These important problems, in our opinion, stimulate the development of a specialized service able to log all events simultaneously in all key nodes of an organization's information infrastructure, which would also serve as an additional tool for efficient network monitoring.
While analyzing the platforms and tools for creating the above-described journaling service, such as storing log files in file storage, databases, etc., the authors have identified the following problems complicating the creation of a unified logging system: the different formats of system logs and the storage of system logs in different locations (this may be relevant for some Linux server systems and applications, but especially for various Windows systems and applications). Besides, with the classic approach and the use of standard tools for maintaining system logs, there is a possibility of unauthorized modification, replacement or deletion at various levels of administration.
In this regard, the authors focused on the capabilities and features of Blockchain technology. In our opinion, this technology has not yet completely unlocked its potential against the background of marketing noise and speculative hype around cryptocurrencies. Thus, in accordance with the well-known Gartner Hype Cycle, Blockchain technology will move to the "Plateau of Productivity" stage only a few years after its hype peak, as was the case with cloud computing after 2009-2011.
For confirmation of this hypothesis, we consider two graphs (figures 1 and 2). The first graph shows the worldwide dynamics of the number of user requests for the keyword "Blockchain" in the Google search system (figure 1). In comparing the presented graphs, it becomes clear that the marketing hype around the most popular cryptocurrency, Bitcoin, aroused interest in Blockchain and vice versa. The correlation coefficient R on the segment 2017-2019 approaches 1.
Currently, both the hype and the interest in the technology have markedly decreased. However, we can successfully use it in a large number of applications. The key advantages of Blockchain technology are well known: the potential of decentralized data processing in distributed registries, the mechanisms ensuring the integrity and confidentiality of data, the high reliability and fault tolerance of the system, and the ease of developing and generating systems based on Blockchain technology.
Features of the Blockchain-systems' types
All these advantages are well suited for problem solving when generating a journaling service for the key nodes of the organization's information infrastructure. So, for large organizations, it is possible to support such a service by using the hardware of the organization's various subdivisions (identified as nodes), which can be geographically distributed.
It is important to maintain a strict reporting mechanism to provide the required level of data security. The logging system should record all the events and incidents related to the efficiency and security of the systems' functioning. Record keeping should be complemented by a thorough analysis of registration data. The specialists supporting the data infrastructure should receive up-to-date information on the system status and arising incidents, and this information must not be changeable by employees of other divisions, nor be lost due to local software or hardware failure or an external virus or DDOS attack. Developments for implementing such an approach to generating a journaling system are already underway [3,4].
However, many interesting questions and opportunities remain, which we point out in this article. To do this, let's review the main types of Blockchain-systems.
The first type is a centralized blockchain-system with a trusted center. If there is a trusted center within the Blockchain-system, then after a certain period of time (or after a certain number of transactions) it forms a new block, supplying it not only with a hash sum, but also with its electronic signature. Each client of the system will be able to verify that all the blocks in the chain have been generated and confirmed by a trusted center and no one else.
If such a trusted center is relevant and not compromised, then the attacker will be unable to modify the system log. However, in our opinion, using Blockchain technology in this case is redundant, since with a trusted center available, one can simply refer to it by requesting to sign each transaction, with adding time and a serial number to it. The number provides the order and the impossibility of adding (deleting) transactions from the chain, the electronic signature of the trusted center -the impossibility of modifying specific transactions.
Due to this, we think that such Blockchain type will not be suitable for generating logging systems. The second type is a centralized Blockchain-system without a guaranteed trusted center. In case of no trusted center allocated among the Blockchain-system centers, the guarantee is required that any user of the system will be unable to recreate the whole chain of blocks, deleting data from any block or adding another one. To provide that guarantee, we can use the following two methods [3]. The first method requires using the additional trusted repository. After creating the next block, the center sends the hash code from the new block to the trusted (and independent from this center) storage. Trusted storage should not accept any changes to the hash codes of already created blocks. The stored data volume may be small as compared to the total journal volume. In our opinion, such option may be quite suitable for generating journaling systems of system processes logging for medium and large distributed organizations.
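A minimal sketch of this first method is given below, assuming the trusted repository can be modeled as an append-only mapping from block number to hash code; the function and argument names are illustrative only and are not part of the original proposal.
def publish_block_hash(trusted_store, block_number, block_hash):
    # The trusted storage must not accept changes to hash codes of already created blocks.
    if block_number in trusted_store:
        raise ValueError("hash code for this block has already been recorded")
    trusted_store[block_number] = block_hash

def verify_block_hash(trusted_store, block_number, block_hash):
    # Any client can later compare a block against the independently stored hash code.
    return trusted_store.get(block_number) == block_hash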
The second method is to add a timestamp generated by a trusted time center to each block. Such a timestamp should contain the generation time and the electronic signature of the center. The signature should be calculated on the basis of the hash code of the block and the timestamp. In case the "non-trusted" center wants to re-create part of the block chain, a gap in the timestamps will become evident. It should be noted that this method does not prevent such a center from simultaneously generating two chains of blocks, supplementing them with correct timestamps, and then replacing one chain with another.
The third type is decentralized blockchain system. Currently, decentralized blockchain systems are of great interest. Such systems have no dedicated centers for generating blocks. Every participant of such system can take a set of transactions (records) expecting to be included into the system log and create a new block for recording it in the general blockchain. Since this issue has been widely covered and actively discussed, there is no need to discuss in detail the potentials and advantages of the basic type of Blockchain-systems.
The fourth type is blockchain-similar systems. The current Blockchain technology uses peer-to-peer decentralized networks without a dedicated center and with an unlimited number of nodes. In our case, this is not so. In this regard, the authors present below the conceptual description of the proposed system. The operating algorithm of the journaling agent (JA) is described below: collect from the pool of unregistered system log events the ones fitting into 1 block (1)(2)(3) and having the earliest timestamp (that showed up earliest); in the case of an added event not fully fitting into the current block, it should be rejected from the forming block and remain in reserve for the next block; add the previous block data to the block; add your own data (JA) to the block (for accountability).
The block structure for the BEJS can be selected based on the requirements of a specific IT infrastructure. We propose a typical block structure for such systems (table 1); its main payload field contains the events of the system logs from the pool, signed with a digital signature (GOST R 34.10-2012), with a size of up to 3 MB per block.
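As an illustration, the block-forming step of the JA described above can be sketched in Python as follows. This is a minimal sketch assuming SHA-256 hashing and JSON-serialized events; the function and field names are our own illustrative choices, the 3 MB limit follows table 1, and the digital signature step (GOST R 34.10-2012) is deliberately omitted.
import hashlib
import json
import time

MAX_PAYLOAD_BYTES = 3 * 1024 * 1024  # payload limit of one block ("up to 3 MB")

def form_block(previous_block, event_pool, agent_id):
    # Take the oldest unregistered events first; an event that does not fit
    # stays in the pool and waits for the next block.
    payload, size = [], 0
    for event in sorted(event_pool, key=lambda e: e["timestamp"]):
        event_size = len(json.dumps(event).encode())
        if size + event_size > MAX_PAYLOAD_BYTES:
            break
        payload.append(event)
        size += event_size

    block = {
        "previous_hash": previous_block["hash"],  # link to the previous block
        "agent_id": agent_id,                     # the JA's own data, for accountability
        "created_at": time.time(),
        "events": payload,
    }
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block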
In case the next block is simultaneously generated by two or more JA, and subject to the complete identity of the blocks, any of the generated blocks should be selected. If the blocks are not identical, the larger block should be selected.
Such cases can occur quite rarely during the normal system operation mode, since each of the JA follows the same algorithm for including events in the block.
The third level is Verification and Confirmation Level (VCL). The agents of the verification and confirmation level (AVCL) contain all records available in the BEJS, provide backup of such data, confirm, record, and add new blocks to the general blockchain.
Agents of this level check whether the JA sending the new block really has the right to generate a block with the data it contains (for example, whether this JA is really associated with the data systems whose events were placed in the current block). After successful confirmation by an AVCL agent, the events in the block are removed from the pool of unregistered events (ECL level).
Otherwise, the block is not confirmed and not added to the general blockchain. All events stay in the pool of unregistered events. In this case, an error notification and a detailed incident report are delivered to the administrators of the relevant network segments.
Such AVCL, in distributed organizations, can be virtual machines or compute containers operating in various segments of corporate data center or in cloud. Those AVCL can be deployed at the distributed data centers of the departments across the world in order to support BEJS of the entire organization.
Note that cryptographic hashing algorithms are the fundamental security component of Blockchain-similar systems, for example the cryptographic algorithm SHA-256 or the newer SHA-512.
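A minimal sketch of the corresponding integrity check that an AVCL agent could run over a received chain is shown below; it assumes the block layout from the previous sketch and SHA-256, and the function names are hypothetical.
import hashlib
import json

def block_digest(block):
    # Recompute the digest over everything except the stored hash itself.
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def chain_is_valid(chain):
    for i, block in enumerate(chain):
        if block_digest(block) != block["hash"]:
            return False  # the block was modified after it was formed
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False  # the link to the previous block is broken
    return True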
If a decentralized Blockchain is selected, the implementation of the BEJS can be very similar to the architecture of the Blockchains used by cryptocurrencies, for example Bitcoin.
Development and technical support of BEJS
As noted earlier, the approach called Superchain [3] may well be used for implementing flexible control of access to data, as well as for general scaling of the above logging system architecture for large, distributed organizations.
In this case, each subdivision (branch) of the organization must have its own BEJS for accumulating data from the local system logs of this subdivision's (branch's) data systems, operating by the above algorithm. The local units entering this system must be synchronized within the organization-wide BEJS.
Such Blockchain-similar system can be called Superchain, and its operation can be maintained by the data center of the organization's head unit.
In this case, each unit's BEJS becomes a JA agent and operates on the general confirmation and verification level (General Confirmation and Verification Level, GCVL) (figure 4). In general, the described architecture of the journaling system, in our opinion, is relevant and interesting due to the wide range of technical possibilities for optimal (in terms of data center capacity utilization) generation of such systems. For example, agents from all levels of such a system can be automatically launched or created by an orchestration system at a convenient time (for example, after business hours, with diminished data center load) using Docker containers or virtual machines [8]. And they can be stopped the moment the data center resources are needed for other tasks during business hours [10]. Moreover, it is possible to launch any agents from different levels of the BEJS in the cloud [2].
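For example, starting a journaling agent container after business hours and stopping it again when resources are needed could be scripted with the Docker SDK for Python roughly as follows; the image name and label are placeholders, not part of the proposed system.
import docker

client = docker.from_env()

def start_agent(image="bejs/journaling-agent:latest", name="bejs-ja-1"):
    # Run the agent in the background on a node with spare capacity.
    return client.containers.run(image, name=name, detach=True,
                                 labels={"bejs.agent": "ja"})

def stop_agents():
    # Free the data center resources when they are needed for other tasks.
    for container in client.containers.list(filters={"label": "bejs.agent"}):
        container.stop()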
The performance comparison conducted by us [7], as well as by other experts [9], including experts from IBM, shows that a swarm of containers (Docker Swarm) running within a computing cluster can easily cope with the tasks of efficient and secure journaling based on Blockchain-similar system.
Conclusion
In conclusion, we would like to emphasize the relevance of viewing Blockchain technology as a platform for generating various corporate IT services. In particular, we were able to show the potential of using this technology for generating corporate IT infrastructure journaling systems. The approach to creating a Blockchain-similar corporate journaling system (BEJS) presented in the article allows us to solve a sufficiently large number of problems arising in the administration of dynamically configured hybrid distributed information systems.
In addition, Blockchain, as a technology, has undoubtedly promising prospects. By various estimates, the formation of blockchain-using startups will attract investments exceeding $ 5 billion in 2018-2020. Blockchain-based crowdfunding (ICO) has already been actively used to attract investments in startups. This will make this technology an alternative to venture investment in traditional systems of storing and processing Big Data in all areas of economic and social activity. | 2019-12-12T10:14:33.667Z | 2019-12-01T00:00:00.000 | {
"year": 2019,
"sha1": "403877be06390fa75d934a808faec182aee17b71",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1399/3/033075",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "fbb26f91e362a596e986d75ddc00b6ce16fbb433",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
15516320 | pes2o/s2orc | v3-fos-license | Family physicians' perceptions of academic detailing: a quantitative and qualitative study
Background The efficacy of academic detailing in changing physicians' knowledge and practice has been the subject of many primary research publications and systematic reviews. However, there is little written about the features of academic detailing that physicians find valuable or that affect their use of it. The goal of our project was to explore family physicians' (FPs) perceptions of academic detailing and the factors that affect their use of it. Methods We used 2 methods to collect data, a questionnaire and semi-structured telephone interviews. We mailed questionnaires to all FPs in the Dalhousie Office of Continuing Medical Education database and analyzed responses of non-users and users of academic detailing. After a preliminary analysis of questionnaire data, we conducted semi-structured interviews with 7 FPs who did not use academic detailing and 17 who did use it. Results Overall response rate to the questionnaire was 33% (289/869). Response rate of non-users of academic detailing was 15% (60/393), of users was 48% (229/476). The 3 factors that most encouraged use of academic detailing were the topics selected, the evidence-based approach adopted, and the handout material. The 3 factors that most discouraged the use of academic detailing were spending office time doing CME, scheduling time to see the academic detailer, and having CME provided by a non-physician. Users of academic detailing rated it as being more valuable than other forms of CME. Generally, interview data confirmed questionnaire data with the exception that interview informants did not view having CME provided by a non-physician as a barrier. Interview informants mentioned that the evidence-based approach adopted by academic detailing had led them to more critically evaluate information from other CME programs, pharmaceutical representatives, and journal articles, but not advice from specialists. Conclusion Users of academic detailing highly value its educational value and tend to view information from other sources more critically because of its evidence-based approach. Non-users are unlikely to adopt academic detailing despite its high educational value because they find using office time for CME too much of a barrier. To reach these physicians with academic detailing messages, we will have to find other CME formats.
Background
Academic detailing or educational outreach is a form of continuing medical education (CME) in which a trained health professional such as a physician or pharmacist visits physicians in their offices to provide evidence-based information. The efficacy of academic detailing in changing physicians' knowledge and practice has been the subject of many primary research publications and systematic reviews. The most recent review found that academic detailing in conjunction with other educational interventions led to a median improvement in physician performance of approximately 6.0% [1]. Several studies have found that physicians rate the educational value of academic detailing highly [2][3][4].
Despite the number of studies about the efficacy of academic detailing, few have addressed the features of academic detailing that physicians find valuable or that affect their use of it. Habraken et al. found that Belgian physicians highly rated academic detailing visits and approximately 90% of those who used academic detailing wished to use it again [4]. However, they also identified some barriers to participation: the information was not new or could be obtained in other ways, the information was politically coloured and designed to cut expenses, and the educational visits were time-consuming [5]. The goal of our project was to explore family physicians' (FPs) perceptions of academic detailing and the factors that affect their use of it.
The Office of CME at Dalhousie University has had an Academic Detailing Service since 2001. Funded by the provincial Department of Health, the Service is available to all FPs in Nova Scotia (approximately 850 are in practices for which academic detailing is relevant, e.g., in active family practice, not in solely administrative or emergency medicine roles). Three academic detailers, 2 pharmacists and a nurse, present 1 or 2 evidence-based topics per year. Topics are selected by surveying FPs, by scanning the literature for areas of interest, and to complement other provincial health initiatives. We research the evidence for each topic with the help of a drug evaluation pharmacist. A specialist physician and advisory board of 4 FPs ensure that the evidence-based information is clinically relevant. If a physician poses a question that the academic detailer cannot answer during the visit, the academic detailing team finds an answer and faxes the response.
During our detailing session, we present data from clinical trials in absolute as well as relative terms and include event rates of active and placebo groups, absolute risk reduction, and numbers needed to treat with 95% confidence intervals. We believe this approach presents a more accurate estimate of treatment effect than presenting data in only relative terms and there is evidence that the way data is presented affects prescribing decisions [6][7][8]. During our visits and in our handout material, we explain these terms to physicians. Most visits last about 25 minutes, are with individual physicians, take place during regular working hours, and provide 1 MAINPRO M-1 credit of the College of Family Physicians of Canada [9].
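To make these terms concrete, the short Python sketch below computes the event rates, absolute and relative risk reductions, the number needed to treat, and an approximate 95% confidence interval for the absolute risk reduction from a hypothetical two-arm trial; the counts are invented for illustration and are not taken from any topic covered by the Service.

```python
import math

# Hypothetical two-arm trial counts (not from any detailing topic).
n_control, events_control = 1000, 100   # placebo group: 10% event rate
n_treated, events_treated = 1000, 60    # active group: 6% event rate

cer = events_control / n_control        # control event rate
eer = events_treated / n_treated        # experimental event rate
arr = cer - eer                         # absolute risk reduction
rrr = arr / cer                         # relative risk reduction
nnt = 1 / arr                           # number needed to treat

# Normal-approximation 95% CI for the absolute risk reduction.
se = math.sqrt(cer * (1 - cer) / n_control + eer * (1 - eer) / n_treated)
arr_low, arr_high = arr - 1.96 * se, arr + 1.96 * se

print(f"ARR = {arr:.3f} (95% CI {arr_low:.3f} to {arr_high:.3f})")
print(f"RRR = {rrr:.2f}, NNT = {nnt:.0f} "
      f"(NNT CI {1 / arr_high:.0f} to {1 / arr_low:.0f})")
```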
Handout material left with physicians consists of a booklet of up to 40 pages that provides details of clinical trial evidence. A few pages of summary statements in the front of the booklet summarize the key points of the evidence. We also leave a double-sided laminated sheet that contains essential points for ready reference. Examples of handout material are at http://cme.medicine.dal.ca/ADS.htm.
For each topic, about 360 FPs use the service. By 2004, our records showed that approximately 43% of FPs had never used academic detailing, 14% had used it once, and 43% had used it more than once. We wished to determine the factors that encourage and discourage FPs from using academic detailing. Our research questions were:
1. What features of academic detailing
• encourage physician participation?
• do physicians find valuable?
2. How can academic detailing be improved to better meet the CME needs of physicians?
3. What is the value of academic detailing compared to other forms of CME?
Methods
This was a mixed-methods study [10] using 2 methods to collect data, a questionnaire and semi-structured telephone interviews. For both methods, we divided our study population into 3 groups of FPs based on their participation: those who had never used academic detailing (used never group), those who had used academic detailing once (used once group), and those who had used academic detailing more than once (used > once group). Nova Scotia FPs in practice, regardless of group, were sent invitations to participate in previous academic detailing sessions. The Research Ethics Board of Dalhousie University approved the project.
Questionnaire
Two of the authors involved in the Dalhousie Academic Detailing Service (MA, IF) and two colleagues (see acknowledgements) who have a strong background in educational theory and questionnaire design developed the questionnaire. There is little published information on the factors that encourage and discourage physicians from using academic detailing. Therefore we developed questions to address factors that our experience and informal discussion with physicians indicated may be important. Four physicians from each group tested the questionnaire for face validity.
An introductory letter described features of the Academic Detailing Service. The three-page questionnaire collected demographic and practice information and asked respondents to rate on a five-point Likert scale how much various factors encouraged or discouraged use of academic detailing, and how likely they were to use academic detailing in the future. Open-ended questions asked for suggestions to make academic detailing better meet respondents' learning needs and for general comments. The questionnaires for the 3 groups were identical except for 1 question. The used never group were asked if they had heard about the academic detailing service before receiving the questionnaire while the other 2 groups were asked to rate the value of academic detailing compared to other forms of CME.
In September 2004 we mailed the questionnaire to all FPs in the Dalhousie CME database whom we considered eligible to participate in academic detailing (N = 869). We offered a chance to win two $50 vouchers as an incentive and re-mailed the questionnaire to non-responders 3 weeks after the initial mailing. The questionnaire also asked respondents to indicate if they were interested in being interviewed.
Questionnaire data were analyzed using SPSS version 10.1. We calculated descriptive statistics for data collected in Likert scales, and frequency distributions for non-continuous data. Means and frequency distributions of the study groups were compared using inferential statistics (i.e., analysis of variance and chi-square tests, respectively). We set an alpha level of 0.004 with a Bonferroni correction to adjust for the number of encouraging and discouraging factors being compared.
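As an illustration of the analyses just described (reimplemented here in Python with SciPy rather than SPSS), the sketch below compares mean Likert ratings between groups with a one-way ANOVA and a frequency table with a chi-square test, judging significance against the Bonferroni-adjusted alpha of 0.004; all ratings and counts are invented for the example.

```python
import numpy as np
from scipy import stats

alpha = 0.004  # Bonferroni-adjusted threshold used for the factor comparisons

# Invented Likert ratings (1-5) of one factor by users and non-users.
users = np.array([5, 4, 5, 4, 3, 5, 4, 5])
non_users = np.array([3, 2, 4, 3, 2, 3, 4])

f_stat, p_anova = stats.f_oneway(users, non_users)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}, significant = {p_anova < alpha}")

# Invented 2x2 frequency table, e.g. community size category by group.
table = np.array([[30, 10],
                  [15, 25]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(f"Chi-square: chi2 = {chi2:.2f}, p = {p_chi:.4f}, significant = {p_chi < alpha}")
```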
Telephone interviews
We planned 10 interviews with each group of physicians and developed the interview questions after a preliminary analysis of the questionnaire data to determine themes for exploration (see Additional file 1 for questions). We randomly selected physicians from questionnaire respondents who expressed interest in being interviewed. We tape recorded and transcribed interviews and mailed the transcriptions to subjects to verify accuracy.
Using a thematic content analysis, the interview data were coded, or broken down, into manageable categories and then examined for the frequency of occurrences of each code. Interview transcripts were analyzed independently by two researchers (MA and NO), and reviewed by a third researcher (SF). The coding of the two researchers was compared, and in the case of discrepancies, the researchers reviewed and discussed the text until agreement was reached. We used QSR NUD*IST 6 for data management.
Results
We received only 24 questionnaire responses from the used once group, a response rate of 25%. These responses were not significantly different from the used > once group so we combined the 2 for data analysis. Similarly, for interviews we were able to schedule only 5 interviews with the used once group. Their responses were not different from the used > once group and so we combined them for analysis. Therefore, we ended up with 2 groups, those who did use and did not use the academic detailing service (users and non-users respectively). Table 1 shows the response rate of the 2 groups. The overall response rate was 33% though it varied widely between groups. Table 2 shows demographic and practice data. Forty-two percent of respondents were female and approximately 60% were members of the College of Family Physicians of Canada. Significantly more non-user respondents came from communities with populations larger than 50,000 and significantly more user respondents came from communities smaller than 5,000. The questionnaire respondents were similar in terms of gender and year of graduation to other physicians who received the questionnaire. Table 3 shows responses to questions asking physicians to rate the factors that encourage and discourage their use of academic detailing. Ratings of users were significantly higher than non-users at the p < 0.004 level for all factors except adopting an evidence-based approach, having the detailer follow up by answering questions, and obtaining CME credits.
Questionnaire quantitative results
For non-users, the factors that most discouraged them from using academic detailing were spending office time doing CME, scheduling office time to see an academic detailer, and having CME provided by a non-physician. The mean rating of each of these domains was less than 3.0, indicating that these factors may actually be a deterrent to participating in academic detailing. For the users, the factors with the lowest ratings were having CME provided by a non-physician, spending office time doing CME, and having access to CME in other ways. The mean rating of each of these domains was approximately 3.0, indicating that these factors were neither encouraging nor discouraging. (Table 3 note. The body of the question was: "Many factors may determine whether physicians see an academic detailer for CME. Please rate how much the following aspects of the Dalhousie Academic Detailing Service discourage or encourage you from seeing an academic detailer." * Statistically significant at p < .004.)
For users, the 3 factors that most encouraged the use of academic detailing were the relevance of the topic, the evidence-based approach adopted in academic detailing, and the usefulness of the handout material. For non-users, the most encouraging factors were the evidence-based approach, obtaining CME credits, and having the detailer follow up by answering questions. Mean ratings for most encouraging factors were ≥ 4.3 for the users and 3.9 to 4.0 for the non-users. Figure 1 shows that 68% of user respondents rated academic detailing as being of higher or much higher value than other forms of CME. Figure 2 shows that 93% of users were somewhat likely to use or would definitely use academic detailing in the future compared to 39% of non-users (p < 0.0001).
Interviews and responses to open ended questions on questionnaire
Forty-one non-users and 88 users made comments to open-ended questions on the questionnaire. We were able to recruit 7 non-users and 17 users for interviews. In reporting qualitative data, we will concentrate on some factors that most discouraged and encouraged physicians from using academic detailing.
This section reports qualitative data from both the interviews and questionnaires. If the source is not specified, it is from the interview participants.
Scheduling and spending office time for CME
All non-users except 1 indicated that scheduling was a concern. One non-user mentioned he had nothing against academic detailing, but just did not have the spare time.
In another practice with 2 physicians, 1 non-user said it may be possible to see an academic detailer at lunchtime but he might be called away for an emergency. In reply to a question about what might make them use academic detailing, 2 non-user interview participants had no suggestions because there simply was not enough time. In contrast, only 1 user found scheduling a time a barrier. Other users found no difficulty.
"It's better than spending office time seeing commercial detailers or spending evenings so it's not a problem. It's not an onerous imposition." One user found it less than ideal but still a practical approach to learning considering other demands.
"It is not ideal because it means that we tend to be in a rush, and are often coming from seeing a patient, and maybe haven't had lunch. We tend to cram it into a busy day, which isn't really the best way to learn. But there isn't really much other time. So I don't think it is ideal but I think it is a reasonably practical approach to how much time people have." On the questionnaire, 6 respondents from the users made comments about difficulty with scheduling while 2 reported no difficulty. Questionnaire and interview data indicated that the preferred times for seeing the academic detailer were in the morning before seeing patients, at noon, and at the end of the day.
CME provided by a non-physician
Only 1 non-user considered having non-physicians as detailers a concern in interviews while 3 non-users made comments on the questionnaire.
"I find it offensive having a non-MD presenting this information. Their lack of training in physiology and pathology of diseases makes their input useless." Percent of questionnaire respondents likely to use the Aca-demic Detailing Service in the future Figure 2 Percent of questionnaire respondents likely to use the Academic Detailing Service in the future. Users' rating of academic detailing compared to other forms of CME Figure 1 Users' rating of academic detailing compared to other forms of CME. Two users expressed some concern that non-physicians could not answer their questions at the time of the visit and 1 other user was unsure if it would be a wise use of physicians' time for them to be detailing. Comments from other interview informants were generally favorable about having non-physicians present CME.
"Drug company reps may have an undergrad science degree but no medical background. I'm quite happy to get it from a pharmacist. I work with nurses and have no problem." "As long as they identify their area of expertise. And so far the detailing that we've gotten, I feel confident in the presenters. And I am actually quite impressed that they seem to have quite a good knowledge of the topics."
Topic selection
In interviews, 3 non-users mentioned that selection of relevant topics might lead to them using academic detailing. One suggested that we choose topics that pharmaceutical companies are concentrating on, while on the questionnaire 1 respondent suggested we cover topics that were not usually presented in other CME formats. One non-user thought the topics presented to date were appropriate. In response to interview questions about improving the Service or about what might lead to more use of it, 10 physicians (2 non-users and 8 users) commented on the importance of relevant topics.
Evidence-based approach
In interviews, all users were supportive of the evidence-based approach adopted in academic detailing. One user from a small community stated that as a group the community physicians had decided to adopt an evidence-based approach to practice and had taken up academic detailing based on that decision. We asked physicians if the evidence-based messages in academic detailing had affected their evaluation of information from other sources. Approximately half the users interviewed indicated that academic detailing had made them more critical of information from other CME programs, journal articles, and pharmaceutical representatives. In some cases, academic detailing had reinforced their critical approach.
"Academic detailing is making me not want to go to some of the more traditional sit down in a dark room, listen to specialist talk CME. I'm starting to expect more." "We are reminded every time we go through academic detailing to make sure we question the level of evi-dence and how studies are done. That approach is very helpful." "I can discuss things with them (pharmaceutical representatives) and if I have information from the Academic Detailing Service it helps to support my points for discussion." Academic detailing was less likely to affect physicians' evaluation of advice from specialists because they considered specialists to be well informed of the evidence.
"Most of the specialist reports I see try to include evidence."
Handout material
All users except 1 found the handout material useful. They appreciated the point form format of the resource booklet and the key messages found in the laminated sheet. Ten users reported they referred to the handout material for therapeutic recommendations, medication doses, and in preparation for patients who had appointments for conditions covered in the material.
"Your handouts were just so wonderful, they summarized everything so well. They're very concise and up to date." "I have found myself referring to it on several occasions, so I'm glad I have it." When asked for suggestions to improve the handout material, 6 users had no suggestions because they liked the existing format. Suggestions from other users were to add color and pictures, put them in format for a personal digital assistant, and include patient education material.
Users also indicated they had made practice changes based on information from academic detailers. Examples were ordering spirometry to diagnose chronic obstructive pulmonary disease, more diligent screening for osteoporosis, prescribing alendronate instead of etidronate for osteoporosis, and not prescribing rofecoxib because of concern over adverse cardiovascular effects.
Most information about ways to make the Academic Detailing Service better meet learning needs came from the user group. The most frequent suggestion was to provide the service more often (3 interviews, 8 questionnaire respondents) followed by having group sessions (2 interviews, 6 questionnaire respondents). Most comments expressed satisfaction with the Service. When asked for their general impressions in interviews, all but 1 user made favourable comments about the Service. When asked for suggestions for improvements, 3 interview informants and 49 questionnaire respondents, all from the user group, said they were satisfied with the service as it is.
Discussion
This study identified several factors that encourage and discourage FPs from using academic detailing. We found few other studies that provide similar information. Janssens conducted interviews with physicians who did and did not use academic detailing [5] and Van Eijk listed some reasons for non-participation in an academic detailing project [11]. Soumerai and Avorn list 8 techniques of effective academic detailing based on information from pharmaceutical representatives and their own experiences [12].
In our study, spending office time doing CME was the factor that most deterred physicians, a consistent finding from interviews and the questionnaire. In interviews with non-users, physicians who saw the value of academic detailing just did not have time. In contrast, users of academic detailing did not find time to be enough of a barrier to discourage participation. Van Eijk and Janssens both identified lack of time as a barrier to participation in academic detailing [5,11]. This finding was unexpected since we thought physicians would view office-based CME as being convenient and efficient.
Another major barrier for non-users was having non-physicians deliver the educational material. This finding was more pronounced in the questionnaire responses than in the interviews and was not identified in Janssens' study. However, Van Eijk mentioned that some physicians did not participate because her project was initiated by the school of pharmacy rather than the Faculty of Medicine, even though the detailer was a physician. We reviewed 28 studies published since 1997 to try to determine if participation in academic detailing is greater if the detailer is a physician. Including our own study, 9 studies had physicians as detailers, 16 had non-physicians, 2 had both, and 1 did not specify. Figure 3 shows participation rates for those studies that gave this information (MD detailer mean participation = 81.8% SD 13.9, non-MD detailer mean participation = 63.9% SD 23.6). There appears to be a trend toward higher participation in those studies in which the detailer was a physician; however, a Mann-Whitney U test found no statistically significant difference, perhaps because of the small number of studies or because we did not adjust for potential confounders such as the relevance of the topic presented. It may be preferable to have a physician as detailer since this might entice non-users to participate and is unlikely to deter regular users. However, it would increase cost. This subject requires more study.
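The comparison of participation rates can be reproduced in outline with SciPy, as in the sketch below; the two lists of percentages are placeholders standing in for the per-study rates behind Figure 3, not the actual extracted values.

```python
from scipy import stats

# Placeholder per-study participation rates (%) for MD and non-MD detailers.
md_detailer = [95.0, 88.0, 81.8, 76.5, 70.0]
non_md_detailer = [85.0, 72.0, 63.9, 55.0, 48.0, 40.0]

# Non-parametric comparison of the two distributions of participation rates.
u_stat, p_value = stats.mannwhitneyu(md_detailer, non_md_detailer,
                                     alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, p = {p_value:.3f}")
```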
The relevance of the topics detailed was the main encouraging factor for users on the questionnaire and was mentioned by several physicians in interviews as a factor that might lead to use of academic detailing. At the time of our study, the topics we had presented were updates on influenza and pneumococcal vaccine, hormone replacement therapy, osteoarthritis, osteoporosis, and chronic obstructive pulmonary disease. These are quite conventional topics; however, we were able to bring something new to them all. For instance, in the session on chronic obstructive pulmonary disease we pointed out that some recommendations in the Canadian guidelines [13] are based on studies that show statistical significance but not clinical significance based on the scales used for outcomes. There did not appear to be a consensus as to whether we should present topics that are commonly presented in CME programs or that are somewhat unusual.
The handout material we leave with physicians was also mentioned as an encouraging factor. Our handout material is somewhat atypical for an academic detailing program. Soumerai recommends brief, graphic print materials [12]. We do produce such material in our single-page laminate; however, we also produce an extensive review of the evidence with several summary pages. Information from this study indicates that physicians find such information helpful and refer back to it. In a previous evaluation survey (unpublished data), 75% of 106 respondents found the booklet somewhat or very useful while 65% found the laminate useful. These findings challenge the accepted approach to producing brief academic detailing material. (Figure 3. Percent participation in academic detailing interventions in which detailers were physicians or non-physicians.)
One of the most interesting findings from our study was the value that FPs place on the evidence-based approach of academic detailing. Previous studies indicate that FPs use evidence-based summaries and guidelines but do not believe that learning the skills of evidence-based medicine is the best way to implement it in practice [14]. Our approach is to do the first three As of evidence-based medicine (Ask the appropriate question, Access the appropriate information, and Appraise the information) [15]. We then present our findings to physicians by providing them with information in absolute and relative terms as recommended at a recent meeting of the U.S. National Institutes of Health [16]. We explain these terms to the physicians and let them decide about the final two As (Apply the information as they think appropriate, and Assess the results.) Our goal is similar to that of Habraken et al whose underlying aim was to stimulate a critical attitude in physicians by discussing the results of studies [4].
Habraken et al speculated that physicians could apply this critical attitude to other sources of information such as pharmaceutical representatives. Our results suggest that this is the case and extends to other sources such as CME programs and journal articles. However, they are not necessarily more likely to critically appraise advice from specialists. Since specialists provide most CME there is some inconsistency in this finding and it too requires more study.
In our study, users of academic detailing value it more highly than other CME formats. This is consistent with evaluations we receive on detailing visits which consistently average at least 4.5 on a five-point Likert scale. In addition, users are much more likely than non-users to participate in the program in the future.
The main limitation of this study is the low response rate for the non-user group in both the interviews and the questionnaire. Only 15% of non-users returned the questionnaire, so our findings for this group may not be representative of the overall population of physicians who choose not to use the Service. Also, non-users who were interviewed came from the same group of physicians who returned the questionnaire further limiting the generalizability of data from this group. Unfortunately we do not know how to encourage non-users to participate in this type of research. Additional research is needed to explore perceptions of non-users in greater depth to determine if these findings can be generalized.
Another limitation of this study is that it deals with physicians' perceptions of the Service and their reasons for using or not using it. With the cross sectional design and the measures that we used, it is not possible to discern if their perceptions reflect reality. More objective measures and experimental designs could be used to determine this.
Finally, the interviewer was associated with the department that offers the Service. Although she was not involved with the Service directly, it is possible that her connection with the university department may have influenced physicians' responses during the interviews.
As a result of this study we have made a few changes to our Service. We have given large-group didactic presentations to try to reach physicians who do not have time to see academic detailers at their offices. Also, we are mailing key points of our academic detailing messages to non-users and giving them an opportunity to receive a list of their patients from the provincial drug insurance plan to whom the points may apply. They can use the list to see if their patients are on appropriate therapy. We are now conducting a study to determine the efficacy of this format. We have maintained our comprehensive evidence-based approach since physicians value it highly and now provide them with a brief peer-reviewed explanation of the differences between relative and absolute terms [17].
Conclusion
Physicians who use academic detailing rate its educational value highly. Selecting relevant topics appears to be the most important factor in encouraging use of academic detailing but we did not find consensus on what type of topics physicians consider valuable. Other factors encouraging participation are adopting an evidence-based approach and providing useful handout material. In our study, physicians found comprehensive as well as concise handout material useful, a finding that challenges the tenet that handout material should be brief.
The 3 factors that most discouraged the use of academic detailing were spending office time doing CME, scheduling time to see the academic detailer, and having CME provided by a non-physician. Because we found indications that an evidence-based approach can lead to more critical thinking and practice change, it is important to consider other ways to reach non-users who find it inconvenient to spend office time doing CME. The relative merits of having physicians or non-physicians provide academic detailing require further study.
wrote the draft of the paper. SF helped with experimental design and data analysis, wrote parts of the draft of the paper and edited the paper. NO helped with data analysis, wrote parts of the draft of the paper and edited other parts. IF helped with experimental design and edited the paper. All authors have read and approved the final manuscript. | 2014-10-01T00:00:00.000Z | 2007-10-12T00:00:00.000 | {
"year": 2007,
"sha1": "a6ff6175f64aa48b8a493835b60dbdcce9283129",
"oa_license": "CCBY",
"oa_url": "https://bmcmededuc.biomedcentral.com/track/pdf/10.1186/1472-6920-7-36",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a6ff6175f64aa48b8a493835b60dbdcce9283129",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
12661590 | pes2o/s2orc | v3-fos-license | Molecular Analysis of the HOXA2-Dependent Degradation of RCHY1
The homeodomain transcription factor Hoxa2 interacts with the RING-finger type E3 ubiquitin ligase RCHY1 and induces its proteasomal degradation. In this work, we dissected this non-transcriptional activity of Hoxa2 at the molecular level. The Hoxa2-mediated decay of RCHY1 involves both the 19S and 20S proteasome complexes. It relies on both the Hoxa2 homeodomain and C-terminal moiety, although no single deletion in the Hoxa2 sequence could disrupt the RCHY1 interaction. That the Hoxa2 homeodomain alone could mediate RCHY1 binding is consistent with the shared ability of all the Hox proteins we tested to interact with RCHY1. Nonetheless, the ability to induce RCHY1 degradation, although critically relying on the homeodomain, is not common to all Hox proteins. This identifies the homeodomain as necessary but not sufficient for what appears to be an almost generic Hox protein activity. Finally, we provide evidence that the Hoxa2-induced degradation of RCHY1 is evolutionarily conserved among vertebrates. These data therefore support the hypothesis that the molecular and functional interaction between Hox proteins and RCHY1 is an ancestral Hox property.
Introduction
Hoxa2 belongs to the extremely well-conserved Hox gene family which includes 39 members in mammals. Mammalian Hox genes are organized in four clusters located on different chromosomes and can be classified in 13 paralogue groups based on their sequence similarities and their relative position within the clusters. Hox genes code for transcription factors which fulfill well-documented functions during embryonic development. In particular, their most spectacular activities are associated to the patterning of the main body axis, limb development and organogenesis (reviewed in [1]). Hox gene activity is also involved in the control of multiple cell behaviors such as proliferation, migration, differentiation or apoptosis (reviewed in [2]). Concordantly with these important roles, Hox gene misregulation has been linked to the onset of cell pathologies, namely cancers [3].
Although these genes encode homeodomain transcription factors, a growing body of evidence supports that HOX proteins could also perform non-transcriptional activities. Indeed, their involvement in mRNA translation, DNA repair, initiation of DNA replication and possibly modulation of signal transduction has been highlighted in distinct although still restricted instances (reviewed in [4]).
RCHY1, a protein we recently identified as an interactor of Hoxa2 [5], is a RING-finger type E3 ligase promoting ubiquitination and degradation of different targets. Several of its known substrates, including p53, p27 Kip1 , polH, CHK2 and c-MYC, are known to be involved in the control of cell proliferation and cell death. Consequently, by influencing the abundance of these targets, RCHY1 has been shown to be a regulator of DNA damage response and cell cycle progression (reviewed in [6,7]). RCHY1 itself is a short-lived protein regulated by the proteasomal degradation pathway and distinct modulators have been reported to negatively influence RCHY1 stability [8,9]. Indeed, RCHY1 can be self-ubiquitinated leading to its proteasomal processing [10][11][12][13]. Moreover, the CaMKII protein kinase was shown to hyper-phosphorylate RCHY1 during the G2/M phases of the cell cycle which, in turn, enhances RCHY1 self-ubiquitination and turnover. Finally, SCYL1BP1 can relocalize RCHY1 to the cytoplasm and promote its ubiquitin-dependent degradation [14].
In a former study, we reported that Hoxa2 expression correlated with a decrease in RCHY1 protein levels. We provided evidence that the RCHY1 decay induced by Hoxa2 involved the proteasome pathway but in an ubiquitin-independent way. Finally, correlatively to the RCHY1 degradation, Hoxa2 expression was also shown to decrease the ubiquitination of p53, in turn, increasing its abundance [5].
Here, we further investigated the process of Hoxa2-induced RCHY1 degradation. We showed that HOXA2 induces a two-fold reduction in RCHY1 half-life, and that this degradation requires the 26S proteasome. Furthermore, we provide evidence that the ability to interact with RCHY1 is shared by HOX proteins although the ability to induce RCHY1 decay is not. We also show that the HOXA2-RCHY1 interaction and RCHY1 degradation are evolutionarily conserved processes, from fish to mammals. Through the analysis of truncated proteins, we reveal that both the homeodomain and the C-terminal extremity of Hoxa2 are indispensable to induce RCHY1 degradation.
Sequences coding for Hoxa2 and Rchy1 from distinct vertebrate species were PCR-amplified using primers described in Table 1. The resulting PCR products were inserted into the pDON223 vector using the Gateway1 Technology from Invitrogen to generate the corresponding pEnt vectors (Table 1). The resulting pEnt plasmids were confirmed by DNA sequencing and used to generate pExp mammalian expression vectors for FLAG-tagged or GST-tagged proteins with the v1899 destination vector and pDest-GST N-terminal destination vector [21], respectively. pExp vectors for fusion proteins involving the VN 173 and VC 155 Venus protein moieties were obtained with pDest-VN 173 [22] and pDest-VC 155 [22], respectively.
The DNA sequence corresponding to the Rchy1 in situ hybridization (ISH) probe was PCRamplified from genomic DNA using the following primers, CAGCGTGAACACCTGAGGAT and GCTTATGTTGTCACAAGAGCCA and cloned into the pCR2.1 TOPO plasmid using the TOPO1 TA Cloning1 Technology (Invitrogen).
Bimolecular Fluorescence Complementation assay (BiFC)
COS-7 cells were cultured on glass coverslips and transfected with distinct combinations of pExp-VN 173 and pExp-VC 155 vectors for the fusion proteins and/or pDest-VN 173 and pDest-VC 155 empty controls, each at 500 ng. When indicated, the proteasome was inhibited as previously described. Twenty-four to 31 h after transfection, cells were rinsed in PBS solution and fixed for 20 minutes with 4% paraformaldehyde (PFA) in PBS, then rinsed twice in TBS (50 mM Tris, 155 mM NaCl, pH 7.5) containing 0.1% Triton X100. Coverslips were then used for immunofluorescence detection of proteins or rinsed once in TB (50 mM Tris pH 7.5) prior to mounting.
Imaging
Glass coverslips were mounted in Vectashield1-DAPI medium (Vector laboratories). Slides were then analyzed by epifluorescence (Axioskop 2, Zeiss) or confocal microscopy (LSM710, Zeiss, Jena, Germany). Fluorescence signals were quantified using ImageJ software. BiFC fluorescence from the test and the control conditions was quantified, and an interaction was considered positive when the tested combination emitted at least 3 times more fluorescence than the 3 control conditions.
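A minimal sketch of that positivity criterion is given below in Python; the function name and the example intensities are illustrative, with only the 3-fold threshold taken from the text.

```python
def bifc_positive(test_signal, control_signals, factor=3.0):
    """Call an interaction positive when the test fluorescence is at least
    `factor` times (3x in this study) that of every control condition."""
    return all(test_signal >= factor * control for control in control_signals)

# Illustrative quantifications (arbitrary units) for one test and three controls.
print(bifc_positive(1200.0, [310.0, 265.0, 390.0]))  # True
print(bifc_positive(600.0, [310.0, 265.0, 390.0]))   # False
```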
β-Galactosidase and Luciferase assays
HEK293T cells were transfected with 250 ng of reporter plasmid, and 50 ng of pCS2-PREP1, pCMV-PBX1a and/or expression vector for full-length GST-Hoxa2 or GST-Hoxa2 deletion derivatives. To avoid experimental variations due to transfection efficiency, an internal standard reporter corresponding to the lacZ gene under the control of a constitutive CMV promoter (pCMVlacZ, [21]) was also added in cotransfection experiments (25 ng). Cells were harvested 48 h after transfection for enzymatic assays. Lysis and enzymatic activity dosages were performed with the β-Gal reporter gene assay kit (#11758241001, Roche) and the Luciferase reporter gene assay kit (#11669893001, Roche), respectively. Luciferase activity was then normalized to that of β-Galactosidase.
ISH, RNA extraction and RT-PCR from mouse embryos
All animal experiments were performed in accordance with the guidelines established by the Animal Experimentation Ethics Committee of the Université catholique de Louvain and in agreement with the European directive 2010/63/UE (approval number 103002). Mice were maintained and fed under standard conditions on a 14 h light / 10 h dark cycle. All the experiments were carried out on adult CD1 female mice mated overnight with adult CD1 males. When plugs were detected, pregnant mice were killed between 8.5 to 12.5 days post coitum by gas inhalation and embryos were rapidly dissected while kept on ice. Embryos for ISH were rinsed in PBS, fixed overnight with 4% PFA in PBS at 4°C, rinsed three times 20 minutes in PBS before cryopreservation by incubation at 4°C, for 2 h in 10% sucrose/PBS and then overnight in 20% sucrose/PBS. Thereafter, embryos were embedded in OCT (Shandon Cryoma-trixTM, Thermo Electron, France), frozen on dry ice and stored at -80°C. Using a Leica CM 3050S cryostat, seven sets of 20 μm thick serial transversal or sagittal cryosections per embryo were obtained. Gene expression was detected using digoxigenin-labeled RNA probes as previously described by Hutlet et al. [23]. For probe synthesis, the pKS-Hoxa2 and the pCR2.1-TO-PO-Rchy1 plasmids were linearized with EcoRI and SpeI, and the probes were transcribed with the T3 and the T7 polymerases, respectively. Hybridized sections were analyzed on a Leica DM2500 microscope and pictures were captured with a Leica DFC420C camera.
Embryos for RT-PCR were frozen in liquid nitrogen and stored at -80°C. Total RNA was extracted with the High Pure RNA Isolation Kit (Roche) according to the manufacturer's instructions. RNA was reverse transcribed using a reaction mix containing 200 ng random hexamer primers (#SO142, Life technologies), 1 mM dNTP (#R0191, Life technologies), 10 U ribo-Lock RNase inhibitor (#EO0381, Life technologies), 100 U RevertAid Reverse transcriptase and the provided buffer (#EP0441, Life technologies). The mixture was incubated for 10 minutes at 25°C, 1 h at 42°C and 10 minutes at 80°C. Specific intron-spanning primers listed in Table 2 were designed based on NCBI database sequences. For Rchy1 and Actin amplifications, PCR reaction mix contained 1.25 U Taq DNA Polymerase (#EP0402, Life technologies) with the provided buffer supplemented with 1.9 mM MgCl 2 , 250 μM dNTP (#R0191, Life technologies) and 1.25 mM of each primer. The amplification program started with an activation step at 95°C for 5 minutes followed by 35 cycles of denaturation at 95°C for 30 seconds, hybridization at 57°C for 15 seconds and elongation at 72°C for 45 seconds. The last cycle was completed by a final elongation step at 72°C for 7 minutes. For Hoxa2 amplification, PCR reaction mix contained 1 U Expand Long Template (Roche) with the provided buffer (n°1), 250 μM dNTP (#R0191, Life technologies) and 250 nM of each primer. The amplification program started with an activation step at 95°C for 5 minutes followed by 35 cycles of denaturation at 95°C for 30 seconds, hybridization at 55°C for 15 seconds and elongation at 68°C for 45 seconds. The last cycle was completed by a final elongation step at 68°C for 7 minutes.
Statistical analysis
All statistical analyses were performed with JMP11 software. A mixed model, followed by a post hoc Dunnett's test, was used for statistical purposes. Western blot quantifications were analyzed using the gel as a random parameter, the tested HOX or the tested deletion Hoxa2 derivatives as fixed parameters, Ln(RCHY1/ACTIN) intensity as the response and the GST condition as the control. Luciferase activation was analyzed using the experiment as a random parameter, the tested deletion Hoxa2 derivatives as a fixed parameter, Log(Luciferase/βGal) as the response and the "PREP-PBX" or "Hoxa2-PREP-PBX" conditions as the controls.
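For readers without JMP, an equivalent mixed model can be sketched in Python with statsmodels, as below; the data frame is an invented miniature of the densitometry table (gel as random grouping factor, HOX condition as fixed effect, log ratio as response), and the Dunnett post hoc step is not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented densitometry ratios mimicking the Western blot quantifications.
df = pd.DataFrame({
    "ratio": [1.00, 0.35, 0.90, 0.45, 1.10, 0.30, 0.95, 0.40],
    "hox":   ["GST", "HOXA2"] * 4,
    "gel":   ["g1", "g1", "g2", "g2", "g3", "g3", "g4", "g4"],
})
df["log_ratio"] = np.log(df["ratio"])  # Ln(RCHY1/ACTIN) response

# Mixed model: HOX condition as fixed effect, gel as random intercept.
model = smf.mixedlm("log_ratio ~ hox", data=df, groups=df["gel"])
result = model.fit()
print(result.summary())
```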
RCHY1 half-life is decreased by HOXA2 in a 26S proteasomedependent way
Further to an interactomic screen carried out by Bergiers et al. to identify Hoxa2 partner proteins, the Hoxa2 protein from the mouse was shown to interact with the human RCHY1 and to induce its destabilization [5]. To confirm these data obtained with proteins of heterologous origin, we addressed whether this destabilization process was conserved for the human proteins.
To this end, we transiently transfected HEK293T cells to express human FLAG-RCHY1 and GST or human GST-HOXA2 fusion proteins. As shown in Fig 1A, the level of RCHY1 was considerably decreased in the presence of HOXA2, supporting that, as for the murine Hoxa2, expression of the human HOXA2 had a negative effect on RCHY1 protein accumulation.
To further investigate the effect HOXA2 exerts on RCHY1, we first quantified RCHY1 half-life and then assayed the impact of Hoxa2 on this parameter. HEK293T cells were transiently transfected with a FLAG-RCHY1 expression vector. Twenty-four hours later, de novo protein synthesis was inhibited using cycloheximide (CHX) (200 μg/ml) and cells were collected at different time-points (0, 1.5, 3 and 4.5 h). Western blot analysis revealed that RCHY1 protein levels decreased within 3 h and more dramatically after 4.5 h (Fig 1B). Densitometry analysis indicated that RCHY1 half-life (with quantification normalized to β-ACTIN levels) was about 3.68 h, which is consistent with previously reported data [8,9] (Fig 1C). Thereafter, to address to what extent the HOXA2 protein might affect RCHY1 half-life, GST-HOXA2 was transfected in combination with FLAG-RCHY1. Upon co-expression, RCHY1 protein levels were drastically reduced. Indeed, HOXA2 caused a significant 2-fold reduction in RCHY1 half-life, which was decreased to 1.76 h (Fig 1B and 1C).
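The half-life estimate can be reproduced in outline by fitting a first-order decay to the cycloheximide-chase densitometry, as in the Python sketch below; the signal values are illustrative stand-ins chosen to yield a half-life close to the reported ~3.7 h, not the measured band intensities.

```python
import numpy as np
from scipy.optimize import curve_fit

# Chase time points (h) and illustrative RCHY1/ACTIN signal normalized to t = 0.
t = np.array([0.0, 1.5, 3.0, 4.5])
signal = np.array([1.00, 0.76, 0.57, 0.43])

def decay(t, k):
    """First-order exponential decay with rate constant k."""
    return np.exp(-k * t)

(k,), _ = curve_fit(decay, t, signal, p0=[0.2])
half_life = np.log(2) / k
print(f"Estimated RCHY1 half-life: {half_life:.2f} h")
```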
Similarly to previous reports by Bergiers et al. [5], we addressed whether the HOXA2-dependent reduction in RCHY1 stability was the result of proteasome-mediated degradation. We thus compared RCHY1 levels and half-life in the presence or absence of proteasome inhibition. Upon transfection of FLAG-RCHY1 and subsequent treatment with the 20S proteasome inhibitor MG132 (5-10 μM) for 6 h, RCHY1 protein levels were dramatically increased, whereas β-ACTIN levels remained unaltered. Moreover, the destabilization of RCHY1 induced by HOXA2 was significantly abolished by MG132 treatment, therefore confirming that HOXA2 induces RCHY1 degradation through a proteasome-dependent pathway (Fig 1B and 1D). In addition, multiple higher molecular weight bands revealed by the anti-FLAG antibody appeared upon MG132 treatment, which likely correspond to ubiquitination of RCHY1 (Fig 1D). Such post-translational modifications of RCHY1 have indeed been assayed and confirmed previously [5].
Most of the substrates degraded by the 26S proteasome are polyubiquitinated. However, in a limited number of cases, degradation of non-ubiquitinated proteins by the 20S core proteasome has been reported [24][25][26]. For example, 14-3-3τ, MDM2, NQO1 and Rchy1 respectively promote p21, RB, p53 and PolH turnover through the 20S proteasome independently of the ubiquitination status of their substrates [24,27,28]. In this model, the destabilization of p21 and RB was shown to be promoted by their interaction with PSMA3, an α-subunit of the 20S core proteasome [27,29,30]. This hypothesis suggests that for their degradation, non-ubiquitinated proteins could bypass the 19S regulator moiety of the proteasome to be directly targeted to the 20S core proteases [29]. We previously reported that Hoxa2 could destabilize RCHY1 independently of its ubiquitination and that Hoxa2 was capable of interacting with the PSMA3 and PSMB2 subunits of the 20S core particle [5], supporting a similar mechanism for the Hoxa2-mediated decay of RCHY1 as described above. To investigate whether RCHY1 bypasses the 19S cap proteasome and is directly targeted to the 20S core proteasome by HOXA2, the activity of the 19S was inhibited with b-AP15. This drug inhibits the deubiquitinating activity of both ubiquitin C-terminal hydrolase 5 (UCHL5) and ubiquitin-specific peptidase 14 (USP14), two constitutive proteins of the 19S proteasome, leading to an accumulation of polyubiquitinated proteins [31]. High molecular weight forms of RCHY1 likely corresponding to ubiquitinated proteins were detected upon b-AP15 treatment (1 μM), supporting the drug's efficacy. However, our results showed that the RCHY1 reduction induced by HOXA2 could be rescued by b-AP15 (Fig 1D). Moreover, exposure to b-AP15 also resulted in an enhanced stability of RCHY1 after cycloheximide treatment (Fig 1B). These data suggest that 19S cap proteasomal activity is required for the HOXA2-mediated RCHY1 decay.
In conclusion, RCHY1 half-life has been estimated to be around 3.5 h and is dramatically reduced upon expression of HOXA2. The RCHY1 turnover is mediated by the proteasomal pathway and the function of both the 20S core and 19S cap proteasome is required.
HOXA2 and RCHY1 mainly interact in the nucleus
The localization of the RCHY1-HOXA2 interaction was studied using the BiFC assay, which not only validates the possible direct interaction between two partner proteins but can also indicate where it takes place in live cells or in vivo. BiFC relies on the ability of the N- and C-terminal parts of the Venus protein to emit detectable fluorescence once they are brought into close proximity. The HOXA2 protein was fused downstream of the N-terminal 173 amino acids of Venus (VN 173 ), while RCHY1 was C-terminally fused to the C-terminal moiety of Venus (amino acids 155 to 243; VC 155 ). Controls confirming that the N- and C-terminal Venus fragments did not reassociate if not fused to interacting proteins were also included. Consistently, the VN 173 HOXA2/VC 155 , the VN 173 /VC 155 RCHY1 and the VN 173 /VC 155 combinations showed little to no fluorescence (S1 Fig). As preliminary controls, BiFC was first assayed for two well-established RCHY1-mediated interactions: RCHY1 dimerization and the RCHY1-p27 Kip1 interaction [12,32,33]. The VN 173 RCHY1 and VC 155 RCHY1 fusion proteins provided diffuse and punctate fluorescence mainly in the cytoplasm as well as a weaker diffuse signal in the nucleus (Fig 2A). In addition, the majority of the cells transfected with VN 173 RCHY1 and VC 155 p27 Kip1 provided a diffuse signal both in the cytoplasm and the nucleus, concordantly with the known regulation of p27 Kip1 by RCHY1 in these subcellular compartments (S2 Fig) [33]. It is of note that a few cells presented a particular punctate staining in the cytoplasm for the VN 173 RCHY1 and VC 155 p27 Kip1 interaction (S2 Fig).
The vast majority of the emitted BiFC signal associated with the HOXA2-RCHY1 interaction localized in the nucleus and most frequently appeared to be punctate. Moreover, the number of fluorescent cells and the signal intensity were drastically increased by treating cells with the proteasome inhibitor MG132, confirming the involvement of the proteasomal pathway (Fig 2B). The background fluorescence produced by the control conditions was weak compared to the corresponding test condition, as observed in S1 Fig. To identify the nuclear sub-compartment stained by the punctate signal associated with the RCHY1-HOXA2 interaction, we tried to colocalize the BiFC signal with distinct immunolabeled nuclear domains.
First, PML bodies were targeted. These structures are dynamic nuclear foci 0.2-1 μm wide consisting of multimolecular platforms where recruited proteins, together with post-translational modifiers, act to modulate protein activation, sequestration or degradation. Cellular processes such as transcription, response to DNA damage or resistance to micro-organisms, for example, have been shown to be regulated within these nuclear structures (reviewed in [34]). Preliminary data suggested that Hoxa2 could be relocalized to the PML bodies upon inhibition of the proteasome (I. Bergiers, unpublished data). However, our results show that the BiFC staining did not overlap with PML immunoreactivity (Fig 2C).
Second, DNA repair foci were analyzed since RCHY1 has been shown to play a role in DNA repair. Notably, RCHY1 is involved in the degradation of PolH, a DNA polymerase required for DNA repair [28,35]. Moreover, IR-induced cell death was found to be altered in RCHY1 knockout mice [36]. For this reason, it was of interest to look at γH2AX, a marker of DNA repair foci. As shown in Fig 2D, the BiFC signal emitted by the RCHY1-HOXA2 interaction did not coincide with the pattern observed for γH2AX.
Finally, we targeted nuclear speckles with an antibody against the splicing factor SC35. These structures contain RNA splicing machinery and also include transcription factors, notably HOXA1 [37]. No overlap was observed between the SC35 staining pattern and the HOXA2-RCHY1 interaction signal.
In conclusion, the HOXA2-RCHY1 interaction appears to mainly take place in the nucleus and is enhanced upon proteasomal inhibition. Though we excluded PML bodies, DNA repair foci and nuclear speckles, the actual nuclear subcompartment where the interaction localizes remains to be identified.
The HOXA2-induced destabilization of RCHY1 is an evolutionarily conserved phenomenon
Since the HOX family of proteins is extensively conserved among bilaterian animals, we next questioned whether the RCHY1 destabilization induced by HOXA2 is evolutionarily conserved among vertebrates. RCHY1 and HOXA2 coding sequences from mouse, chicken, Xenopus and zebrafish were cloned and each HOXA2 orthologue was tested for its ability to interact with and destabilize the RCHY1 protein of autologous origin. Like the human proteins, the mouse, chicken, Xenopus and zebrafish HOXA2 orthologues shared the ability to interact with RCHY1 (Fig 3A) and promote its degradation (Fig 3B). As previously reported, RCHY1 destabilization induced by HOXA2 results from proteasome-mediated degradation [5]. We therefore addressed whether the destabilization was differentially influenced upon proteasome inhibition by MG132 for the HOXA2/RCHY1 orthologues. For all HOXA2-RCHY1 protein pairs, RCHY1 decay was indeed blocked by proteasomal inhibition (Fig 3B). Hence, we conclude that the proteasome-dependent destabilization of RCHY1 by HOXA2 is an evolutionarily conserved phenomenon.
Rchy1 is expressed during mouse embryogenesis
Hoxa2 expression and functions are well known to take place from the gastrulation stage on during embryogenesis [38,39]. Beitel (2002) and Leng (2003) [11,40] have previously reported that Rchy1 is differentially expressed in various adult mouse tissues. However, data concerning Rchy1 expression during mouse development are currently lacking. For this reason, it was of interest to investigate whether Rchy1 expression pattern overlaps with that of Hoxa2 in the embryo.
We first analyzed Rchy1 expression in the developing embryo from E8.5 to E12.5, stages at which Hoxa2 expression is well described. RT-PCR on different cDNA pools confirmed Rchy1 expression at all the tested embryonic stages (Fig 4A). ISH on sagittal sections at E10.5 revealed a widespread but heterogeneous staining. In particular, strong signal was detected in the anterior neuroepithelium while it appeared weaker posteriorly (Fig 4B). We then compared Hoxa2 and Rchy1 expression patterns focusing on the hindbrain, at the boundary between rhombomeres (r)1 and 2, and the branchial arches, which are structures affected in Hoxa2 knockout mice. While Hoxa2 expression showed a clear limit at the r1-r2 junction, staining for Rchy1 overlapped with the Hoxa2 expression domain and extended more rostrally, towards the midbrain (Fig 4C and 4D). Similarly, while Hoxa2 was specifically expressed in the second but not in the first branchial arches, Rchy1 expression was detected in both of them (Fig 4E and 4F). In conclusion, our results show that Rchy1 is expressed during mouse embryogenesis and highlight that both Hoxa2 and Rchy1, previously identified as genes coding for interacting proteins in cultured cells, are expressed in overlapping territories at the same embryonic stages.
Mapping the Hoxa2 domains involved in interacting and destabilizing RCHY1
In order to determine which regions of the Hoxa2 protein are involved in RCHY1 interaction and destabilization, we designed four Hoxa2 deletion variants in which residues from the N-terminal (Hoxa2 ΔN [138-372]), C-terminal (Hoxa2 ΔC ), both terminal domains (Hoxa2 HD ), or the homeodomain (Hoxa2 ΔHD [Δ139-198]) were removed (Fig 5A). We then assayed these variants for their interaction with RCHY1 by BiFC. Hoxa2 and the deletion mutants were fused to VN 173 , while RCHY1 was fused to VC 155 . Emitted fluorescence was observed with all the tested mutants, suggesting that an extended region or at least two separate Hoxa2 protein regions are capable of establishing contacts which are sufficient but not absolutely necessary to the RCHY1-Hoxa2 interaction, i.e. subsets of contacts are sufficient to support the interaction (Fig 5B). Nonetheless, our results indicate that some contacts at least could be mapped to the homeodomain and others to the C- and/or N-terminal moieties of Hoxa2.
Fig 5 legend (panels C-E). (C) HEK293T cells were cotransfected with expression vectors coding for FLAG-RCHY1 and GST, GST-HOXA2 or GST-fused Hoxa2 deletion derivatives. Cell lysates were then subjected to immunoblot analysis with antibodies targeting FLAG, GST or β-ACTIN. (D) The RCHY1 and β-ACTIN proteins detected as in (C) were quantified and the relative RCHY1/β-ACTIN abundance was compared to the control condition involving the unfused GST protein, for which the RCHY1/β-ACTIN ratio was set at 1 (dashed red line). Bars indicate the standard deviation (3 < n < 4). (E) HEK293T cells were transfected with a Luciferase (Luc) reporter construct containing a Hoxa2-responsive element and a constitutive β-Galactosidase (Gal) reporter as standard. Expression vectors for PBX1a, PREP1 and GST-tagged Hoxa2 variants were added in combination. The Luc/Gal activity ratios were compared relatively to that obtained with the control transfection involving the unfused GST protein (relative activity of 1). Bars indicate the standard deviation (N = 2, n = 6). (D-E) Asterisks indicate a significant difference between ratios for unfused GST and GST-HOX conditions (Dunnett, * = 0.01 < P < 0.05, ** = 0.001 < P < 0.01, *** = 0.0001 < P < 0.001). doi:10.1371/journal.pone.0141347.g005
Next, the GST-fused Hoxa2 variants were individually cotransfected with FLAG-RCHY1 and the effect of each deletion on the HOXA2-mediated destabilization of RCHY1 was tested. As shown in Fig 5C and 5D, the N-terminal deletant still promoted RCHY1 destabilization. In contrast, both the C-terminal and the homeodomain deletants failed to significantly affect RCHY1 stability, indicating that these regions are required to promote RCHY1 degradation (Fig 5C and 5D). Consistently, the homeodomain alone did not induce RCHY1 decay. We therefore conclude that the Hoxa2 fragment spanning aa 138-372 contains the necessary elements for RCHY1 destabilization. However, as Hoxa2ΔC, Hoxa2HD and Hoxa2ΔHD were all able to interact without leading to RCHY1 destabilization, it can also be concluded that the interaction between the two partners is not sufficient to lead to RCHY1 degradation.
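As a worked illustration of the quantification underlying Fig 5D (and, analogously, Fig 6C), the Python sketch below normalizes RCHY1 band intensities to β-ACTIN, expresses each condition relative to the unfused GST control (set to 1), and runs a Dunnett comparison against that control. The densitometry values, replicate counts and condition names are hypothetical placeholders, not data from the study, and the script assumes SciPy ≥ 1.11 for scipy.stats.dunnett.

```python
# Sketch: relative RCHY1/beta-ACTIN quantification with a Dunnett comparison
# against the unfused-GST control, in the spirit of Fig 5C-5D.
# All densitometry values below are hypothetical placeholders, not study data.
# Requires SciPy >= 1.11 for scipy.stats.dunnett.
import numpy as np
from scipy.stats import dunnett

# Hypothetical band intensities per biological replicate: (RCHY1, beta-ACTIN)
densitometry = {
    "GST":         [(1.00, 1.00), (0.95, 1.05), (1.10, 0.98)],  # control
    "GST-HOXA2":   [(0.35, 1.02), (0.42, 0.97), (0.30, 1.00)],
    "GST-Hoxa2dN": [(0.40, 0.99), (0.38, 1.03), (0.45, 1.01)],
    "GST-Hoxa2dC": [(0.92, 1.00), (1.05, 0.96), (0.98, 1.04)],
}

# RCHY1/beta-ACTIN ratio per replicate, then normalize to the mean GST ratio
ratios = {k: np.array([r / a for r, a in v]) for k, v in densitometry.items()}
control_mean = ratios["GST"].mean()
relative = {k: v / control_mean for k, v in ratios.items()}  # GST set to ~1

for name, vals in relative.items():
    print(f"{name}: mean = {vals.mean():.2f}, SD = {vals.std(ddof=1):.2f}")

# Dunnett's test: each GST-HOX condition compared with the GST control
treatments = [relative[k] for k in relative if k != "GST"]
result = dunnett(*treatments, control=relative["GST"])
print("Dunnett p-values:", result.pvalue)
```

The same normalization logic (condition ratio divided by the control ratio, control set to 1) applies to the Luc/Gal reporter readout described for Fig 5E.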
To further determine whether Hoxa2's impact on RCHY1 abundance is associated with its transcriptional activity, we tested whether the Hoxa2 deletion variants were still transcriptionally active. Luciferase assays were carried out with a reporter construct under the control of a Hoxa2-responsive element we previously characterized as an auto-regulatory enhancer active in the developing hindbrain [19]. Expression vectors for the Hoxa2 cofactors Pbx1a and Prep1 were added in the assay to provide full activation of the reporter. All the tested Hoxa2 deletions strongly or completely compromised Hoxa2 transcriptional activity towards the reporter gene (Fig 5E). This provides evidence that the N-terminal, HD and C-terminal domains of the protein are all necessary for efficient transcriptional activity. As a corollary, since the Hoxa2 variant missing the N-terminal part was almost inactive in transcription (Fig 5E) but remained functional for RCHY1 destabilization (Fig 5C and 5D), we conclude that the Hoxa2-mediated degradation of RCHY1 is independent of its transcription factor activity and can be considered a new non-transcriptional function for a Hox protein.
RCHY1 is bound by multiple HOX proteins but destabilized by only a few of them
Our Hoxa2 deletion analysis revealed that the homeodomain is involved both in the HOXA2-RCHY1 interaction and in the induction of RCHY1 degradation. Since HOX proteins display a high degree of sequence conservation, particularly in their homeodomain, we addressed whether HOX proteins other than HOXA2 were capable of interacting with and destabilizing RCHY1. Nineteen out of the 39 HOX genes were subcloned to code for GST fusion proteins, and each was individually cotransfected with FLAG-RCHY1 in HEK293T cells (Fig 6A and 6B). Cell lysates were collected and processed to estimate relative RCHY1 stability (Fig 6B). Western blot and densitometry analyses revealed that RCHY1 abundance varied upon GST-HOX expression and that RCHY1 was significantly destabilized by HOXB1, HOXC4, HOXB5, HOXA6 and HOXB7 (Fig 6C). These results indicate that several HOX proteins, but not all of them, can induce RCHY1 decay similarly to HOXA2. The interaction of HOXB1, HOXC4 and HOXB5 with RCHY1 was further confirmed by BiFC assay upon MG132 treatment, and the corresponding fluorescent signal localized to the nucleus (Fig 6D).
We then addressed whether the inability of some HOX proteins to destabilize RCHY1 was due to a lack of interaction. Surprisingly, HOXA1, HOXB2, HOXA3, HOXC11 and HOXD10, which had no significant impact on RCHY1 stability, nonetheless appeared to interact with RCHY1 upon MG132 treatment (Fig 6D).

In conclusion, in light of these results, we hypothesize that binding to RCHY1 is a general property shared by HOX proteins, but that the ability to promote RCHY1 degradation is restricted to a subset of them.
Discussion
While HOX proteins are known to be involved in a vast range of activities, their molecular interactions have been poorly characterized to date. In a previous study, we identified RCHY1, an E3 ubiquitin ligase, as a new interactor of Hoxa2 and provided data supporting that Hoxa2 promotes the proteasomal degradation of RCHY1. Here, we reported (1) that the HOXA2-RCHY1 interaction mainly takes place in the nucleus, (2) that the RCHY1 decay induced by HOXA2 depends on both the 19S and the 20S proteasome particles, (3) that the interaction involves molecular determinants from at least two distinct HOXA2 regions providing contacts which are sufficient but not individually necessary for the binding of RCHY1, and (4) that some HOXA2 deletion derivatives, though still capable of interacting with RCHY1, have lost the ability to provoke its proteasomal degradation. Finally, the results provided in the present study support that the downregulation of RCHY1 provoked by HOXA2 is conserved among orthologues from other vertebrate species, and that binding to RCHY1 appears to be a generic characteristic of HOX proteins, whereas the ability to stimulate its proteasomal degradation is shared only by a subset of them.
The N-terminal part of Hoxa2 appears dispensable for the induction of RCHY1 degradation. Indeed, the Hoxa2 deletion mutant lacking the N-terminal portion of the protein destabilizes RCHY1 to the same extent as wild-type Hoxa2. This result can be related to what we previously showed for the Hoxa2WMAA protein, mutated in the hexapeptide sequence located in the N-terminal part of Hoxa2. Similarly to Hoxa2ΔN, the Hoxa2WMAA protein has been shown to induce RCHY1 decay [5]. This hexapeptide motif is known to mediate the interaction of Hox proteins with PBX [41], and in the case of Hoxa2 we provided evidence that amino acid substitutions in the hexapeptide severely impaired or abolished the transcriptional activity of Hoxa2 [19]. As both the Hoxa2ΔN and Hoxa2WMAA mutants lack a functional hexapeptide sequence but maintain full activity with regard to RCHY1 destabilization, we propose that the Hoxa2-mediated effect on RCHY1 degradation is independent of the transcriptional activity of Hoxa2 and corresponds to a novel non-transcriptional activity for Hoxa2.
Conversely, although all the HOXA2 variants tested so far remain capable of interacting with RCHY1, the ability to induce RCHY1 destabilization is lost upon deletion of the C-terminal part or the entire homeodomain of HOXA2, or as a consequence of point mutations in the homeodomain (Hoxa2KQN-RAA), as we previously reported [5]. This suggests that the molecular determinants involved in inducing RCHY1 degradation are distinct from those sustaining the interaction. It also demonstrates that the integrity of the homeodomain is necessary for RCHY1 degradation. However, although necessary, the homeodomain is not sufficient to act on RCHY1 turnover.
In accordance with the observation that the homeodomain is sufficient to mediate the interaction with RCHY1, all the other tested HOX proteins were shown to be able to interact with RCHY1. Moreover, consistent with the fact that the homeodomain alone is insufficient to promote RCHY1 decay, the majority of the tested HOX proteins did not display any significant impact on RCHY1 stability, with only 6 out of the 19 HOX proteins tested leading to a significant destabilization of RCHY1. This again supports that the interaction between HOX proteins and RCHY1 does not necessarily lead to RCHY1 degradation, and that the molecular determinants involved in inducing RCHY1 degradation are distinct from those sustaining the interaction. We could therefore hypothesize that an additional domain is required for RCHY1 destabilization. However, protein alignments of HOXB1, A2, C4, B5, A6 and B7 did not enable us to identify common motifs that are absent from the HOX proteins showing no effect on RCHY1 stability and that could be linked to this function.
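A simple way to operationalize the motif search described above is to look for short sequence windows (k-mers) shared by all RCHY1-destabilizing HOX proteins but absent from the non-destabilizing ones. The Python sketch below illustrates this idea; the sequences are hypothetical toy placeholders, and the exact-match k-mer scan is a simplification of the alignment-based comparison performed in the study.

```python
# Sketch: naive search for k-mers common to RCHY1-destabilizing HOX proteins
# and absent from non-destabilizing ones. Sequences are short hypothetical
# placeholders; real full-length HOX protein sequences would be used in practice.

def kmers(seq: str, k: int) -> set[str]:
    """Return the set of all k-length windows of a protein sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def candidate_motifs(destabilizers: dict, non_destabilizers: dict, k: int = 6) -> set[str]:
    """k-mers present in every destabilizer but in no non-destabilizer."""
    shared = set.intersection(*(kmers(s, k) for s in destabilizers.values()))
    excluded = set.union(*(kmers(s, k) for s in non_destabilizers.values()))
    return shared - excluded

# Hypothetical toy sequences (placeholders, not real HOX sequences)
destabilizers = {
    "HOXA2": "MSSYFVNSAAHQQPKRSRTAYTRQ",
    "HOXB1": "MDFDKFVNSAAQPKRGRTAYTSQL",
}
non_destabilizers = {
    "HOXA1": "MNSFLEHQQPKAGRTAYTRPQLMV",
    "HOXB2": "MSLYYANSLFSKPRSRTAYSRQVD",
}

for k in range(4, 9):
    hits = candidate_motifs(destabilizers, non_destabilizers, k)
    print(f"k={k}: {sorted(hits) if hits else 'no shared motif found'}")
```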
Similar molecular interactions generic to HOX proteins have already been described, including with the TALE-class homeodomain transcription factors PBX and MEIS, the CUL4 ubiquitin ligase, the histone acetyltransferase CBP, or the pre-replication complex inhibitor Geminin [41][42][43][44][45][46][47][48][49]. At a functional level, some generic activities have also been highlighted, such as the inhibition of autophagy in Drosophila [50]. Molecular properties and activities often appear largely shared among proteins belonging to the same paralogue group. For example, Hox proteins of paralogue group 3 have been shown to be functionally interchangeable [51]. However, although the interaction with RCHY1 appears common to several if not all HOX proteins, its functional consequence in terms of modulating RCHY1 stability seems to be neither generic to HOX proteins nor specific to one HOX, since only a subset of the tested HOX proteins could induce a significant RCHY1 destabilization. Remarkably, this activity is not paralogue-specific but is instead shared by proteins corresponding to genes spread along the vertebrate HOX complexes. Why a given HOX protein can or cannot impact RCHY1 stability deserves further investigation.
In addition to the RCHY1 degradation mediated by several HOX proteins that we report here, two other studies previously reported HOX involvement in promoting protein degradation. Namely, HOXB4 and HOXA9 associate with the Roc1-Ddb1-Cul4a ubiquitin ligase and contribute to Geminin ubiquitination, leading to its proteasomal degradation [47,52]. Several similarities can be highlighted between the degradation processes described in these studies and the present report. Similarly to what we observed for HOXA2, the integrity of the HOXB4 homeodomain was shown to be required to promote Geminin decay. Furthermore, as for RCHY1, the ability to interact with Geminin is shared by multiple HOX proteins, in contrast to the ability to destabilize it. Indeed, Geminin was shown to be destabilized by HOXB4 and HOXA9 but not by HOXC13 [5,47,52].
Since HOXA2-induced RCHY1 degradation is evolutionarily conserved in all the tested vertebrates and this activity is shared by several HOX proteins from different paralogue groups, we hypothesize that controlling RCHY1 stability is an ancestral HOX function. Whether ancestral HOX proteins displayed the ability to induce RCHY1 destabilization could be addressed by testing whether this activity is shared among HOX proteins from bilaterian clades other than vertebrates.
The proteasomal degradation of RCHY1 promoted by HOXA2 was shown to occur independently of the ubiquitin system [5]. Ubiquitin-independent proteasomal degradation of proteins has already been reported in several instances [24][25][26][27][28][29][30]. The 20S core particle has been directly implicated in this process, and its involvement in RCHY1 degradation was confirmed using MG132, a drug that blocks the proteolytic activity of this part of the proteasome. It was proposed that Hoxa2 interacts with the 20S core proteasome to directly induce RCHY1 degradation. However, whether the 19S cap of the proteasome was required for such ubiquitin-independent protein decay remained unknown. Using b-AP15, a drug specifically inhibiting the 19S cap [31], we observed that HOXA2-induced RCHY1 degradation was abolished, supporting that the 19S cap is also implicated in this degradation pathway.
Finally, to our knowledge, this is the first report of Rchy1 expression during mouse embryogenesis. Rchy1 shows a widespread expression pattern, with stronger staining detected in the anterior neuroepithelium. Most significantly, the overlap between Hoxa2 and Rchy1 expression supports the possibility of an interaction between these proteins during mouse embryogenesis. Although Rchy1 appears dispensable for normal embryonic development, as suggested by the phenotype of the knockout mouse [36], the protein has been shown to differentially regulate its targets under stressed and unstressed conditions and might therefore play a more prominent role during embryonic development under stress conditions as well.
"year": 2015,
"sha1": "a2afd6f7c301a78f42f1c918b165c8a81d70e82b",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0141347&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f280707fc72f89ce4e0985548db81124e9971c6d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.