N-body Interactions will be Detectable in the HR-8799 System within 5 years with VLTI-GRAVITY

While Keplerian orbits account for the majority of the astrometric motion of directly-imaged planets, perturbations due to N-body interactions allow us to directly constrain exoplanet masses in multiplanet systems. This has the potential to improve our understanding of massive directly-imaged planets, nearly all of which currently have only model-dependent masses. The VLTI-GRAVITY instrument has demonstrated that interferometry can achieve 100x better astrometric precision than existing methods (Gravity Collaboration et al. 2019), a level of precision that makes detection of planet-planet interactions possible. In this study, we show that in the HR 8799 system, planet-planet deviations from currently used Keplerian approximations (Lacour et al. 2021) are expected to reach up to one-quarter of a milliarcsecond within five years, which will make them detectable with VLTI-GRAVITY. N-body modeling of this system will therefore be crucial for making precise astrometric predictions and directly constraining exoplanet masses.

INTRODUCTION

Orbits are essential to understanding the formation and evolution of exoplanet systems. In particular, orbital elements such as the inclination angle and eccentricity directly reflect the dynamical history of the system (Bowler et al. 2020). Recently, planet-planet interactions have been used to measure the dynamical masses of directly-imaged exoplanets (Lacour et al. 2021), which has the potential to address one of the major limitations of direct imaging: that planet masses are currently model-dependent. This new method can be used to compare measured masses against those predicted by formation models, and even to discover new planets using existing data. In the era of extremely precise astrometry, using planet-planet interactions to measure dynamical masses is not only possible, but may be more effective than using radial velocities and/or absolute astrometry.

orbitize! is a Python package that streamlines the process of modeling the orbits of directly-imaged planets. The orbitize! development team aims to make orbit-fitting faster and more accessible by combining multiple existing algorithms and techniques into one code base. orbitize! currently models planet-planet interactions using a Keplerian approximation (Lacour et al. 2021), meaning it assumes planets' orbital elements stay the same over time. In this study, we document the addition of an N-body backend, REBOUND (Rein & Liu 2012; Rein & Spiegel 2015), into orbitize!, now available in the 2.0.0 release. This update allows us to reliably model the motion of multiple secondary bodies, and we can use it to study when the current Keplerian approximation will be insufficient to model the motion of the HR 8799 system. Because it is a system with four super-Jupiter planets (Marois et al. 2008), HR 8799 is an excellent candidate for directly observing the effects of planet-planet interactions.

METHODS & RESULTS

To confirm the accuracy of our REBOUND implementation, we first compared its output with the standard orbitize! Keplerian solver on a single massless secondary body. The massless results agreed to a fractional precision of 10^-10 over hundreds of years, well below the measurement capabilities of current instruments. Since orbitize! and REBOUND use different orbital elements, this 10^-10 agreement helps to confirm our conversions and provides an independent check on our Kepler solver.
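The same kind of comparison can be sketched outside of orbitize! with a few lines of REBOUND. The snippet below is our own illustration, not the orbitize! implementation: it integrates an HR 8799-like system twice, once with massive planets and once with massless ones, and reports how far apart the two solutions drift on the sky. The star mass, planet masses, and orbital elements are round illustrative values, not the fitted parameters of Wang et al. (2018).

```python
# Illustrative N-body vs. Keplerian cross-check with REBOUND (not the
# orbitize! implementation). Massless planets follow fixed Keplerian
# orbits; massive planets perturb one another.
import math
import rebound

def make_sim(mass_scale):
    sim = rebound.Simulation()
    sim.units = ('yr', 'AU', 'Msun')
    sim.add(m=1.5)                     # host star, roughly 1.5 Msun
    mjup = 9.5e-4                      # Jupiter mass in solar masses
    # Round illustrative (a [AU], m [Mjup]) pairs for planets e, d, c, b;
    # NOT the fitted parameters of Wang et al. (2018).
    for a, m in [(16.0, 7.0), (27.0, 7.0), (43.0, 7.0), (71.0, 6.0)]:
        sim.add(m=m * mjup * mass_scale, a=a, e=0.05)
    sim.move_to_com()
    return sim

nbody = make_sim(1.0)    # full N-body: planets interact
kepler = make_sim(0.0)   # massless planets: pure Keplerian motion

def sky_offset(sim, i):
    """Planet-star separation vector, the quantity astrometry measures."""
    star, p = sim.particles[0], sim.particles[i]
    return p.x - star.x, p.y - star.y

for t in (5.0, 10.0):                  # epochs in years
    nbody.integrate(t)
    kepler.integrate(t)
    for i in range(1, 5):
        (x1, y1), (x2, y2) = sky_offset(nbody, i), sky_offset(kepler, i)
        dev_au = math.hypot(x1 - x2, y1 - y2)
        # at HR 8799's ~41 pc distance, 1 AU subtends about 1/41 arcsec
        print(f"t = {t:4.1f} yr, planet {i}: deviation ~ {dev_au / 41 * 1e3:.3f} mas")
```

With the fitted elements of Wang et al. (2018) in place of these placeholders, the smaller-separation planets show the largest drift, which is the behavior summarized in Figure 1 below.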
After confirming that our N-body implementation was working, we moved on to testing massive systems against the current approximate solver in orbitize! (Figure 1). The comparison shows deviations as high as 0.25 milliarcseconds for one body within 5 years (assuming the orbital parameters reported in Wang et al. 2018), detectable given the current VLTI-GRAVITY precision of 0.05 milliarcseconds (Gravity Collaboration et al. 2019). Within 10 years, the deviation is calculated to rise up to one milliarcsecond. Although an N-body solver has significant accuracy advantages for multi-planet systems, using it within orbitize! is completely optional, and it can be called within the compute_all_orbits() function. Anyone can use test data to experiment with the solvers, and tutorials are available on the orbitize! website.

CONCLUSION

Replacing current Kepler or other mass-independent solvers with Newtonian N-body solvers such as REBOUND will be essential to model multi-planet orbits at the current level of precision of VLTI-GRAVITY. Within 5 years, we should be able to detect the differences between a Kepler solver and an N-body solver in the HR 8799 system, which we can use to make accurate dynamical mass measurements of directly-imaged exoplanets. With REBOUND's implementation in orbitize!, we can now look for planet-planet interactions in new and existing data, and use them to better predict astrometric motion.

Figure 1. The absolute difference between using an approximate Keplerian multi-body solver (Lacour et al. 2021) vs. an N-body solver for each body in the HR 8799 system. The data show that smaller-separation planets will deviate by up to a quarter of a milliarcsecond from the previous Kepler calculations within five years, enough to be detected by high-precision instruments such as VLTI-GRAVITY.
Wind Power: Integrating Wind Turbine Generators (WTGs) with Energy Storage

Energy storage is the missing link between wind-driven power generation and delivering power in a sustainable manner that can be dispatched at times of high demand from the grid. Transmission systems that cover large territories, such as in North America, are particularly vulnerable and require additional dedicated transmission and readily dispatchable backup power systems. The installed capacity of Wind Turbine Generators (WTGs) in the US and worldwide, while impressive, suffers from a low capacity factor of 30% or less due to the variability and intermittency of wind as the motive force.
In 2007 the global installed wind capacity was 94 GW, with a predicted capacity of 136 GW by 2010, of which 55% would be installed in Europe and 23% (31 GW) in North America; these numbers could be exceeded, as the US already has over 29 GW of installed capacity with 99 GW planned over the next 10 years. The demand for electricity has considerable daily and seasonal variations, and the maximum demand may last for only a few hours each year. As a result, some power plants are required to operate for short periods each year, an inefficient use of expensive plants. Without any additional storage above the present 2.5% of installed base load in the USA (mainly PHS), base-loaded plants are being detrimentally cycled at higher frequency, and the situation is further exacerbated by the growing demand for renewable energy such as wind energy. In the US, this capacity has now reached in excess of 29,000 MW (Fig. 1), as summarized by American Wind Energy Association (AWEA) projects; in Canada, the current 2,800 MW of projects under consideration or contract will grow to 7,400 MW to meet energy objectives set for 2015. Installing larger wind farms to compensate for the low capacity factor results in high costs per delivered kWh. This requires continued tax incentives to deliver "green" energy to consumers. The full capability of the WTG is never realized, as at high wind speeds some of the wind energy has to be "spilled" to maintain a smooth delivery profile. Technology improvements have not overcome the "wasted" capacity of these modern marvels, except where hydro or Pumped Hydro Storage (PHS) facilities are utilized. The hydro power station can compensate for wind variability and intermittency, while PHS provides energy storage and delivers power during high-demand periods. Wind energy storage results in a much higher capacity factor, in effect reducing the cost of delivered kWh. PHS amounts to less than 2.3% of the current installed 1,000 GW of generating capacity, and this share will decrease with the increasing addition of wind generation.

Decoupling energy production from supply

Storage allows energy production to be decoupled from its supply, whether self-generated or purchased. WTGs can only receive energy payments for delivered power, requiring the installation of gas turbines or the cycling of thermal plants to provide capacity that cannot be delivered by wind. The wind generation variation vs. daily demand requirement is illustrated in Fig. 2. The problem with the proven bulk-energy PHS solution is that such facilities are not readily available to WTG installations in the USA or worldwide (with some exceptions in Europe), and they are expensive to construct and difficult to permit in the USA. A readily available, cost-effective alternative bulk-energy storage technology is ready for deployment. The Gas Turbine-Compressed Air Energy Storage (GT-CAES) concept incorporates a standard production GT with CAES technology and so covers a wide range of power production that can be matched to specific storage sites. During excess wind power production or nighttime wind, this power is used to drive air compressors to pressurize storage facilities such as salt caverns, deep aquifers (depleted natural gas wells), or above-ground storage tanks (pipelines). The stored compressed air is released to an air expander to recover the stored energy. The air to the expansion turbine is pre-heated to 510 °C to 565 °C using the gas turbine exhaust energy recovered in a Heat Recovery Unit (HRU).
The gas turbine's low exhaust emissions are reduced further with Selective Catalytic Reduction (SCR) in the HRU. Adiabatic expansion, without pre-heating the air before expansion, is another possibility. The electric motors driving the air compressors in bulk energy storage facilities are large; they can absorb large and varying quantities of wind-generated power, and thus regulate the kWh delivered during peak demand or store the excess power during low grid demand. Wind as a renewable resource would be able to deliver a larger percentage of "green" capacity with the ancillary power benefits of storage, such as voltage regulation, load following, and spinning reserve, which are not features of WTGs. Smaller-capacity systems of 3 to 30 MWh serve a different purpose for smaller wind farms, primarily a "smoothing" function, decoupling power delivery and meeting short-duration peak-hour generation.

With large-scale electricity storage capacity available at any time, system planners would need to build only sufficient generating capacity to meet average electrical demand rather than peak demands. Fig. 4 shows a multitude of smaller short-duration devices with quick-response discharge times; this is where much of the research and development has been focused, and it does not accommodate a large wind contribution. The emphasis today has to be on large-scale systems such as pumped hydro and compressed air energy storage to fully integrate the growing installed wind capacity. In theory, a typical power plant could operate with 40% less generating capacity than would otherwise be required when supported by energy storage. This represents considerable financial savings in peaking and intermediate plants. Additional reductions in emissions and capital investment can occur because the base-load generators operate more efficiently at steady-state output. The wind energy can be stabilized as well as increased in capacity toward the nameplate rating. Grid instability does lead to regional blackouts, which opens the door for more consideration of energy storage. While this is encouraging, there are institutional hurdles to overcome, one of which is the lack of understanding of the value and benefits of bulk energy storage, along with the perception that simply adding more new power plants and transmission capability will cure the blackout problems experienced in recent times in the USA. Storage is probably the better solution! Storage of electricity will significantly change the power industry for the better: better utilization of resources, better system efficiency, lower emissions, better reliability and security. Geologically suitable sites already identified for bulk energy storage using salt domes, hard rock or aquifers can be readily exploited for 20/30 GW of capability by 2020 or sooner, a fact not fully recognized by power entities (van der Linden, S., 2006).

How does a CAES system work?

The fundamentals of a gas turbine are well understood: atmospheric air is compressed to a higher pressure, fuel is added in a combustion chamber, and the hot, high-pressure combustion gas expands through a turbine that provides both the motive power for the compressor (60% or more) and the balance of the power (40% or less) as mechanical energy to drive an electric generator.
In the CAES variation of the standard gas turbine cycle, the compression cycle is separated from the combustion and generation cycle: using low-cost, off-peak or excess electricity, motor-driven, inter-cooled compressors charge the storage, and the compressed air is released from storage to the modified gas turbine for power generation on demand. In this process, some dramatic changes in the power and economic cycles occur. The gas turbine expander, relieved of its large parasitic compressor load, delivers approximately two-thirds more power with no increase in fuel consumption. The required compressed air comes at a much lower cost, thus enabling a lower cost of electricity generation during high-demand cycles than other intermediate-load systems, in particular gas-fired thermal or combined-cycle power plants, or even the lower-cost simple-cycle gas turbine power plants, especially under increasing renewable energy mandates. Fig. 5 below helps clarify the CAES concept.

Fig. 5. CAES Concept

The compressors utilize off-peak wind energy to store high-pressure air in the storage cavern, which is expanded to generate power when there is demand during the day; this diurnal use of wind energy, as depicted in Fig. 6, brings maximum wind capability to the grid.

CAES technology: storage concepts

Decoupling the compressor trains from the generating train allows for more flexibility in compression optimization and utilization. Motor-driven compressors in 50 MW or smaller increments allow sites and storage volume to best serve the transmission grid needs, as well as act as load sinks of 100/200 MW or 300 MW to avoid unnecessary cycling of base-loaded plants. Fig. 7 below captures this decoupling of compression from the power train.

Applications

Stored-energy integration into the generation-grid system is best illustrated in Fig. 8, "Energy Storage Applications on the Grid". This covers a wide field in every aspect of generation, transmission and distribution. The ability of the various technologies to react quickly, converting the stored energy back to electricity, readily provides three primary functions: Energy Management (hours of duration) for load leveling or peak-period needs; Bridging Power (seconds or minutes of duration) assuring continuity of service, contingency reserves or UPS (Uninterruptible Power Supply); and Power Quality & Reliability (milliseconds or seconds of duration) in support of manufacturing facilities and voltage and frequency controls. The storage pipe concept could be applied to existing GT/CC plants, increasing hot-day output by 20/25% by injecting the stored air into the combustors, with or without humidification (Fig. 9). By applying the humidification concept, the air supply in a CAES plant could reduce the required storage volume by 30% or more, or increase the operating hours by 30% for a given cavern storage volume (Nakhamkin et al., 2004). In another proposed hybrid concept, a conventional gas turbine is coupled with storage and a separate unfired air expander for increased flexibility of operation. Using a 180 MW gas turbine, the plant output would exceed 400 MW (Fig. 10). The advanced-technology gas turbine, with 38% efficiency, can be operated independently when the cavern air supply has been drawn down (Nakhamkin et al., 2000). The separate expander (bottoming cycle) allows stored "green" wind energy to remain clean, without products of combustion.
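To get a feel for the cycle numbers above, the sketch below works through one day of a decoupled CAES plant. The charging ratio of 0.75 kWh of electricity in per kWh out is an assumed round figure, the heat rate matches the GT-CAES value quoted later in this chapter, and the train sizes echo the Markham project described below; none of this comes from a specific plant datasheet.

```python
# Back-of-the-envelope diurnal CAES cycle. The 0.75 kWh-in-per-kWh-out
# charging ratio is an assumption; 4010 kJ/kWh is this chapter's GT-CAES
# heat rate; the 300 MW / 540 MW train sizes are illustrative.
compressor_mw = 300.0    # off-peak compression load (also a large load sink)
expander_mw = 540.0      # peak generation capacity
charge_hours = 8.0       # overnight wind / off-peak charging window
energy_ratio = 0.75      # kWh of charging electricity per kWh generated (assumed)
heat_rate_kj = 4010.0    # fuel heat per kWh generated (chapter's GT-CAES figure)

charge_mwh = compressor_mw * charge_hours        # electricity stored overnight
gen_mwh = charge_mwh / energy_ratio              # electricity deliverable at peak
gen_hours = gen_mwh / expander_mw                # hours of peak generation
fuel_gj = gen_mwh * 1000 * heat_rate_kj / 1e6    # fuel burned while generating

print(f"overnight charge: {charge_mwh:.0f} MWh")
print(f"peak generation:  {gen_mwh:.0f} MWh over {gen_hours:.1f} h")
print(f"fuel consumed:    {fuel_gj:.0f} GJ "
      f"(~{heat_rate_kj / 3600:.0%} of generated electricity as fuel heat)")
```

The point of the exercise is the shape of the cycle: a 300 MW compression train charging overnight supports roughly six hours of 540 MW generation the next day, with fuel supplying the balance of the energy.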
A 100 MW system supported by a 45 MW gas turbine is another of several size options using available production gas turbines rather than specially designed combustion expanders. The first unit in Huntorf, Germany (290 MW) and the first unit in McIntosh, Alabama (110 MW) have high- and low-pressure sequential combustors and inline motor/generator-driven compressors. Advanced concepts of adiabatic compression and expansion, requiring Thermal Energy Storage (TES), have been studied in the US, and more recently in Europe. Such systems would ideally benefit renewable energy systems such as wind, solar and biomass, adding capacity with no premium fuel consumption. A diabatic CAES plant loses the heat from the compression cycle, which must be regenerated or added to the compressed air before it enters the turbine expansion cycle. Adiabatic CAES benefits from thermal energy storage to preheat the stored air, which then expands adiabatically through a sliding-pressure air turbine, with the added benefit that no CO2 is generated in the process. Such studies have been completed in Europe by 19 different partners, with the support and involvement of the European Commission through a research contract (Bulloch et al., 2004). Thermal storage devices such as the "Cowper" heat-storage devices of the glass and metallurgical industries were investigated as candidate thermal storage solutions. The study detailed concept sizes of 30, 150 and 300 MW respectively (Fig. 11). The study concluded that the Dutch electricity market is the most promising for mass storage, whereas Italy, Norway, Sweden and France are less promising but not ruled out; the Alpine countries, with their PHS, will be exceptions. A series of economic calculations using the national spot-market prices for the years 2001 and 2002, and a range of storage capacities, demonstrated that the opportunities are greatest in the Dutch market, with a plant storage capacity of about 3,000 MWh. The increased fuel costs in 2009 and higher equipment costs for WTGs will change the economics for AA-CAES storage/generation.

Fig. 11. Advanced Adiabatic CAES Concept (Alstom Power)

Smaller adiabatic systems suitable for isolated wind turbine installations, where no fuel is added, are under development, with the cold exhaust air utilized for cold-storage systems or advanced "freeze" desalination concepts. Such units of 500 kW and larger are ideal for wind power "smoothing" and distributed generation. The T-CAES 500 kW system can produce 3,600 liters/hr of fresh water from seawater or saline/brackish water (Enis et al., 2006). The European wind resources and potential are substantial and cover a large area suitable for wind/storage integration, beyond the current hydro plants and PHS already in operation (Fig. 12).

Benefits from energy storage

One of the first benefits would be to fully utilize capital assets, considering that the national average capacity factor is 58/60% for generation and 50/52% for transmission. Bulk energy storage will allow the most efficient units to be fully utilized and allow optimization of the generation mix. The integration of ever-increasing renewable sources such as wind with energy storage will bring a larger contribution to the generation mix. Furthermore, it will avoid the use of inefficient units burning premium fuels during peak periods. Needle peaks can be readily met with storage at the distribution level or with currently installed "peaker" unit capacity.
The market or economic benefits from energy storage can be quantified in four major areas of the electricity supply chain, namely: generation, transmission & distribution, energy services, and renewable energy storage. Projected benefits over a 15-year period for the US generation and T&D system could exceed $100 billion. Other benefits of wind storage are reduced water consumption, CO2 reduction, ancillary service value and transmission value, as part of the value chain illustrated in Fig. 13. Close to 90% of all new U.S. generation capacity added since 2005 has been a combination of natural gas and wind power. The U.S. electric industry faces dramatic transformations as it wrestles with the challenges of the 21st century. The capacity factor of wind requires that 3 MW be installed to displace 1.0 MW of base-load coal power, backed in turn by gas-fired power plants. This is a clarion call for integrating wind with storage technologies, increasing the clean mix of renewable and flexible technologies. Large-scale storage is the sixth dimension in the electricity value chain, one which can bring new possibilities to the utility industry given the growing mandates for 20/30% power generation from renewable energy, in particular wind energy. Note that this does not exclude the distributed power energy storage devices also illustrated in Fig. 13.

Water consumption

CAES can help conserve water: wind power does not require water, nor does a CAES plant. If wind were to provide 20% of our base-load electricity by 2030, using energy storage technologies, water use by the electricity sector would be cut by 17% in that year. Water is a precious commodity that current fossil plants demand in high quantities for cooling. Many new power plants are using air-cooled condensers to conserve water, especially in California and Nevada. The western United States faces critical water issues today, and renewable energy sources such as concentrated solar power (CSP), now under construction in high-solar-radiation areas, will face water curtailment for cooling, requiring dry cooling with performance degradation. CAES avoids water use entirely.

CO2 reduction

CAES increases the CO2 reduction contributed by wind energy displacing fossil power generation: assuming zero CO2 emissions for wind, coal produces 974 tonnes CO2/GWe·h and a gas-fired plant 464 tonnes CO2/GWe·h. Increasing the wind energy contribution from a variable and unpredictable resource to a dispatchable base-load contribution, raising the capacity factor from 30% to 55% or higher, would significantly mitigate the CO2 issues the power sector faces. The GT-CAES has a heat rate of 4010 kJ/kWh versus 6500 kJ/kWh (lower heating value, LHV) for a combined-cycle (CC) power plant: for comparative purposes, a 40% improvement over CC, and about a 64% improvement over an efficient open-cycle, fast-start, load-following aero-derivative gas turbine used to supplement shortfalls in wind generation due to wind variability. The CO2 reduction from CAES/wind integration is a significant factor in the overall economics, considering the CO2 sequestration programs being promoted.
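The emissions arithmetic implied by these figures can be made explicit. The sketch below uses only the chapter's own numbers plus one assumption of ours: that the CAES fuel has the same emission factor as the combined-cycle plant's gas, so CAES emissions scale linearly with heat rate.

```python
# CO2 comparison implied by the chapter's figures. Assumes (our assumption)
# that CAES burns the same gas as the CC plant, so its emissions scale
# with heat rate: 464 * 4010 / 6500 t CO2 per GWh(e).
coal_t_per_gwh = 974.0                            # chapter's coal figure
cc_t_per_gwh = 464.0                              # chapter's gas CC figure
caes_t_per_gwh = cc_t_per_gwh * 4010.0 / 6500.0   # ~286 t CO2 per GWh(e)

for name, t in [("coal", coal_t_per_gwh),
                ("gas CC", cc_t_per_gwh),
                ("wind + CAES", caes_t_per_gwh)]:
    print(f"{name:12s} ~{t:4.0f} t CO2 per GWh(e)")
print(f"displacing coal avoids ~{coal_t_per_gwh - caes_t_per_gwh:.0f} t CO2 per GWh(e)")
```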
Ancillary service value

The rapid-response capability of a CAES plant allows it to provide automatic generation control/regulation in both generation and compression modes. A CAES plant provides spinning reserve; a CAES plant with independent compression/generation trains can bring its full generation capacity online in less than 10 minutes. A CAES plant can also be considered to provide quick-start or operating reserve, with the ability to rapidly shed load while in compression mode, or to ramp up while in generation mode. CAES can also provide other ancillary services such as balancing energy and voltage/VAR support.

Transmission value

Strategic location of a CAES plant may be valuable from a transmission perspective. When located in an area where wind production is congesting the local transmission lines, the CAES plant has the potential to significantly reduce or eliminate congestion and increase grid efficiency by storing the wind energy and releasing it when the wind plants are at low output and more transmission line capacity is available. The CAES plant has reactive capability in both the generation and compression modes, and when configured with clutches it can operate in synchronous-condenser mode, providing and absorbing reactive power as needed. A CAES plant can provide a voltage reliability benefit to the transmission grid. Furthermore, CAES plant reactive support can be particularly useful in combination with wind plant operation, especially as wind plants have been manufactured with limited reactive capability and installed in weak areas of the grid. Conventional power plants have more reactive power capability and voltage stability, so when they are displaced by wind power these abilities are substantially diminished; the negative effects of wind on the transmission grid can be avoided with a CAES plant acting as a dynamic reactive system.

Future prospects (developments)

Pumped hydro has clearly demonstrated the value of bulk energy storage. While these benefits are recognized and utilized, new facilities have languished; projects in development do show promise and opportunities for implementation. The requirement for efficient clean-coal concepts such as IGCC (gasification) can be enhanced with storage systems to keep the plant at an 80% or better load factor during off-peak demand periods and deliver the added stored capacity during high demand. New concepts are being proposed, especially with the growing capacity of wind energy, currently backed by tax incentives. At 29 GW with projected substantial growth, energy storage and wind energy integration using CAES, flow batteries or ganged flywheels could lead to better economic utilization of a substantial resource now operating at below a 30% capacity factor; storage could drive this capacity factor to 65% or higher. Concepts outlined in a paper presented at the EESAT 2003 conference (van der Linden, S., 2003) suggested sub-surface storage using large-diameter pipes such as those typically used for natural gas transportation. Using a storage complex of 2,000 meters of pipe, a system providing 60 MWh (15 MW × 4 hr) could enhance power supply at remote wind farms. Introducing such smaller systems will help the industry gain confidence in the value of energy storage and gain operational experience without large expenditure. Smaller-capacity systems of 3 to 30 MWh serve a different purpose for smaller wind farms, primarily a "smoothing" function, decoupling power delivery and meeting short-duration peak-hour generation (Enis et al., 2006). Other proposed concepts would transmit stored air by pipeline to industrial areas, or bring power to where existing transmission is available. Permitting for new power lines is constantly challenged by environmental groups.
Pipelines to transmit the stored energy would help bring more renewable energy into the demand cycle. The overall principle of operation of the T-CAES system is depicted simply in Figure 14. Note that if the system efficiency is about 50%, then the excess power-versus-time area can deliver either the same excess power in half the time (first part of Figure 14, early in the time history) or half the excess power for twice the time (second part of Figure 14, later in the time history). To discuss the performance of the T-CAES system, several simplifying assumptions are made to readily demonstrate trends. First, the wind history has 24 identical successive 1-hour periods for each 24-hour day; this assumption produces the most effective tank volume. Second, an average power of 46.9% of peak power was used in selecting the shape of the power-history sinusoid. Third, the nighttime power requirement is a fraction of the daytime power requirement. Fourth, when daytime and nighttime power usage exceeded the T-CAES system's capability, a diesel backup was used (Enis et al., 2006). Figure 13 shows the wind turbine's 2,500 kW power dropping to 312.5 kW and then rising back to 2,500 kW during each successive 1-hour period. The system delivers 90 kW to the user during the 12-hour period at night and 910 kW to the user during the 12-hour daytime period. The 15.24 m long cylindrical storage tank, 2.56 m in diameter, permits the tank pressure to rise from 16.3 bar at the start to 82.75 bar at the peak, and to return to 17.3 bar at the end of the day. The cycle is therefore ready to repeat itself continuously. If the storage tank is made twice as long, say 30.48 m, then the power lulls can be extended from 1 hour to 2 hours. With a set of two 30.48 m long tanks, there can be a four-hour lull. If a wind-speed lull extends continuously for several days, then neither the T-CAES system nor an underground-cavern CAES system can support the user; the diesel system or a gas turbine is required. The T-CAES system is amenable to all geological and geographical locations. It falls in the power-versus-duration region where other energy systems do not apply (Fig. 4). It operates at high power levels (0.5 MW to 10 MW) and over many hours. It operates in three modes for different daily scenarios at the same facility: (1) electrical power generation, (2) chilled-air cogeneration, and (3) driving pneumatic equipment and tools. The T-CAES system provides electrical power "smoothing" so that even smaller wind turbines can deliver steady power histories to the user. When there is a differential between the price of electricity during the summer daytime and the summer evening, the T-CAES system provides "peak shaving" advantages. The T-CAES system provides backup power when the electrical grid is down or when the wind turbine is idle. The wind turbine generator would normally be at a remote, suitable location, except in some rural areas with co-ops or small-town municipalities. Where there are additional services from the T-CAES system, such as pneumatic power or chilling, the storage and generation system is co-located with buildings, as the T-CAES system does not burn fossil fuel.
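As a sanity check on the tank cycle just described (ours, not the chapter's), the stated tank and pressure swing can be evaluated with the isothermal ideal-gas exergy formula: the maximum work obtainable from air at pressure p in a fixed volume V, expanding isothermally against ambient p0, is W = V[p ln(p/p0) - (p - p0)].

```python
# Sanity check of the T-CAES tank described above, assuming isothermal
# ideal-gas behaviour at ambient temperature (our assumption).
import math

p0 = 1.013e5                      # ambient pressure [Pa]
length, diameter = 15.24, 2.56    # tank dimensions from the text [m]
V = math.pi * (diameter / 2) ** 2 * length   # ~78 m^3

def stored_work(p):
    """Max isothermal work [J] from air at pressure p in volume V."""
    return V * (p * math.log(p / p0) - (p - p0))

usable = stored_work(82.75e5) - stored_work(16.3e5)   # the daily pressure swing
print(f"tank volume ~{V:.0f} m^3, usable work ~{usable / 3.6e6:.0f} kWh per cycle")
# ~550 kWh: enough to carry the ~910 kW daytime load through roughly a
# half-hour lull, consistent with bridging the 1-hour wind dips during
# which the turbine still supplies part of the load.
```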
US storage geology, class 4+ wind resources and population density

The two maps of the US in Figs. 16 & 17 show an interesting perspective on where the high wind resources and the population density are located: many wind farms are distant from end users with high energy demands, resulting in transmission constraints as well as time-of-day generation that is generally not coincident with demand. This is also illustrated in Fig. 18, offset against increasing fuel costs. Smaller installations of one to three WTGs, while helpful, do not realize the full benefits of the investment; this also applies to some large schools and colleges that support the emissions-reduction effort in their communities, for which subsurface storage would be the right answer. The maps also indicate that the favorable wind resources in the Midwest coincide with domal and bedded salt as well as aquifers suitable for energy storage. The Iowa storage project using a deep saline aquifer is a good example of an association of Midwest municipalities taking the initiative to collectively harvest their wind resources.

Wind variability

Wind characteristics vary across different geographical regions, as shown in the chart in Fig. 19.

Projects in development

Several large CAES projects with different storage media are in development. Two are fully permitted and of particular note, even though the financial climate for new projects requiring major investment has slowed for such innovative concepts. The Storage Technology of Renewables and Green Energy Act of 2009 is funded by the US Department of Energy (DOE) to help launch storage projects in partnership with industry. Demonstration CAES projects of 10 to 50 MW with 2 to 5 hours of storage, for wind farm diurnal load shifting and ramping control, will benefit from a $50/60 million cost share. Both above-ground and below-ground CAES projects will be considered for demonstration. At least 35% lower CO2 emissions than simple cycle are expected, with a predicted economic payback based on 24 months of project data. These projects must be ready for operation within 4 years of project award. Some of the larger projects in development could benefit from the US DOE's serious consideration of energy storage based on increasing wind power generation.

Iowa Stored Energy Project (ISEP)

This project, under development by the Iowa Association of Municipal Utilities, promises to be exciting and innovative. The compressed air will be stored in an underground aquifer, and wind energy will be used to compress air in addition to available off-peak power. A separate section of the underground aquifer will be utilized for the storage of natural gas, allowing the CAES facility and other utilities to purchase gas when prices are lower. The plant configuration is for 200 MW of CAES generating capacity with 100 MW of wind energy. While wind might be the lowest-cost generation system, it is variable and not reliable as a constant source. CAES provides the "battery" storage for wind energy and makes wind energy a dispatchable resource. CAES will expand the role of wind energy in the region's generation mix and will operate to follow loads and provide capacity when other generation is unavailable or uneconomic. The underground aquifer near Fort Dodge has the ideal dome structure, allowing large volumes of air storage at 80 bar pressure or more with an injection depth of 3,000 feet (Fig. 20).
With recent funding from the US DOE, several exploratory wells have been drilled under the guidance of The Hydrodynamics Group team led by Michael King, a natural gas engineer with 30 years of hydrogeology experience. The aquifer has been defined, and additional test wells will need to be drilled prior to injection tests to determine the permeability of the sandstone. The results are promising thus far, and further progress towards the first aquifer storage for air is well on the way. Other states, such as Illinois, also have this potential for wind and storage ("Energy Storage Options for Central Illinois", Makansi et al., 2003), but Iowa is in the forefront, possessing a site ideal for a CAES power plant and wind farm. These development plans have a future vision for the value of carbon reduction: adding reliable renewable resources with storage concepts such as CAES. In reality, there is no shortage of potential projects and suitable sites for bulk energy storage development (van der Linden, S., 2006); what is missing is an energy policy or incentive to implement the advantages and benefits demonstrated by natural gas storage and the pumped hydro storage now serving the nation's power system. This long-term lack of support is now getting some in-depth consideration from the US DOE, driven by growing wind farm developments.

Project Markham, Texas

This 540 MW project can deliver its full 540 MW in less than 15 minutes. This is a tremendous value to the grid, providing reserve capacity before cycling of base-loaded plant is required. The variable capacity range would be 840 MW (300 MW compressor + 540 MW generator). NOx emissions will be controlled to 5.0 vppm or lower with SCR in the HRU. The site has salt-dome cavern storage suitable for high-pressure air storage and is unique in that natural gas storage is available on the site as well. This is ideal, as energy can be arbitraged either as electrons (electricity) or Btus (natural gas), or a combination of both. Compression trains totaling 300 MW, for the required shorter off-peak charging period, will also act as a very large load sink on the system. The project stalled some years ago when funding ceased due to financial circumstances; the same group contributed to a study commissioned by the Texas State Energy Conservation Office to look at the impact of CAES on wind in Texas, Oklahoma and New Mexico (Desai et al., 2005). The study results were positive in spite of very little daytime-to-off-peak spark spread: "We have been working on combining compressed air energy storage (CAES) with wind generation in West Texas, and we have shown that storage can actually reduce the burden on the transmission system. In fact, storage can allow additional wind generation capacity to be served within an existing transmission plan. Our estimates suggest that a combined wind and storage project would be able to produce shaped, dispatchable energy for less than 5 cents per kWh with the capacity benefits of traditional thermal generation, with over 90 percent renewable content." The conditions are different in 2009, and new developments are expected.

Norton Energy Storage, Ohio

One of the first potential CAES projects in the USA, developed by Haddington Ventures, Inc., is the huge facility at Norton, Ohio, which is permitted for 2,700 MW of capacity; as a commercial project, when completed it will be one of the largest bulk energy storage facilities, including PHS, built in the USA.
As originally planned, this will consist of 9 × 300 MW (or larger) nominally rated CAES units supported by an underground storage cavern volume of 120 million cubic meters, 722 meters below the surface, originally mined in a limestone formation. Fig. 21 attests to the dry cavern walls and the height of the pillar-and-post mining that created the large storage capability.

Fig. 21. Norton Limestone Cavern (Hydrodynamics Group)

Using 200 MW (4 × 50 MW) compression trains for each 300 MW power train will allow for 16 hours of generation per day, 5 days a week. Four units producing 1,200 MW could operate for four 16-hour days without requiring recharging of the cavern. With more available surface space, the cavern volume could support 5,400 MW or more for 8 to 10 hours of operation, 5 days a week. This cavern was originally permitted for a PHS that would support only a small fraction of that capacity. The project has had many stops and starts, and is now once again being developed by Haddington using the McIntosh turbomachinery arrangement with an uprated version of 135 MW modules; while this changes the dynamics of the original planning, it allows for a lower initial investment. Decoupling the compressors from the power train will allow the compressors to be located away from the storage cavern, allowing more generating units to be located on the site and so reach the full potential of this storage facility. With this modular approach, capacity could be added over 5 years, allowing full integration in Ohio and the East Central Area Reliability (ECAR) region.

What are the economics of CAES systems?

The economics are summarized in Fig. 22 below. The additional cost of energy storage when integrated with wind, often cited as uneconomic, is not borne out by the facts. Wind power, for example, costs $2,000/kW (or higher) of nameplate rating for onshore installations; however, when based on a deliverable-power capacity factor of 30%, the equivalent cost relative to a base-load plant rises to $4,500/kW. A CAES plant would add $750/kW, with the same fuel consumption as a gas turbine at $350/kW. The CAES plant can readily improve the wind deliverable power to a 45% to 55% capacity factor or higher, reducing the $4,500/kW basis to $3,000 and $2,500/kW respectively; the difference pays for the integrated CAES plant. This is a rough analysis, as many other factors enter the actual economic screening.
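The arithmetic behind these $/kW equivalences is reproduced if the nameplate cost is normalized to a base-load capacity factor of about 67%; that normalization is our inference, not stated explicitly in the text.

```python
# Reproducing the chapter's $/kW equivalences. The ~0.67 base-load
# capacity factor is our inference; it is not stated in the text.
def baseload_equivalent(nameplate_usd_per_kw, capacity_factor, baseload_cf=0.67):
    """Cost per kW of firm, base-load-equivalent capacity."""
    return nameplate_usd_per_kw * baseload_cf / capacity_factor

wind = 2000.0   # $/kW nameplate, onshore wind
for cf in (0.30, 0.45, 0.55):
    print(f"CF {cf:.0%}: ~${baseload_equivalent(wind, cf):.0f}/kW")
# CF 30%: ~$4467/kW (chapter: ~$4500); CF 45%: ~$2978 (~$3000);
# CF 55%: ~$2436 (~$2500). The $1500-2000/kW saving comfortably covers
# the ~$750/kW CAES addition.
```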
Conclusions and recommendations

The current storage concepts are ready for deployment. Storage needs to be implemented, in particular for wind energy, not just in the US but in all developing countries. The biggest impact is probably the flexibility of operation: economic dispatch to meet market needs, and absorbing excess capacity or large load swings with compression, are powerful market tools. It is possible to improve energy management and obtain better value from bulk power purchases and sales, and to reduce risks and vulnerabilities from fuel price shocks. Fuel-price volatility in the US will always be a factor; long-term projections show that natural gas prices will continue to rise with increased demand, which cannot readily be met from new sources other than LNG imports. The trend of increased harvesting of wind energy will put further stress on grid reliability. This is already manifested in Europe, where a far greater percentage of the generating base is committed to the variances of wind power production.

Most importantly, bulk energy storage will "buffer" utilities from the lack of spinning reserve and load-following capability that results from the many independent wind generating farms installed in the last 5 years and the substantial planned capacity. It will remove concerns about power quality and new threats to reliability. CAES as a generating asset has capacity value as if it were a thermal asset, is fully dispatchable, and has a low emissions profile. Using liquid air as an energy storage medium could be a potential solution for locating storage for wind energy closer to load centers. This concept is proposed by Expansion Energy LLC. Cryogenic tanks store the liquid air at relatively low pressure; when required, the liquid air can be pumped to 42 bar to suit the prime mover, then preheated and vaporized with the GT exhaust to feed an expander generating unit. Like many innovative concepts, ideas such as this will need further development and investigation for wind integration. Eliminating the use of fossil fuel, as in the adiabatic compression and expansion discussed above, should get further attention and development funds for demonstration. Austin Energy in Texas has taken a step in this direction, proposing the incorporation of solar energy in the thermal storage integrated with wind and CAES as a dispatchable hybrid wind/solar power plant (Fig. 23); the study is being cooperatively supported by the University of Austin. Energy storage provides security, reduces transmission constraints, importantly extends (optimizes) the capabilities of efficient clean-coal plants, reduces emissions, and above all enhances and integrates wind energy as a valuable renewable resource. It provides load management, rapid-response frequency and voltage control, spinning reserve, black-start capability, and support for distributed generation. Energy storage and wind integration will be a paradigm shift for the entire utility system.
Premises for the Penetration of Eastern Thought in Bulgaria

There are many factors that facilitate the reception of Eastern thought in Bulgaria. In this paper, I consider two groups of premises: spiritual heritage and political history. In the first part, I discuss the peculiarities of Christianity in Bulgaria as well as some specifically Bulgarian answers to the spiritual quests at the beginning of the last century, especially the ideas of Dunovism with its ambivalent attitude to Eastern teachings. In the second part, I consider the period of socialism in Bulgaria, with the impact of atheistic education and the activity of Lyudmila Zhivkova, and I again take up the influence of Dunovism.

Introduction

In recent years there has been great interest worldwide in Eastern practices and teachings. Buddhist meditation, yoga and martial arts are increasingly becoming part of Western life. Researchers observe: "Yoga today is a thoroughly globalized phenomenon. A profusion of yoga classes and workshops can be found in virtually every city in the Western world and (increasingly) throughout the Middle East, Asia, South and Central America and Australasia… yoga has taken the world by storm" (Singleton & Byrne, 2008: 1). Observations about Buddhism are similar: "…during the last two decades, Buddhist groups and centres have flourished and multiplied to an extent never before observed during Buddhism's 150 years of dissemination outside of Asia. For the first time in its history, Buddhism has become established virtually on every continent" (Baumann, 2001: 4). While this spread of Eastern teachings is a global phenomenon with global reasons, each country has its own specificity in its manifestation and in the premises that influence it. Bulgaria is among the countries with a durable and stable interest in these teachings, especially yoga. It is commonly recognized that during the period of socialism the attitude to yoga in Bulgaria was an exception among socialist countries (see Tietke, 2011), that yoga is now very well developed in Bulgaria, and that in Indian ashrams groups from small Bulgaria are among the biggest (according to my respondents from Ireland and Germany). What are the reasons for this interest? In this paper, I outline some peculiarities that might influence the reception of Eastern thought in Bulgaria. They can be distinguished into two groups: spiritual heritage and political history.

Spiritual heritage

Historical peculiarities of Christianity in Bulgaria, as well as specifically Bulgarian answers to the spiritual quests at the beginning of the last century, are among the most important factors in the specific spiritual heritage of Bulgaria that influences the reception of Eastern teachings there. The historical peculiarities of Christianity in Bulgaria include the role of early spiritual teachings, the peculiarities of Orthodox Christianity, and the role of the church during the period of Ottoman rule.

Early spiritual teachings

In the lands of contemporary Bulgaria, early Christianity spread and distinctive Christian teachings developed: Bogomilism and Hesychasm. Bogomilism is a special form of spiritual and social teaching, an "extremely spiritualized Bulgarian Christian movement" (Vachkova, 2017: 10). It has received ambiguous interpretations, being "understood as yet another dualistic heresy, or else as a specific Bulgarian mission" (Vachkova, 2017: 7).
Therefore, it is presented either as a modification of "Eastern dualistic teachings" or "as a unique Bulgarian conception" (Vachkova, 2017: 10). Bogomilism proclaimed a strict distinction between matter and spirit. As the Bulgarian historian Vesselina Vachkova points out, "Parallel to this development, partly as a reaction against its manifest dualism, there emerged in the local Christian communities, and began to spread, an equally austere and spiritualizing, while definitely non-dualistic at dogmatic level, doctrine, namely, Hesychasm. With its meditative ascetic practices Hesychasm was to try and fail to sanctify the body through the belief in human ability to overcome it even during one's earthly existence and contemplate the divine light within one's heart and mind" (Vachkova, 2017: 33-35). The influence of both teachings on the spiritual heritage can be sought in two directions. On one side, they place a strong accent on inner practices and a striving for spiritual development, teaching that only the inner path can lead to God. On the other side, they are connected with original apostolic Christianity and keep alive the vision of a Christianity different from the official institutions that were established later.

Peculiarities of Orthodox Christianity

Orthodox Christianity has some peculiarities that make it closer to Eastern traditions than Catholicism and Protestantism. It is usually interpreted as more mystical, while Catholicism is interpreted as more rational, and Protestantism as more directed towards outward practice. The mysticism of Orthodox Christianity has several interconnected aspects: its apophatism, and its grounding in spiritual experience. Orthodox Christianity claims that God cannot be explained through rational methods and cannot be grasped within rational categories. It denies the possibility of understanding and expressing God's essence. Apophatism is singled out as Orthodox Christianity's most specific feature: "If Orthodox theology was characterized by a single stream, then this would surely be the apophatic orientation of the whole theological view. All the true Orthodox theology is apophatic in its root. Apophaticism is 'a fundamental characteristic of the whole theological tradition of the Eastern Church'. This negative (apophasis) view of non-knowledge (agnosia) or learned ignorance begins with the praise of God's mystery, not with a rational annihilation or explanation" (Clendenin, 2011). In tune with the conviction that the essence of God cannot be grasped, Orthodox Christianity underlines that He can be explained neither as only transcendent nor as solely immanent; immanence and transcendence lead to each other. According to the Orthodox theologian V. N. Lossky, "In the immanence of Revelation, God affirms Himself as transcendental to creation" (Lossky, 1972: 133). The idea of the mutuality of these opposites is present in the idea, central to Orthodox Christianity, of Christ as a God-man. According to Orthodox Christianity, in his personality "the transcendental divine Truth has become immanent to the man, objectively immanent, and represents a direct, ever-living historical reality. In order to make it his own, subjectively immanent reality, man should, through exercise of his God-man virtues, make the Lord Christ the soul of his soul, the heart of his heart, the life of his life" (Popovich, 2003).
The main aim of the Orthodox Church is "to lead a person to theosis (or divinisation), to fellowship and union with God" (Ierofey, 2009), to transform man into a God-man. Fyodor Dostoevsky, for example, sees a sin of Catholicism in its attempt to replace the God-man with the man-God, shifting the accent in this way from the divine to humanity. The striving for inner transformation in Orthodox Christianity is connected with the importance of practices for inner development. Because of the mystical approach, inward practices are of great importance here. The main aim cannot be achieved by reasoning, but by a genuinely transformative life: "Orthodox spirituality is an experience of life in Christ, the atmosphere of a new person, revived by the grace of God. This is not about an abstract emotional and psychological state, but about the unity of man with God" (Ierofey, 2009). The spiritual transformation of man is his real salvation, which "is not a problem of the mental perception of truth, but the transfiguration and divinization of man by grace" (Ibid.). This transformation is interpreted as a "spiritual healing" that should be lived, not rationalized, and this is seen as a main difference from the other Christian denominations, which "do not have a tradition of spiritual healing" and "believe that rational faith in God is our salvation" (Ibid.). According to Orthodox Christianity, "there is faith from hearing the Word and faith from contemplation, the vision of God. First, we accept faith from hearing in order to be healed, and then we gain faith from contemplation that saves a person" (Ibid.). The most important method for Orthodox Christianity is constant, real spiritual practice. This practice consists of purification, enlightenment and divinization, processes that "do not mean the stages of anthropocentric activity, but the results of God's uncreated energy" (Ibid.). Apophatism and inner development are interconnected: "At the moment they cross the line from a verbal prayer to a contemplation and gain a few faithful followers, Christian ascetics become mystics. They seek and find enlightenment not in the definitions of heresy or orthodoxy, but in the depths of their souls, where they seem to discover the purpose of their escape from the world. For the mystics, God is no longer a Creator or the root cause of all things but a state of mind" (Glishev & Sharankova, 2010). The aim of the mystical path, and of the mystical apophatic union that denies the abilities of senses and intellect, is described profoundly by Dionysius the Areopagite in his Mystical Theology: "…direct our path to the ultimate summit of your mystical knowledge, most incomprehensible, most luminous and most exalted, where the pure, absolute and immutable mysteries of theology are veiled in the dazzling obscurity of the secret Silence, outshining all brilliance with the intensity of their Darkness, and surcharging our blinded intellects with the utterly impalpable and invisible fairness of glories surpassing all beauty…" "…in the diligent exercise of mystical contemplation, leave behind the senses and the operations of the intellect, and all things sensible and intellectual, and all things in the world of being and nonbeing, that you may arise by unknowing towards the union, as far as is attainable, with it that transcends all being and all knowledge. For by the unceasing and absolute renunciation of yourself and of all things you may be borne on high, through pure and entire self-abnegation, into the superessential Radiance of the Divine Darkness."
It is obvious that this mystical path has much in common with Eastern teachings based on mystical experience. Eastern Christianity thus shares features with some Eastern teachings and in many aspects was directly influenced by them. According to many authors, "the roots of Christian mysticism can probably be found outside Christianity itself… In cultural and historical terms, the traditions that most likely influenced early Christian mysticism are the Neo-Platonic and Mithraism, as well as some other Oriental cults" (Glishev & Sharankova, 2010). So in Orthodox Christianity there is a great theoretical and practical (in the sense of inner practices) propinquity with Eastern teachings and with the Eastern vision of and approach to reality. This similarity is acknowledged: "In Eastern religions, of course, one can find a desire to purify the mind of images and thoughts" (Ierofey, 2009). This, however, is interpreted as a very superficial likeness: "this is a movement to nowhere, into non-existence. There is no way that would lead to the divinization of man" (Ibid.). Therefore, Orthodox Christianity insists that "Orthodox spirituality and Eastern religions are divided by a vast abyss, despite some external similarity in terminology. For example, Eastern religions may use the terms ecstasy, impassivity, intuition, mind, enlightenment, and so on, but they are filled with completely different content than the corresponding terms of Orthodox spirituality" (Ierofey, 2009). Besides, Orthodox Christianity is a religion that can be described as closed and not inclined to engage with different traditions; it regards everything outside itself, including other Christian denominations, as heresy.

Role of the church during the period of Ottoman rule

The role of the church during the period of Ottoman rule was ambivalent. On one side, during this period the Church was ruled by a non-Bulgarian authority and was subject to a critical attitude. On the other side, Christianity, as well as the fight for an independent Church, was the most important factor for unity and for the feeling of self-identity and belonging. It is common knowledge that Christianity preserved Bulgarian self-identity through the centuries. So in Bulgaria there is, on one side, a heritage of strong mystical lineages in theoretical and practical kinship with Eastern teachings; on the other side, at the level of the Church as an institution, there is a strict closedness and a lack of disposition for discussion and dialogue.

Theosophical, mystical and occult quests at the beginning of the last century

Along with the undoubtedly mystical and inwardly oriented Christian tradition, in Bulgaria, as elsewhere in Europe, there was interest in Theosophy and other "occult" teachings that facilitated the penetration of Eastern teachings at a later time. A very important figure who made connections between different kinds of knowledge and perceptions of the world was Nikolay Raynov. He wrote many books: collections of fairy tales from all over the world (1930-1934) in 30 volumes; Eternal in our Literature in 9 volumes; History of Plastic Arts in 12 volumes; Association Roerich (1930); etc. Significant for the rethinking of the Bulgarian spiritual heritage was his book Bogomil Legends, where Bogomilism was presented as the heretical ferment of all reformist spiritual movements, the connecting link between Eastern and Western religious systems, a model of a syncretic religion, a core containing the world's esoteric knowledge.
Another important figure who made a link between East and West in terms of art was the painter Boris Georgiev, named "an artist of the Spirit between East and West". Boris Georgiev was famous for his journey to India, his meetings with Mahatma Gandhi and Rabindranath Tagore, and his travels to the Himalayas. He regarded Nikolay Roerich as his spiritual master and, as we will see, Roerich would play a special role in the opening towards the mystery of Eastern culture.

Original Bulgarian contribution to the spiritual quests of the last century

The most important Bulgarian answer to the spiritual quests at the beginning of the last century, however, was Dunovism. Dunovism, or the teaching of the Universal White Brotherhood, was established in Bulgaria in the early 20th century. Dunovism was named after its founder Peter Dunov (1864-1944). Peter Dunov received his higher education at the Faculty of Theology at the University of New Jersey and the Theological Faculty of Harvard University in Boston. He studied medicine as well. During his stay in the United States, he gained knowledge of theoretical, especially Protestant, theology and was impressed by the ideas of Theosophy, occultism and Eastern philosophy. The combination of all these ideas influenced the teaching he developed upon returning to Bulgaria. In many aspects his teaching is original, but it undoubtedly uses some concepts and ideas from Theosophy and Bogomilism, as well as from Eastern, especially Indian, teachings. It gives a comprehensive overall vision of the state and development of the human being, regarding the human as a being evolving toward his higher and divine spirituality, and it provides methods for this development. Several features of this teaching facilitated, or were in tune with, the Eastern teachings that came later. First, it creates an atmosphere of rethinking the attitude toward Christianity and the church. It defines itself as representing the true, authentic Christianity as it was before it was distorted for reasons not connected with spirituality. Christ is considered a Teacher, not the Son of God, and God is perceived as an impersonal pantheistic or panentheistic (in the Western interpretation) essence. Dunovism also claims to be an authentic Bulgarian teaching, the crown of a spiritual triad that includes Orphism and Bogomilism. These three teachings are interpreted as three great spiritual waves that give Bulgaria a very important place in the spiritual culture of humanity. Dunovism underlines the significance of Bulgaria as a spiritual centre. According to it, a symbolic representative of the sacredness of the place is Rila Mountain, the highest mountain in the Balkan Peninsula. It is revealed as being as important for the spiritual heritage and development of humankind as are the Himalayas in India. In this way, a spiritual kinship between the Indian and Bulgarian cultures is outlined, and this adds to the greater interest in Indian culture and ideas. In terms of structure and organization, there is no formal membership in Dunovism; the followers may belong to whatever church they want. Dunovism is organized on the model of free structures of friends or like-minded people who follow, out of inner conviction, the authority of a teacher and the charisma of a person rather than established institutions or governing bodies. In this, it is also in tune with Eastern teachings, which in Europe are structured around personal rather than institutional relations.
Second, many concepts and ideas of Dunovism have direct parallels to Eastern teachings. Dunov widely uses some important categories of Eastern philosophy, such as karma, reincarnation, prana and nirvana, regarding them, however, not as belonging to one single culture but as belonging to an old, authentic, universal and forgotten knowledge. The overall vision of Dunovism is in tune with many ideas of Indian thought, especially with the idea of unity and oneness. In the following words, for example, we can see parallels with Vedanta and Buddhism: "There is unity in being. For example, you stand among a hundred mirrors and see yourself in a hundred places in different poses. Your reflection is in hundred places, but you are one. All those you see in the mirror of Being are shadows, and the One who is outside the mirror is real ... The humiliation of one person is the humiliation of others. The success of somebody is our success. The virtues of all people are our virtues. The mistakes of others are our mistakes. There is one life. Life in us and in all beings is the same. In some chosen moments of life, when you are in an uplifted state, the great truth about the unity of all flashes for a moment and then you return to your ordinary consciousness ..." (Dunov, 2019: 367) Third, very important for Dunovism is the idea that the truth cannot be achieved through reasoning and intellectual speculation. The most important things are the inner spiritual path and spiritual practices. "Contrary to the understanding of Western European philosophy in general, mainly in the face of German transcendental philosophy, that the truth could be achieved speculatively, that is, without experience, the Master shares the understanding that the great spiritual discoveries and understanding of the truth could not be revealed without real moral uplifting and purification, without awakening the latent spiritual powers and abilities of human being. The emphasizing of practice and placing the experience in front of speculative knowledge (in this case, metaphysics) does not at all underestimate the latter. It, however, can only come to life after the results of lived experience or a specific spiritual practice are achieved" (Bachev, 2009). By accentuating the importance of practice, Dunovism is in tune with all Eastern teachings, where the spiritual and moral practices are the starting point and the aim of all theoretical ideas and considerations. In its practices, Dunovism has many parallels with Eastern teachings as well. The main practices are based on meditation and breathing exercises. Even the names are sometimes of Indian origin, as for example "Surya Yoga" or "Yoga of the Sun". This meditation is practiced between the spring and autumn equinoxes at sunrise, which is interpreted as a time of renewal. Just as the sun is the center of the solar system, through meditation practitioners should establish a connection with the center of their being. So, Dunovism has many similarities with Eastern teachings and shares many common ideas with them. Dunov explicitly states that "the philosophy of ancient India gives all these answers" that the contemporary church fails to provide. Like Dunovism, Indian philosophy reveals that "the man is an immortal being, who is reborn by perfecting himself in order to attain 'nirvana', to merge with the Whole and to live in eternal bliss". Peter Dunov uses the very term "yoga".
For him, a "yoga" is a person who has reached the most essential core of his being and acquired many virtues: "every person must first work to achieve his essential core. Do not strive to achieve all virtues at once. It is enough for you to attain one virtue every year… If you achieve one virtue every year, in 25 years you will acquire 25 virtues, and a person with 25 virtues is already yoga. Becoming man of yoga it is enough just to raise his hand and the living nature will answer him. It knows him and therefore meets all his wishes" (Dunov, 1926). Yoga, however, is interpreted not in its concrete Indian realization but as a name for a high spiritual state of being. Peter Dunov distinguishes his teaching and practices from the Eastern ones. Regarding yoga, he explicitly states: "Most of the people who practice yoga cannot understand the thinking of Indians at all. They view exercises and meditation techniques as some kind of healing and relaxation exercises. They do not realize that these are powerful weapons for building and displaying a certain type of consciousness and behavior. And that this awakened consciousness from the past will bring them back thousands of years, long before their already Christian definition" 2. Therefore, he insists that "Their methods involve great risks and they are not adapted to the physical body of the European. Remember this well" (Ibid.). According to him, Hindu methods are "inapplicable to Europeans". Explaining this inapplicability, he uses ideas that can be found in Indian cosmogony. According to this cosmogony, humankind alternately passes through periods of the descent of spirit into matter and of the ascent of matter to spirit. The first phase is interpreted as involution, the second as evolution. Peter Dunov insists that yoga practices were created in the previous period of involution, while after the coming of Christ we are in a new period of evolution. Having been created for a different period of the development of humankind, yoga practices accord with a vision of reality that is no longer suitable for the new direction of development. After Christ, who brought the religion of love, mutual help and forgiveness, we need another type of exercise. Peter Dunov offered many kinds of physical, musical and breathing exercises that could help the physical, spiritual and mental growth of a person. The most important method is Paneurhythmy. Paneurhythmy is a circle dance in accordance with "the supreme cosmic rhythm". Its most important features are that it is performed in accordance with the rhythm of the sun in its daily and annual cycles, and that it is performed not individually but in a group, thereby accentuating the mutuality among human beings, on the one hand, and the mutuality of human beings and the cosmos, on the other. In the English version of Wikipedia, Dunovism is defined as a "New Age-oriented new religious movement" 3. It differs, however, from most such movements in that it had a stable presence in the spiritual and cultural life of Bulgaria through the period of the two world wars, becoming the most influential occult-mystical teaching. Proof of its success in attracting people is the very negative reaction of the Bulgarian Orthodox Church to its ideas. Dunovism had a great impact among intellectuals, and this persisted through the communist period as well, in spite of discrimination. After the changes in 1989, it acquired new popularity and developed rapidly.
It has complex relations with Eastern teachings: on the one hand, mutual respect and kinship; on the other, a strict distancing from these "out-of-date" teachings. Nevertheless, in practice there are many common contacts, initiatives and performances with groups who practice Eastern teachings. Many followers of these teachings have, at one or another period of their development, some contact with the ideas of Dunovism, and sometimes this Bulgarian teaching is their starting point toward the Eastern teachings, a milestone, or even the final stage of their spiritual journey.

The supposed kinship of the Bulgarians with the East

Within this spiritual heritage, a specific premise for a positive attitude toward Eastern teachings is the idea of a kinship between the old Eastern cultures and Old Bulgarian culture. There are many examples of this idea. Here I will cite the website of the Bulgarian Yoga Federation, where the presentation of the history of yoga in Bulgaria begins with the following affirmation: "What is the connection between this ancient science and the Bulgarian people, whose roots also come from antiquity? The studies of our historians P. Dobrev, Sl. Tonchev and others show that the origin of Bulgarians is in the Far East, in the Imeon Mountains, located in Tibet. According to other studies, the Bulgarians, called in those times "honuri", lived at the border of contemporary China and India. The ancient Indian literary source, the Mahabharata, which contains one of the first historical accounts of Bulgarians, tells us about "bolhiki", a people with their own way of life and culture… In the preserved stone inscriptions of the Proto-Bulgarians the most common (157 times) word is the word IYI - "yug", which is the main root of the word "yoga". Of particular interest is the seven-beam rosette found in Pliska, where this word is the main one in the inscriptions on its rays. The word "yug" or "yu" means "yoga, god, and union with god." These inscriptions are interpreted by V. Luchanski as mantras (words with sacred sound) that correspond in sound to the seven chakras (according to yoga - energy centers - vortices of the human body). The inscriptions on the rosette were read and translated by Swami Chitananda, the head of the ashram in Rishikesh, North India. According to him, the inscriptions are in the ancient sacred language of the yogis, and in translation their content is the following: "Get rid of duality through yoga. Relive the suffering through yoga". 4 Many contemporary books of the so-called "folk history" discuss the mutual influence of Old Bulgarian culture and Eastern cultures, and even the impact of Bulgarian culture on the development of the old Indian and Chinese cultures. Writers in this field find many proofs of this impact. They consider the Old Bulgarian calendar the prototype of the old Chinese calendar, regard Laozi as a Bulgarian thinker, and so on. Here I will not comment on the validity of these assertions; I want only to emphasize that in some circles in Bulgaria there is a conviction of such kinship. Therefore, in these circles Eastern teachings are accepted as a return to one's own roots, as a remembering of one's own forgotten tradition.

Peculiarities of the socialist period

The first substantial introduction of Eastern teachings into Europe took place during the period of socialism. This was a time of European openness towards teachings and spirituality coming from India and the Far East. In Bulgaria, this interest had specific reasons and manifestations.
Consequences of atheistic education

One of the most important specifics of the period was strong atheistic education. Atheism had at least two important consequences for the penetration of Eastern spirituality into Bulgaria. On the one hand, it created a specific spiritual vacuum and the necessity to fill it with some alternative. On the other hand, the lack of attachment to a particular religious ideology, combined with the nurturing of a critical and curious attitude toward the unknown, created openness and an ability to accept new and unknown ideas without prejudice. I would like to emphasize, based on my own observations and experience, that atheist education, paradoxical as it may sound, does not narrow the worldview. In Bulgaria, religious ideology was replaced by Marxist ideology. To a great extent, this ideology does not offer ready-made solutions to problems; it is based on the laws of dialectics, and one of its main goals is to search for the causes and roots of events, not to present ready-made information. It is true that these causes and roots are interpreted in terms of materialism. Even so, however, the ideology is based on an elaborated philosophy. That is why education during this period was focused on creating curiosity about the unknown and on developing a searching mind that is not satisfied with ready answers.

Activity of Lyudmila Zhivkova

Another important factor during this period was the activity of Lyudmila Zhivkova, the daughter of the communist leader of the country at that time and head of the Bulgarian Committee for Culture (1975-1981). Inspired by the ideas of esoteric Eastern spirituality as presented by the Russian artist and philosopher Nikolay Roerich, in a very short period she opened Bulgaria to intensive cultural and spiritual communication with India and the Far East. As we mentioned, the Russian painter, writer, archaeologist, Theosophist, philosopher and public figure Nikolay Roerich had a special place in the spiritual life of Bulgaria in the first part of the last century. During the seventies, Lyudmila Zhivkova adopted as her personal mission his ideas about the evolution of human civilization, the union of the cultures of East and West, of new and ancient knowledge, and of science and religion, as well as his vision of the high mission of art for brotherhood among people in the name of universal harmony and beauty. Holding a high position in the Bulgarian government, she was able to put the ideas that inspired her into practice, and within several years she carried out a great range of activities. Following the ideas of Roerich, she developed long-term programs for "Harmonious Development of Human Personality" and "Peace through Culture". She initiated a "Decade of great personalities" and organized wide presentations of the ideas of Roerich, Leonardo da Vinci and Lenin. The list was to be continued with Rabindranath Tagore, but her sudden death prevented this. The year 1978 was announced as the year of Nikolay Roerich. Lyudmila Zhivkova presented him as follows: "With his thought Roerich soars above the peaks of the Himalayas, rushes into the hidden world of legends, merges in an unstoppable creative impulse of the spirit with a monolithic in its integrity Universe" (Zhivkova, 1979: 22). In accordance with the ideas of Nikolay Roerich, she organized the Children's Assembly "Banner of Peace" under the motto "Unity, Creativity, Beauty", initiating an attempt to educate the new generation in a completely new kind of thinking and consciousness.
With the same aim, she tried to develop new sets of interdisciplinary laboratories and entire institutes based on the unique experience of the Institute of Himalayan Studies "Uruswati", created by the Roerich family in the Kulu Valley of India. One of her realized projects was the building of the National Palace of Culture, which is rich in esoteric symbolism. All these activities contributed to acquainting a wide lay audience with new ideas of a strongly spiritual character and Eastern origin. Lyudmila Zhivkova herself practiced yoga and meditation. She travelled several times to India and invited and welcomed many Indian teachers to Bulgaria. They gave lectures, held workshops and initiated followers, and for a relatively short time interest in Indian culture and spiritual heritage was official Bulgarian policy. Therefore, since the end of the 1970s, yoga teaching in Bulgaria has had an official history and state support; in 1978, a yoga section was established within the Bulgarian Union for Physical Education and Sport. The activity of Lyudmila Zhivkova, like that of the White Brotherhood, is the subject of many contradictory assessments. Nevertheless, the influence of Dunovism, which prepared the inner spiritual soil for accepting the Eastern ideas, and the activity of Lyudmila Zhivkova, who introduced these ideas as state policy, form a unique combination that facilitated the wide spread of yoga, meditation and various Far Eastern spiritual teachings after the changes at the end of the 1980s.

Conclusions

Bulgaria is at the border of East and West. It is "the East" for other parts of Europe. Its spiritual heritage is close to some Eastern theoretical ideas and practices, especially those of Indian origin. Bulgarians feel an inner kinship with the spiritual ideas of the East, and some of them regard the land of Bulgaria, with the Rila Mountain, as being as sacred and spiritual a place as India and Tibet with the Himalayas. Bulgaria itself is recognized by spiritual gurus coming here as a country that still does not suffer greatly from the negative effects of globalization. Atheistic education during communism, combined with a very good level of general education, makes Bulgarians open and free to study and explore new ideas. It is important to note that Bulgaria has no colonial past and does not share the feeling of colonial guilt that is characteristic of some Western Europeans. In Bulgaria, there is no feeling of superiority of the white race; other cultures are accepted as equal and attractive. At the same time, in terms of Orthodox Christianity, every other religion, even other Christian denominations, is heresy. Unlike in most Western European countries, in Bulgaria there is no great number of immigrants from non-Islamic Asia who might want to establish their own religious organizations or institutions. Therefore, followers of Eastern teachings are predominantly Bulgarians.
Graphs Having Most of Their Eigenvalues Shared by a Vertex Deleted Subgraph

Let G be a simple graph and {1, 2, ..., n} be its vertex set. The polynomial reconstruction problem asks the question: given a deck P(G) containing the n characteristic polynomials of the vertex deleted subgraphs G − 1, G − 2, ..., G − n of G, can φ(G, x), the characteristic polynomial of G, be reconstructed uniquely? To date, this long-standing problem has only been solved in the affirmative for some specific classes of graphs. We prove that if there exists a vertex v such that more than half of the eigenvalues of G are shared with those of G − v, then this fact is recognizable from P(G), which allows the reconstruction of φ(G, x). To accomplish this, we make use of determinants of certain walk matrices of G. Our main result is used, in particular, to prove that the reconstruction of the characteristic polynomial from P(G) is possible for a large subclass of disconnected graphs, strengthening a result by Sciriha and Formosa.

Introduction

Let G be a simple undirected graph, having no loops, no multiple edges and no weighted edges. The vertex set of G is V(G) = {1, 2, ..., n}. The monic characteristic polynomial of G in the variable x, denoted by φ(G, x), is the determinant of xI − A, where I is the n × n identity matrix and A is the adjacency matrix of G. The roots of φ(G, x) are the eigenvalues of G. Since A is a real and symmetric matrix, the eigenvalues of G are real numbers. An eigenvector x of an eigenvalue λ of G is a nonzero vector satisfying Ax = λx. The eigenspace associated with an eigenvalue λ of G, denoted by E(λ), is the vector space containing the zero vector 0 together with every possible eigenvector of λ. For any vertex v, the graph G − v is the graph obtained from G by removing v and the edges incident to v.

In 1973, the polynomial reconstruction problem was posed by Cvetković at the XVIII International Scientific Colloquium held in Ilmenau [1,2]. It asked the following question: Is it true that for n > 2, the characteristic polynomial φ(G, x) of a simple graph G on n vertices is determined uniquely by the polynomial deck P(G) = {φ(G − 1, x), ..., φ(G − n, x)} of characteristic polynomials of vertex deleted subgraphs of G? The problem, which was also posed by Schwenk [3] independently of Cvetković, is still open, although it has been answered in the affirmative for several classes of graphs. Examples of such graph classes include the class containing graphs whose eigenvalues are bounded below by −2; for these graphs, the answer is 'yes' for both the connected [4] and the disconnected cases [5]. Other graph classes for which the problem has an affirmative answer are trees [6], unicyclic graphs [7] and regular graphs [8]. Further results on the reconstruction of φ(G, x) from P(G) were also put forward for graphs having terminal vertices [9], for bipartite graphs [8] and for disconnected graphs [10].

In this paper, we show that P(G) has enough information for us to deduce whether or not φ(G, x) shares more than half of its roots (where multiple roots are counted as many times as their respective multiplicities) with one of the polynomials in P(G). If more than half of the eigenvalues of G are indeed shared by those of one of its vertex deleted subgraphs G − v, say, then φ(G, x) is shown to be reconstructible from P(G), even though these common eigenvalues in G and G − v are not known prior to the completion of this reconstruction.
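To fix ideas, here is a minimal Python sketch (our own illustration, using sympy; the path graph on three vertices is an arbitrary choice, not one of the paper's examples) of the two objects the problem relates: the characteristic polynomial φ(G, x) and the polynomial deck P(G), both computed directly from an adjacency matrix.

# Illustrative only: characteristic polynomial and polynomial deck of the
# path graph on three vertices, computed with sympy.
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]])    # adjacency matrix of the path 1-2-3

def char_poly(M):
    return sp.expand((x * sp.eye(M.shape[0]) - M).det())

phi_G = char_poly(A)

# The deck: characteristic polynomials of the vertex deleted subgraphs.
deck = []
for v in range(A.shape[0]):
    keep = [i for i in range(A.shape[0]) if i != v]
    deck.append(char_poly(A[keep, keep]))

print(phi_G)    # x**3 - 2*x
print(deck)     # [x**2 - 1, x**2, x**2 - 1]

The reconstruction question is whether the unordered list deck always determines phi_G; the sketch merely generates both from a known graph.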
To achieve our goals, we make use of graph walks. A walk in G of length k is a sequence v_0, e_1, v_1, e_2, v_2, e_3, ..., v_{k−1}, e_k, v_k, where v_0, ..., v_k are vertices of G, e_1, ..., e_k are edges of G, and each edge e_j in this sequence connects the vertices v_{j−1} and v_j (neither the vertices nor the edges in this sequence are necessarily distinct). The walk is closed if v_0 = v_k. We consider n walk matrices of G, W_1, W_2, ..., W_n, each containing walk enumerations of G that end at vertex 1, 2, ..., n, respectively. Even though, as far as this paper is concerned, these matrices are not themselves reconstructible from P(G), if for some v ∈ V(G) the greatest common divisor of φ(G, x) and φ(G − v, x) is of degree more than n/2, then we shall be able to deduce enough information about W_v to accomplish our task of reconstructing φ(G, x). The proof of the main result of this paper is in Section 5, specifically in Theorem 13. Before we present this proof, the next three sections contain results on walk matrices, on companion matrices, and on the eigenvalues of both the graph G and any of its vertex deleted subgraphs. Section 6 illustrates our techniques by successfully reconstructing the characteristic polynomials of two example graphs from their respective polynomial decks. The final section contains a remarkable consequence of our main result, Theorem 14, stating that the polynomial reconstruction problem is settled in the affirmative for a large subclass of disconnected graphs.

Walk Matrices

In the literature, a walk matrix W_b is a matrix of the form

W_b = ( b   Ab   A^2 b   ...   A^{k−1} b ),

where b is a 0-1 vector. Usually, b is taken to be j, the vector of all ones [11][12][13][14], but there are exceptions [15][16][17][18]. For every i and j, the entry in the ith row and jth column of W_b is equal to the number of walks of length j − 1 that start from vertex i and end at any vertex in S, where S is the subset of V(G) indicated by the entries of b that are equal to 1. It is known (see [15,19]) that, for any indicator vector b and any number of columns k that we choose W_b to have, there is a number r such that the rank of W_b is k for all k ≤ r and is r for all k > r. For this reason, W_b is either assumed to have r columns [11,15,19] or n columns [12,16,18]. In this paper, we consider the walk matrices W_{e_v}, v ∈ V(G), where {e_1, ..., e_n} is the standard n-dimensional vector basis for R^n. To slightly simplify the notation, henceforth we denote the matrix W_{e_v} by W_v. Moreover, for each vertex v, the number of columns of W_v is the minimum number of columns for which its rank is maximized; in other words, the number of columns of W_v is the number r described in the previous paragraph. Of course, for distinct vertices u and v, the ranks of W_u and W_v may differ. Thus, the n matrices W_1, ..., W_n may not have the same number of columns, but for each of them the rank will stay the same if the number of columns is increased by any number, and will decrease by s if the number of columns is decreased by s, for any feasible s. As in References [15,16], for each walk matrix W_v we consider the Gram matrix of its columns, which is the matrix

H_v = W_v^T W_v.    (1)

It is well-known that H_v is a positive semidefinite matrix having the same rank as W_v [20]. In this case, since, by definition, W_v has full rank, H_v is invertible and, hence, positive definite.
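As a concrete illustration of these definitions, the following sketch (again our own, with the 4-cycle C4 as an assumed example graph) builds W_v column by column and checks that the Gram matrix of its first r columns is positive definite.

# Minimal sketch (illustrative example, not from the paper): the walk
# matrix W_v = (e_v, A e_v, ..., A^{k-1} e_v) and its Gram matrix H_v.
import numpy as np

# Adjacency matrix of the 4-cycle C4 (an arbitrary example graph).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
n = A.shape[0]

def walk_matrix(A, v, k):
    """Columns e_v, A e_v, ..., A^{k-1} e_v."""
    col = np.eye(A.shape[0], dtype=int)[:, v]
    cols = []
    for _ in range(k):
        cols.append(col)
        col = A @ col
    return np.column_stack(cols)

W = walk_matrix(A, 0, n)
r = np.linalg.matrix_rank(W)
print(r)    # 3: C4 has three distinct eigenvalues (2, 0, -2), and each
            # has an eigenvector with a nonzero entry at vertex 0

# The Gram matrix H_v of the first r columns is positive definite.
H = W[:, :r].T @ W[:, :r]
print(np.linalg.eigvalsh(H))    # all eigenvalues strictly positive

Note that r = 3 here even though C4 has four vertices: the repeated eigenvalue 0 contributes only once to the rank, in line with the rank behaviour described above.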
Moreover, the jkth entry of H_v equals e_v^T A^{j+k−2} e_v, so that, for all j and k, the jkth entry of H_v is the number of closed walks of length j + k − 2 in G that start and end at vertex v. Note also that H_v has constant skew diagonals, so it is a Hankel matrix. We denote these walk enumerations by w_0(v), w_1(v), ..., w_{2r−2}(v), or by w_0, w_1, ..., w_{2r−2} if the vertex v in question is inferable from the context. Thus, for any t, the t × t Hankel matrix

| w_0      w_1   ...  w_{t−1}  |
| w_1      w_2   ...  w_t      |    (2)
| ...      ...   ...  ...      |
| w_{t−1}  w_t   ...  w_{2t−2} |

has the same rank as the matrix

( e_v   A e_v   A^2 e_v   ...   A^{t−1} e_v )    (3)

and, as we said earlier, there is a number r such that the rank of (3) is equal to r for all t > r and is equal to t for all t ≤ r. Thus, the determinant of (2) is 0 for all t > r and is nonzero for all t ≤ r, leading to the following result.

Theorem 1. Let w_0, w_1, ..., w_{2t−2} be the numbers of closed walks in G of length 0, 1, ..., 2t − 2 that start and end at vertex v. Then there exists a number r such that the determinant of the t × t Hankel matrix with entries w_0, w_1, ..., w_{2t−2} along its skew diagonals, as in (2), is zero for all t > r and is nonzero for all t ≤ r. Moreover, the rank of W_v is r.

Note also that, referring to (1), each of the upper p × p submatrices of H_v is invertible. This may be deduced from the fact that H_v is positive definite. Alternatively, any upper p × p submatrix of the r × r matrix H_v is equal to W_vp^T W_vp, where W_vp is W_v restricted to its first p columns, and this matrix has full rank by Theorem 1.

Companion Matrices and Eigenvalues

The rank of a walk matrix W_b may also be evaluated by finding the number of eigenvalues of G having an associated eigenvector that is not orthogonal to b [17,18]. In our case, since the walk matrices under discussion have b = e_v, where v = 1, 2, ..., n, we can equivalently say the following.

Theorem 2 ([17]). The rank of W_v is the number of eigenvalues of G with an associated eigenvector having a nonzero vth entry.

The significance of zero and nonzero entries in eigenvectors has found applications in control theory [17,18,21] and in molecular conduction [2,22,23]. For any walk matrix W_b, its companion matrix C_b is the matrix satisfying

A W_b = W_b C_b.    (4)

For any vertex v ∈ V(G), we denote the companion matrix of W_v by C_v. As we shall see soon, C_v may be different for each W_v. Indeed, if W_v is an n × r matrix, then C_v is the r × r matrix ( e_2  e_3  ...  e_r  c_v ) for an appropriate column vector c_v. As can be seen if we evaluate the determinant of xI − C_v using the Laplace determinant expansion along the last column, the characteristic polynomial of C_v is φ_v(x) = x^r − c_r x^{r−1} − ... − c_2 x − c_1, where c_v = (c_1, c_2, ..., c_r)^T [16]. Moreover, and more importantly, φ_v(x) divides the characteristic polynomial of G [15,19]. Here, we provide an alternative proof of this result by stating the roots of φ_v(x).

Theorem 3. The roots of φ_v(x) are the eigenvalues of G whose eigenspaces contain an eigenvector with a nonzero entry in its vth position.

Proof. Let λ be any eigenvalue of A with associated eigenvector x. Taking the transpose on both sides of the relation (4) gives

W_v^T A = C_v^T W_v^T.    (5)

By postmultiplying both sides of this equality by x, we get W_v^T A x = C_v^T W_v^T x, that is, λ W_v^T x = C_v^T W_v^T x. But

W_v^T x = ( e_v^T x,  e_v^T A x,  ...,  e_v^T A^{r−1} x )^T = (e_v^T x) ( 1, λ, λ^2, ..., λ^{r−1} )^T.

Hence W_v^T x = 0 if and only if x has a zero entry in its vth position. Thus, referring to (4) and (5), whenever an eigenvector x associated with λ in A has a nonzero entry in its vth position, the (nonzero) vector (1, λ, λ^2, ..., λ^{r−1})^T would be an eigenvector associated with λ in C_v^T. Since the choice of the eigenvalue λ in A was arbitrary, we have proved the result.

By combining Theorems 2 and 3 together, the following corollary is determined immediately.

Corollary 1. The degree of φ_v(x) is equal to the rank of W_v.
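Theorem 1 translates directly into a numerical rank test. The sketch below (our own illustration, with the same hypothetical C4 example and a floating-point singularity tolerance) grows the Hankel matrix of closed-walk counts until it first becomes singular.

# Sketch of Theorem 1 (illustrative): the rank r of W_v is the largest t
# for which the t x t Hankel matrix [w_{j+k-2}(v)] is nonsingular.
import numpy as np

def closed_walk_counts(A, v, length):
    """Return w_0(v), ..., w_{length-1}(v)."""
    w, Ak = [], np.eye(A.shape[0], dtype=int)
    for _ in range(length):
        w.append(int(Ak[v, v]))
        Ak = Ak @ A
    return w

def walk_rank(w, tol=1e-8):
    """Order of the largest nonsingular Hankel matrix built from w."""
    t = 1
    while 2 * t < len(w):
        # (t+1) x (t+1) Hankel matrix with entries w_{j+k}
        H = np.array([[w[j + k] for k in range(t + 1)] for j in range(t + 1)])
        if abs(np.linalg.det(H)) < tol:   # first singular order is t + 1,
            return t                      # so the rank of W_v is t
        t += 1
    return t

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])              # C4 again
w = closed_walk_counts(A, 0, 2 * A.shape[0])
print(walk_rank(w))                       # 3, agreeing with rank(W_v)

On exact integer data the determinant test could equally be done in exact arithmetic; the tolerance here is purely a convenience of the sketch.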
The following theorem conveys the fact that if r is the rank of W_v, then the r × r companion matrix C_v may be found solely from the enumerations of closed walks of length up to 2r − 1 that start and end at vertex v in G. The argument of Theorem 4 below is based on the result in ([24] p. 43) stating that the last column of C_v is equal to H_v^{−1} W_v^T A^r e_v.

Theorem 4. The r × r companion matrix of W_v is the matrix ( e_2  e_3  ...  e_r  c_v ), where

c_v = H_v^{−1} ( w_r, w_{r+1}, ..., w_{2r−1} )^T

and w_0, w_1, ..., w_{2r−1} are the numbers of closed walks of length 0, 1, ..., 2r − 1 that start and end at vertex v.

Proof. By comparing the last columns of the equality A W_v = W_v C_v, we obtain A^r e_v = W_v c_v. Premultiplying both sides by W_v^T gives W_v^T A^r e_v = H_v c_v; the jth entry of the vector W_v^T A^r e_v is e_v^T A^{r+j−1} e_v = w_{r+j−1}, so c_v = H_v^{−1} (w_r, ..., w_{2r−1})^T, as required.

We emphasize what we have accomplished in Theorem 4 by presenting the following corollary.

Corollary 2. The polynomial φ_v(x) is determined by the closed walk enumerations w_0(v), w_1(v), ..., w_{2r−1}(v).

Eigenvalues of Vertex Deleted Subgraphs

Recall that the vertex deleted subgraph G − v of G is obtained from G by removing vertex v and all edges incident to it. The common eigenvalues of G − v and G are also able to tell us the rank of W_v. The reason for this is presented in the proof of Theorem 7 below. Before proving Theorem 7, we first prove the following result, which allows us to deduce the common eigenvalues of G and G − v from the eigenspaces of G. The result of Theorem 5 is required for the proof of Theorem 7.

Theorem 5 ([17]). Let λ be an eigenvalue of G with eigenspace E(λ). Then λ is also an eigenvalue of G − v if and only if E(λ) contains an eigenvector whose vth entry is zero.

Proof. Let the adjacency matrices of G and G − v be A and A_v, respectively. After reordering the vertices of G if necessary (so that v is the last vertex), A may be partitioned into the block matrix

| A_v  s   |
| s^T  0   |

for some indicator vector s of the adjacencies of vertex v. We also have Ax = λx for some eigenvector x in E(λ). Suppose λ is an eigenvalue of both G and G − v. We prove that E(λ) contains an eigenvector with a zero in its vth position. Let A_v y = λy for some eigenvector y pertaining to the eigenvalue λ of G − v. Let x be partitioned into (z^T, k)^T, where z is (n − 1)-dimensional. Then Ax = λx may be rewritten as the pair of equations s^T z = λk and

A_v z + k s = λ z,    (6)

the latter being the relation we require. By premultiplying both sides of (6) by y^T, we obtain y^T A_v z + k y^T s = λ y^T z, that is, λ y^T z + k y^T s = λ y^T z, so that k y^T s = 0. Hence, either k = 0 or y^T s = 0. If k = 0, then we have proved the result, since then the vth entry of x itself is zero. If, on the other hand, y^T s = 0, then the vector (y^T, 0)^T is an eigenvector of A associated with λ having a zero in its vth position. Thus, either way, λ has an eigenvector in E(λ) with a zero entry in its vth position, proving sufficiency.

Conversely, suppose λ is an eigenvalue of G with an associated eigenvector having a zero in its vth position. We prove that λ is also an eigenvalue of G − v. Writing this eigenvector as x = (z^T, 0)^T, the relation Ax = λx yields A_v z = λz and s^T z = 0. Thus, in particular, A_v z = λz, proving that λ is also an eigenvalue of G − v.

We arrive at yet another way of determining the rank of W_v, which is illustrated in Theorem 7. The proof of this result uses the so-called Interlacing theorem (see [25], for instance), presented below for the case of eigenvalues of graphs.

Theorem 6 (Interlacing). Let λ_1 ≥ λ_2 ≥ ... ≥ λ_n be the eigenvalues of G and let μ_1 ≥ μ_2 ≥ ... ≥ μ_{n−1} be the eigenvalues of G − v. Then λ_i ≥ μ_i ≥ λ_{i+1} for i = 1, 2, ..., n − 1.

As an immediate corollary of Theorem 6, the multiplicity of any eigenvalue λ of a graph G must be at most one more than the multiplicity of the same eigenvalue λ in the vertex deleted subgraph G − v. The statement of Theorem 7 below concerns eigenvalues of G whose multiplicities are exactly one more than those of the same eigenvalues in G − v. In its proof argument, an eigenvalue is assumed to have multiplicity zero if it is not an eigenvalue of its adjacency matrix.

Theorem 7. Let λ be an eigenvalue of G whose multiplicity is one more than the multiplicity of λ in the vertex deleted subgraph G − v. The rank of W_v is the number of all such distinct eigenvalues of G.

Proof. Let λ have multiplicity q in G and let {x_1, ..., x_q} be an eigenbasis for λ in G.
We assume that all of the eigenvectors x_2, ..., x_q have their vth entry equal to zero. If this is not the case, so that, without loss of generality, both x_1 and x_j have a nonzero vth entry, then we replace x_j by (e_v^T x_j) x_1 − (e_v^T x_1) x_j. By the Interlacing theorem (Theorem 6), the multiplicity of λ in G − v cannot be less than q − 1. For any vector z, let z − v denote the vector z without its vth entry. By the argument presented in the proof of Theorem 5, all of the vectors x_2 − v, ..., x_q − v will also be eigenvectors of λ in G − v, and clearly these q − 1 eigenvectors are linearly independent. If x_1 also had its vth entry equal to zero, then x_1 − v would also be an eigenvector of λ in G − v that is linearly independent of x_2 − v, ..., x_q − v, so that the multiplicity of λ in G − v would be at least q. However, that would mean that every linear combination of the vectors in the eigenbasis of λ in G has a zero in its vth position, which, by Theorem 2, would lead to λ not contributing to the rank of W_v. Thus, the eigenvalues that contribute to the rank of W_v are precisely those whose multiplicity is one more than that of the same eigenvalue in G − v, as required.

A vertex v whose removal from G reduces the multiplicity of an eigenvalue λ of G by one is called a λ-core vertex [26] or a downer vertex [27]. Thus, Theorem 7 may be restated as follows:

Theorem 8. The rank of W_v is the number of distinct eigenvalues of G for which v is a core/downer vertex.

The eigenvalues of G described in the statement of Theorem 7 or Theorem 8 are precisely the roots of φ_v(x). The following theorem proves this, and more.

Theorem 9. Let G be a graph and let v be any of its vertices. The r distinct roots of φ_v(x) are the eigenvalues of G for which v is a core/downer vertex. Moreover, the n − r remaining eigenvalues of G (including multiplicities) are also eigenvalues of G − v.

Proof. Let λ be any root of φ_v(x). By Theorem 3, λ is an eigenvalue of G whose eigenspace E(λ) contains an eigenvector with a nonzero number in its vth entry. As described in the first paragraph of the proof of Theorem 7, a basis {x_1, x_2, ..., x_q} for E(λ) can be chosen such that only x_1 has a nonzero entry in its vth position. By the argument in the second paragraph of the proof of Theorem 7, the multiplicity of λ in G − v must be q − 1, proving the first part of the theorem statement. Now suppose μ is an eigenvalue of G that is not a root of φ_v(x). Then, by Theorem 3, E(μ) contains only vectors whose vth entry is zero, and so any eigenbasis for E(μ) must be made up of such vectors as well. By again applying the argument used in Theorem 7, the multiplicity of μ in G − v must be at least equal to that of μ in G, which proves the second part of the theorem statement.

We end this section by summarizing the various ways described in this paper to obtain the rank of W_v.

Theorem 10. The following are all equal to the rank of W_v:
• one less than the order of the smallest singular matrix whose skew diagonals are constant and equal to w_0(v), w_1(v), w_2(v), ... from left to right (Theorem 1);
• the degree of φ_v(x) (Corollary 1);
• the number of distinct eigenvalues of G with an associated eigenvector having a nonzero entry in its vth position (Theorem 2);
• the number of distinct eigenvalues of G for which v is a core/downer vertex (Theorem 7).

Reconstruction
Let P(G) be the polynomial deck {φ(G − 1, x), φ(G − 2, x), ..., φ(G − n, x)} containing the (unordered) characteristic polynomials of all vertex deleted subgraphs of G. As is well known, the derivative of the characteristic polynomial of G may be reconstructed from P(G) as in the following result.

Theorem 11 ([28,29]). The derivative of the characteristic polynomial of G is equal to

φ'(G, x) = φ(G − 1, x) + φ(G − 2, x) + ... + φ(G − n, x).

Thus, all the coefficients of φ(G, x) may be reconstructed from P(G), except possibly the constant one. Furthermore, if one of the members of P(G) has a multiple root, then by Theorem 6 this root must also be a root of φ(G, x), and φ(G, x) is reconstructed immediately, since the known root determines the missing constant term.

The numbers of closed walks of length 0, 1, ..., n − 1 starting and ending at any vertex of G are also reconstructible from P(G). This was proved in [1]. Below, we slightly elaborate on the proof provided there.

Theorem 12 ([1]). For every vertex v, the closed walk enumerations w_0(v), w_1(v), ..., w_{n−1}(v) are reconstructible from P(G).

Proof. From ([29] p. 34), the formal power series Σ_{j≥0} w_j(v) x^j may be generated by the following generating function:

Σ_{j≥0} w_j(v) x^j = φ(G − v, x^{−1}) / ( x φ(G, x^{−1}) ).    (7)

Note that the rational function (7) may be rewritten as

φ̄(G − v, x) / φ̄(G, x),

where the notation p̄(x) represents the reflected polynomial of p(x), that is, the expression x^n p(x^{−1}), the polynomial p(x) with its coefficients in reverse order [30,31]. Hence

φ̄(G, x) Σ_{j≥0} w_j(v) x^j = φ̄(G − v, x).    (8)

Since the only unknown coefficient of φ̄(G, x) is the leading one, the numbers w_0(v), ..., w_{n−1}(v) may be discovered by expanding the left hand side of (8) and comparing coefficients.

We have finally conveyed all the results we require in order to prove the main result of this paper, which is presented in Theorem 13 below. It is assumed that, when counting the roots of any polynomial mentioned in the statement of Theorem 13, multiple roots are counted as many times as their multiplicity.

Theorem 13. Let G be a graph on n vertices with polynomial deck P(G) containing the characteristic polynomials of all vertex deleted subgraphs of G. Then P(G) determines whether or not φ(G, x) has more than half of its n roots included among the n − 1 roots of one of the polynomials in its deck. When this is the case, φ(G, x) is reconstructible from P(G).

Proof. For each vertex v, we first obtain the walk enumerations w_0(v), ..., w_{n−1}(v) using Theorem 12. Moreover, for each v, we calculate the determinant of the t × t Hankel matrix of these walk enumerations, as in (2),

det | w_0 ... w_{t−1} ; ... ; w_{t−1} ... w_{2t−2} |,    (9)

where 2t − 2 is either n − 1 or n − 2, depending on the parity of n. If one of these determinants is zero for some particular vertex v, then by Theorem 1 the rank r of W_v is the order of the largest nonsingular upper principal submatrix of (9), which is less than t. Indeed, if this happens, then when n is odd, t = (n + 1)/2, so r < (n + 1)/2 and hence r ≤ (n − 1)/2 < n/2; and when n is even, t = n/2, so r < n/2. Either way, we infer that r is less than half of n whenever (9) is zero for some vertex v. For this particular vertex v, we now use Theorem 4 to obtain the companion characteristic polynomial φ_v(x) from w_0, ..., w_{2r−1}. By Theorem 3, the r roots of φ_v(x) are r of the n roots of φ(G, x). Moreover, by Theorem 9, the remaining n − r roots of φ(G, x) are also eigenvalues of G − v. Since r < n/2, we have n − r > n/2. We have determined, therefore, that when the determinant (9) is zero for some vertex v, G must share more than half of its eigenvalues (including repetitions) with G − v. Since (9) may be found from P(G), the fact that G shares more than half of its eigenvalues with G − v is recognizable from P(G). To reconstruct φ(G, x), we use the fact that φ_v(x) divides φ(G, x). By Theorem 11, all the coefficients of φ(G, x) bar the constant one, K, are reconstructible from P(G).
Since φ_v(x) is available, we reconstruct φ(G, x) by, for instance, performing the polynomial division of φ(G, x), with K treated as an unknown, by φ_v(x) and solving for K by requiring the remainder to vanish.

Remark 1. Focusing on the last sentence of the proof of Theorem 13 for a moment, the eigenvalues shared by G and G − v would be the n − r roots of the polynomial φ(G, x)/φ_v(x).

Examples

We now illustrate the techniques in the proof of Theorem 13 by successfully reconstructing the characteristic polynomials from two example polynomial decks. In the first example, the roots of φ(G − 7, x) are 2.79129, 1, 0.61803, −1, −1.61803 and −1.79129, and we show that more than half of the seven roots of φ(G, x) (in this case, more than three) are among these six roots.

In the second example, the numbers of closed walks of length 0, 1, ..., 7 starting and ending at vertex 8 in G may again be obtained by comparing the coefficients of both sides of two appropriate formal power series, in agreement with (8). We obtain the values w_0 = 1, w_1 = 0, w_2 = 4, w_3 = 6, w_4 = 32, w_5 = 86, w_6 = 320, w_7 = 1006. By showing that the determinant of

| 1   0   4   6   |
| 0   4   6   32  |
| 4   6   32  86  |
| 6   32  86  320 |

is zero (Theorem 1), we deduce that G must share at least five eigenvalues with G − 8 (Theorem 13). Note that w_7 is not used here, because the number of vertices of G is even. The upper principal minor

| 1   0   4  |
| 0   4   6  |
| 4   6   32 |

is nonzero; thus, the rank of W_8 is three. This means that G shares exactly 8 − 3 = 5 eigenvalues with G − 8; however, these five eigenvalues are unknown, for now. We now determine φ_8(x), the companion polynomial of W_8. To this end, we work out the product c_8 = H_8^{−1} (w_3, w_4, w_5)^T = (−2, 5, 2)^T of Theorem 4, so that φ_8(x) = x^3 − 2x^2 − 5x + 2.

Disconnected Graphs

Our main result, Theorem 13, reconstructs φ(G, x) from P(G) by obtaining a companion polynomial φ_v(x) after deducing that the degree of φ_v(x) is smaller than half of |V(G)|. This companion polynomial, having integer coefficients, is a factor of φ(G, x). It is known that if G is a connected graph, then the largest eigenvalue of G is always a root of φ_v(x) for any v ∈ V(G) [16]. Hence, Theorem 13 cannot be used for connected graphs whose largest eigenvalue has a minimal polynomial of degree at least half of |V(G)|. This includes connected graphs whose characteristic polynomial is irreducible over Q. Note, however, that if G is a disconnected graph having two components G_1 and G_2, then φ(G, x) = φ(G_1, x) φ(G_2, x). If |V(G_1)| < |V(G_2)|, then Theorem 13 is always able to reconstruct φ(G, x) from P(G). If G has more than two components or |V(G)| is odd, then one of its components must have fewer than |V(G)|/2 vertices, allowing the use of Theorem 13 to reconstruct φ(G, x) from P(G). The details are in the proof of the following corollary, which was proved in [10] using a different approach.

Corollary 3 ([10]). Let G be a disconnected graph on n vertices. If not all components of G have exactly n/2 vertices, then φ(G, x) is reconstructible from P(G). In particular, if G has more than two components or has an odd number of vertices, then φ(G, x) is reconstructible from P(G).

Proof. Let v be a vertex of a component K of G having fewer than n/2 vertices. Then G − v and G have more than n/2 common eigenvalues. By Theorem 13, this fact is inferable from P(G), which leads to the reconstruction of φ(G, x) from P(G). Clearly, if a disconnected graph G has more than two components or has an odd number of vertices, then G must have such a component K.

Note that if G is known to be disconnected, then its characteristic polynomial can be reconstructed from P(G) [32].
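Before turning to the role of connectivity, here is a hedged end-to-end sketch, in Python with sympy (a tool choice of ours, not the paper's), of the reconstruction pipeline of Theorems 11, 12, 4 and 13. The test graph (the disjoint union of K3 and C5), the function names and all implementation details are our own illustration; only the polynomial deck is treated as known.

# End-to-end sketch of reconstructing phi(G, x) from P(G)
# (Theorems 11, 12, 4 and 13). Illustrative only: the test graph
# K3 U C5 and all names are our own choices, not the paper's.
import sympy as sp

x = sp.symbols('x')

# Hypothetical test graph: disjoint union of the triangle K3 and the
# 5-cycle C5, so n = 8 and one component has fewer than n/2 vertices.
K3 = sp.Matrix([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
C5 = sp.Matrix(5, 5, lambda i, j: 1 if abs(i - j) in (1, 4) else 0)
A = sp.diag(K3, C5)
n = A.shape[0]

def charpoly(M):
    return sp.expand((x * sp.eye(M.shape[0]) - M).det())

def minor(M, v):
    keep = [i for i in range(M.shape[0]) if i != v]
    return M[keep, keep]

deck = [charpoly(minor(A, v)) for v in range(n)]   # P(G): all we may use

# Theorem 11: phi'(G, x) is the sum of the deck; integrating recovers
# every coefficient of phi(G, x) except the constant term K.
p_known = sp.integrate(sp.expand(sum(deck)), x)

# Theorem 12, eq. (8): recover w_0(v), ..., w_{n-1}(v) by comparing
# coefficients of the reflected polynomials (the unknown K never enters).
a = [sp.Integer(1)] + [p_known.coeff(x, n - i) for i in range(1, n)]
def walk_counts(pv):
    b = [pv.coeff(x, n - 1 - m) for m in range(n)]
    w = []
    for m in range(n):
        w.append(b[m] - sum(a[i] * w[m - i] for i in range(1, m + 1)))
    return w

def hankel(w, t):
    return sp.Matrix(t, t, lambda j, k: w[j + k])

def rank_r(w):                       # Theorem 1 / determinant (9)
    t = 1
    while 2 * t <= len(w) and hankel(w, t).det() != 0:
        t += 1
    return t - 1

for v in range(n):
    wv = walk_counts(deck[v])
    rv = rank_r(wv)
    if 2 * rv < n:                   # vertex sharing more than n/2 roots
        break

# Theorem 4: companion polynomial phi_v from w_0, ..., w_{2r-1}.
c = hankel(wv, rv).solve(sp.Matrix(rv, 1, lambda i, _: wv[rv + i]))
phi_v = sp.expand(x**rv - sum(c[i] * x**i for i in range(rv)))

# Theorem 13: phi_v divides phi(G, x) = p_known + K, so the remainder of
# p_known on division by phi_v is the constant -K.
K = -sp.rem(p_known, phi_v, x)
phi_G = sp.expand(p_known + K)
assert phi_G == charpoly(A)          # sanity check against the test graph
print(phi_G)

On this test graph the loop stops at a vertex of the triangle (rank 2 < 4 = n/2), finds phi_v(x) = x^2 − x − 2, and recovers the missing constant K = 4, reproducing φ(G, x) exactly.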
We stress, however, that the result of Corollary 3 is true irrespective of whether G is known to be disconnected. Only disconnected graphs having two components, each with n/2 vertices, are left out by Corollary 3. The results of this paper allow us to include more disconnected graphs than those described in Corollary 3, however. This corollary states that only disconnected graphs having two components with an equal number of vertices may possibly fail to have their characteristic polynomials reconstructed from their polynomial deck. Let G be one such disconnected graph on n = 2k vertices, whose two components G_1 and G_2 have k vertices each. Clearly, every vertex deleted subgraph of G has at least k eigenvalues in common with G. Thus, our main result, Theorem 13, will only be inapplicable to G if it so happens that every vertex deleted subgraph of G has exactly k eigenvalues in common with G. However, by Theorem 5, if either component of G has an eigenvector associated with one of its eigenvalues λ with a zero entry in its vth position, then this eigenvalue will also be present in G − v, and hence G will have more than half of its eigenvalues shared with those of G − v. This allows the reconstruction of φ(G, x) from P(G) by applying Theorem 13. Consequently, among all disconnected graphs, only those having two components G_1 and G_2 of equal order, where both G_1 and G_2 have no zeros in their eigenvectors, may possibly be counterexamples to the polynomial reconstruction conjecture. Graph components with no zeros in their eigenvectors are called omnicontrollable in [17]. We thus conclude our paper with the following remarkable result, which strengthens Corollary 3.

Theorem 14. Let G be a disconnected graph with polynomial deck P(G). Then φ(G, x) is reconstructible from P(G), except possibly if G has two components of equal order, both of which are omnicontrollable graphs.

Funding: This research received no external funding.
Real-world data analysis of bilayered living cellular construct and fetal bovine collagen dressing treatment for pressure injuries: a comparative effectiveness study

Aim: To determine the effectiveness of a bilayered living cellular construct (BLCC) versus a fetal bovine collagen dressing (FBCD) in pressure injuries (PRIs). Methods: A real-world data study was conducted on 1352 PRIs analyzed digitally. 1046 and 306 PRIs were treated with BLCC and FBCD, respectively. Results: Cox healing for BLCC (n = 1046) was significantly greater (p < 0.0001) at weeks 4 (13 vs 7%), 8 (29 vs 17%), 12 (42 vs 27%), 24 (64 vs 45%) and 36 (73 vs 56%). The probability of healing increased by 66% (hazard ratio = 1.66 [95% CI: 1.38, 2.00]; p < 0.0001). Time to healing was 162 days for FBCD and 103 days for BLCC, showing a 36% reduction in time to healing with BLCC (p < 0.0001). Conclusion: BLCC significantly improved healing of PRIs versus FBCD.

Pressure injuries raise the risk of infection, pain, disability and longer hospital stays associated with increased morbidity and mortality [1,2]. Pressure injuries occur in up to 23% of patients in long-term care and rehabilitation facilities and up to 41% in intensive care units (ICUs) [3,4]. PRIs affect more than 2.5 million individuals annually in the US alone [5,6]. The US national cost of hospital-acquired pressure injuries may exceed $26.8 billion [6,7]. Availability of safe and effective treatments for PRIs remains a critical, unmet medical need [8,9]. Delivering appropriate PRI therapy remains a daily challenge for patients and wound care providers [8,10]. Pressure injuries (PRIs) are chronic cutaneous wounds localized to the skin or underlying tissues over a bony prominence, due to sustained pressure or pressure in combination with shear and tissue deformation. In general, the highest rates of PRIs are reported in critically ill patients in hospital. The populations and treatment settings at highest risk include: patients ≥65 years of age, critical care units, palliative care, spinal cord injuries, the obese (by BMI), community care, rehabilitation centers and, generally, sedentary patients. PRIs have been reported in neonates and children. Adult PRI patients typically present with complicated wounds and multiple comorbidities, most commonly partial and full thickness wounds (stages II-IV) [11,12]. The prevalence of PRI varies from approximately 9-32% in long-term care facilities and 3-19% in home-care patients [10,13,14].

A bilayered living cellular construct (BLCC; Apligraf®; Organogenesis Inc., MA, USA), a bioengineered, bilayered, viable skin with living keratinocytes and fibroblasts, is FDA approved for the treatment of venous leg ulcers (VLUs) and diabetic foot ulcers (DFUs) [15][16][17][18]. Fetal bovine collagen dressing (FBCD; PriMatrix®; Integra Life Sciences, NJ, USA) is an acellular dermal matrix derived from fetal bovine dermis, marketed under Section 510(k) of the US Food, Drug, and Cosmetic Act (the Act). Real-world data (RWD) were used to conduct a comparative effectiveness assessment (CEA) of BLCC versus FBCD for the treatment of PRIs.
The conditions for initiating advanced skin substitute therapies in PRIs have not been established. However, as demonstrated in studies of other chronic wounds, advanced therapies may be appropriate in patients with poor prognostic indicators such as large wound size (e.g., >10 cm²) and long duration of non-healing (e.g., >6 months) [10,19,20]. Chronic wound data in VLUs and DFUs have shown that treatments other than routine wound care regimens should be considered in ulcers that have not reduced in surface area by ≥40% (VLUs) or ≥50% (DFUs) after 4 weeks of care [21][22][23].

BLCC treatment of PRIs may prove to be a safe and effective adjunct to standard of care (SOC). BLCC is one of only three skin substitute products approved by the US Food and Drug Administration (US FDA) as a 'wound treatment' (FDA approved for VLUs and DFUs). US FDA approval requires a pre-market evaluation establishing safety and effectiveness. Showing 'compelling' scientific, medical and clinical data (per the US Code of Federal Regulations; CFR) in at least one pivotal phase III randomized controlled clinical trial (RCT) is mandated by the Agency to demonstrate a favorable risk/benefit ratio for a product's indication for use. In large RCTs for the treatment of VLUs and DFUs, BLCC significantly increased the percentage of wounds healed and reduced the median time to healing compared with SOC (good wound care as recommended by Wound Healing Society guidelines) [18,[24][25][26][27][28][29].

A wound covering, FBCD has been cleared by the FDA as a 510(k) class II device for the management of chronic and acute skin wounds, with the exception of third degree burns. FBCD is an animal-derived acellular collagen dressing that has been processed and treated to remove cellular elements, lipids, carbohydrates and non-collagenous proteins, resulting in a scaffold with physiological amounts of collagen but without viable cells. A retrospective comparison of diabetic foot ulcer and venous stasis ulcer healing outcomes (n = 40) between FBCD and BLCC showed that both treatments were highly effective; however, the FBCD-treated wounds healed faster than those treated with BLCC [30]. In that published retrospective analysis, FBCD was demonstrated to be successfully incorporated into standard of care therapy as a primary wound covering for the treatment of VLUs and DFUs [30].

Research questions

In this PRI study, RWD were used to conduct a CEA of BLCC versus FBCD for the treatment of PRIs. The research questions included: 1) Would analyses of over one thousand treated patients at over 300 wound care facilities in the US result in robust data to support clinically meaningful conclusions about the effectiveness of BLCC versus FBCD? 2) Would the RWD CEA show statistically significant differences between the BLCC and FBCD treatment groups? 3) Can the results of this RWD CEA study help to inform patients, clinicians and policy makers on wound care therapy for PRIs?

General study objective

The general objective of this study was to generate data on large PRI patient populations using real-world data for comparative effectiveness assessments of BLCC and FBCD.
Study design

This study is a retrospective RWD CEA of BLCC and FBCD using patient de-identified EMRs transferred from Net Health (PA, USA) to Virtu Stat Ltd (PA, USA). The effectiveness of BLCC was compared with that of FBCD for the treatment of PRIs. RWD were used for all computations of clinical outcome results. The data were collected from 315 US wound care facilities from 2017 to 2022. The analyses were conducted on 1182 PRI patients: 890 BLCC-treated and 292 FBCD-treated patients, respectively. A total of 1352 PRIs were treated with either BLCC or FBCD, and all 1352 PRIs were analyzed following intention-to-treat (ITT) principles. There were 1046 BLCC-treated and 306 FBCD-treated PRIs. The primary end points were median time to healing and percentages of patients healed. Primary analyses were time-to-event (TTE; not adjusted for risk factors) and linear regression (adjusted for risk factors). Wound healing outcomes were assessed at the wound care facilities by site personnel. Time and frequency of healing over 36 weeks were compared between treatment groups.

Patients

Patients eligible for inclusion were those documented as receiving at least one treatment of either BLCC or FBCD with at least one documented follow-up visit. Patients were eligible for inclusion in the analysis if they demonstrated stage II-IV PRIs at anatomical locations including over the sacrum, coccyx, greater trochanter, ischial tuberosity and calcaneus. Patients with wounds having surface areas between 1 and 20 cm² were included. Wounds without baseline or follow-up area measurements were excluded. Non-healed wounds were censored at their last visit with an area measurement. Patients were also censored at the visit where the alternate product was applied (i.e., BLCC on FBCD-treated PRIs, or FBCD on BLCC-treated PRIs).

Data collection

Electronic medical records (EMRs) for wound care management (WoundExpert®; Net Health, PA, USA) were used to evaluate the effectiveness of BLCC versus FBCD for the treatment of pressure injuries (PRIs). Data were obtained from the WoundExpert® EMR (i.e., the electronic case report form, eCRF), which was de-identified under the terms and conditions of the US Health Insurance Portability and Accountability Act of 1996 (HIPAA). Net Health provided records for all PRI patients receiving at least one application of BLCC or FBCD at the 315 US centers with contracted agreements for the transfer of de-identified data for research purposes. Patient EMRs recorded baseline demographics including patient characteristics (e.g., age in years, ≤89 per US HIPAA; sex; race; BMI), wound characteristics (e.g., wound size, depth and duration) and treatment characteristics (e.g., number of treatment applications and interval of time between applications). All measurements of wound dimensions were performed at the wound care facility by site personnel, using rulers to measure length (at the longest point) and width (at the widest point) to compute wound areas (cm²); wound depth (mm) was measured using a cotton-tipped applicator gently inserted into the deepest part of the wound.
Statistical analysis
Descriptive data are expressed as mean (standard deviation) and median for continuous variables and as n (%) for categorical variables. An alpha level of 0.05 (p < 0.05) was used for statistical significance. Continuous and categorical baseline characteristics were reported as observational data. Missing data were imputed using a mixed-effects model for repeated measures (MMRM) with the mean value of the BLCC group. The primary analyses comparing the incidence of and median time to wound closure were computed by Kaplan-Meier (K-M) analysis with a two-tailed log-rank test. Cox proportional hazards regression analysis was used to estimate the percentage of PRIs healed at weeks 4, 8, 12, 24 and 36. Median time to wound closure was determined by the K-M method. The frequency of wounds closed at weeks 4, 8, 12, 24 and 36, the median time to wound closure, and the hazard ratio (HR) with 95% CI and p-value (Wald test) were estimated from the Cox model with terms that included: treatment, baseline wound area, baseline wound duration, baseline wound depth, sex, BMI and patient age at first treatment. A minimal sketch of this time-to-event pipeline is given below.
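The following is a minimal sketch of the Kaplan-Meier and Cox analyses described above, using the open-source Python package lifelines; the package choice, the column names and the toy data are illustrative assumptions (the study's actual software and dataset layout are not specified beyond the statistician named later).

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Toy analysis dataset: one row per wound (ITT); censored wounds have healed=0.
df = pd.DataFrame({
    "days":     [28, 56, 103, 162, 210, 90, 252, 140, 60, 180],
    "healed":   [1, 1, 1, 0, 1, 1, 0, 1, 1, 0],
    "blcc":     [1, 1, 1, 0, 0, 1, 0, 0, 1, 0],   # 1 = BLCC, 0 = FBCD
    "area_cm2": [6.9, 3.2, 8.0, 6.3, 12.1, 2.5, 15.0, 5.4, 1.8, 9.9],
    "depth_mm": [3, 2, 5, 8, 7, 3, 9, 6, 2, 7],
})

# Unadjusted K-M curves and median time to closure, per treatment arm.
for arm, grp in df.groupby("blcc"):
    label = "BLCC" if arm else "FBCD"
    kmf = KaplanMeierFitter().fit(grp["days"], event_observed=grp["healed"],
                                  label=label)
    print(label, "median days to closure:", kmf.median_survival_time_)

# Two-tailed log-rank test between arms.
a, b = df[df.blcc == 1], df[df.blcc == 0]
print("log-rank p =", logrank_test(a["days"], b["days"],
                                   event_observed_A=a["healed"],
                                   event_observed_B=b["healed"]).p_value)

# Covariate-adjusted Cox model (HR, 95% CI, Wald p); a small penalizer
# stabilizes the fit on this tiny toy dataset.
cph = CoxPHFitter(penalizer=0.1).fit(df, duration_col="days", event_col="healed")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%",
                   "exp(coef) upper 95%", "p"]])
```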
Results
All PRIs that received their first treatment with BLCC or FBCD between 2017 and 2022 at all participating Net Health centers were eligible for analysis. A total of 1182 patients with PRIs met the eligibility requirements for inclusion in the analysis; 1046 wounds (890 patients) were treated with BLCC and 306 wounds (292 patients) were treated with FBCD. Patient, wound and treatment characteristics are shown in Table 1. Treatment groups were similar with respect to age, sex, baseline wound area, and interval of time (days) between treatment applications. The median age was 70 and 69 years in the BLCC and FBCD groups, respectively. Women represented 58.1% of the population in the BLCC group and 54.9% in the FBCD group; men represented slightly less than half of the population in both groups (41.9 vs 45.1%, respectively). The two groups were comparable for mean wound area at baseline (BLCC, 6.90 ± 11.03 cm² vs FBCD, 6.33 ± 14.83 cm²; p = 0.063). Notably, the percentages of patients presenting with a single wound were 55.1 versus 89.0% for BLCC and FBCD (p < 0.0001), and the percentages of patients presenting with multiple wounds at baseline were 44.9 versus 11.0% for BLCC and FBCD (p < 0.0001). Baseline wound depths (mm) for BLCC versus FBCD were 3.1 ± 4.8 versus 7.5 ± 8.99 (p < 0.0001). Baseline wound durations (months) were 6.54 ± 10.52 for BLCC and 9.34 ± 15.70 for FBCD (p < 0.0001). Treatment characteristics are also shown in Table 1. The mean number of treatment applications differed significantly between groups: BLCC-treated PRIs had fewer treatment applications than FBCD-treated PRIs (2.15 ± 1.59 vs 2.43 ± 1.93, respectively; p = 0.023).

Discussion
Results of RWD CEA studies have become increasingly important to clinicians, patients, Boards of Health (BOH; regulatory bodies) and third-party payers [40,41]. We report the first RWD CEA study examining outcomes of BLCC and FBCD for the treatment of PRIs. We found that PRIs treated with BLCC had higher rates of healing in less time compared with those treated with FBCD, increasing the probability of healing by 66%.

Studies using RWD can show the comparative effectiveness of treatment options on patient clinical outcomes. Data from RWD CEAs can guide clinicians to limit overuse of ineffective therapies and underuse of effective therapies [42,43]. Real-world comparative effectiveness assessments can aid in answering key questions about particular patient populations and conditions by using active comparators and employing broad inclusion criteria to evaluate large, diverse populations that are representative of patients treated in routine practice [44]. RCTs determine efficacy (i.e., whether a product can work in a controlled setting). RCTs show safety and efficacy data and serve as the cornerstone for agency pre-market review and pre-market authorizations to commercialize drugs, biologics and class III devices (US FDA device classification). RWD CEA studies, on the other hand, show whether a post-approval product does work in widespread use [45]. Products that demonstrate efficacy in RCTs may perform differently in general clinical practice, where variability in treatment applications, patient compliance and other clinically meaningful factors tend to impact the net benefits of a chosen treatment modality [46,47]. RWD CEA research allows for valid determinations of the applicability of efficacy results to diverse patient populations treated in multiple settings. Concordant efficacy and effectiveness clinical outcomes across RCT and RWD studies, respectively, are indicative of robust, strong data [43,48,49].

The RWD CEA study we report for BLCC-treated PRI wounds shows results consistent with the RCT results demonstrated in the VLU and DFU phase III trials for the US FDA BLCC approvals. It has previously been demonstrated that the two pivotal phase III RCT results, one for VLUs and one for DFUs, were comparable to each other [16,17,50]. The BLCC DFU RCT showed a median time to healing of 65 days and a healing frequency of 56% (week 12) [17,18]. The BLCC VLU RCT showed a median time to healing of 61 days and a healing frequency of 57% (week 24) [18,51]. These BLCC efficacy data for VLUs and DFUs are reasonably consistent with the results observed in the RWD CEA study of BLCC versus FBCD in PRIs (Figures 1 & 2). In the RWD CEA study, the median time to healing was 103 days for BLCC, and the percentages of healing were 64% (week 24) and 73% (week 36; Figures 1 & 2). Notably, time to healing was accelerated in the two RCTs and the RWD CEA study when compared with control-treated wounds: the percent improvement in time to healing was 66% (VLUs), 28% (DFUs) and 36% (PRIs), respectively. The finding that healing of PRIs treated with BLCC was accelerated by 36% when compared with FBCD (i.e., 103 days to heal for BLCC versus 162 days for FBCD, a 36% decrease in time to healing in favor of BLCC; p < 0.05) was particularly noteworthy given that the BLCC-treated group in the PRI study was compared with an active control. For the VLU and DFU trials, per US FDA guidance, BLCC was compared with standard of care (SOC; e.g., water balance dressings and compression for VLUs; water balance dressings, off-loading and debridement for DFUs). A short arithmetic check of the PRI improvement figures is given below.
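As a quick check of the PRI figures quoted above, the relative improvement in median time to healing can be recomputed directly (a minimal sketch; only the PRI comparison is verified, since the RCT control medians are not restated here):

```python
blcc_days, fbcd_days = 103, 162
absolute_gain = fbcd_days - blcc_days               # 59-day improvement
relative_gain = 100 * absolute_gain / fbcd_days     # expressed vs the FBCD arm
print(absolute_gain, f"{relative_gain:.0f}%")       # -> 59 days, 36%
```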
A significant advantage of using RWD for analysis is that clinical effectiveness results are more likely than RCT efficacy results to be generalizable to broad, diverse patient populations reflective of clinicians' practices. Issues of data variability that may arise in observational cohort wound studies, or even in many RCTs, may be significantly reduced by employing RWD CEA study designs in which large numbers of patients and wounds are analyzed. High variability in healing values has been reported in the literature for partial area reductions of PRIs, times to healing and percentages of healed PRIs: both times to healing and percentages of healing have shown broad ranges, between approximately 100-180 days and 20-40%, respectively [9,52,53]. The most important consequence of studies with such disparate results is that clinicians and policy makers have had difficulty generalizing study results to everyday clinical practice. Compared with RCTs for wound products, the current RWD CEA PRI study included a large number of wound care centers (n = 315), patients (n = 1182) and PRIs (n = 1352), and longer patient follow-up (36 weeks). Only one standardized eCRF (WoundExpert) was used across all wound care facilities, and uniform coding/programming was applied to all raw patient data. Statistical analyses were performed by one principal statistician (Virtu Stat Ltd; PA, USA) on the intention-to-treat (ITT) study population. Principled statistics were employed for all analyses: Kaplan-Meier (K-M) survival and life-table methods were used to show unadjusted results (i.e., with no statistical adjustments for patient, wound and treatment characteristics), and Cox proportional hazards regression (Cox) analyses provided adjusted results that served as sensitivity analyses. Primary, secondary and exploratory analyses were done using all patient data. In consideration of the real-world study design, adherence to our study plan, and results summarizing patient demographics, wound characteristics and treatment characteristics, we take the position that our sample of PRI patients treated with BLCC or FBCD is reasonably representative of the general PRI patient population treated with other wound care products in a variety of practice settings (i.e., outpatient and inpatient wound care environments). The external validity of our RWD CEA study results compares favorably to that of small, under-powered RCTs and certainly to observational case series. Under the conditions of the study, we regard the current RWD PRI findings as uniquely generalizable: interpretation of the results may be applied more broadly, to larger PRI populations beyond the study population. Given that PRIs pose significant management challenges and few effective therapies exist, alternative treatments to routine dressings and off-loading are needed.
PRIs are characterized by unique sequelae, and PRI patients demonstrate high rates of co-morbidities and mortality, which point to questions that may be addressed by RWD studies. Patients with PRIs generally heal poorly and are at risk for infection, cellulitis, osteomyelitis, sepsis and death. Development of PRIs is associated with poor overall prognosis [12]. An increased risk of death has been associated with the presence of PRIs; however, the PRI may be a sign of the severity of underlying comorbidities rather than an independent predictor of mortality [53]. With real-world datasets spanning hundreds of wound care facilities and thousands of patients, Cox modeling methods requiring extremely large databases of patient and wound covariates could be applied to identify statistically significant risk factors (positive or negative) for clinical outcomes. RWD CEAs are well suited to identify the complex relationships between comorbidities, disease sequelae, PRI pathophysiology, wound healing outcomes and patient-reported/patient-centric outcomes (PROs/PCOs) [29].

Of note, like all retrospective analyses, this RWD study introduces 'noise' into the clinical study environment that is diminished in prospective RCTs. A limitation of this study is that electronic medical record databases often are not developed for effectiveness research purposes [54]. Differences between individual treatment centers exist, and even with the WoundExpert EMR data collection system, uniform data reporting is not actively monitored (as it would be in an RCT) [35]. The possibility of patient selection bias did exist: because randomization was not done in this RWD CEA study, selection of patients for BLCC or FBCD was made on site by the treating clinician at the wound care facility. However, the use of ITT principles led to the largest numbers of centers (n = 315), patients (n = 1182) and wounds (n = 1352) available in our database contributing to the results. Given that 1352 PRIs were assessed over 36 weeks, it was unlikely that clinician bias or any imbalance between groups in potential risk factors for healing affected the study results. Additionally, the Cox analyses used to determine wound closure outcomes adjusted for multiple covariates and corrected for any imbalances between groups that might have arisen based on entry criteria [55]. At the same time, reliance on eCRFs to capture RWD offers advantages in collecting patient data, such as enabling longitudinal analyses of log-fold greater numbers of patients over significantly longer periods of time than in RCTs.

Conclusion
RWD CEA analyses of secondary databases are expected to become increasingly important as tools to inform clinicians, regulatory bodies, third-party payers and other policy makers on the comparative benefits of wound treatments [46,47,56]. These real-world data showed that BLCC, compared with FBCD, significantly improved the probability, speed and incidence of wound closure in PRIs.

Summary points
• This is the first comparative effectiveness assessment (CEA) research study comparing the clinical outcomes of a bilayer living cellular construct (BLCC) and an acellular fetal bovine collagen dressing (FBCD) for the treatment of pressure injuries (PRIs) in a real-world setting.
• Treatment with BLCC significantly improves the incidence and speed of PRI wound closure compared with FBCD.
• The effectiveness of BLCC in these analyses was supportive of the efficacy results from the pivotal BLCC trials in venous leg ulcers (VLUs) and diabetic foot ulcers (DFUs).
• BLCC showed a 66% greater probability of wound closure when compared with FBCD.
• BLCC demonstrated a 36% acceleration in time to wound closure, a 59-day improvement compared with FBCD.
• The incidence of wound closure was superior with BLCC versus FBCD at all time points in the study.
• Improvements in the probability, speed and incidence of wound closure in PRIs treated with BLCC compared with FBCD showed clinical effectiveness benefits.
• Limitations include: real-world data studies introduce 'noise' into the clinical study environment that is diminished in prospective RCTs, EMRs often are not developed for effectiveness research, and differences between individual treatment centers exist.
• Electronic healthcare databases, when used in real-world data comparative effectiveness research studies, offer the significant advantages of robust data sources, large study populations and extended observation periods.

Financial disclosure
This study was funded by Organogenesis, Inc. The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed.

Table 1. Patient, wound and treatment characteristics.
2024-02-14T06:18:32.756Z
2024-02-13T00:00:00.000
{ "year": 2024, "sha1": "4f5afc9234cb6cb26083878ba87ef88ac312fed7", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.57264/cer-2023-0109", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0d4843968b881e8958bb47611b4f94d673a22685", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
262721188
pes2o/s2orc
v3-fos-license
Validation of LC-MS/MS methods for determination of remdesivir and its metabolites GS-441524 and GS-704277 in acidified human plasma and their application in COVID-19 related clinical studies
Remdesivir (RDV) is a phosphoramidate prodrug designed to have activity against a broad spectrum of viruses. Following IV administration, RDV is rapidly distributed into cells and tissues and simultaneously metabolized into GS-441524 and GS-704277 in plasma. LC-MS/MS methods were validated for determination of the three analytes in human plasma; two key aspects guaranteed their precision, accuracy and robustness. First, instability issues of the analytes were overcome by treating the plasma samples with diluted formic acid (FA). Second, a separate injection for each analyte was performed, with different ESI modes and organic gradients, to achieve sensitivity and minimize carryover. Chromatographic separation was achieved on an Acquity UPLC HSS T3 column (2.1 × 50 mm, 1.8 μm) with a run time of 3.4 min. The calibration ranges were 4-4000, 2-2000, and 2-2000 ng/mL, respectively, for RDV, GS-441524 and GS-704277. The intraday and interday precision (%CV) across validation runs at three QC levels for all three analytes was less than 6.6%, and the accuracy was within ±11.5%. The long-term storage stability in FA-treated plasma was established to be 392, 392 and 257 days at −70 °C, respectively, for RDV, GS-441524 and GS-704277. The validated method was successfully applied in COVID-19 related clinical studies.

Since its outbreak in December 2019, the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2, the cause of COVID-19) has become a worldwide pandemic. RDV was first found to have activity against SARS-CoV-2 in in vitro testing [16], and then showed clinical improvement against COVID-19 in its compassionate use for patients with severe symptoms of COVID-19 infection [17]. In a NIAID-supported, randomized, controlled clinical trial evaluating the safety and efficacy of the investigational antiviral remdesivir in hospitalized adults diagnosed with coronavirus disease 2019 (COVID-19), conducted at multiple locations globally, remdesivir proved superior to placebo in shortening the time to recovery in adults hospitalized with COVID-19 who had evidence of lower respiratory tract infection [18]. Currently, multiple clinical trials of RDV are under way at sites in different geographic locations to assess its effectiveness in broader patient populations. In these clinical studies, accurate determination of the prodrug RDV and its major metabolites, GS-441524 and GS-704277, in human plasma is critical for appropriate characterization of the pharmacokinetics (PK) and pharmacodynamics (PD) of RDV and its metabolites. To our knowledge, there is currently only one publication, by Avataneo et al. [19], on the validation of a bioanalytical method for remdesivir and GS-441524 quantification in human plasma. That paper mentioned stability issues for RDV and GS-441524, noting the lack of stability of RDV in plasma at room temperature (RT) and 4 °C within 24 h; furthermore, no degradation was observed for GS-441524 after heat treatment of the plasma. However, very limited experimental details were provided, and it was not clear whether both RDV and GS-441524 were present in the stability QC samples tested. The intermediate metabolite, GS-704277, which could be important in understanding the stability of RDV and GS-441524, was not mentioned.
Furthermore, the authors used a 2-in-1 method but did not address carryover issues, especially for the less polar RDV. Early in our development of methods for determination of RDV and its metabolites in plasma (rat, dog, monkey and human), however, experimental data showed temperature- and pH-dependent instability. Furthermore, degradation of RDV always led to observable increases in GS-441524 and GS-704277, and degradation of GS-704277 always led to observable increases in GS-441524. The conversion scheme of RDV to the intermediate metabolite, GS-704277, and the stable metabolite (parent), GS-441524, in plasma is shown in Fig. 1. Though instability is expected for a prodrug designed to convert in vivo to an active metabolite, in bioanalytical method development this instability issue must be addressed to ensure the precision, accuracy and robustness of the method. Moreover, since GS-441524 and GS-704277 are much more polar than RDV, it is challenging to address the carryover issue for RDV if the same LC gradient is used for all three analytes, or even for two analytes (RDV and GS-441524); the carryover issue for RDV needs to be addressed separately. In this paper, we present the method development and validation of an LC-MS/MS method for determination of RDV and its major metabolites GS-441524 and GS-704277 in acidified human plasma, as well as the method's application in clinical studies.

Pooled and individual human plasma, hemolyzed and lipemic human plasma, and human whole blood (all with K2EDTA as anticoagulant) were obtained from Bioreclamation (Bioreclamation IVT, Westbury, NY). HPLC-grade water, methanol and dimethyl sulfoxide (DMSO) were also used.

Preparation of primary stock solutions
For test articles, two primary stock solutions from independent weighings by two different scientists were prepared and verified to be within 5.0% of each other. For ISs, one primary stock solution was prepared. The concentration of each stock solution was calculated using the corresponding correction factor for the reference standard provided in the certificate of analysis (the factor required to convert the mass of reference material weighed to the mass of the analyte free base or free acid that it contains). Table 1 lists the concentrations, solvents and correction factors for primary stock solution preparation. Stock solutions were stored at −20 °C and protected from light.

IS working solutions: The appropriate amount of GS-829143/GS-829466/GS-828840 stock solution was added to a volumetric flask. The volumetric flask was filled to volume with methanol:water:FA at 50:50:0.1 (v:v:v), mixed well, and the solution was stored in an appropriate reagent bottle at approximately −20 °C until analysis.

Standard spike-in solution: The appropriate volume of stock solution was combined with the appropriate volume of acetonitrile:dimethyl sulfoxide at 50:50 (v:v) to make a standard spike-in solution with GS-5734/GS-441524/GS-704277 concentrations of 160/80/80 μg/mL.

Calibration standards and quality control samples for method validation
Calibration standards were prepared in pooled FA-treated plasma. QC samples were prepared in FA-treated plasma from different stock solutions of RDV, GS-441524 and GS-704277 at five concentration levels: 4/2/2 (lower limit of quantification, LLOQ); 12/6/6 (low quality control, LQC); 200/100/100 (low middle quality control, Low MQC); 1600/800/800 (high middle quality control, High MQC); and 3200/1600/1600 ng/mL (high quality control, HQC).
Calibration standards and QC samples were stored at −70 °C until use, except that freshly prepared calibration standards were used for assessments of bench-top, freeze/thaw cycle and long-term frozen storage stability of RDV, GS-441524 and GS-704277 in FA-treated plasma.

Clinical samples
Within 30 min of blood collection, human blood samples were processed by centrifugation at ~1500 g (3000 rpm) for 10 min at 4 °C to obtain plasma. Next, 500 μL of each plasma sample was immediately transferred into a corresponding clean polypropylene tube containing 40 μL of the 20% FA solution and mixed well. Immediately thereafter, and within 1 h of blood collection, the polypropylene tubes were placed upright on dry ice prior to transfer to a −70 °C freezer for storage before shipping. These clinical study FA-treated plasma samples were kept frozen at −70 °C during shipping and storage until analysis.

Sample processing
Prior to analysis, all frozen clinical study samples, calibration standards and QC samples were thawed and allowed to equilibrate in an ice bath, and then vortex-mixed for approximately 1 min before pipetting. Samples were kept in an ice bath during the processing steps. For sample processing and pretreatment, 50 μL aliquots of plasma samples, calibration standards or QC samples were added to separate wells of an appropriately labeled 96-well extraction plate. 50 μL of IS was spiked into the Blank + IS, calibration standard, QC (and system suitability test) samples.

During method development, the individual stability of RDV, GS-441524 and GS-704277 at the LQC (12, 6, 6 ng/mL) and HQC (3200, 1600, 1600 ng/mL) concentrations in 20% FA-treated pooled K2EDTA human plasma was compared with that in untreated pooled K2EDTA human plasma. Analyte:IS peak area ratios (n = 3) after incubation at either room temperature (RT) or 4 °C were determined by LC-MS/MS for assessment of stability. Stability was further confirmed during method validation, as described below.

Liquid chromatographic conditions
The chromatographic analysis was performed on an Acquity UPLC HSS T3 column (2.1 × 50 mm, 1.8 μm; Waters, Milford, MA). Tables 2 and 3 list the optimized gradients for each analyte, which was injected separately, together with the combined mobile phase flow rate. The source-dependent parameters maintained for the three analytes are shown in Table 4. Analyst® software version 1.4.1 was used for LC-MS/MS parameter control and data collection.

Bioanalytical method validation
Validation of the method for determination of RDV, GS-441524 and GS-704277 in FA-treated plasma followed the FDA and EMA guidelines [20,21]. Calibration and linearity, precision and accuracy, dilution linearity, selectivity, matrix effect, injection carryover, extraction recovery, the effect of hemolysis and the effect of lipemia were evaluated. Experiments were also conducted to evaluate the stability of RDV, GS-441524 and GS-704277 in FA-treated plasma samples stored on wet ice, carried through freeze/thaw cycles, and following long-term storage (−20 °C and −70 °C). RDV, GS-441524 and GS-704277 stability was further assessed in human whole blood and in processed samples. To accommodate the possible need for decontamination of samples from virus-infected individuals (e.g., Ebola), stability under standard gamma-ray exposure procedures known to inactivate such viruses, both on the tube exterior and within the tube interior contents, was also assessed.

Means, standard deviations, and values of %CV (coefficient of variation) and %RE (relative error) were calculated by standard statistical methods; except where specifically stated, the nominal and observed concentrations were used for calculation of %RE. Unless otherwise stated, the %Diff of a determined value from a reference value was calculated as [(determined value) − (mean reference value)]/(mean reference value), expressed as a percentage. These acceptance metrics are illustrated in the short sketch below.
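A minimal sketch of the three acceptance metrics defined above (%CV, %RE and %Diff); the function names and the example replicate values are illustrative assumptions:

```python
import statistics

def pct_cv(values):
    """Coefficient of variation: sample SD as a percentage of the mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

def pct_re(observed_mean, nominal):
    """Relative error of the mean versus the nominal concentration."""
    return 100 * (observed_mean - nominal) / nominal

def pct_diff(determined, mean_reference):
    """Difference of a determined value from a reference mean."""
    return 100 * (determined - mean_reference) / mean_reference

# Example: six LQC replicates (ng/mL) against a 12 ng/mL nominal value.
reps = [11.8, 12.4, 11.5, 12.9, 12.1, 11.7]
print(f"%CV = {pct_cv(reps):.1f}, %RE = {pct_re(statistics.mean(reps), 12):.1f}")
```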
Calibration and linearity
The linearity of the method was determined by analysis of standard plots associated with an eight-point standard calibration curve. Eight non-zero calibration standards were analyzed in each of the three precision and accuracy batches. Peak area ratios of analyte:IS obtained from MRM analysis of the chromatograms of the calibration standards and their corresponding nominal concentrations were used to construct calibration curves, using weighted (1/x²) linear least-squares regression. Back-calculations were made from the curve equations to determine the concentration of each analyte in each individual calibration standard sample. A correlation coefficient (r²) greater than 0.99 was required for each calibration curve to be acceptable. The lowest standard on the calibration curve was accepted as the lower limit of quantitation (LLOQ), at which the analyte response (peak area ratio) was required to be at least five times greater than the response at the same retention time from drug-free (blank) extracted plasma. In addition, the analyte peak of the LLOQ sample needed to be identifiable, discrete and reproducible, with a mean precision (%CV) not greater than 20.0% and a mean accuracy (%RE) within 80.0-120.0% of its nominal concentration. The deviation of the mean back-calculated concentrations of individual standards other than the LLOQ standard needed to be within ±15.0% of the corresponding nominal concentrations. A sketch of this regression and back-calculation is given after the precision and accuracy criteria below.

Precision and accuracy
Precision and accuracy of the method were evaluated by analyzing QC sample replicates (n = 6) at five different nominal analyte concentrations across the standard curve range. Intraday precision and accuracy were determined by analyzing six replicate aliquots of the QC samples prepared at five concentrations (LLOQ QC, LQC, Low MQC, High MQC and HQC) in each of the three precision and accuracy runs. Interday precision and accuracy were determined by analyzing six replicate aliquots at all QC concentrations over the three independent precision and accuracy runs. The observed mean, %CV and %RE were calculated at each QC level for all three analytical runs. The acceptance criteria for both intraday and interday precision and accuracy required the %CV to be ≤15.0% and the %RE of the mean to be within ±15.0% of nominal, except for LLOQ QC samples, for which the acceptable %CV was ≤20.0% and the %RE of the mean within ±20.0% of nominal.
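The following is a minimal, self-contained sketch of the weighted (1/x²) linear least-squares calibration and back-calculation described above, with the ±15.0% (±20.0% at the LLOQ) back-calculation check; only the 4 and 4000 ng/mL endpoints of the RDV series are stated in the text, so the intermediate levels and the toy response values are illustrative assumptions.

```python
import numpy as np

# Assumed eight-point RDV standard series (ng/mL); only the endpoints are
# stated in the text. Toy analyte:IS ratios carry small hand-written errors.
x = np.array([4, 10, 40, 100, 400, 1000, 2000, 4000], dtype=float)
y = (0.0021 * x + 0.0008) * np.array([1.03, 0.98, 1.01, 0.99,
                                      1.02, 1.00, 0.98, 1.01])

# Weighted (1/x^2) linear least squares: minimize sum(w_i*(y_i - a - b*x_i)^2).
w = 1.0 / x**2
W, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
b = (W * Sxy - Sx * Sy) / (W * Sxx - Sx**2)   # slope
a = (Sy - b * Sx) / W                          # intercept

# Back-calculate each standard and apply the acceptance rule.
back = (y - a) / b
re = 100 * (back - x) / x
limits = np.where(x == x.min(), 20.0, 15.0)    # wider limit at the LLOQ
for conc, err, lim in zip(x, re, limits):
    print(f"{conc:7.0f} ng/mL  %RE = {err:+6.2f}  pass = {abs(err) <= lim}")
```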
Dilution integrity
To ensure accurate measurement of samples with concentrations above the upper limit of the standard curve, or of samples with limited volume, dilution integrity needed to be established. The dilution test was conducted to ensure that samples with concentrations above the upper limit of the standard curve could be diluted with blank matrix without affecting the final calculated concentration. A FA-treated plasma sample was prepared at one concentration of RDV, GS-441524 and GS-704277 (10,000/5000/5000 ng/mL, respectively) and diluted in five replicates at a dilution factor of 20 with pooled blank FA-treated plasma. For the dilution integrity results to be acceptable, the %RE of the determined concentrations of the diluted samples, after applying the dilution factor, had to be within ±15.0% of the nominal value before dilution, and the %CV could not exceed 15.0%.

Selectivity
The selectivity of the method towards endogenous plasma matrix components was assessed by extracting and analyzing six different individual lots of FA-treated plasma (i.e., each lot from a single donor) with no added analyte or IS. For the selectivity test to be acceptable, none of the six individual lots could show an interference peak area at the retention time of the analyte that was >20.0% of the mean analyte peak area at the LLOQ (4/2/2 ng/mL, respectively), and none of the six individual lots could show an interference peak area at the retention time of the IS that was >5.0% of the mean IS peak area.

Matrix effect
The matrix effect was determined in six different individual lots of FA-treated plasma at two analyte concentrations (12/6/6 and 3200/1600/1600 ng/mL, n = 3) for RDV/GS-441524/GS-704277 and at one concentration (400/200/200 ng/mL, n = 3) for their ISs. The matrix effect was evaluated by comparing the ratio of peak areas of solutions in the presence of the matrix to the peak areas of solutions in the absence of the matrix, which served as reference samples. The %CV of the mean IS-normalized matrix factor could not exceed 15.0% for the matrix effect to be considered acceptable and consistent across the validated assay range. In addition to normal matrix, the effects of lipemic FA-treated plasma and 5% hemolyzed FA-treated plasma on assay performance were examined at two analyte concentrations (12/6/6 and 3200/1600/1600 ng/mL, n = 3) for RDV/GS-441524/GS-704277 and at one concentration (400/200/200 ng/mL, n = 3) for their ISs. One lot of lipemic matrix and one lot of hemolyzed matrix were evaluated. For the results from the lipemic and hemolyzed plasma tests to be acceptable, the %RE of the five replicates needed to be within ±15.0% and the %CV could not exceed 15.0%.

Carryover
An extracted blank sample was inserted in the injection sequence after the highest calibration standard (ULOQ) from both the first and second sets of calibration standards, and injection volumes (10 μL) were constant for all samples. Carryover was defined as minimal if the peak area of the analyte observed in the first and second carryover blanks was less than 20.0% of the corresponding analyte peak area observed in the lowest calibration standard.

Protein precipitation recovery
The recovery test was conducted to evaluate the efficiency of the protein precipitation extraction process. Recovery was determined at three standard concentrations (12/6/6, 200/100/100 and 3200/1600/1600 ng/mL, n = 5) for RDV/GS-441524/GS-704277. A recovery test for the IS was not required since a stable-isotope label was used, and the results are therefore expected to be similar to those of the unlabeled analyte. The recovery of the analytes was evaluated by comparing the mean peak areas from the analyte added to and recovered from the biological matrix (extracted samples) to the peak areas from sample extracts spiked at the nominal analyte concentrations (post-extract spiked samples).
The %CV of the results at the three concentrations tested could not exceed 20.0%.

Benchtop (ice bath) stability. Benchtop stability of RDV, GS-441524 and GS-704277 in FA-treated K2EDTA human plasma was tested to evaluate analyte stability in the matrix in an ice bath during sample handling and processing. Stability was determined at two concentrations (12/6/6 and 3200/1600/1600 ng/mL for RDV/GS-441524/GS-704277). The samples were stored in an ice bath for 8 h prior to extraction. The determined concentration at each level could not exceed ±15.0% RE from the nominal concentration, and the %CV of the determined concentrations at each level could not exceed 15.0%.

Freeze/thaw stability. Freeze/thaw stability was tested to evaluate the stability of RDV, GS-441524 and GS-704277 in FA-treated K2EDTA human plasma after five freeze/thaw cycles. Stability samples at two concentrations (12/6/6 and 3200/1600/1600 ng/mL for RDV/GS-441524/GS-704277) were frozen at −20 °C or −70 °C (for a minimum of 24 h for the first cycle and a minimum of 12 h for the other cycles) and thawed in an ice bath. After completion of the fifth cycle, the samples were analyzed. The determined concentrations at each level could not exceed ±15.0% RE from the nominal concentration, and the %CV of the determined concentrations at each level could not exceed 15.0%.

Processed sample stability. Processed sample stability was tested to ensure that the integrity of the processed samples from an analytical run would be maintained if those samples were stored for the specified time interval prior to injection. Processed sample stability was determined at 4 °C. All replicates of the low and high QC samples (PSS QCs) of a valid run were kept refrigerated. When evaluating processed sample stability, the PSS QCs were injected into the LC-MS/MS system along with newly extracted calibration standards and quality control samples.

Whole blood stability. Stability of RDV, GS-441524 and GS-704277 in K2EDTA human whole blood was evaluated to ensure the stability of the analytes during the sample collection process. Human blood was pre-incubated at 37 °C for approximately 20 min. RDV, GS-441524 and GS-704277 were spiked into the pre-incubated K2EDTA whole blood at 12/6/6 and 3200/1600/1600 ng/mL in triplicate within 4 h of collection. The spiked whole blood stability samples were incubated at 37 °C for 10 min to reach equilibrium, transferred to plastic culture tubes, and then held in an ice bath for 0, 1, 2 and 4 h before centrifugation in a refrigerated (4 °C) centrifuge for approximately 10 min at 1600 g. A 500 μL aliquot of each resulting plasma sample was transferred into a corresponding plastic culture tube, 40 μL of formic acid solution was added to each, and the tubes were vortex-mixed well. Aliquots of the samples were subjected to the standard sample processing procedure, and stability for each analyte was evaluated using the analyte-to-IS peak area ratio as a function of the ice bath storage time of the spiked whole blood samples.

Analyte stock solution stability. Solution stability for each analyte was tested to evaluate analyte stability in the stock solutions that were used to prepare calibration standards, QCs and other validation samples.
Stock solution storage stability, in either acetonitrile:dimethyl sulfoxide at 50:50 (v:v) or in water, was evaluated by comparing the response of a stock kept at −20 °C to the response of a freshly prepared solution (from powder or sealed ampule) serving as the reference solution. The reference solution had to be used within one day of its preparation. Similarly, stability of a stock solution stored at ambient temperature was determined by comparing its response initially to the response of a freshly prepared reference stock, and later to the response of the reference stock stored at −20 °C, once its stability at −20 °C had been confirmed for the specified duration. The solution maintained in the freezer, or the freshly prepared stock solution, served as the reference for the ambient-temperature stock solution stability evaluation. For a solution to be considered stable, the %CV of responses from replicate determinations (n = 3) of both the test and reference solutions could be no greater than 15.0%, and the %Diff between the mean responses of the test and reference solutions could be no more than ±10.0%.

Long-term storage stability in matrix. Long-term storage stability was evaluated to ensure that RDV, GS-441524 and GS-704277 in FA-treated K2EDTA human plasma were stable after storage at −20 °C or −70 °C. The stability samples were initially analyzed once to verify that they had been prepared correctly. For the verification assessment, the %CV of the calculated concentrations at each level could not exceed 15.0%, and the %RE calculated for the mean of the determinations, using the observed and nominal concentration values, had to be within ±15.0%. For subsequent stability timepoint evaluations, the same acceptance criteria for precision (%CV) and accuracy (%RE) as in the initial verification assessment were applied.

Gamma-ray irradiation stability in matrix. Gamma-ray irradiation stability was evaluated to ensure that RDV, GS-441524 and GS-704277 in FA-treated K2EDTA human plasma were stable after being subjected to gamma-ray irradiation, used as a means of destroying virus species such as Ebola. The stability samples were first analyzed once to verify that they had been prepared correctly, as described for long-term storage stability. Two sets of the stability samples were then shipped frozen to the NIH NIAID Integrated Research Facility for gamma-ray irradiation: one set was not irradiated and served as the control, whereas the other set was subjected to gamma-ray irradiation at the minimum required dose of 5 Mrad, which is sufficient to inactivate Ebola virus and coronavirus in a sample with 1 × 10⁶ focus-forming units (FFU)/mL [22]. Both sets of samples were then shipped back to QPS, LLC for analysis. The acceptance criteria for adequate stability were that the %CV and %RE, calculated for the mean of the determinations using the observed and nominal concentration values, must be within ±15.0%.

Investigation of plasma acidification to stabilize concentrations of GS-5734, GS-441524, and GS-704277
The stability of GS-5734 and GS-441524, and especially the stability of GS-704277, in FA-treated pooled human plasma was evaluated and compared with the stability in untreated plasma. Fig. 2 shows RDV and GS-441524 stability in untreated human plasma, measured by the peak area ratios RDV/[13C3]-RDV and GS-441524/[13C3]-RDV, at both room temperature (RT) and 4 °C.
At the respective LQC concentrations for RDV and GS-441524 (12 and 6 ng/mL, both in the same sample), stability issues were observed. After 24 h, RDV had decreased >80% at RT and 13% at 4 °C, while GS-441524 decreased 4% and 6%, respectively (Fig. 2A). At the respective HQC concentrations for RDV and GS-441524 (1600 and 800 ng/mL), a similar pattern was observed: after 24 h, RDV had lost more than 48% at RT and 5% at 4 °C, while GS-441524 increased 5% and 0.4%, respectively (Fig. 2B). Based on these results, conditions were sought: 1) to prevent potential conversion of RDV to GS-704277 and GS-441524; and 2) to prevent potential conversion of GS-704277 to GS-441524.

The effect of adding FA to human plasma on the stability of RDV and GS-441524 is shown in Fig. 3A and Fig. 3B, in which RDV alone was spiked at 4000 ng/mL into blank pooled K2EDTA plasma; aliquots were withdrawn at the indicated incubation times (at 4 °C or room temperature) and analyzed by the developed method, with the IS added when the sample was extracted. As shown in Fig. 3A, in 24 h at 4 °C the RDV peak area ratio (RDV:[13C3]-RDV) decreased only 9% in FA-treated plasma, whereas the peak area ratio decreased by more than 20% in the untreated plasma sample, and at RT it decreased by more than 60%. For the same samples, Fig. 3B shows the observed GS-441524 concentrations, initially at or near zero at time 0: after 24 h at 4 °C they were 1.43 ng/mL in FA-treated plasma and 1.90 ng/mL in untreated plasma, but after 24 h at RT they had increased significantly, to 56 ng/mL. Fig. 3C shows the stability of GS-704277 (6 ng/mL) in FA-treated human plasma at 4 °C and RT, measured by the GS-704277/[13C3]-GS-704277 peak area ratio observed upon analysis by the developed method. The data show that GS-704277 is only moderately stable in FA-treated plasma at 4 °C (<10% decrease in ~8 h) and less stable at RT (~10% loss in ~2 h). Therefore, for accurate determination of GS-704277 itself, a FA-treated plasma sample should be analyzed after storage for 8 h or less at 4 °C and less than 2 h at RT. Also, depending on the relative concentrations of GS-704277 and GS-441524 in a sample, the accuracy of GS-441524 determinations could be affected by its generation from GS-704277.

These stability study results demonstrate the need for stabilization of clinical samples upon collection. Results from plasma stability studies of RDV, GS-704277 and GS-441524 further confirmed the need for FA as a stabilizing agent, to prevent conversion of RDV to GS-704277 and conversion of existing or newly formed GS-704277 to GS-441524 during the sample collection, storage and analysis processes. Human plasma samples with K2EDTA as anticoagulant (K2EDTA plasma) were treated immediately upon collection as described above. Such acidification struck a suitable balance among inhibition of endogenous esterase activities, reagent acceptability for clinical sites, and prevention of acid-related plasma sample gelling, and it had been used successfully for similar prodrugs [23]. As a prodrug, RDV was designed to be subject to hydrolysis by endogenous esterases [24], and previous work had shown that the known esterase inhibitor dichlorvos was effective in minimizing esterase hydrolysis of RDV in animal plasma samples. However, the toxicological effects of dichlorvos [25] precluded its use as a stabilizer at many clinical sites.
Moreover, the specificity of dichlorvos, or any other esterase inhibitor, toward the multiple esterases that might be present in a sample was a concern. Previous success with FA addition for ester prodrug stabilization, together with observations that both the final sample pH and the concentration of the added FA can affect the onset of gelling (coagulation) of plasma, led to experiments showing that acidification of human K2EDTA plasma to a pH of <4.7 caused gelation after ≤24 h, whereas acidification to a pH of 5.3 caused no gelation. Furthermore, both the concentration and the corresponding volume of acid added to achieve a plasma pH of 5.3 were important: addition of 40 μL of 20% aqueous FA to 500 μL of plasma resulted in no gelling, whereas addition of lower volumes of higher FA concentrations caused time-dependent gelling, and addition of a very low volume of full-strength (88%) FA caused instantaneous gelling. Therefore, upon collection at clinical sites, plasma samples were treated with 20% FA, and the calibration standards and QC samples used for method validation and sample analysis were also prepared in FA-treated human K2EDTA plasma.

LC-MS/MS conditions optimization for each individual analyte
GS-5734, GS-441524 and especially GS-704277 are polar compounds that are difficult to retain on a reversed-phase column. Hydrophilic interaction chromatography (HILIC), which typically runs with a very high organic mobile phase, was tested. However, we found that GS-5734 and GS-441524 could not be retained under HILIC conditions on any of the multiple column types tested, while GS-704277 always showed very broad retention without a reasonable peak. An Acquity UPLC HSS T3 (C18, 1.8 μm) column, whose stationary phase was designed to be compatible with aqueous mobile phases and to retain and separate small, water-soluble polar organic compounds [26], was selected based on both the preliminary assessments and our previous experience with a variety of related nucleoside drugs and prodrugs. Though standards and QCs were prepared as 3-in-1 solutions and a single protein precipitation extraction was performed, three separate injections for GS-5734, GS-441524 and GS-704277 from the same processed sample plate were made, with three separate organic gradients as shown in Tables 2 and 3, for the following considerations. 1) Different ESI modes. GS-704277 showed much higher sensitivity (4-5-fold) with negative-mode detection. For GS-5734, the sensitivities in positive and negative modes were similar; GS-441524, however, had very low sensitivity in negative mode that would not meet the target LLOQ, so it had to be detected in positive mode. Because GS-704277 required negative mode for sensitivity, whereas GS-441524 required positive mode, the 3-in-1 assay required three separate injections for GS-5734, GS-441524 and GS-704277, respectively. Regarding instrument sensitivity, the API-6500 was found to have no better (in fact lower) sensitivity than the API-5000 for GS-704277 in negative mode; in positive mode, the API-5000 and API-6500 had the same sensitivity for GS-704277. We concluded that GS-704277 had the best sensitivity on the API-5000 in negative MRM mode. 2) Different LC gradients. When keeping both GS-5734 and GS-441524 within the same chromatographic run (injection), the starting mobile phase had to contain very little organic solvent (5%) in order for GS-441524 to be retained on the column, which caused GS-5734 to show significantly higher and potentially unacceptable carryover.
Separation of GS-5734 and GS-441524 into two separate LC gradients (injections) enabled the use of a starting mobile phase with high organic content (60%) for the GS-5734 chromatography, which minimized GS-5734 carryover. GS-704277, owing to its high polarity, required an initially 100% aqueous running buffer, then a gradual organic gradient reaching 65% organic in 1.6 min, followed by 100% aqueous-phase equilibration for 0.8 min.

The results of the three precision and accuracy batch runs are presented in Table 5 and show that the precision and accuracy of the method were within the aforementioned acceptance criteria. All the other validation assessments (calibration and linearity, dilution linearity, selectivity, matrix effect, extraction recovery, effect of hemolysis and effect of lipemia) also passed the acceptance criteria, and those results are likewise presented in Table 5. These results are straightforward and are not discussed in further detail. Fig. 4A, Fig. 4B and Fig. 4C show the chromatograms from an LLOQ sample for RDV, GS-441524 and GS-704277, respectively.

Carryover
Autosampler injection carryover was evaluated by injection of an extracted blank matrix sample (containing neither analyte nor IS) immediately after injection of the highest calibration standard (ULOQ) extract. Carryover was calculated as the peak area observed in the carryover blank, expressed as a percentage of the mean peak area of the lowest calibration standard determined in the same run. Ideally, the carryover value is <20% of the mean LLOQ analyte peak area. For 63 of 65 runs, the peak area of RDV observed in the first and second carryover blanks was less than 20.0% of the corresponding observed LLOQ peak area. For GS-704277, injection carryover for the two carryover replicates was 20.6% and 23.7% in Run 12, 25.5% and 22.5% in Run 15, and 22.3% and 38.2% in Run 58. For GS-441524, injection carryover was 32.6% in Run 14. In the application of the method, based on the maximum level of carryover observed, each sample was assessed against that possible carryover amount: if a sample was deemed potentially impacted by carryover, i.e., if the calculated carryover contribution from the preceding sample to a study sample was more than 0.05 (i.e., 5%), the affected sample was re-analyzed. Carryover was less than 5.0% for the IS, and the injection carryover of the IS met the acceptance criteria in all runs.
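A minimal sketch of the sample-level carryover screen described above; the exact formula the laboratory used is not given, so estimating the carried-over contribution as (preceding sample's peak area) × (maximum observed carryover fraction) is an assumption:

```python
def flag_carryover(peak_areas, max_carryover_fraction):
    """Flag injections for re-analysis when the estimated contribution from
    the preceding injection exceeds 5% of the sample's own response.

    peak_areas: analyte peak areas in injection order.
    max_carryover_fraction: worst-case carryover seen in the run's blanks,
        expressed here as a fraction of the preceding peak area (an assumed
        reading of "the maximum level of carryover observed").
    """
    flagged = []
    for i in range(1, len(peak_areas)):
        carried = peak_areas[i - 1] * max_carryover_fraction
        if carried > 0.05 * peak_areas[i]:
            flagged.append(i)  # re-analyze this injection
    return flagged

# Example: a high sample followed by a near-LLOQ sample.
print(flag_carryover([5.0e6, 2.0e4, 1.5e6], max_carryover_fraction=0.002))
# -> [1]: 5e6 * 0.002 = 1e4, which is 50% of the 2e4 response.
```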
Stability assessment
Analyte stability is a function of the chemical properties of the analyte, the matrix in which the analyte is stored, and the storage conditions. Table 6 summarizes the analyte stability data established in the method validation. The stability of each analyte was always assessed in the presence of the other two analytes, to mimic clinical applications, and the tests evaluated analyte stability in situations likely to be encountered during actual sample handling and analysis. RDV, GS-441524 and GS-704277 stability in FA-treated K2EDTA human plasma was demonstrated for 8 h in an ice bath and for five freeze/thaw cycles at −70 °C. GS-5734 and GS-441524 stability in FA-treated K2EDTA human plasma was demonstrated for five freeze/thaw cycles at −20 °C, for 3 days of long-term storage at −20 °C, for at least 392 days of long-term storage at −70 °C, and upon sample exposure to gamma-ray irradiation. GS-704277 stability in FA-treated K2EDTA human plasma was also demonstrated for at least 257 days of long-term storage at −70 °C.

Freeze/thaw stability results for GS-704277 in formic acid-treated K2EDTA human plasma after five freeze/thaw cycles at −20 °C did not meet the acceptance criteria in one run and in its repeat run. For these failed tests, it was noticed that samples were gelling as they were being used for extraction. Freeze/thaw stability results for GS-5734, GS-704277 and GS-441524 in formic acid-treated K2EDTA human plasma after five freeze/thaw cycles at −70 °C met the acceptance criteria. Additional long-term storage stability tests at −20 °C were conducted because of these failures at −20 °C: the LQC and HQC for GS-704277, and the HQC for both GS-5734 and GS-441524, in formic acid-treated K2EDTA human plasma did not meet the acceptance criteria after long-term storage at −20 °C for at least 31 days in 3 of 11 runs. For these failed tests, it was again noticed that samples were gelling as they were being used for extraction. This sample gelling was observed only in samples stored at −20 °C. Based on these data, it is recommended that study samples be stored at −70 °C only.

Clinical application
Plasma sample treatment with FA stabilized the analytes and improved the robustness of the method. This, together with the individually optimized LC gradient and ESI mode for each analyte, ensured the successful validation of the method, which was then applied in multiple clinical studies of RDV as a treatment for COVID-19. Fig. 5A and Fig. 5B show the mean plasma concentration-time profiles of RDV, GS-441524 and GS-704277 (on log and linear scales, respectively) following a 150-mg intravenous infusion of RDV in lyophilized formulation over a 2-h period, from Gilead study GS-US-399-1812 [27]. As expected, the RDV plasma concentration reaches a mean Cmax of 2720 ng/mL immediately after the end of infusion and decreases rapidly to the LLOQ (4 ng/mL) by 5 h postdose. GS-441524, the stable metabolite, shows a mean Cmax of 148 ng/mL at 4 h and a long half-life of ~25 h. GS-704277, the intermediate metabolite, peaks at ~2 h with a mean Cmax of 230 ng/mL but drops to the LLOQ (2 ng/mL) by 10 h. The performance of the method proved excellent in multiple RDV-related clinical studies. In the first human study, involving an IV dose of 200 mg RDV, or placebo to match (PTM), administered IV on the first day, followed by 100 mg RDV, or PTM, daily for 4, 9, or 13 days, 952 plasma samples were analyzed in 35 runs. The overall %CV values from the results of duplicate analyses per run of each of 4 QC samples, with concentrations that spanned the calibration curve, ranged from 3.6% to 6.5% for GS-5734, from 5.7% to 10.0% for GS-441524, and from 4.5% to 11.8% for GS-704277; the %RE values ranged from 0.9% to 8.0% for GS-5734, from −0.6% to 6.8% for GS-441524, and from 0.6% to 11.0% for GS-704277. Recently, results of such plasma sample analyses were included in regulatory submissions that resulted in approvals in the US, Japan and the EU for use of remdesivir (Veklury®) as a treatment for COVID-19.

Conclusions
The LC-MS/MS bioanalytical method for the determination of concentrations of RDV, GS-441524 and GS-704277 in FA-treated K2EDTA human plasma was validated successfully with respect to linearity, sensitivity, accuracy, precision, dilution, selectivity, hemolyzed plasma, lipemic plasma, batch size, recovery, matrix effect and carryover.
Since RDV can be hydrolyzed to its metabolites in untreated human plasma samples, it was important to stabilize it upon sample collection by adding FA in the appropriate amount, concentration and FA:plasma ratio. This avoided overestimation of GS-704277 and GS-441524 concentrations, especially when relatively high RDV concentrations were present in a sample (e.g., typically 1-2 h after administration of RDV). The stability of all the analytes in K2EDTA human plasma samples treated with FA solution was thus established for processed sample stability, benchtop stability in plasma, freeze/thaw stability in plasma, benchtop stability in whole blood, and long-term frozen storage stability in plasma. In addition, the individually optimized LC gradient for each analyte avoided the carryover issues that would arise if a single LC gradient were used for all analytes. Detecting GS-704277 in negative ion mode gave better sensitivity than detecting it in positive ion mode, whereas positive mode was better for both GS-5734 and GS-441524. Overall, the validated method was precise, accurate, reproducible and robust enough for its application in the multiple clinical studies that were the basis of a preliminary new drug application for RDV.

Declaration of competing interest
The authors have declared that no conflicts of interest exist.
2021-01-26T14:06:48.838Z
2021-01-25T00:00:00.000
{ "year": 2021, "sha1": "1b2175fdd87140e568247d17488923b5f3b255d2", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.ab.2021.114118", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "6540b1e0880b3dcd7443298d3cf6f8fc5e823345", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
265649474
pes2o/s2orc
v3-fos-license
Host–Parasitoid Phenology, Distribution, and Biological Control under Climate Change
Climate change raises a serious threat to global entomofauna—the foundation of many ecosystems—by threatening species preservation and the ecosystem services they provide. Already, changes in climate—warming—are causing (i) sharp phenological mismatches in host–parasitoid systems by reducing the window of host susceptibility, leading to early emergence of either the host or its associated parasitoid and reducing the fitness and abundance of the mismatched species; (ii) shifts in arthropods' expansion ranges towards higher altitudes, making migratory pest infestations more likely; and (iii) reduced effectiveness of biological control by natural enemies, leading to potential pest outbreaks. Here, we provide an overview of the consequences of warming for the biodiversity and functionality of agroecosystems, highlighting the vital role that phenology plays in ecology. We also discuss how phenological mismatches affect the efficacy of biological control, since an accurate description of the stage differentiation (metamorphosis) of a pest and its associated natural enemy is crucial for knowing the exact time of host susceptibility/suitability, or the stage at which parasitoids are able to optimize their parasitization or performance. Initiatives addressing landscape structure/heterogeneity, reduction of pesticide use, and modelling approaches are urgently needed in order to safeguard populations of natural enemies in a future, warmer world.

Introduction
As greenhouse gas emissions increase, climate models project that the world's average temperature will rise by between 2.1 °C and 3.9 °C by the end of the 21st century [1,2]. Extreme heat waves, droughts, and rainfall events across regions and sectors are likely outcomes of global warming [3,4], raising serious threats to global biodiversity [5-10].

Pollution, the increased frequency of extreme events, and altered weather patterns are important drivers of insect populations [11-13], exposing them to unprecedented, challenging stresses [14,15]. Increasing temperature is the main result of global anthropogenic climate change and is disrupting interactions between herbivore and plant, predator and prey, and parasitoid and host, thereby affecting the dynamics and structure of populations and communities [16-18]. Additionally, the current urbanization rate and agricultural land use also threaten arthropod biodiversity and may reshape insect communities by favoring some lineages over others [19-21]; e.g., human landscape modification and land-use intensity (monoculture) affect host–parasitoid interactions [22] and the distributions of specialist insects [23], while habitats containing patchy cropland, meadows, hedgerows, flower/grassland strips, and shelterbelts have been shown to provide greater parasitoid abundance, diversity, and parasitism rates than simpler landscape systems [24-30]. Furthermore, these habitats provide the diverse microclimates, shelter, and structural vegetation variety that are also important for the diversity of beneficial insects and their associated ecosystem services [31,32].
The presence and influence of arthropod species have significant and well-known benefits for human well-being in terms of the ecosystem services they provide (e.g., pollination, food security, biological control, maintenance of wider biodiversity, and ecosystem stability), as well as in achieving the Sustainable Development Goals (SDGs) (e.g., control of crop pests and disease vectors) [33–35]. From the perspective of biodiversity and ecological impact, insect parasitoids are quantitatively important components of terrestrial ecosystems [36,37], because they exert top-down control of many insect pests and consequently regulate the abundance and dynamics of their hosts [38–40]. Regarding host–parasitoid interactions, the life cycle of insect parasitoids consists of a parasitic larval stage living in or on the host, followed by a free-living adult stage (Figure 1a). Parasitoids depend on other insect hosts in order to develop their offspring [41]. The adult female parasitoid deposits one egg (or more) inside the host (endoparasitoid) or attached to the host surface (ectoparasitoid); the eggs hatch into larvae, which develop by feeding on their hosts' bodies, and the hosts eventually die (Figure 1a) [42]. The atmospheric temperature is intimately linked to the development and survival of parasitoid preadult/immature stages because their phenology, morphology, physiology, demography, and behavior have evolved and adapted in accordance with a specific range of thermal limits, enabling them to adapt to their surrounding environments [44]. However, it is likely that the newly predicted extreme climatic conditions will vary over time and space, thus challenging terrestrial arthropods' life-history parameters/traits, due to the temperature-dependent nature of ectotherm activity and metabolism [45]. In nature, by contrast, climate changes occur over many years, decades, centuries, or longer and involve significant alterations in the averages of temperature, precipitation, wind, sunshine, etc. [46].
In this article, we organized the knowledge about climate change's effects on parasitoid–host phenology, distribution, and biological control in light of recent publications, approaches, and advances across the disciplines that contribute to phenology research. Furthermore, we presented a link between phenological synchrony and shifts in phenology, using Diaphorina citri (Kuwayama) (Hemiptera: Liviidae) and its associated natural enemy Tamarixia radiata as a model, to understand and anticipate how climate change will impact phenology, demographics, and insect declines. We also provided a review of what is known about the underlying mechanisms that govern parasitoid–host interactions in response to climate change. Additionally, we discussed approaches enabling us to draw appropriate mitigation and preparedness plans.

Arthropods' Phenology and Climate Relationship

An organism's phenology describes the timing of cyclical or seasonal biological events and how it progresses through its life cycle [47,48]: e.g., egg laying, the preadult developmental time (egg, larva, pupa), and adult longevity (female, male) (Figure 1b). In arthropod populations, the timing of life-history events is highly temperature-sensitive [49,50], and any change in temperature results in differential phenological shifts [51,52]. Currently, the ways in which these shifts might affect seasonal life cycles are increasingly being explored by ecologists [53]. So far, measures of climate change vulnerability have largely evaluated species' responses to critical and lethal thermal limits [14,54]. This focus can be explained by temperature's well-known direct effect on insect development [17,47]: warmer conditions accelerate preadult stages' development [55,56], whereas low temperatures prolong arthropods' developmental time [43]. Since insect metabolic rate is extremely dependent upon environmental temperature [57], any altered temperature regime is a critical factor influencing population dynamics [58,59], mainly due to insects' limited capacity for maintaining body temperature through metabolic heat [60–62]; e.g., field experiments have demonstrated that high temperature has lethal impacts on pupal-stage releases of the parasitoid Telenomus podisi (Ashmead) (Hymenoptera: Platygastridae) throughout the soybean development cycle [63].
Nevertheless, in the case of parasitoids, if the ambient temperature is below the optimum, raising the temperature toward the optimum will accelerate their development [64–66]; however, some species living in the tropics already experience ambient temperatures near their optimum (they live close to their thermal limits), and extreme heat waves will cause high preadult-stage mortality and reduce parasitoid demographic performance [44]. Furthermore, slightly warmer conditions may result in earlier adult emergence [67], benefiting some arthropod populations by increasing the number of generations per season [66], but also disrupting the relative timing of interacting species: e.g., a change in phenological synchrony in host–parasitoid interactions [5,38,68–71], affecting mismatched species' fitness and abundance [6], disturbing ecosystem functioning [37,69,72], and ultimately leading to pest outbreaks [15,73]. For example, phenological mismatch between the cereal leaf beetle Oulema melanopus (Linnaeus) (Coleoptera: Chrysomelidae) and its associated parasitoid Tetrastichus julis (Walker) (Hymenoptera: Eulophidae) was attributed to changes in spring temperature over the years: in warmer springs, larval phenology of O. melanopus was delayed relative to adult parasitoid activity, and parasitism was reduced [74]. Also, increasing temperature reduces the window of susceptibility of the host Agrilus planipennis (Fairmaire) (Coleoptera: Buprestidae) to parasitism by Oobius agrili (Zhang and Huang) (Hymenoptera: Encyrtidae) [75]. In an experimental warming study, development times of Euphydryas aurinia (Rottemburg) (Lepidoptera: Nymphalidae) were significantly affected, but not those of its specialized parasitoid, Cotesia bignellii (Marshall) (Hymenoptera: Braconidae) [76].

Tropical ectotherms will be most adversely affected by climate change, since they already live close to their physiological optimum temperature [77–79]. This implies that the sooner a given degree of warming is reached in these areas, the higher the risk of extinction, since species will have less time to disperse naturally to track their physiological optimum climate. However, adaptive responses to new temperatures are also possible [80,81], since evidence of traits changing is strong; e.g., body color variation of the parasitoid Cirrospilus pictus (Nees) (Hymenoptera: Eulophidae) depends on the seasonal temperature (light individuals in spring–summer and dark individuals in autumn–winter), suggesting an ecological adaptation to climatic conditions [82]. An explicit understanding of what underlies these changes, such as genetics or plasticity, is still lacking [13]. Moreover, even within a landscape, populations and species may respond differently to climatic changes, making it difficult to identify general trends [83].
It is important to note, however, that species' phenological shifts often do not occur at the same rate [84], and the same thermal stress can have different phenotypic and fitness effects during the various stages of an organism's development [70,85,86]; these effects may consequently lead to unequal shifts in seasonal timing [47]. For instance, recent field investigations have reported a mismatch in Torymus sinensis (Linnaeus) (Hymenoptera: Torymidae) emergence and a reduced biocontrol effectiveness against the Asian chestnut gall wasp Dryocosmus kuriphilus (Yasumatsu) (Hymenoptera: Cynipidae) as effects of warmer winter temperatures [39]. Warmer temperatures may therefore drive earlier T. sinensis emergence, and by the time the wasps emerge, fresh galls of the host are not yet available, resulting in lower parasitism pressure and an increased risk of host outbreaks [39]. In addition, climate-associated shifts in the phenology of wild bees have advanced by a mean of 10.4 ± 1.3 days and are associated with global temperature increases [87,88]. Climate change has also been associated with shifts of autumn phenology toward later dates and spring phenology toward earlier dates [89,90]. Latitude has also been reported to alter the phenological responses of hosts and parasitoids [91], thereby affecting insect population abundance and range dynamics [55].

Insects have developed a seasonal timing system to measure day/night duration (photoperiod) and anticipate and coordinate their development and physiology [92,93]. This allows them to regulate their seasonal rhythms [94] and adapt their phenology to their local environment [52,95], allowing susceptible life stages to avoid unfavorable environmental conditions [96] and favoring the synchrony of insect populations with the resources they consume, which ultimately allows them to persist [17,97]. However, new day-length regimes under climate change are altering host–parasitoid interactions and community dynamics [98,99].

In addition, interactions within trophic networks greatly influence insect phenology [17]; in these interactions, organisms at a given trophic level must regulate their life cycle to match those of their prey and hosts according to their level of trophic dependence [100]; otherwise, any phenological shifts have population-level consequences [101], altering already-established communities and system function and affecting the benefits and services provided by natural ecosystems.
Host-Parasitoid Geographical Distribution under Climate Change

Temperature limits restrict the distribution of insects; however, as a result of climate change, more climatically suitable areas have emerged, allowing species' upslope migration (Figure 2) [66,102,103] and shifting their niches to escape warming and match their current thermal preferences [52,104]. However, according to Román-Palacios and Wiens [8], niche shifts in response to climate change can potentially prevent less than 30% of species extinctions, which sparks serious concerns for the future fate of biodiversity. Agricultural pests are most likely to benefit from present and future climate change, with worldwide pest proliferation, especially in temperate zones (Figure 2) [105]; e.g., warm temperatures increase population growth of the nonnative defoliator Coleophora laricella (Hübner) (Lepidoptera: Coleophoridae) and inhibit demographic responses of two imported parasitoids, Agathis pumila (Ratzeburg) (Hymenoptera: Braconidae) and Chrysocharis laricinellae (Ratzeburg) (Hymenoptera: Eulophidae). This positive response of the host to warming might have contributed to the outbreak of C. laricella in North America [106]. Correlative species distribution modelling is a widely used approach for predicting the impacts of climate change on biodiversity, e.g., assessing extinction rates, estimating species distribution changes, and setting up conservation priorities [108,109]; a toy illustration of the approach is sketched below.
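The core of a correlative species distribution model is simple: fit a statistical model linking recorded presences/absences to climate covariates, then project it onto future climate layers. Below is a minimal, hypothetical sketch of that workflow in Python using scikit-learn; the synthetic data, the single temperature covariate, and the +3 °C scenario are illustrative assumptions, not values from any of the studies cited here.

```python
# Minimal correlative species-distribution sketch (illustrative only).
# Assumption: occurrence probability peaks near a thermal optimum of ~24 C.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic survey data: mean annual temperature (C) at 500 sampled sites.
temp = rng.uniform(5, 35, size=500)
# Presence/absence generated from a hump-shaped response around 24 C.
p_true = np.exp(-((temp - 24.0) / 4.0) ** 2)
present = rng.random(500) < p_true

# Quadratic terms let a logistic model capture the hump-shaped response.
X = np.column_stack([temp, temp**2])
model = LogisticRegression(max_iter=1000).fit(X, present)

# Project current vs. a uniformly warmed (+3 C) climate across a gradient.
grid = np.linspace(5, 35, 7)
for scenario, shift in [("current", 0.0), ("+3 C", 3.0)]:
    Xg = np.column_stack([grid + shift, (grid + shift) ** 2])
    prob = model.predict_proba(Xg)[:, 1]
    print(scenario, np.round(prob, 2))
```

Under warming, the band of high predicted suitability shifts toward sites that are currently cooler, which is the upslope/poleward shift the text describes.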
Already, several insect taxa have shifted their distribution ranges towards higher altitudes [110,111]. However, regarding host–parasitoid systems, there is limited evidence of such geographical shifts and adaptations to these new climatic conditions. For instance, D. citri in China has expanded significantly northward, and prediction studies reveal that this pest will move even further as a result of climate change [112]; using the CLIMEX model, Souza et al. [113] and Aidoo et al. [114] reported that its associated natural enemy T. radiata will also move beyond its presently known native and non-native areas. Additionally, using climate change simulations, Li et al. [115] reported that three aphid species, Schizaphis graminum (Rondani), Rhopalosiphum padi (Linnaeus), and Sitobion avenae (Fabricius) (Hemiptera: Aphididae), and their associated natural enemies Aphidius gifuensis (Ashmead) (Hymenoptera: Braconidae), Episyrphus balteatus (De Geer) (Diptera: Syrphidae), and Harmonia axyridis (Pallas) (Coleoptera: Coccinellidae) will move toward higher altitudes in most regions, and as the climate warms, the ladybug H. axyridis will become more effective at suppressing aphid populations. On the contrary, warming will weaken the performance and survival of the parasitoid A. gifuensis and the hoverfly E. balteatus. Also, Zhang et al. [116] reported a northward range shift of Anoplophora glabripennis (Motschulsky) (Coleoptera: Cerambycidae) and its associated natural enemies Dastarcus helophoroides (Fairmaire) (Coleoptera: Bothrideridae) and Dendrocopos major (Linnaeus) (Piciformes: Picidae). According to a model studied by Furlong and Zalucki [117] on the interaction between the diamondback moth Plutella xylostella (Linnaeus) (Lepidoptera: Plutellidae) and its parasitoid Diadegma semiclausum (Hellén) (Hymenoptera: Ichneumonidae), the predicted temperature increases will negatively affect the parasitoid's distribution more than its host's. These studies suggest that warming can favor generalist predators over specialist (Hymenoptera) biocontrol agents.

Warmer Winter Effects on Host-Parasitoid Interactions

As the global climate warms, fewer extreme cold events have been registered in recent decades [107], generating new seasonal conditions (longer and warmer pre-winter periods) that represent a major challenge for arthropod life in these environments [135]. Thus, with warming, an alteration of responses to seasonal changes is expected. The survival of parasitoids within a host depends on complex physiological mechanisms, and lethal temperature events can significantly damage these mechanisms [136,137]. This is very important to consider for interacting species, because different thermal performance curves (TPCs) may lead to phenological mismatches in the system [138], potentially affecting trophic interactions [139] and consequently decreasing the effectiveness of biological control [140]. Parasitoids, in order to be effective in regulating host pests, must have emergence synchronized with pest populations (i.e., with the pest stage suitable for parasitism), high reproduction rates, good host-searching abilities, and a long lifespan. However, laboratory studies suggest that biological control could be negatively affected by temperature extremes [64]. For instance, during the autumn and winter transitions [141], Senior et al. [90] reported that warmer winter temperatures drive asynchronous shifts between two aphid species, Drepanosiphum platanoidis (Schrank) and Periphyllus testudinaceus (Fernie) (Hemiptera: Aphididae), and their associated braconid parasitoid wasps (Hymenoptera: Braconidae). Similarly, Alabagrus braconid wasps (family Braconidae), primary parasitoids of the fern moth Callopistria floridensis (Guenée) (Lepidoptera: Noctuidae), have shown significant mismatches in emergence due to rapid temperature increase [142].
Alford et al. [143] reported that favorable warm winters have extended the activity period of the parasitoid Aphidius avenae (Haliday) (Hymenoptera: Braconidae), which has made these wasps increasingly susceptible to unpredictable cold events during the winter. According to Schneider et al. [144], in Switzerland there have been fewer cold days over the past 40 years, and by the end of the 21st century, temperatures below −12 °C will occur only infrequently up to 1700 m. These changes have allowed cold-sensitive tropical species to expand their ranges and colonize new areas, due to a reduction in the incidence of cold-induced physiological damage and mortality (Figure 2) [107,145]. However, for endemic arthropod species, these ambient changes represent a serious challenge, mainly because insects often enter diapause as winter approaches (Figure 3); during this state, development stops and the metabolism is slowed, the body entering a hormonally programmed resting state [146]. The aphid parasitoid A. avenae had been known to adopt a winter diapausing strategy, until recent reports of active winter populations in cereal crops [147]. Also, Alfaro-Tapia et al. [148] reported that diapause incidence of aphid parasitoids did not increase during winter in the Chilean central-south valley; instead, winter activity and abundance of parasitoids were observed. However, a study by Mehrnejad and Copland [149] on the parasitoid Psyllaephagus pistaciae (Ferrière) (Hymenoptera: Encyrtidae) reported 100% diapause when low temperature was combined with a short-day photoperiod. A laboratory study under nine different photoperiod and temperature conditions by Tougeron et al. [150] reported that two historically winter-active parasitoid species, Aphidius rhopalosiphi (Esenbeck) and Aphidius matricariae (Haliday) (Hymenoptera: Braconidae), never entered diapause; in contrast, two species only recently active during winter, A. avenae and Aphidius ervi (Haliday) (Hymenoptera: Braconidae), did enter diapause, but at a low proportion. Tougeron et al. [151] suggested that this recent modification in the composition of the parasitoid community is linked to shifts in diapause expression (reduced use of winter diapause). These results suggest that aphid parasitoids' overwintering strategies have changed rapidly in the last three decades and that active adult overwintering can replace diapause; such changes will affect the food-web structure between aphids and parasitoids, as well as the host-exploitation strategies of parasitoids already present in the system. Day length and temperature are the primary cues by which diapausing insects anticipate and prepare for harsh conditions [152].
According to Polgár et al. [153], Brodeur and McNeil [154], and Polgár and Hardie [155], parasitoids also enter diapause based on the host's life cycle, development stage, species, size, morph, and host-plant quality. However, a less-explored question is what happens when organisms are unable to predict when winter will actually begin, since they need to enter diapause well before hostile conditions arrive. Any changes in diapause timing and duration generally determine or affect the number of generations per year [156]. Warmer winters may have a particularly strong effect on the biological processes of insect life cycles (e.g., eclosion from pupation) that are adapted to survive and overcome the winter's coldest conditions [135,146,157].

Xiao et al. [158] reported that increased mortality of arthropods may result from warmer winter conditions during dormant diapause, because warming can deplete nutritional reserves, leading to changes in larval body weight and higher mortality. According to Wu et al. [130], a decline in the size of communities can be expected given the widely observed reductions in developmental size with climate warming. Indeed, Forister et al. [159] reported that over the past four decades, the number of butterflies observed has declined by 1.6% annually across landscapes of the American West, a decline associated in particular with warmer autumn months. Nice et al. [83] reported that late-spring precipitation, an outcome of global warming, has negatively impacted butterfly populations. Dahlhoff et al. [160] also reported that in the Sierra Nevada mountains, low snowpack drives a decrease in the population abundance of the leaf beetle Chrysomela aeneicollis (Schaeffer) (Coleoptera: Chrysomelidae). Several pollinators, including the beetle Mylabris nevadensis (Escalera) (Coleoptera: Meloidae), have been negatively affected by warming in Mediterranean regions [161]. Soroye et al. [162] also found that the increasing frequency of unusually hot days is raising local arthropod extinction rates, reducing colonization and site occupancy and decreasing species richness within regions. Also, Burkle et al. [88] reported loss of species, co-occurrence, and function of plant–pollinator interactions over a 120-year timespan in Carlinville, Illinois (USA). It has been estimated that worldwide insect losses are approximately 9% per decade [163,164]. There is now mounting evidence that arthropods are disappearing rapidly, with climate change being the main contributing factor [8,164–171]. According to Warren et al. [172], geographic range losses of insects will reach 18% with a 2 °C increase in temperature. In light of these findings, climate change may threaten seasonal organisms in the future and may reduce insect survival over the winter, in this manner reshaping insect biodiversity worldwide [8,167,173].
Temperature Tolerance Ranges and Implications for Biocontrol Efficacy

The effects of heat stress and low humidity (i.e., summer droughts) are detrimental to insect neurological function, muscular control, and immune function, resulting in coma and eventual death in severe situations [174,175]. TPCs have been widely used to determine and understand insect thermal plasticity and adaptation [176] and the effects of global warming [66]. As temperature increases, parasitoid performance typically increases, reaching its peak at the optimum temperature (Topt) (Figure 4b) [38], after which any further increase in temperature produces a decline in performance (Figure 4a) [65]. A minimal sketch of such a curve is given below.
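One commonly used TPC form is the Brière-1 development-rate model; the sketch below implements it with purely illustrative parameters (the thermal minimum, maximum, and scale constant are assumptions, not fitted values for any species discussed here).

```python
# Illustrative thermal performance curve (Briere-1 model).
# rate(T) = a * T * (T - Tmin) * sqrt(Tmax - T) for Tmin < T < Tmax, else 0.
import numpy as np

A, TMIN, TMAX = 1e-4, 10.0, 36.0   # assumed, illustrative parameters

def briere1(temp):
    """Development rate as a function of temperature (arbitrary units)."""
    temp = np.asarray(temp, dtype=float)
    rate = A * temp * (temp - TMIN) * np.sqrt(np.clip(TMAX - temp, 0, None))
    return np.where((temp > TMIN) & (temp < TMAX), rate, 0.0)

temps = np.linspace(5, 40, 701)
rates = briere1(temps)
t_opt = temps[np.argmax(rates)]
print(f"Topt ~ {t_opt:.1f} C")  # performance peaks here, then drops sharply
```

The asymmetry of the curve is the point: performance falls off far faster above Topt than below it, which is why heat waves near a parasitoid's upper thermal limit are so damaging.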
Under warming conditions, both the host and the parasitoid will develop faster, although hosts have higher thermal limits than their associated parasitoids [117]. Indeed, the thermal tolerance of parasitoids is lower than that of their hosts [70,178], giving them limited plasticity to respond to high temperatures and decreasing parasitoid biomass [73,117]. In a thermal study carried out by Moore et al. [179], the parasitoid wasp Cotesia congregata (Say) (Hymenoptera: Braconidae) suffered complete mortality at a temperature range that was only slightly stressful for its larval host Manduca sexta (Linnaeus) (Lepidoptera: Sphingidae). Also, Andrade et al. [180] reported that the emergence rates of Trichogramma exiguum (Pinto & Platner) and Trichogramma acacioi (Brun, Gomez de Moraes & Soares) (Hymenoptera: Trichogrammatidae) were significantly reduced at 30 °C; there was also a higher incidence of Trichogramma parasitism in climates with lower seasonality [181]. A very high mortality rate of the immature stages of Aganaspis daci (Weld) (Hymenoptera: Figitidae), a natural enemy of Ceratitis capitata (Wiedemann) (Diptera: Tephritidae), was observed at 15 and 30 °C [182]. Qiu et al. [183] reported that at 26 °C Microplitis manilae (Ashmead) (Hymenoptera: Braconidae) showed its maximum parasitism rate on Spodoptera exigua (Hübner) and Spodoptera litura (Fabricius) (Lepidoptera: Noctuidae), which dropped significantly at 32 °C. Similarly, other laboratory experiments have demonstrated reduced parasitism rates, short lifespans, and high pupal mortality when temperature exceeds the thermal limits of parasitoids [64,179,184–186].

In biological control, the timing of biological activities and life-history events (i.e., stage differentiation, or metamorphosis) of a pest and its associated natural enemy must be accurately described in order to determine the exact time of the host's susceptibility, i.e., the stage at which the parasitoid or predator can parasitize or prey on its host. Life table analysis is a research tool commonly used in population and community ecology; it has served as the basis for parasitoid-host and predator-prey studies due to its ability to graphically illustrate and describe the unique and important features of stage differentiation [64,187]. This knowledge is therefore a key component of biological control programs for achieving successful pest management. However, the stage differentiation of arthropods is temperature-dependent, and rising temperatures due to climate change have disrupted the synchrony of host-parasitoid interaction networks. Disrupted synchronization implies that future mass rearing of parasitoids and predatory natural enemies might face serious problems: increasing temperature accelerates arthropod development rates, shifts the timing of emergence, and shortens the window of host susceptibility, and because species at different trophic levels respond differently to climate variation, the familiar stage-differentiation schedules and developmental rates are modified. Consequently, releases of exotic parasitoids could fall at the wrong time (during the wrong phenological stage of the target pest species), resulting in unsuccessful establishment, performance, and spread of these biocontrol agents.

An important step, often omitted and needing great attention in successful biological control, is the link between phenological synchrony and shifts in phenology that impact population dynamics. Establishing these links is the first step to understanding and anticipating how climate change will impact phenology, demography, and insect declines. Figure 4 shows the developmental stages (egg, larva, pupa, adult) of T. radiata, an ectoparasitoid of D. citri, reared at a normal temperature of 27.5 °C with a mean preadult duration of 9.57 days (d) (Figure 4b) and at extreme temperatures of 35 and 20 °C, with mean preadult durations of 7.29 and 16.53 d, respectively (Figure 4a,c) [43], together with its host D. citri reared at 25 ± 2 °C, with a mean preadult duration of 18.20 d (Figure 4d) [177]. When analyzing the curves of T. radiata at 35 °C (Figure 4a), preadult development is 2.28 and 9.24 d faster than at 27.5 and 20 °C, respectively, and adult emergence is correspondingly earlier. When projecting its biological control effectiveness on D. citri (fuchsia shading in Figure 4), T. radiata would still match the ideal instar for parasitism.
However, at this temperature (35 °C), the parasitoid survival rate and adult longevity are very low (Figure 4a), resulting in a severe decline in parasitism rate and reduced biological control effectiveness [64]. As a result, the two species' interactions are mismatched. The curves at 27.5 °C (Figure 4b) show that adults of T. radiata emerge at the ideal time for parasitism, when the host is in the 3rd–5th instar (light yellow shading); parasitism, survival rate, and longevity are high, and the phenologies do not diverge. At 20 °C (Figure 4c, light blue shading), the parasitoids take longer to emerge as adults, and by the time they emerge, it is too late; adults will parasitize only a small number of host nymphs, since D. citri nymphs are then finishing their last nymphal development (N5), so the two species' interactions are again mismatched. Taking these laboratory results as evidence, we can see that extreme temperature regimes shifted the parasitoids' phenology so that the majority of individuals emerged earlier (or later) than the optimal window of host susceptibility, resulting in differential phenological shifts and thereby mismatches between the interacting species. The arithmetic behind this timing argument is sketched below.
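The mismatch argument reduces to simple arithmetic on development times. The sketch below uses the mean preadult durations reported above (7.29, 9.57, and 16.53 d for T. radiata at 35, 27.5, and 20 °C; 18.20 d for D. citri at 25 °C) together with a hypothetical host susceptibility window, assumed here purely for illustration, to show how emergence offsets translate into overlap with the parasitizable stage.

```python
# Emergence-offset sketch using the mean preadult durations quoted in the text.
# The host susceptibility window (days 10-16 after egg laying, roughly the
# 3rd-5th nymphal instars) is an assumed, illustrative value.
HOST_PREADULT_D = 18.20                 # D. citri at 25 +/- 2 C [177]
SUSCEPTIBLE_WINDOW = (10.0, 16.0)       # assumed window of parasitizable instars

PARASITOID_PREADULT_D = {35.0: 7.29, 27.5: 9.57, 20.0: 16.53}  # T. radiata [43]

def overlap(window, emergence_day, activity_days=4.0):
    """Days of overlap between an assumed adult activity span and the window."""
    start, end = emergence_day, emergence_day + activity_days
    return max(0.0, min(end, window[1]) - max(start, window[0]))

for temp, dur in sorted(PARASITOID_PREADULT_D.items()):
    # Assume both species start development on day 0 of the host generation.
    days = overlap(SUSCEPTIBLE_WINDOW, dur)
    print(f"{temp:>4} C: adults emerge day {dur:5.2f}, "
          f"overlap with host window = {days:.2f} d")
```

Even before accounting for the reduced survival at 35 °C, the 20 °C cohort misses the assumed window entirely, reproducing the late-emergence mismatch described above.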
Also, temperature affects the endosymbiotic bacteria (temperature-sensitive symbiotic partners) present in parasitoids [188]; e.g., Buchnera and Wolbachia, two dominant groups of endosymbionts present in parasitoids and hosts, may be harmed or eliminated by short-term exposure to high temperatures [189–191]. Reductions in endosymbiont populations are reflected in the fitness and several life-history traits of the parasitoids [190,192], because endosymbionts act as nutritional mutualists boosting and regulating vital functions of their host [193,194]. Furthermore, various synchronization mismatches between predators and prey as a result of rising temperature have been documented [17,195,196].

Conclusions

Recent evidence of marked host-parasitoid phenological shifts, changed geographical distributions, and reduced biological control as side effects of climate change has sparked global concern and highlighted the vital role that phenology plays in ecology, given its ecological and economic importance for ecosystem functioning. Host-parasitoid interactions are affected by global warming through a variety of mechanisms, primarily because temperature accelerates metabolism and growth, thus affecting biological activities and life-history events. Beyond the impacts on individual organisms, these changes affect higher trophic levels, altering already-established communities and ecosystem functions. In order to gain insight into host-parasitoid populations' reactions to altered temperature regimes, results from laboratory and field experiments must be incorporated into long-term monitoring programs. We therefore need to conduct more field studies in natural ecosystems, in order to obtain a better understanding of the effects of temperature on host-parasitoid systems and the trophic levels adjacent to them. Additionally, human-induced stresses such as intensified farming and cattle breeding, introduction of exotic species, land use, pollution, habitat loss, and fragmentation all contribute to increasing global temperature, and this is driving sharp phenological mismatches among host-parasitoid systems throughout the planet. To mitigate climate change, agricultural practices must be redesigned to reduce CO2 emissions; in particular, a significant reduction in cattle breeding and chemical pesticide inputs is needed, with more eco-friendly and sustainable practices adopted in their place, especially in intensively farmed areas. For example, improved landscape planning (heterogeneity and configuration) at both local and wide scales will be essential to promote parasitoid biodiversity and maintain essential ecological services, because such approaches have been shown to harbor the natural enemies that are crucial to the control of herbivorous pests threatening many crops. There is therefore an urgent need for these strategies to be promoted and implemented to reverse or slow current trends and allow the recovery of parasitoid populations, by providing suitable habitats for them and consequently safeguarding the vital ecosystem services they provide.

Figure 3. Decreases in the frequency and intensity of extreme winter cold events (long and warmer pre-winter periods) have created new seasonal environmental conditions, extended arthropod activity, allowed expansion of cold-sensitive tropical organisms, and created high pest overwintering potential. (Created based on Biella et al. [145]; Nielsen et al. [135]; and Lindestad et al. [146].)

Figure 4. (a-c) Life-history events of T. radiata at 35, 27.5, and 20 °C, respectively (adapted from Ramos Aguila et al. [43]); and (d) life-history events of D. citri at 25 ± 2 °C (adapted from Ramos Aguila et al. [177]). The variation in temperature over the course of the year as a result of climate change is not uniform, and thus can easily lead to differential phenological shifts and thereby to mismatches among the interacting species.
A new model of Ishikawa diagram for quality assessment

The paper presents the results of a study concerning the use of the Ishikawa diagram in analyzing the causes that determine errors in the evaluation of part precision in the machine construction field. The studied problem was "errors in the evaluation of part precision", and this constitutes the head of the Ishikawa diagram skeleton. All the possible main and secondary causes that could generate the studied problem were identified. The best-known Ishikawa models are 4M, 5M, and 6M, the initials standing for: materials, methods, man, machines, mother nature, measurement. The paper shows the potential causes of the studied problem, which were first grouped in three categories, as follows: causes that lead to errors in assessing dimensional accuracy, causes that determine errors in the evaluation of shape and position abnormalities, and causes of errors in roughness evaluation. We took into account the main components of part precision in the machine construction field. For each of the three categories of causes, potential secondary causes were distributed over groups of M (man, methods, machines, materials, environment/medio ambiente (Sp.)). We opted for a new model of Ishikawa diagram, resulting from the composition of three fish skeletons corresponding to the main categories of part accuracy.

Introduction

Most organizations use quality tools for various purposes related to controlling and assuring quality. Although a good number of quality tools are specific to certain domains, fields, and practices, some quality tools can be used across such domains. These quality tools are quite generic and can be applied to any situation. There are seven basic quality tools used in organizations. These tools can provide much information about problems in the organization, assisting in deriving solutions. The seven tools are: histogram, cause-effect diagram, Pareto diagram, correlation diagram, control chart, data stratification, and brainstorming. Ishikawa diagrams were popularized in the 1960s by Kaoru Ishikawa, who pioneered quality management processes in the Kawasaki shipyards and in the process became one of the founding fathers of modern management. The diagram is known as a fishbone diagram because of its shape, similar to the side view of a fish skeleton. Dr. Kaoru Ishikawa (1915–1989) was a Japanese professor, advisor, and motivator with respect to innovative developments in the field of quality management. Kaoru Ishikawa is best known for the development of the concept of the fishbone diagram, which is also known as the "Ishikawa diagram". This diagram is still used in many organizations for making diagnoses or taking concrete actions in which the root cause of a problem is identified. With his cause and effect diagram (also called the "Ishikawa" or "fishbone" diagram), this management leader made significant and specific advancements in quality improvement. The design of the diagram looks much like the skeleton of a fish. Fishbone diagrams are typically worked right to left, with each large "bone" of the fish branching out to include smaller bones containing more detail. The technique uses a diagram-based approach for thinking through all of the possible causes of a problem. This helps you to carry out a thorough analysis of the situation. There are four steps to using the tool: 1. Identify the problem. 2. Work out the major factors involved. 3. Identify possible causes. 4. Analyze your diagram.
Causes are usually grouped into major categories to identify these sources of variation. The categories typically include: • People: anyone involved with the process; • Methods: how the process is performed and the specific requirements for doing it, such as policies, procedures, rules, regulations, and laws; • Machines: any equipment, computers, tools, etc. required to accomplish the job; • Materials: raw materials, parts, pens, paper, etc. used to produce the final product; • Measurements: data generated from the process that are used to evaluate its quality; • Environment: the conditions, such as location, time, temperature, and culture, in which the process operates. The Ishikawa diagram is defined as a graphic representation that schematically illustrates the relations between a specific result and its causes [1,2]. The studied effect or negative problem is "the fish head", and the potential causes and sub-causes define the "fish bone structure". Therefore, the diagram clearly reveals the relations between a problem identified in a product and its potential causes. The Ishikawa diagram is a simple graphical instrument for understanding the causes that produce quality defects and is used to analyze the relation between a problem and all possible causes. In the productive domains, all categories of causes start with the letter M (machines, methods, man, materials, maintenance, mother nature/environment, management); the 4M, 5M, 6M, and 7M Ishikawa diagrams were built in this way. In [3] it is shown that obtaining a correct diagram is possible only through working in an experienced team. An interesting model of Ishikawa diagram was developed for some automotive defects [4,5]. In [6] a method is presented for assessing the quality of welding by applying one of the classic instruments of quality management. Ishikawa diagram application areas are continuously expanding; for example, the method is nowadays also applied in the medical field [7]. In [8] a study is presented regarding tracing the cause-effect diagram concerning tolerance dimensions by using software instruments. Many specialized works in the fields of Quality Management and Quality Engineering show different patterns of Ishikawa diagrams; we can illustrate with a few categories of main causes which were the basis of some existing Ishikawa diagrams [3,9–12]. The paper presents the results of a study concerning the use of the Ishikawa diagram in analyzing the causes that determine a non-quality problem in the evaluation of part precision. The studied problem was "errors in the evaluation of part precision in the machine construction field". Developing the Ishikawa diagram in a detailed form to determine the possible causes of a problem has the advantage of making it possible to identify and analyze all the factors connected to the problem.

Study concerning the use of the Ishikawa diagram in analysing the causes that determine errors in the evaluation of part precision

The non-quality problem studied in this paper is "errors in the evaluation of part precision in the machine construction field".
This paper develops the Ishikawa diagram by following the steps set forth by Dale in [15], namely: -The effect of the problem considered is defined very clearly; -The effect is written on the right and a line is drawn from right to left; -It is checked that each team member has understood the problem well; -The main categories of causes, which form the main branches of the diagram, are determined; -A brainstorming session is organized to determine possible secondary causes; -Another brainstorming session is organized to discuss the causes in detail and to determine those with the highest probability of producing the studied effect; -The appropriate sub-branches are traced and recorded. Following the brainstorming session conducted with specialists from the technical measurements domain, potential causes were identified from three directions: A) Causes that lead to errors in evaluating dimensional accuracy, fish skeleton (5MA): Man: tired and nervous operator; untrained operator; inexperienced operator; carelessness within measuring. Methods: inadequate measuring method; inaccurate measuring scheme; error of the position of the measured object; error of the position of the device; errors of the regularization procedure; inaccurate regularization to the nominal size; number of measurements performed; suppression of gross errors; gauge blocks with inaccurately chosen amounts of scales; not applying the corrections generated by systematic errors. Machines: devices with an accuracy inadequate to the tolerance; devices with inadequate measuring limits; wear of the devices; not observing the periodical metrological checks; errors of the device limiting the measurement force; theoretical errors of the devices; abnormalities of the measuring surfaces; inaccurate choice of the sensitive contacts; inaccurate choice of the changeable elements (tips, calibrated wires). Materials: patterns realized with flaws; scales not clinging; opening of the position prisms; caliber wear; spatial variations of the piece. Medio ambiente (Sp.)/Environment: temperature, pressure, humidity, vibrations, noise, light, air composition. B) Causes that determine errors in the evaluation of shape and position abnormalities, fish skeleton (5MB): Man: tired and nervous operator; untrained operator; inexperienced operator; carelessness within measuring. Methods: inaccurate choice of the measuring base; the position of the verified surfaces; errors of the regularization procedure; inadequate measuring method; inaccurate measuring scheme; error of the position of the measured object; error of the position of the device. Machines: errors within the movement system of the measured object; errors within the movement system of the device; errors in the design of the measuring device; errors in the fabrication of the measuring device; devices with inadequate measuring limits; not observing the periodical metrological checks. Materials: spatial variation of the surfaces; scales not clinging. Medio ambiente (Sp.)/Environment: temperature, pressure, humidity, vibrations, noise, light, air composition. C) Causes of errors in roughness evaluation, fish skeleton (5MC): Man: visual acuity; eye sensitivity and adaptation; tired and nervous operator; untrained operator; inexperienced operator; carelessness within measuring. Methods: inadequate measuring method; inaccurate measuring scheme; error of the position of the measured object; error of the position of the device; number of measurements performed. Machines: flaws of the printing system of the roughness graph; altered tip of the sensitive contact; errors of the movement device of the sensitive contact; inaccurate settings of the working parameters of the devices; flaws of the optic systems; not observing the periodical metrological checks. Materials: bending of the piece surfaces; inadequate roughness samples. Medio ambiente (Sp.)/Environment: temperature, pressure, humidity, vibrations, noise, light, air composition. A data-structure sketch of this three-skeleton layout is given below.
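Because the diagram is just a tree (effect → skeletons → M-categories → causes), it can be captured directly as nested dictionaries and rendered as an indented outline. The sketch below is a minimal, hypothetical Python representation of the (5MA + 5MB + 5MC) structure; only a few causes per branch are filled in for brevity.

```python
# Minimal data-structure sketch of the composite (5MA + 5MB + 5MC) diagram.
# Only a few representative causes per branch are included for brevity.
EFFECT = "errors in the evaluation of part precision"

diagram = {
    "5MA (dimensional accuracy)": {
        "Man": ["tired/nervous operator", "carelessness within measuring"],
        "Methods": ["inadequate measuring method", "inaccurate measuring scheme"],
        "Machines": ["accuracy inadequate to the tolerance", "device wear"],
        "Materials": ["patterns realized with flaws", "caliber wear"],
        "Environment": ["temperature", "vibrations"],
    },
    "5MB (shape and position abnormalities)": {
        "Man": ["untrained operator"],
        "Methods": ["inaccurate choice of the measuring base"],
        "Machines": ["errors in the movement system of the device"],
        "Materials": ["spatial variation of the surfaces"],
        "Environment": ["humidity", "noise"],
    },
    "5MC (roughness evaluation)": {
        "Man": ["visual acuity"],
        "Methods": ["number of measurements performed"],
        "Machines": ["altered tip of the sensitive contact"],
        "Materials": ["inadequate roughness samples"],
        "Environment": ["light", "air composition"],
    },
}

def render(effect, skeletons):
    """Print the composite fishbone as an indented outline."""
    print(f"EFFECT: {effect}")
    for skeleton, branches in skeletons.items():
        print(f"  {skeleton}")
        for m_category, causes in branches.items():
            print(f"    {m_category}: " + "; ".join(causes))

render(EFFECT, diagram)
```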
A new model for the Ishikawa diagram

We determined many possible causes and potential sub-causes and grouped them into three main cause categories for the defect: A. causes that lead to errors in evaluating dimensional accuracy; B. causes that determine errors in the evaluation of shape and position abnormalities; C. causes of errors in the evaluation of roughness. For each of the three categories of causes, potential secondary causes were distributed over groups of M (man, methods, machines, materials, medio ambiente (Sp.)/environment). We opted for a new model of Ishikawa diagram, resulting from the composition of three fish skeletons corresponding to the main categories of part accuracy. This new model, with the formula (5MA + 5MB + 5MC), adds itself to the list of multiple Ishikawa diagram models created so far. The diagram is presented in the paper, figure 1.

Conclusions

The seven basic quality tools above help to address different concerns in an organization; the use of such tools should therefore be a basic practice in the organization in order to enhance efficiency. Developing the Ishikawa diagram in a more detailed form in order to determine the potential causes of a detected defect has the advantage of offering the possibility to identify and analyze all factors that relate to the problem studied. This tool is excellent for capturing team brainstorming output and for filling in the 'wide picture'. It helps organize and relate factors, providing a sequential view. The diagram deals with time direction but not quantity; it can become very complex, and it can be difficult to identify or demonstrate interrelationships. Benefits of using a cause-and-effect diagram: it helps determine root causes; encourages group participation; uses an orderly, easy-to-read format; indicates possible causes of variation; increases process knowledge; and identifies areas for collecting data. The paper presented a new formula for the Ishikawa diagram, (5MA + 5MB + 5MC). The resulting Ishikawa diagram provides a complete picture of all potential causes that produce the studied failure. The application of control and quality assessment techniques proves the important role that the customer, with his requirements, has. The traditional tools of quality management are the basis in many organizations where improving quality is desired, and they must be known and applied.
Proof of Stembridge's conjecture on stability of Kronecker coefficients

We prove a conjecture of Stembridge concerning stability of Kronecker coefficients that vastly generalizes Murnaghan's theorem. The main idea is to identify the sequences of Kronecker coefficients in question with Hilbert functions of modules over finitely generated algebras. The proof only uses Schur-Weyl duality and the Borel-Weil theorem and does not rely on any existing work on Kronecker coefficients.

1. Introduction

1.1. Stembridge's conjecture. Given a partition λ of n, let M_λ denote the associated irreducible complex representation of the symmetric group S_n. The important Kronecker coefficients g_{λ,µ,ν} are the tensor product multiplicities: M_µ ⊗ M_ν = ⊕_λ M_λ^{⊕ g_{λ,µ,ν}} (1.3). One can attempt to understand these coefficients by studying their limiting behavior, in various senses. An important result in this direction is Murnaghan's observation (conjectured by Murnaghan in [Mu] and proved by Littlewood in [L, §4]): g_{(d)+λ,(d)+µ,(d)+ν} is constant for d ≫ 0. In [Ste], Stembridge proposes a vast generalization of this result, centered on the concept of a stable triple: (α, β, γ) is stable if g_{α,β,γ} > 0 and, for every (λ, µ, ν), the sequence g_{λ+dα,µ+dβ,ν+dγ} is constant for d ≫ 0. Stembridge conjectures (Conjecture 1.2) that (α, β, γ) is stable if and only if g_{dα,dβ,dγ} = 1 for all d ≥ 1. The key to both directions is the main result of this paper (Theorem 1.4): B_{α,β,γ} is a finitely generated graded domain and each N^{λ,µ,ν}_{α,β,γ} is a finitely generated torsion-free graded B_{α,β,γ}-module, where these objects are graded so that their Hilbert functions are d ↦ g_{dα,dβ,dγ} and d ↦ g_{λ+dα,µ+dβ,ν+dγ}, respectively. Granting this, the "if" direction follows quickly.

Proof. Suppose g_{dα,dβ,dγ} = 1 for all d > 0. Then B_{α,β,γ} is isomorphic to C[t], where t has degree one. It now follows from the structure theorem for finitely generated C[t]-modules that N^{λ,µ,ν}_{α,β,γ} is isomorphic to ⊕_{i=1}^n B_{α,β,γ}[r_i] for some r_1, …, r_n, where [r_i] denotes a shift in grading. We thus see that g_{λ+dα,µ+dβ,ν+dγ} = n for d ≥ max(r_1, …, r_n).

A corollary of the Hilbert series description is the following: if N^{λ,µ,ν}_{α,β,γ} has Krull dimension r, then g_{λ+dα,µ+dβ,ν+dγ} agrees with a quasi-polynomial in d of degree r − 1 for d ≫ 0.

Proof. Let N = N^{λ,µ,ν}_{α,β,γ}. The Hilbert series H_N(t) has a pole of order r at t = 1, by standard properties of Krull dimension. Let x be a non-zero degree one element of B_{α,β,γ}. Since N is torsion-free, H_N(t) = (1 − t)^{−1} H_{N/xN}(t). As N/xN has Krull dimension r − 1, all poles of H_{N/xN}(t) have order ≤ r − 1. Thus H_N(t) has a pole of order r at t = 1 and all other poles have order ≤ r − 1 (and are at roots of unity). The corollary now follows from [Sta, Theorem 4.1.1(iii)] and the fact that the dth coefficient of H_N(t) is g_{λ+dα,µ+dβ,ν+dγ}.

Remark 1.7. We will see in §4.1 that the ring B_{α,β,γ} is normal and has rational singularities. We omitted this from the main result for simplicity and because it is not strictly needed for the application to Conjecture 1.2. However, normality can be used to prove the "only if" direction of Conjecture 1.2, as we will explain.

1.3. Related work. Vallejo introduces a notion of additive stability in [V] and proves that it implies stability in Stembridge's sense. Additive stability is provided by the existence of a certain additive matrix, and hence is easier to apply, but it is less general [V, Example 6.3]. Pak and Panova show that for any k ≥ 1, the triple ((1^k), (1^k), (k)) is stable [PP, Theorem 1.1]. This is also a special case of Vallejo's work just mentioned and of Stembridge's result that ((k), α, α) is stable for any partition α of k [Ste, Example 6.3]. Finally, Manivel uses geometric techniques in [Ma1, Ma2] to produce many more examples of stable triples and to study the cone of stable triples. Murnaghan's stabilization can also be observed directly by computation; a small sketch follows.
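The following is a minimal, naive Python sketch (an assumption of this presentation, not code from the paper) that computes Kronecker coefficients from symmetric group characters via the Murnaghan-Nakayama rule, in beta-number form, and prints the stabilizing sequence g_{(d)+λ,(d)+µ,(d)+ν} for small d; it is practical only for very small n.

```python
# Naive Kronecker-coefficient sketch: characters of S_n via the
# Murnaghan-Nakayama rule (beta-number form), then
# g_{lam,mu,nu} = sum_rho chi^lam(rho) chi^mu(rho) chi^nu(rho) / z_rho.
from functools import lru_cache
from fractions import Fraction
from math import factorial

def partitions(n, max_part=None):
    """All partitions of n as weakly decreasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def z(rho):
    """Centralizer order z_rho = prod_i i^{m_i} * m_i!."""
    out = 1
    for i in set(rho):
        m = rho.count(i)
        out *= i**m * factorial(m)
    return out

@lru_cache(maxsize=None)
def chi(beta, rho):
    """Character value via border-strip removal on beta-numbers."""
    if not rho:
        return 1
    k, rest, total = rho[0], rho[1:], 0
    bset = set(beta)
    for b in beta:
        c = b - k
        if c >= 0 and c not in bset:
            height = sum(1 for x in beta if c < x < b)  # leg length of strip
            new = tuple(sorted((bset - {b}) | {c}, reverse=True))
            total += (-1) ** height * chi(new, rest)
    return total

def character(lam, rho):
    ell = len(lam)
    beta = tuple(lam[i] + ell - 1 - i for i in range(ell))
    return chi(beta, tuple(rho))

def kronecker(lam, mu, nu):
    n = sum(lam)
    g = sum(Fraction(character(lam, r) * character(mu, r) * character(nu, r), z(r))
            for r in partitions(n))
    assert g.denominator == 1
    return int(g)

def pad(lam, d):
    """(d) + lam: componentwise sum with the one-row partition, i.e. add d to the first part."""
    return (lam[0] + d,) + lam[1:]

lam = mu = nu = (1, 1)   # partitions of 2
for d in range(0, 6):
    print(d, kronecker(pad(lam, d), pad(mu, d), pad(nu, d)))
```

Running it prints 0 followed by 1s: the sequence g_{(d)+λ,(d)+µ,(d)+ν} stabilizes, exactly as Murnaghan's theorem predicts.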
For a partition λ of n, define the Schur functor S_λ by S_λ(V) = Hom_{S_n}(M_λ, V^{⊗n}), where V is a complex vector space. We recall the well-known connection between Schur functors and Kronecker coefficients:

Proposition 2.1. There is a natural isomorphism S_λ(V ⊗ W) = ⊕_{µ,ν} (S_µ(V) ⊗ S_ν(W))^{⊕ g_{λ,µ,ν}}.

Proof. We have decompositions V^{⊗n} = ⊕_µ M_µ ⊗ S_µ(V) and W^{⊗n} = ⊕_ν M_ν ⊗ S_ν(W). Tensoring these together, and using the decomposition (1.3), we find (V ⊗ W)^{⊗n} = ⊕_{λ,µ,ν} M_λ^{⊕ g_{λ,µ,ν}} ⊗ S_µ(V) ⊗ S_ν(W). Taking the M_λ isotypic component yields the stated result.

Proposition 2.1 therefore gives a natural isomorphism (2.2) identifying g_{λ,µ,ν} with the multiplicity of S_µ(V) ⊗ S_ν(W) in S_λ(V ⊗ W). We now recast (2.2) so that the right side reflects the symmetry of the left, at least superficially. Let U, V, and W be finite dimensional vector spaces and let ω : U × V × W → C be a trilinear form. Assume that ω is non-degenerate in the sense that it induces an isomorphism U → V^* ⊗ W^*. Note that this implies that dim(U) = dim(V) dim(W). We let G(ω) ⊂ GL(U) × GL(V) × GL(W) be the stabilizer of ω; this projects isomorphically to GL(V) × GL(W). We can restate (2.2) as:

Proposition 2.3. Let ω : U × V × W → C be a non-degenerate trilinear form and assume that dim(U) ≥ ℓ(λ), dim(V) ≥ ℓ(µ), and dim(W) ≥ ℓ(ν). Then we have a natural isomorphism (S_λ(U) ⊗ S_µ(V) ⊗ S_ν(W))^{G(ω)} ≅ C^{g_{λ,µ,ν}}.

Invariant theory and Segre products.

Theorem 2.4. Let G be a complex reductive group acting on a finitely generated C-algebra A and also compatibly on a finitely generated A-module M, i.e., such that the multiplication map A ⊗ M → M is G-equivariant. Then A^G is a finitely generated C-algebra and M^G is a finitely generated A^G-module [PV, Theorem 3.25].

Let V and W be graded vector spaces. We define the Segre product of V and W by (V ⊠ W)_n = V_n ⊗ W_n. This has the following interpretation in terms of invariant theory. The gradings on V and W are equivalent to algebraic C^* actions. Thus V ⊗ W is naturally a representation of (C^*)^2, and V ⊠ W is the space of invariants under the diagonal subgroup {(a, a^{−1}) | a ∈ C^*} ≅ C^*. From this, we get the following corollary.

Corollary 2.5. The Segre product of finitely generated graded C-algebras is a finitely generated graded C-algebra (and a domain if the factors are domains), and the Segre product of finitely generated graded modules over them is finitely generated over the Segre product of the algebras.

For partitions α and λ, define A_α(U) = ⊕_{d≥0} S_{dα}(U) and M_{α,λ}(U) = ⊕_{d≥0} S_{λ+dα}(U).

Proposition 3.1. Let U be a vector space with dim(U) ≥ ℓ(α), ℓ(λ). Then A_α(U) naturally has the structure of a finitely generated graded integral domain over C, and M_{α,λ}(U) naturally has the structure of a finitely generated torsion-free graded A_α(U)-module.

Proof. Let X be the flag variety of GL(U). For every partition α with ℓ(α) ≤ dim(U), there is a G-equivariant line bundle L(α) on X whose sections are H^0(X; L(α)) = S_α(U) (this is the Borel-Weil theorem, see [Fu, §9.3]), and these bundles satisfy L(α) ⊗ L(β) = L(α + β). Let V be the total space of the vector bundle (L(α) ⊕ L(λ))^*, and let R = H^0(X; Sym(L(α) ⊕ L(λ))) be the ring of global functions on V. This is an integral domain, since V is an irreducible variety. It also has a bigrading given by R_{n,m} = H^0(X; L(nα + mλ)). Since each R_{n,m} is a (non-zero) irreducible representation of GL(U) (by the Borel-Weil theorem), and R is an integral domain, it follows that R is generated as a C-algebra by R_{1,0} and R_{0,1}. In particular, R is finitely generated as a C-algebra. The bigrading on R can be regarded as a (C^*)^2 action. Then A_α(U) is the ring of invariants under the second C^* and hence is a finitely generated domain; and M_{α,λ}(U) is the degree 1 piece under the second C^* action (this can be interpreted as invariants of a twist by the −1 character) and hence is a finitely generated torsion-free module over A_α(U). Here we use Theorem 2.4 in both cases.

Remark 3.2. In fact, the above proposition holds without the restriction on dim(U).

Proof of Theorem 1.4. Let U, V, and W be sufficiently large vector spaces satisfying dim(U) = dim(V) dim(W), and let ω : U × V × W → C be a non-degenerate trilinear form. By Proposition 2.3, B_{α,β,γ} = (A_α(U) ⊠ A_β(V) ⊠ A_γ(W))^{G(ω)}. Since A_α(U), A_β(V), and A_γ(W) are finitely generated graded integral domains (Proposition 3.1), so is their Segre product (Corollary 2.5). Since G(ω) ≅ GL(V) × GL(W) is a reductive group, the invariant ring is also finitely generated (Theorem 2.4). This shows that B_{α,β,γ} is a finitely generated graded integral domain.

Remarks and complements

4.1. Algebraic properties. See [Ke, §3] for the definition of rational singularities.
is the homogeneous coordinate ring of a homogeneous space for $GL(U) \times GL(V) \times GL(W)$ and hence has rational singularities [Ke, §2]. This property is preserved by taking invariants under a reductive group [Bo, Corollaire].

We now give a proof of the "only if" direction of Conjecture 1.2. Assume that $(\alpha, \beta, \gamma)$ is stable. In particular, $g_{d\alpha,d\beta,d\gamma}$ is constant for $d \gg 0$. Furthermore, $g_{\alpha,\beta,\gamma} > 0$ by definition. Thus $B_{\alpha,\beta,\gamma}$ is a finitely generated graded normal domain whose Hilbert polynomial has degree 0 and whose first graded piece is non-zero. It follows that $B_{\alpha,\beta,\gamma} \cong \mathbb{C}[t]$, with $t$ of degree one, and so $g_{d\alpha,d\beta,d\gamma} = 1$ for all $d > 0$.

Remark 4.3. From what we have shown, $(\alpha, \beta, \gamma)$ is stable if and only if the Krull dimension of $B_{\alpha,\beta,\gamma}$ is 1. Since $B_{\alpha,\beta,\gamma}$ is a ring of invariants, this property can be determined algorithmically. Although this is probably impractical, no algorithm for determining stability seems to have been known previously.

4.3. Twisted commutative algebras. Murnaghan's stability theorem was reinterpreted in [CEF, §3.4] as the fact that the Segre product of finitely generated FI-modules is finitely generated. This is a useful reformulation since it turns Murnaghan's numerical result into a structural one. The rings $A_\alpha$ are examples of twisted commutative algebras (see [SS2] for an introduction to these objects), and modules over the tca $A_{(1)}$ are equivalent to FI-modules (see [SS1, §1.3]). One might therefore hope that the stability results in this paper could be reformulated as structural results for $A_\alpha$-modules. We believe that there is such a reformulation, though it is more complicated than in the case of Murnaghan's theorem. Nonetheless, this point of view led to the proof in this paper. We plan to pursue the connection to tca's in future work.
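As a compact recap of the chain of identifications used above (our summary; $U$, $V$, $W$, and $\omega$ are as in the discussion surrounding Proposition 2.3):
$$\begin{aligned}
g_{d\alpha,d\beta,d\gamma}
  &= \dim \bigl(\mathbb{S}_{d\alpha}(U) \otimes \mathbb{S}_{d\beta}(V) \otimes \mathbb{S}_{d\gamma}(W)\bigr)^{G(\omega)} && \text{(Proposition 2.3)} \\
  &= \dim \Bigl(\bigl(A_\alpha(U) \boxtimes A_\beta(V) \boxtimes A_\gamma(W)\bigr)^{G(\omega)}\Bigr)_d && \text{(degreewise invariants)} \\
  &= \dim (B_{\alpha,\beta,\gamma})_d.
\end{aligned}$$
Stability questions thereby become questions about the Hilbert function of the invariant ring $B_{\alpha,\beta,\gamma}$, which is what makes the commutative-algebra arguments of §1 available.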
Namaste Theory: A Quantitative Grounded Theory on Religion and Spirituality in Mental Health Treatment

A growing body of research is beginning to identify characteristics that influence or are related to helping professionals' integration of clients' religion and spirituality (RS) in mental health treatment. This article presents Namaste Theory, a new theory for understanding the role of mental health practitioners' RS in clinical practice. Using Glaser's (2008) formal quantitative grounded theory approach, this article describes an emerging theme in the author's line of work—particularly that practitioners' intrinsic religiosity is significantly related to their consideration of clients' RS—and explores the findings of related, interdisciplinary studies. The Hindu term, Namaste, meaning "the sacred in me recognizes the sacred in you", provided a framework to explain the emerging theme. Specifically, Namaste Theory introduces the concept that as helping professionals infuse their own RS beliefs/practices into their daily lives, deepening their intrinsic religiosity and awareness of what they deem sacred, they tend to consider and integrate clients' RS beliefs/practices, and what clients consider sacred, as well. In other words, as the helping professional recognizes the sacred within him or herself, s/he appears to be more open to recognizing the sacred within his/her client. Future directions for research, as well as practice and education implications, are discussed.

With little content on RS in the classroom, US practitioners have had to identify other ways of attending to this area of clients' lives in mental health treatment. Recognizing the critical role many helping professionals have in clients' mental health treatment, the self-reported importance of RS in Americans' lives (Pew Research Center 2015), and clients' use of RS to positively or negatively cope with mental and behavioral health struggles (Pargament 1997, 2007), it is imperative that helping professions better understand the complex mechanisms that influence the assessment and integration of clients' RS into treatment. Thus, the purpose of the current study was to generate a theory, grounded in previous studies, to explain the role of practitioners' RS in their attitudes and behaviors related to integrating clients' RS in mental health treatment.

Design

Though grounded theory (GT) is often understood within the context of utilizing qualitative data, many have discussed the use of quantitative data and findings to build theory. For example, Strauss and Corbin (2000) have indicated that "grounded theorists can utilize quantitative data or combine qualitative and quantitative techniques of analysis" (p. 274). Further, Johnson et al. (2010) argue that GT truly has the ability to be grounded in both quantitative and qualitative methods and data. In an effort to lay the foundation for quantitative grounded theory (QGT) with his text, Glaser (2008) described formal QGT as one option that allows scholars to generate theory by comparing the results across multiple surveys for conceptual generation, which is the "product of doing formal GT" (p. 60). Glaser (2008) posits that formal QGT compares various groups' or subgroups' responses regarding a particular item or topic across surveys in order to detect patterns and support conceptual generation. Further, Glaser (2008) suggests that such comparisons also include qualitative studies on related subgroups when available.
These comparisons are not solely to identify similarities or differences, but rather, to support conceptual generation. Therefore, the current study utilizes a two-part approach for conceptual generation and the early development of a theory. First, the author describes and explores the results of her own three-pronged study of a national sample of clinical social workers' integration of clients' RS in practice and the pattern that emerged from the analyses. Second, the author turned to the broader literature to further support the theory that was emerging regarding helping professionals' integration of clients' RS in practice.

Initial Development of Theory

The initial seed for this theory emerged after conducting a three-pronged study of a national survey of licensed clinical social workers (LCSWs) to better understand their views and behaviors surrounding the integration of clients' RS into treatment (Oxhandler et al. 2015; Oxhandler and Giardina 2017; Oxhandler and Parrish 2016). The author began identifying practitioners' intrinsic religiosity (IR) as an important characteristic among those integrating clients' RS in practice. Simultaneously, she began to recognize variations of RS (e.g., personal RS, frequency of RS service attendance, and RS affiliation) as reportedly important factors in other studies of practitioners' views and behaviors regarding integrating clients' RS. Before describing this role of IR across studies, it is important to reflect upon what IR is and how it has been measured, particularly in the author's studies described below. As mentioned above, IR involves one's desire to deeply live out, internalize, embrace, and be motivated by one's beliefs (Allport and Ross 1967). In 1997, the Duke University Religion Index (DUREL) was developed as a brief, five-item alternative to Hoge's (1972) IR scale, with two items measuring extrinsic religiosity (ER; organized and non-organized religious activity) and a three-item subscale measuring IR (Koenig et al. 1997; Koenig and Büssing 2010). The DUREL IR items were taken from Hoge's (1972) IR scale to measure the degree to which the respondent experiences the presence of the Divine, their religious beliefs influence their approach to life, and their religion is infused into their life. Though there are a number of ways in which IR can be measured (Liu and Koenig 2013), the DUREL was used in the studies below due to its ability to measure both IR and ER, as well as its brevity within a large, cross-sectional survey that contained a 40-item scale, over 20 background items, and two open-ended items.

2.1. Part 1: Intrinsic Religiosity among a National Sample of Licensed Clinical Social Workers

Study 1. The first study tested the reliability and various levels of validity of the newly developed Religious/Spiritually Integrated Practice Assessment Scale (RSIPAS) with a national sample of 470 LCSWs (Oxhandler and Parrish 2016). The RSIPAS was developed specifically to measure practitioners' attitudes, self-efficacy, perceived feasibility, and behaviors related to integrating clients' RS in practice, with an overarching construct of practitioners' overall orientation toward integrating clients' RS into practice. As described in the article, the confirmatory factor analysis resulted in four first-order factors, as well as a second-order factor.
Criterion validity was tested by comparing the subscale scores and overall RSIPAS score with various background variables, including prior coursework or continuing education on RS in practice, knowledge of empirically-supported interventions on RS, and the DUREL (Koenig and Büssing 2010). Of all the background items used to test criterion validity, IR emerged as having the strongest significant relationship with all four subscales (r = 0.31-0.43, p < 0.01) and the overall scale (r = 0.46, p < 0.01).

Study 2. The second study explored the responses to this administration of the RSIPAS, and included a regression analysis to identify practitioner characteristics that predict their views, behaviors, and overall orientation toward integrating clients' RS (Oxhandler et al. 2015). Interestingly, there were no significant relationships between practitioners' RSIPAS score and their age, race, region in the US, gender, age of clients served, years in practice, or degree of burnout. The only significant variables, which together accounted for 37% of the variance, were the practitioners' score on the DUREL IR scale (β = 0.44, p < 0.001), which had the most influence on the model, and prior training (course or continuing education) (β = 0.32, p < 0.001).

Study 3. Finally, the author explored the practitioners' responses to the two open-ended items in the survey. The first asked, "What (if anything) has helped or supported you to assess and/or integrate your clients' religious/spiritual beliefs in your clinical practice?" (n = 319). The second item asked, "What (if anything) has hindered or prevented you from assessing and/or integrating your clients' religious/spiritual beliefs in your clinical practice?" (n = 279). A total of 329 LCSWs responded to either item (Oxhandler and Giardina 2017). Though the respondents had no priming questions regarding what helps or hinders such integration, it was interesting to see that nearly half of the respondents (43.9%) freely indicated in their open-ended response that their personal religiosity (including their RS journey, RS belief system, RS practices, and RS curiosity) helped them to consider their clients' RS in practice.

Part 2: Exploring the Role of Practitioners' RS in the Broader Literature

The second part of this design was to compare the role of practitioners' RS in the broader literature and other studies conducted by the author to further support this conceptual generation and theory development. Though the aforementioned findings between IR and integrating clients' RS were limited to LCSWs in the US, these findings are not limited to social workers. In fact, Oxhandler's (2016) revalidation of the RSIPAS with five helping professions in Texas (LCSWs, licensed professional counselors, marriage and family therapists, advanced practice nurses, and psychologists) also reported that the DUREL IR scale had strong, significant relationships with all four subscales (r = 0.29-0.45, p < 0.01) and had the strongest relationship with the overall scale (r = 0.45, p < 0.01) across criterion variables with this diverse sample. Additionally, the qualitative responses regarding what helps and hinders such integration across these diverse professions were similar to the findings in Study 3, above (Oxhandler et al. n.d.). Similarly, within marriage and family therapy, McNeil et al.
(2012) surveyed 135 graduate students and also found a positive relationship between Allport and Ross's IR (r = 0.31, p < 0.001) and ER (r = 0.19, p < 0.05) scores and whether they considered incorporating RS in therapy to be important.

Other elements of RS and the integration of clients' RS. Though IR initially emerged in this pattern recognition, certainly not all researchers assess for respondents' IR in their surveys, and some may measure other, tangentially-related elements of practitioners' RS. Interestingly, it is clear that the depth and frequency of practitioners' RS beliefs and practices appear to have a strong relationship with their attitudes toward and engagement of clients' RS in practice. Generally, these other RS elements indicated within the literature include personal RS, frequency of RS practices or service attendance, and RS affiliation.

Personal RS. A number of studies across helping professions have shown personal RS has directly or indirectly influenced the consideration of RS in practice. Stewart et al. (2006) identified a model indicating that social workers' personal RS was directly related to their use of RS-related interventions, which impacted their perceived appropriateness of and attitudes toward RS in practice. Specifically, spirituality was "conceptualized as a general connection with some transcendent force or being and the importance of that connection in daily life" (p. 75) and measured based on the Multidimensional Measurement of Religiousness/Spirituality (Fetzer Institute and National Institute of Aging Working Group 1999). The results suggested spirituality had the largest effect size (β = 0.42, p < 0.001) related to the practitioners' integration of RS. Also in social work, Mattison et al. (2000) found that the more important the social worker viewed religion to be in his/her life, the more appropriate they viewed various RS practices to be in their social work practice. Similarly, Shafranske and Maloney (1990) found psychologists with an "ends orientation", by which religion provides answers to existential questions, had the highest degree of competence in knowledge and skills in addressing RS issues with clients [F(2, 398) = 8.39, p < .001]. In a transdisciplinary meta-analysis, Walker et al. (2004) explored the relationship between therapists' (psychologists, marriage and family therapists, and social workers) personal RS and willingness to discuss RS issues in counseling. Among the four studies identified, an overall average r of 0.39 (p < 0.01) emerged, indicating a significant relationship between the two variables. Similarly, in social work, Sheridan's (2009) literature review found that four out of five studies had personal RS (measured as either religious affiliation, participation in communal RS services, or personal RS practices) as a significant predictor of higher RS intervention use. More recently, Cummings et al. (2014) conducted a systematic review regarding the relationship between practitioners' RS and therapy attitudes and behaviors, and found seven of eight identified articles indicated a positive, significant relationship between the two. Further, five out of six studies found a positive association between therapists' RS and self-rated competence with integrating clients' RS, and eight out of 10 studies found therapists' RS predicted the use of RS interventions in treatment.
Finally, the authors included studies that described the relationships between therapists' RS and the therapeutic relationship, as well as the effects of therapists' RS on treatment outcomes (Cummings et al. 2014). More recently, Blair (2015) conducted a qualitative study to explore the influence of nine therapists' spirituality on their practice. He found that there is a "reflective, dynamic, and developmental process to integrate spiritual and therapeutic identities" (p. 164), and that therapists' spirituality not only influenced their work, but that they often strive to find harmony between their spirituality and profession.

Frequency of RS practices/service attendance. The frequency of RS service attendance or RS practices is arguably one of the strongest methods for measuring degree of RS, as it reduces the tautological issues that often accompany many spirituality measurements (King 2011). Thus, the frequency of RS practices and service attendance has often been used as a way to measure practitioners' RS in studies that explore the consideration of clients' RS in treatment. Among social workers in New York, the frequency of spiritual participation had a significant, positive correlation with their attitudes toward RS in practice (r = 0.47, p < 0.001) (Heyman et al. 2006). Further, within two regression analyses, the frequency of spiritual participation was the largest predictor. In the first, spiritual participation (β = 0.50, p < 0.001) was the only predictor compared with age, gender, and race. In the second analysis, spiritual participation (β = 0.46, p < 0.001) was the largest predictor compared with age, gender, race, years of social work experience (β = 0.20, p < 0.05), or whether they had taken a course in spirituality (β = 0.18, p < 0.01). Similarly, in a mid-Atlantic state, 204 clinical social workers' participation in communal RS services (β = 0.17, p = 0.02) emerged as one of the four significant predictors of using spiritually derived interventions in practice (Sheridan 2004). Among 299 gerontological social workers, Murdock (2005) found that private spiritual activities were significantly related to the use of RS interventions in practice. In one survey of clinical social workers working with youth across the US, Kvarfordt and Sheridan (2009) found the frequency of engaging in personal RS practices to be among the top predictors of their use of spiritually derived interventions in practice (β = 0.14, p < 0.01). Interestingly, a follow-up path analysis suggested the frequency of personal RS practices was the initial starting point for each path to the use of spiritually derived interventions, with personal practices having the strongest impact on general attitudes toward the role of RS in practice (β = 0.56, p < 0.001).

Religious affiliation. Practitioners' religious affiliation has somewhat mixed results in the literature. Utilizing the same sample described above in Studies 1-3, Oxhandler and Ellor (2017) compared Christians' responses with those of participants who did not self-identify as Christian, given: (1) a majority of social workers self-identify as being affiliated with a Christian denomination (Furman et al. 2011; Sheridan et al. 1992), and (2) Sherwood's (1999) description of how a Christian worldview and ethical code may affect how a social work practitioner views and engages with clients. However, only five items (one attitude, three self-efficacy, and one behavior item) across the 40-item RSIPAS indicated a significant difference, with Christians having higher responses.
Further, there was no difference between the two groups regarding their overall orientation toward integrating clients' RS. Still, other studies have found religious affiliation to influence whether practitioners integrate clients' RS. For example, to assess discriminant validity for the Spiritually Derived Intervention Checklist, Canda and Furman (2010) compared Christians' and Atheists'/Agnostics' responses. Similarly, as outlined in Cummings et al.'s (2014) review, having an RS affiliation has been positively related to RS intervention use (Shafranske and Maloney 1990), more self-disclosure of RS beliefs (Payman 2000), and therapists' view of the appropriateness of discussing RS (Beatty et al. 2007).

Summary

The initial development of this theory had two parts, based in Glaser's (2008) formal QGT methods. The first part was to explore the author's previous studies of LCSWs, which formed the initial seed of this theory. The second part included examining the literature and comparing others' results regarding the relationship between practitioners' RS and the integration of clients' RS. Though RS has not been measured in the same way across studies, and neither have practitioners' views and behaviors related to integrating clients' RS, it is clear that a conceptual pattern has largely emerged across studies relating practitioners' RS to their views and behaviors related to integrating clients' RS.

Namaste

One term that helped organize and make sense of what was happening within the data was Namaste. Nambiar describes Namaste as a combination of two Sanskrit words: Namah (to bow or bend) and te (to you), with the two influences behind this word being "Matter and Spirit" (Nambiar 1979, p. 5). He explains the secret of Namaste is the "blending of matter with spirit or the mortal body with the immortal soul, as demonstrated by the folded hands" (Nambiar 1979, p. 18), and that the "gesture is an expression of humility: 'I recognise God in you' . . . a feeling that almost becomes an instinct." (Nambiar 1979, p. 7). Similarly, Chatterjee describes a related term, Namaskar, as an ancient Hindu word used to describe a posture of greeting the sacred in others by "touching of the forehead with folded hands as the thumbs touch the forehead several times as if one is respecting the other by touching the point of the third eye or between the eyebrows" (Chatterjee 1996, p. 47). In American culture today, Namaste is used as a term to greet others and is often said at the conclusion of yoga classes, with the usual translation being 'the sacred in me honors the sacred in you'. Others have written about Namaste as "the God in me greets the God in you" (Cessna 2011, p. 43), "to honor the spirit within" (Duffin 2012, p. 14), or "I bow to you" (Cotton 2011, p. 108). Namaste has been integrated into a program for caring for older adults, entitled Namaste Care (Duffin 2012; Simard 2007), and even conceptualized as an approach to grading (Cotton 2011). Further, Namaste Care has been extended to older adults' mental health, noting that not only does it focus on gentle, loving, respectful touch, but also that "it is in this reciprocal process that the mental health aspect is made clear. Implicit in that reciprocity is the notion that mental health is at least a two-way process: caring for someone else's mental health implies a simultaneous caring for one's own." (Nicholls et al. 2013, p. 572).
Namaste Theory and Helping Professionals

Thus, recognizing the role IR, or some deeply personal religiosity, appears to have on the integration of clients' RS within the aforementioned studies, this idea of Namaste began to bring order to, and attempt to explain, this phenomenon. Specifically, as practitioners experience, are engaged in, become aware of, and infuse their own RS beliefs and practices into their daily lives, deepening their IR and becoming more attuned to the sacred within, they tend to hold more positive views toward and engage with clients' RS beliefs and practices as well. In other words, as helping professionals recognize the sacred within themselves, they appear to be more open to recognizing the sacred within their clients. This idea is not specific to one denomination or RS affiliation, but appears to transcend RS affiliations, again focusing on the degree to which one's awareness of the sacred in his/her life is infused into the everyday, including clinical practice. Nambiar expands upon this idea of Namaste extending beyond denominations or affiliations:

Namaste in its true spirit helps our ego to surrender to the goal of our faith. With folded hands and with mind attuned to the feeling of the oneness of humanity, we slowly and steadily attain complete identification with God. In this manner, Namaste helps us to break all the barriers in us and to become humble. This in turn makes us work as an instrument of God in the spiritual or social fields of our activities . . . When this knowledge grows in faith, it becomes wisdom and this is the goal of the simple Namaste greeting and therefore it is equally applicable to everybody alike, irrespective of caste, creed, colour, or nationality. (Nambiar 1979, pp. 20-21)

Utilizing this perspective, regardless of RS affiliation, it is truly the recognition of the sacred within that allows and empowers us to recognize the sacred within others. The term sacred has been defined as "a person, an object, a principle, or a concept that transcends the self. Though the Sacred may be found within the self, it has perceived value independent of the self. Perceptions of the Sacred invoke feelings of respect, reverence, devotion and may, ideally, serve an integrative function in human personality" (Hill et al. 2000, p. 64). Understanding sacred in this capacity, not limited to the confines of RS traditions or language, then allows the concept of Namaste to extend beyond religion, or even spirituality, to also include what those who consider themselves Atheists or Agnostics hold to be sacred. In fact, while peer debriefing the findings with colleagues, one point of discussion was the potential for this theory to extend beyond practitioners deeply recognizing their RS and, in turn, considering clients' RS. For example, practitioners who are deeply aware of and invested in understanding the intersectionality of other various, diverse elements of their identity and/or culture (e.g., race, ethnicity, sexual orientation, sexual identity, gender, age, ability/disability, politics, community, socioeconomic status, geography, career, education, interests, etc.) may be more likely to consider the role these various elements play in others' lives, such as their clients' (Elizabeth Goatley, personal communication, 22 May 2017; Danielle Parrish, personal communication, 29 March 2017).
Phrased another way, the more reflective and aware practitioners are of their own intersectionality, including how a variety of unique elements of diversity influence them as individuals, and the more time practitioners take to deeply understand who they are, the more likely it may be that those practitioners recognize such intersectionality in clients' lives. However, this extends beyond the current study and may be worth exploring in future studies. Though this concept certainly mirrors elements of the use of self as a skill discussed in the helping professions, use of self is a largely ambiguous term. Indeed, Dewane (2006) has attempted to home in on an explanation of this skill to include our use of personality, belief system (though RS is not explicitly mentioned in this example, it can be inferred), relational dynamics, anxiety, and self-disclosure. What the use of self as a skill fails to address, however, is the process in Namaste Theory which states, "the sacred in me sees and honors the sacred in you." The helping professionals who have taken the time and energy to deeply discover their RS beliefs/practices, have infused RS into their whole approach to life, find RS to influence their daily life, report experiencing the Divine, have higher DUREL IR scores, and freely describe their personal RS as being what helps them to integrate clients' RS appear to be more likely to recognize this element in clients' lives.

Discussion

Given the growing body of research that has explored the role of practitioners' RS (particularly IR) and its relationship with integrating clients' RS, Namaste Theory provides one option for organizing these results into a conceptualized explanation. Specifically, as practitioners reflect on and recognize the sacred within themselves (in this case, their own RS beliefs/practices, infused into their daily lives), they appear to recognize this within their clients. Though practitioners' RS, as well as their views and behaviors related to integrating clients' RS, have been previously measured in a variety of ways, future studies of this theory may want to utilize more intentional measurement strategies. With regard to practitioners' RS, the DUREL is a brief, validated instrument that measures not only IR, but also ER (Koenig and Büssing 2010). As Pargament reminds us, "much of religious experience remains private, subjective, and highly symbolic" (Pargament 1997, p. 11), so careful consideration of measurement strategies is important. Measuring practitioners' IR and ER can help to alleviate the potentially positive bias of some RS instruments and reduce the risk of tautological issues (King 2011). To assess helping professionals' integration of clients' RS in practice, the RSIPAS is the only instrument that has excellent reliability, has established all forms of validity, and has been validated across five helping professions. Further, the RSIPAS is able to measure practitioners' attitudes, self-efficacy, perceived feasibility, behaviors, and their overall orientation toward integrating clients' RS in practice (Oxhandler 2016; Oxhandler and Parrish 2016). Additionally, future studies may utilize qualitative methods to deeply investigate the three DUREL IR subscale questions, particularly as they relate to working with clients.
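Where future studies do take the quantitative route recommended here (the DUREL alongside the RSIPAS), the core criterion-style analysis is simple to run. The following is a minimal sketch, assuming a hypothetical per-item survey export; the file and column names and the simple sum-scoring are our assumptions for illustration, not a published scoring protocol.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical survey export: three DUREL intrinsic religiosity (IR) items
# and forty RSIPAS items, one respondent per row.
df = pd.read_csv("survey.csv")

# Sum-score the DUREL IR subscale (3 items) and the RSIPAS total (40 items).
durel_ir = df[[f"durel_ir_{i}" for i in range(1, 4)]].sum(axis=1)
rsipas_total = df[[f"rsipas_{i}" for i in range(1, 41)]].sum(axis=1)

# Criterion-style check: correlation between practitioners' IR and their
# overall orientation toward integrating clients' RS in practice.
r, p = pearsonr(durel_ir, rsipas_total)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
```

Coefficients of the size reported above (e.g., r = 0.46 between IR and the overall RSIPAS score in Study 1) correspond to exactly this kind of bivariate check.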
Similarly, researchers may be interested in having practitioners frequently journal about their IR and clinical practice to better understand how their RS is infused into professional practice (Moffatt and Oxhandler forthcoming). While various practitioner characteristics did not have a significant relationship with their views or behaviors related to integrating clients' RS in many of the studies above (Oxhandler et al. 2015), it is worth exploring other mechanisms that support RS integration. For example, in Oxhandler and Giardina (2017), though 44% of practitioners freely mentioned that their personal RS helped them to assess and integrate clients' RS, 56% did not include this. In fact, two other themes emerged in this sample, and in many cases, respondents mentioned more than one of these themes in their response. Indeed, a majority (67%) mentioned that having an RS-sensitive practice helped them consider clients' RS, with utilizing a person-centered approach as the most common practice, and 25% mentioned educational experience. Thus, it is worth better understanding what exactly having an RS-sensitive practice means to practitioners in order to clarify what influences the process of RS integration. For example, for practitioners who do not view themselves as religious or spiritual, perhaps claiming to be open to understanding clients' RS is how they define having an RS-sensitive practice. Still, one might argue that practitioners who claim to offer an RS-sensitive practice have done reflective work on their own RS beliefs, whatever they may be, to the point that they feel comfortable exploring what clients believe. On the other hand, there may be other characteristics that help and/or hinder integrating clients' RS, which cannot be captured in a quantitative survey or brief, open-ended response. For example, though age of the practitioner, age of clients served, region of the country, gender, years of practice, and degree of burnout were not significantly related to integrating clients' RS (Oxhandler et al. 2015), that does not mean these characteristics are not relevant for some practitioners when it comes to considering clients' RS. Other characteristics that could potentially impact RS integration include: (1) practitioners' views of God/Higher Power as benevolent, critical, distant, or authoritative (Froese and Bader 2010); (2) their previous experience with RS organizations; (3) whether or not they work in a secular or religiously-affiliated setting; (4) the types of presenting clinical issues in their practice; (5) the amount of time allowed with each client; or (6) the clients' views of RS. Some of these barriers are mentioned by practitioners in Oxhandler and Giardina (2017), but would require more exploration. Regardless, identifying that practitioners' IR and other RS elements can and do influence whether, and the degree to which, practitioners integrate clients' RS has a number of practical and educational implications. Given that RS emerges across multiple helping professions' ethical codes, primarily focused on not discriminating but also on the integration of clients' RS (American Psychological Association 2010; American Association for Marriage and Family Therapy 2012; American Counseling Association 2014; American Nurses Association 2015; National Association of Social Workers 2008), helping professions must carefully attend to this area in training programs and during post-graduate supervision.
Additionally, it is important that practitioners be well trained to effectively and ethically assess for and integrate clients' RS while setting appropriate boundaries related to their own RS beliefs and practices. As Canda (2008) noted, practitioners' beliefs "may intentionally or unintentionally be a direct or indirect party to . . . harmful practices" (p. 32). Though growing evidence suggests ethically assessing and integrating clients' RS yields positive health and mental health outcomes (Koenig et al. 2001), previous research and this theory clearly indicate that practitioners' RS (particularly their IR) cannot be ignored. Although the early development of Namaste Theory offers an initial option to help organize the results regarding practitioners' IR and their integration of clients' RS into a conceptual explanation, future research is necessary to obtain practitioners' views regarding this theory and to ground it in qualitative data. Glaser's (2008) recommendations for formal QGT served as an appropriate foundation for examining and generating this concept, with the ability to relax the typically strict guidelines often found in quantitative studies in order for the theory to emerge. Certainly, studies may exist or be conducted in the future that refute this theory. During this review of studies exploring the integration of clients' RS, only one study (Larsen 2011) was identified that found practitioners' RS, measured by Hodge's (2003) Intrinsic Spirituality Scale, was not significantly related to their use of RS interventions in practice; though, other such studies may exist. However, the evidence was overwhelmingly supportive of the emergence of Namaste Theory, and the author hopes it provides a meaningful framework for practice and education across helping professions. The author also hopes that those who test Namaste Theory will share their results regarding its viability, as it must be continually tested in order to remain grounded in data.
Adaptive Gait Acquisition through Learning Dynamic Stimulus Instinct of Bipedal Robot

Standard alternating leg motions serve as the foundation for simple bipedal gaits, and the effectiveness of the fixed stimulus signal has been proved in recent studies. However, in order to address perturbations and imbalances, robots require more dynamic gaits. In this paper, we introduce dynamic stimulus signals together with a bipedal locomotion policy into reinforcement learning (RL). Through the learned stimulus frequency policy, we induce the bipedal robot to obtain both three-dimensional (3D) locomotion and an adaptive gait under disturbance without relying on an explicit and model-based gait in either the training stage or deployment. In addition, a set of specialized reward functions focusing on reliable frequency reflections is used in our framework to ensure correspondence between locomotion features and the dynamic stimulus. Moreover, we demonstrate efficient sim-to-real transfer, making a bipedal robot called BITeno achieve robust locomotion and disturbance resistance, even in extreme situations of foot sliding in the real world. In detail, under a sudden change in torso velocity of −1.2 m/s in 0.65 s, the recovery time is within 1.5–2.0 s.

Introduction

As a type of legged robot, the bipedal robot integrates well with human society. Similarly, environments designed for humans are also suitable for bipedal robots. However, the motion control of legged robots is challenging, especially in tasks with randomness, such as irregular terrain and external disturbances. Until now, equipping a bipedal robot with adaptive gaits has been a complex problem involving rigid body mechanics and actuator control issues.

In previous studies, model-based bipedal locomotion algorithms have made progress [1-3]. The simplified mechanical model facilitates bipedal motion planning and balance control, which enables bipedal robots to achieve walking, jogging, and simple jumping in structured environments. However, these bipedal locomotion methods lack sufficient resistance to non-preset perturbations due to the limitations of artificial state machines, which limits the potential of bipedal robots. Therefore, there is a need to develop more comprehensive and efficient control methods.

In order to simplify the RL training process, some robots utilize a model-based controller for guidance or initialization. Recent RL studies [11,12] have used the residual RL [13-15] framework to train corrective policies to better track the joint trajectories from model-based controllers. However, although the reference trajectories ensure smooth bipedal locomotion, this type of residual RL method sacrifices the advanced knowledge of adaptive gaits. In addition, the framework combining the optimization of a single rigid body model [7] with RL enables the bipedal robot to achieve a maximum speed of 3 m/s, and footstep-constrained learning [8] can predict the next touchdown location, but the mechanical constraints also limit the RL from exploring bipedal features when trying more dynamic movements. Actually, except for some specific applications of bipedal locomotion, such as using RL to adjust controller parameters [16,17], model-free RL methods show more potential than model-based ones.
Domain randomization is effective for carrying unsensed dynamic loads [6] and dealing with blind stair traversal [9], but the essence of this type of method is to expand the knowledge pool of RL without direction. Therefore, it is difficult for users to design skill instructions through such randomized high-dimensional features, which is also the key issue of the model-free RL method. Moreover, RL integrated with imitation learning (IL) can be used to train a more bionic bipedal policy [10,18,19], but the low-dimensional imitation tends to easily hinder the high-value development of the policy. In the RL process, a reasonable expression of bipedal gait is important for the robot to learn robust skills. The parameterized gait library used in [20] presets a locomotion encoder that is beneficial to the RL process, while the learned policy cannot handle situations that are not covered by the gait library. Hence, a training method that is both practicable and explorable will improve the performance of bipedal locomotion.

In order to learn orderly leg movements, periodic rewards and inputs have been used to provide criteria for training the bipedal policy [21], thereby enabling users to switch between learned gaits. Similarly, a symmetric loss and curriculum learning were designed in [22], and the robot achieved a balanced, low-energy gait. However, since the periodic signals are static for both legs in the frequency domain, it is difficult to train an adaptive gait by relying only on this simple design.

From the perspective of legged robots, the RL method has achieved state-of-the-art results in the field of quadruped robots [23-27]. Quadruped robots have a lower center-of-mass (CoM) height and a larger support area than bipedal robots, which means more stability during locomotion. The quadruped robot ANYmal [28] utilizes four identical foot trajectory generators (FTGs) [29] together with a neural network policy to learn the dynamic gait to traverse different terrains [25], demonstrating that artificial gaits based on inverse kinematics can assist the quadruped policy in learning skills. Moreover, a more parametric generator based on central pattern generators (CPGs) was used in RL tasks [26] to achieve quadrupedal locomotion on mountain roads. For quadruped robots, regularized FTGs can not only meet the needs of locomotion but also facilitate RL training. But for bipedal robots, adaptive gaits need to be more dynamic and agile, so neither the generator nor inverse kinematics is helpful for this purpose.

Based on our previous work on the BRS1-P robot [1,30], 3D locomotion requires an independent state estimation module due to the absence of proprioceptive velocity sensors. As an important observation and reward element, an accurate linear velocity of the CoM is the basis of tracking commands. Recently, some model-based state estimation algorithms have been used in RL tasks of bipedal locomotion [10,11,31]. Therefore, an efficient state estimator is necessary for our RL method.
In this paper, we propose an RL framework consisting of an actor policy and a stimulus policy that outputs dynamic frequencies for the clock signal generator, as shown in Figure 1. Based on fixed periodic components similar to [21] and our previous work [30], we obtained the primary gait in 3D space. In order to design an implicit mechanism that can both correlate adaptive gaits and preserve sufficient exploration potential, we use the dynamic signals as a part of the input of the actor policy. In addition, we introduce a reward component corresponding to the stimulus frequency adjustment to train the adaptive gaits. The learned policies are deployed on the physical robot called BITeno through learning with the embedded mechanics properties (EMP) [30].

The contributions of this study can be summarized as a trainable framework, including the gait stimulation policy for RL, which provides both the guidance and the exploration space for adaptive gaits. Furthermore, from a bionic perspective, we propose an independent stimulus frequency for each leg to explore a more diverse range of gait patterns. Finally, a series of experiments on physical robots verified the generalization ability of the trained policies and demonstrated better anti-disturbance performance than static stimulus methods.

The structure of this paper is as follows. In Section 2, we explain the complete RL framework and the details of the BITeno platform. In Section 3, the experimental results and discussions are presented. Finally, in Section 4, we summarize the conclusions of our work in this study.

Reinforcement Learning Framework and Hardware Platform

We aimed to acquire adaptive gaits using RL methods so that a bipedal robot can resist unknown perturbations while tracking user commands well. In this process, the linear velocity of the CoM is an important observation that cannot be obtained directly by proprioceptive sensors in the physical world. Therefore, we utilize a state estimator based on previous works [23] to map the current state to the linear velocity V_E, which is considered a cooperator of 3D bipedal locomotion in our methods. In this framework, as shown in Figure 1, two additional agents, namely, the actor policy and the stimulus frequency policy, are incorporated as multilayer perceptrons (MLPs). In detail, the actor policy operates as a core controller outputting the target positions of the whole-body joints. Furthermore, the stimulus frequency policy is a front-end, high-dimensional controller that adjusts the left and right implicit frequencies (L-IF and R-IF) of the two legs according to the real-time states of the robot. More importantly, the clock signal generator was designed to convert the frequency feature into explicit stimulus signals that serve as key components of the actor policy inputs. Specifically, compared with the locomotion of the quadruped robot, bipedal locomotion indeed tends to be constrained by preset gaits like FTGs [25], and the real-time frequency designed for each leg aligns more closely with bionic principles.
In order to make the RL policies converge well, we trained the robot in simulation, as shown in Figure 1. To learn a basic balancing skill as preparation, an initial value was continuously applied to the clock signal generator to output regular signals until the robot acquired a normal gait. During this process, the stimulus frequency policy was trained using supervised learning (SL) according to the initial value, with the goal of enabling the instinctive generation of an original stimulus frequency. Subsequently, both of the policies were trained using RL in simulation. Moreover, for the purpose of reducing resonance and maintaining control authority, the joint action output at 100 Hz actually served as the joint reference for the PD controller working at 1000 Hz.

All neural networks in the RL task were trained using data from the high-performance simulator Isaac Gym [32], and proximal policy optimization (PPO) [33] was used to train the actor policy and the stimulus frequency policy based on the actor-critic [34] method.

As for the point-footed platform illustrated in Figure 2, the bipedal robot BITeno was originally designed by our team for dynamic locomotion, and the six actuators provide torque control with a peak value of 62.5 N·m. In addition, the reduction ratio of each joint is 10, which provides abundant torque and enough agility. The total mass of BITeno is about 16 kg, its standing height is 0.95 m, and the IMU sensor was assembled at the CoM position calculated by the simulator to reduce sim-to-real challenges. In addition, EtherCAT was used for communication between the computer (ASUS-PN51/R75700U) and the joint controllers.

Reinforcement Learning Formulation

The physical world of bipedal locomotion is continuous, but in our RL task, the control problem is formulated in discrete time to simplify the modeling process. At time step t, the observation o_t represents the state of the current environment, so the locomotion can be described using a Markov Decision Process (MDP). Each of the MLPs in our RL framework can be regarded as a policy π(a_t | o_t) outputting the action a_t according to o_t, after which the environment moves to the next state o_{t+1}. In detail, both a_t and the transition of the environment come from their respective probability density functions. Furthermore, the reward R_{t+1} = R(o_t, a_t, o_{t+1}) evaluates the control performance of the current unit cycle at time step t+1. However, the scalar reward cannot evaluate the future trend of locomotion, especially under unknown disturbances. Hence, the expected discounted reward D(π) = E_π[Σ_{t≥0} γ^t R_{t+1}] is introduced in the RL task, and the goal of the RL task is to find the optimal policy π*(A | O) that is closest to the theoretically ideal policy, where O is the observation space and A is the action space. Actually, policies in an RL task can only converge well when the local optimum is covered by O and A, and the implicit stimulus was designed to support this purpose better.

Observation, Action, and Network Architecture

The observations of each policy in our framework are slightly different because of the specific logical relationship between the two policies.
As shown in Table 1, the full observation of the actor policy consists of the user command (R^3), comprising the three expected linear velocities along the X, Y, and Z axes, respectively; the joint positions (R^6) and joint angular velocities (R^6) of the 6 actuators, 12 in total; the torso pose (R^3) and torso rotational velocities (R^3) obtained by the IMU, six in total; the action history (R^6) of the last time step; the estimated linear velocity (R^3); and the dynamic signal (R^2). Moreover, the action of the actor policy is a vector containing the joint target positions (R^6). In addition, the linear velocity vector is concatenated with the observation from proprioceptive sensors, which provides the whole-body feature for the stimulus frequency policy to produce the clipped frequency (R^2) that regulates the dynamic signal. The policy networks in our work are composed of MLPs. Specifically, the stimulus frequency policy contains two hidden layers with {128, 64} hidden units, and the actor policy has three hidden layers with {512, 256, 128} hidden units. The activation function for each is ReLU.

Clock Signal Generator

The periodic signal is effective guidance for bipedal gaits [21]. In detail, the frequency, amplitude, and phase variables can influence the joint movement produced by the actor policy; hence, each single leg will reflect the corresponding routine. When the two legs work together, an interchanged gait is produced, avoiding the occurrence of asymmetrical and strange gaits in training practice. However, signals with fixed parameters are still unable to cope well with various external disturbances, especially with a fixed frequency. Therefore, the RL-based stimulus frequency policy is proposed to provide dynamic frequencies that are equivalent to the latent feature contained in the adaptive gaits.

As the source of dynamic signals, the clock signal generator receives the clipped L-IF and R-IF and then produces the dynamic signal for the actor policy. As shown in Figure 3, the real-time signals are concatenated and sampled in a continuous frequency range [2.6π, 3.8π], and the dynamic signal S_d is given by Equation (3): a sine (cosine for the other leg) of the phase accumulated from the clipped frequencies over the control process, where T_n is the cumulative time of the control process and dt is 0.001 s. According to this design, the temporal density of the dynamic signals varies with L-IF and R-IF, while the physical time remains uniform. Additionally, the initial value is 3.03π, which means the desired stepping period is 0.66 s for each leg. Moreover, it should be noted that robots like BITeno require an appropriate stepping frequency to maintain balance because the point-footed design does not support static bipedal locomotion, so frequencies below 2.6π are not accepted here. Furthermore, values exceeding the upper limit can easily trigger tremors within the joints, which is obviously detrimental to the adaptive gaits. The clipped output of the stimulus frequency policy lies between the lower and upper limits, which ensures the safety of bipedal gaits through a reasonable sine (cosine) stimulus. In the training process and deployment, the dynamic stimulus signal is sampled at 100 Hz, and the amplitude A_p of the signal is a hyper-parameter.
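A minimal sketch of how these components could fit together, assuming PyTorch. The hidden-layer sizes, ReLU activations, frequency range [2.6π, 3.8π], initial value 3.03π, 0.001 s integration step, and the 32-dimensional actor observation (3 + 12 + 6 + 6 + 3 + 2) follow the text above; the module names, the stimulus-policy input width, and the exact phase-accumulation form of Equation (3) are our assumptions for illustration, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn

F_MIN, F_MAX = 2.6 * math.pi, 3.8 * math.pi  # accepted frequency range (rad/s)
DT = 0.001                                   # integration step from the text (1 kHz)
STIM_OBS_DIM = 21                            # assumed: proprioception (18) + estimated velocity (3)
ACTOR_OBS_DIM = 32                           # 3 + 12 + 6 + 6 + 3 + 2, per Table 1's description

def mlp(sizes):
    """Plain fully connected network with ReLU on the hidden layers."""
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(a, b), nn.ReLU()]
    return nn.Sequential(*layers[:-1])       # drop the activation on the output layer

# Stimulus frequency policy: whole-body feature -> raw (L-IF, R-IF) in R^2.
stimulus_policy = mlp([STIM_OBS_DIM, 128, 64, 2])

# Actor policy: full observation (incl. the 2-D dynamic signal) -> 6 joint targets.
actor_policy = mlp([ACTOR_OBS_DIM, 512, 256, 128, 6])

class ClockSignalGenerator:
    """Turns clipped per-leg frequencies into the explicit stimulus signal.

    One plausible reading of Equation (3): each leg's phase accumulates
    omega * dt, so the temporal density of the signal follows L-IF/R-IF
    while wall-clock time stays uniform; one leg uses sine, the other cosine.
    """

    def __init__(self, amplitude=1.0, init_freq=3.03 * math.pi):
        self.amp = amplitude                     # hyper-parameter A_p
        self.phase = torch.zeros(2)              # (left, right) accumulated phase
        self.freq = torch.full((2,), init_freq)  # 2*pi / (3.03*pi) ~= 0.66 s period

    def step(self, raw_freq):
        self.freq = torch.clamp(raw_freq, F_MIN, F_MAX)  # safety clipping
        self.phase = self.phase + self.freq * DT
        return self.amp * torch.stack(
            (torch.sin(self.phase[0]), torch.cos(self.phase[1]))
        )
```

In this reading, the generator would be stepped at the 1000 Hz rate of the low-level loop while the two policies and the sampled dynamic signal run at 100 Hz, matching the control rates given above.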
Rewards and Training Process

In order to ensure sufficient exploration space, reward composition based on simplified models and artificial locomotion is avoided in the RL training. After the actor policy acquires the basic gait, our framework only focuses on the high-dimensional performance of bipedal locomotion. Therefore, we designed a specialized reward term to induce the L-IF and R-IF. When the robot can perform a stable bipedal gait and resist external disturbances well, it means that the implicit frequency is equipped with an adaptive ability. In addition, as a model-free RL framework, reference trajectories are not involved in the reward functions.

In our framework, R_t = Σ_n β_n r_n is the total reward at time step t, where r_n is the nth-term reward. Each term of the reward functions is weighted by β_n and represents a certain preference for bipedal locomotion. When the value of R_t increases, it is generally considered that the robot's performance is getting better. Of course, R_t only works during training in simulation due to the use of some privileged information (e.g., an accurate torso height). Therefore, the design of the reward functions is an important factor in the sim-to-real transfer, which is also one of the reasons for the existence of β_n. The details of the rewards are in Appendix A.

Since the scale of the data is close to that in our previous work [30], the PPO hyperparameters adopted similar settings in this study. As for the training process, a series of external perturbations were applied to the robot at irregular intervals, which allowed both policies to simultaneously acquire more agile skills through interactions with the environment. At the deployment stage, these RL methods also provide sufficient compatibility for the sim-to-real transfer. Additionally, the EMP of the 3D robot was extracted before the RL stage, providing a default simulation setup that follows the features of the physical robot.

Results and Discussion

The trained policies were successfully deployed on the physical robot using the same framework as the training process, enabling BITeno to achieve impressive 3D bipedal locomotion. Through the exploration of RL, BITeno acquired the skill to stably track user commands, as shown in Figure 4. Moreover, under a series of external disturbances, the point-footed BITeno suffered foot slippages during posture adjustment and eventually recovered to a stable gait, demonstrating a robust sim-to-real transfer, as shown in Figure 5. In detail, BITeno can implement stable bipedal locomotion using a normal gait on flat ground. Furthermore, different constant frequencies were used as inputs for the actor policy, as shown in Figure 6. Despite no disturbances being applied to the robot, it still generated varying step counts within a fixed period of time. Moreover, the foot contact force and the torso velocity also maintained good coupling over time, which is a necessary basis for stable gaits. Therefore, all of these results demonstrate the adaptability of the current control framework. Additionally, because of the effect of the stimulus frequency policy, the actor policy received dynamic signals and adjusted its step frequencies continuously, showcasing versatility in different situations. As for the joint-level movements, all joints consistently maintained a frequency close to the initial value during locomotion on flat ground. When faced with sudden changes in the robot's status, all joints responded rapidly with a brief frequency adjustment, as shown in Figure 7.
Specifically, each joint performed repeated movements consisting of two support phases (or one) and one swing phase (or two) per second for normal gaits. However, in adaptive gaits, the frequency of joint movements increases to a higher level in order to maintain real-time balance.

As shown in Figure 8, the normal gait of the stimulus frequency policy can achieve primary balance, which verifies the effectiveness of the sim-to-real transfer of our framework on the BITeno hardware platform even when using only the static signal. Furthermore, it can be seen from the snapshots in Figure 8 that the robot only made one step as an emergency action under a usual disturbance, resulting in insufficient dynamic performance and the robot falling down. In addition, the support leg should act more agilely to maintain balance at this time, but the target positions of the joints did not work with suitable frequencies. Actually, without learning dynamic skills, the bipedal locomotion in this experiment achieved the expected performance and reached the upper limit of the capability of the normal gait. More importantly, concerning the movements of the joints shown in Figure 8, the robot did not even show the struggling actions seen with our full framework after it started to fall down, which further proves the positive impact of learned stimulus signals on adaptive gaits. Through RL training and deployment, we found that normal gait stepping without dynamic stimulation is also relatively stiff, even though it can remain balanced without disturbance. Additionally, we also found that there is a coupling relationship between the robot's link size and the natural frequency of the hardware, which is important for further research on our framework.

Conclusions

Through the methods and experiments presented in this paper, we verified that dynamic clock signals can improve the performance of an RL-learned actor policy. Furthermore, based on the existing gait obtained through a fixed clock, our framework provides more adaptive skills for bipedal robots through learning dynamic stimulus instincts. In detail, the experiments on a physical robot, BITeno, demonstrated both stable walking and adaptive gaits under a series of external disturbances, which also prove that our framework is suitable for sim-to-real transfer. Furthermore, the independent use of the stimulus frequency policy provides a dedicated agent for adaptive gaits, which validates a paradigm for bipedal robots to learn richer gaits or more complex tasks.

Along this research trajectory, bipedal robots can acquire additional bionic skills through specifically designed agents. In the future, we plan to extend the stimulus frequency policy in this paper to joint-level dynamic control. Focusing on more bionic designs, we will train the agile locomotion policy to accommodate more complex bipedal tasks through RL methods. As for the natural frequency of the hardware, it is still difficult to achieve accurate calculations based on the rigorous theory of mechanics. Therefore, using it as an implicit feature in the construction of an RL framework can be a part of our future work.

Figure 1. Overview of our RL framework. The learned policies are deployed on the physical robot called BITeno through learning with the embedded mechanics properties (EMP) [30], and the framework consists of two main modules. I.
Figure 1. Overview of our RL framework. The learned policies are deployed on the physical robot, BITeno, through learning with the embedded mechanics properties (EMP) [30], and the framework consists of two main modules. I. RL: to acquire the adaptive gaits, the stimulus frequency policy and the actor policy are trained together. II. Deployment: all of the policies achieve sim-to-real transfer on the physical robot BITeno; the processes corresponding to the dotted lines do not operate at this stage.

Figure 2. The design of the BITeno platform. (Left) The mechanical design in simulation, where the features of each link were assigned according to the real materials. (Right) The physical robot with the electrical system onboard.

Figure 3. Clock signal generator based on a sine (cosine for the other leg) function. The clipped output of the stimulus frequency policy lies between the lower and upper limits, which ensures the safety of bipedal gaits through a reasonable sine (cosine) stimulus. In training and deployment, the dynamic stimulus signal is sampled at 100 Hz, and the colors indicate signals with different frequencies during sampling. Additionally, the amplitude A_p of a signal is a hyper-parameter.

Figure 4. Stable walking (0.65 m/s) gaits. The numbers on the top right represent the time sequence of the locomotion.

Figure 5. Adaptive resistance gaits under weak and strong disturbances.

Figure 6. The dynamic step style of the normal gait derived from different stimulus signals. The circled numbers are the sequence numbers of the sampled touchdowns.

Figure 8. The results of simply using the initial values instead of the dynamic stimulus.

Table 1. The observation and output of each module. ✓ represents an observation, ▶ represents an action, and ✗ means irrelevant data. The joint state is obtained by the off-axis rotary absolute encoder assembled in each joint.
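To make the clock signal generator described in the Figure 3 caption concrete, the sketch below implements one plausible version: the stimulus frequency policy's raw output is clipped to a safe band, and sine/cosine signals of amplitude A_p are sampled at 100 Hz. The frequency limits and the amplitude value here are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def clock_signals(freq_raw, t, A_p=1.0, f_min=0.5, f_max=3.0):
    # Clip the stimulus frequency policy output to the safe band; the
    # limits and amplitude here are assumptions, not the paper's values.
    f = np.clip(freq_raw, f_min, f_max)
    left = A_p * np.sin(2.0 * np.pi * f * t)   # sine stimulus for one leg
    right = A_p * np.cos(2.0 * np.pi * f * t)  # cosine stimulus for the other leg
    return left, right

# Sample the dynamic stimulus at 100 Hz, as in training and deployment:
for t in np.arange(0.0, 1.0, 0.01):
    left, right = clock_signals(freq_raw=2.0, t=t)
```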
2024-05-26T15:58:51.171Z
2024-05-22T00:00:00.000
{ "year": 2024, "sha1": "2e9d7c4da333443ad844c85e8c3fbc314cf48039", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2313-7673/9/6/310/pdf?version=1716387576", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a0646009110ca7e5b1325ce7823dca7fc6e1909a", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [] }
52817614
pes2o/s2orc
v3-fos-license
A history of futures: A review of scenario use in water policy studies in the Netherlands

Highlights
► We reflect on six decades of scenario use in Dutch water management studies.
► We observed a paradigm shift from predicting to exploring the future.
► This increased the opportunity for robust decision-making.
► The scenarios enabled learning about impacts and effectiveness of policy options and raised awareness.
► The turn-over rate from science to policy has sped up and further intensified.

Introduction

The world's river deltas are increasingly vulnerable due to pressures from climate change, relative sea level rise and population growth (Syvitski et al., 2009; Vörösmarty, 2009). Therefore, densely populated deltas such as the Netherlands require well-designed water management for flood protection and for coping with varying water demands and availability. Water management decisions should bring solutions that will last for several decades, implying that they should remain adequate even if pressures change. However, uncertainties about the future make decisionmaking less straightforward. Therefore, policymakers increasingly use robustness as an indicator in decisionmaking. A robust strategy performs relatively well across a wide range of possible futures (Lempert et al., 2006) and other uncertainties. Water management faces uncertainties arising from (1) natural uncertainties such as trends and extreme weather events; (2) social uncertainties due to shifts in human response and values; and (3) technological uncertainties introduced by modelling future states and impacts (e.g. Haasnoot et al., 2011).

Scenario analysis is a method for dealing with uncertainties, and aims to assess possible impacts and to design policies (e.g. Carter et al., 2007). Scenarios are coherent descriptions of alternative hypothetical futures that reflect different perspectives on past, present and future developments, which can serve as a basis for action (Van Notten, 2005). Since its first use in military planning in the 1950s (Bradfield et al., 2005; Brown, 1968; Kahn and Wiener, 1967), scenario analysis has been applied in a variety of areas, such as business development (Bradfield et al., 2005; Van der Heijden, 1996; Wack, 1985), environmental planning (Alcamo, 2009, 2001; Peterson et al., 2003) and climate change mitigation and adaptation (Hulme and Dessai, 2008; IPCC, 2000; Rosentrater, 2010; Wigley et al., 1980). Scenarios have also been used for robust decisionmaking in case of complex problems with deep uncertainty, such as long-term water management under changing conditions (e.g. Lempert and Schlesinger, 2000; Dewar et al., 1993, 2003; Lempert et al., 2006; Groves, 2006; Kwakkel et al., 2010; or Middelkoop et al., 2004; Van Asselt and Rotmans, 2002; Dessai and Hulme, 2007 for examples related to water management).

To enable life in a low-lying delta, the Dutch have a long history of controlling and maintaining the water system. In the Netherlands, scenarios have been used since the 1950s to prepare water management for the future. After six decades of experience, we reflect on scenario use in water management in the Netherlands and identify possible improvements for future studies. This evaluation provides more insight into policymaking on water management in river deltas under uncertainty, supporting the current development of the next generation of scenarios for climate adaptation studies.
This paper provides a review of scenario use in water management studies on the Rhine-Meuse delta in the Netherlands, and evaluates the lessons that can be derived from this experience. We seek to answer the following questions: How did scenario use in water management evolve? Did the scenarios provide prospects for robust decisionmaking? Did the scenarios enable learning for policymakers and/or scientists? After giving a historical perspective, we evaluate the scenario use based on two criteria: 'Decision robustness' and 'Learning success'. We end the paper with conclusions and recommendations for future water management studies.

2. Approach for evaluating the scenario use

For our chronology of scenario use in water management in the Netherlands we reviewed all national water policy documents, the key research studies on climate and water, and related climate scenario studies. In addition, we used our own experience, based on participation in several water policy studies since the 1990s, and the experience of several colleagues who were involved in earlier water policy studies or climate scenario studies. We present the studies from the Netherlands against the (inter)national context (see the overview and the supplementary information for more characteristics).

For our analysis we adopted two criteria used by Hulme and Dessai (2008b) in a framework for climate scenario evaluation, which we further refer to as 'Decision robustness' and 'Learning success'. The 'Decision robustness' criterion can be addressed with the following question: do the scenarios contain a sufficient representation of relevant knowable uncertainties to offer the prospect that decisions taken with support of the scenarios will be robust? Robustness is an important criterion for good decisions under uncertainty (Rosenhead et al., 1972; Metz et al., 2001), especially for policymakers facing deep uncertainty (Lempert et al., 2006; Groves and Lempert, 2007). By including uncertainties in decisionmaking it is possible to identify strategies that perform relatively well under various possible futures (robust strategies), or to make a well-thought-out decision on whether or not to adapt a strategy in view of a specific uncertainty. Assessing the robustness of decisions is relevant because decisions involve large, high-cost investments and can have large implications for society. Therefore, water management decisions should be cost-effective for several decades, even if the future turns out to be different from what was anticipated.

Intuitively, one might consider the following question as a criterion for evaluating the 'Decision robustness' (in retrospect): was the decision taken a 'good' decision? However, there are some fundamental problems in answering this question. Firstly, major water management decisions often have a long implementation time, or involve strategies with a considerable lifetime (e.g. tens of years). Yet, for many studies the time passed has been too short to decide whether decisions have turned out to be successful. Secondly, and more importantly, we can only evaluate decisions against the single past we had, which is only one realisation of all possible futures that could have evolved after the decision was taken. For example, due to inherent climate variability and the stochastic nature of the occurrence of extremes, prolonged periods can pass without extreme events, even in the case of climate change.
If it was decided that anticipatory strategies were not needed, this decision would have been evaluated as 'good' as a result of the fortuitous absence of extreme events. In other, equally likely, realisations of the future, in which some extreme events occurred, this decision would have been judged as 'bad'. So, judging a decision against a single past does not provide a sound indication of its robustness or potential success; such an evaluation requires confronting the result with a range of realisations of the future. In this paper, therefore, we focus on whether the decision process, based on the scenarios considered, provided prospects for robust decisions.

Indicators for the 'Decision robustness' criterion should therefore reflect whether relevant uncertainties are sufficiently represented. Relevant uncertainties have a significant and distinguishable impact on the outcomes, and consequently on the decisionmaking (cf. IPCC, 2001). For water management this involves uncertainties in both water demand and availability. This means that scenarios should include uncertainties in climate, sea level and river discharges, which all affect water availability, as well as uncertainties in socio-economic and social developments (e.g. land use and the accepted flood damage), which determine societal requirements and thus the water demand. A different kind of relevant uncertainty arises from interactions between the water system, society and water management. For example, floods and droughts may raise the need for additional or new measures, or, more profoundly, may influence the societal perspective (e.g. how we evaluate the system and our expectations of the future) and trigger a water policy response, which may then affect the water system. The resulting water management response will in turn affect the water system and its future response to extremes. Uncertainty in the policy response further adds to the total uncertainty about the water system in the future. In retrospect, water management in the Netherlands has indeed been strongly driven by both floods (e.g. in 1993 and 1995) and drought events (e.g. the summer of 1976), and by socio-economic trends (e.g. the increasing valuation of nature and cultural heritage). For robust decisionmaking, scenarios should therefore consider the dynamic interactions among climate, society and water management as these evolve in the course of time and influence the performance of policy options.

To determine whether uncertainties were sufficiently represented for robust decisionmaking, we analysed the range and diversity of the considered scenarios using the following indicators: the number of scenarios, the variety in the range of outcomes encompassed, the variety in alternatives, and the temporal and dynamic nature of the scenarios. Using the range of a scenario as an indicator for 'Decision robustness' does not mean that decisionmaking should be based only on the extremes, nor that a broader range is in itself better. Instead, several alternative scenarios should be considered that encompass a relevant and plausible range of futures. Alternative scenarios go beyond the frequently used 'business as usual' scenarios derived by extrapolation of ongoing trends, and comprise changes in developments in the course of time. Regarding the temporal nature of a scenario, scenarios can be 'snapshots' describing a moment in the future, or 'transient' scenarios describing the evolution towards a certain point in the future (Van Notten, 2005).
The dynamic nature of a scenario refers to whether a scenario is essentially based on a gradual extrapolation of trends, or whether it encompasses events, discontinuities, or even surprises which change gradual developments abruptly (Van Notten, 2005). What is considered 'plausible' or 'relevant' is subject to different interpretations, and depends on one's expectations about the future and understanding of the system. A way of dealing with this type of uncertainty, often referred to as perspective-based uncertainty, is including such different perspectives in the scenarios (cf. Middelkoop et al., 2004; Van Asselt et al., 9582).

The 'Learning success' criterion refers to the question: did the scenarios enable learning for policymakers and scientists? Answering this question is relevant to indicate the value of scenario analysis, and to improve future scenario use in water management studies. Although there are many definitions of learning, most theorists agree that learning is a change in knowledge or behaviour as a result of experience (e.g. Kolb, 1984; Driscoll, 1994). Although we could not provide quantitative measures, we determined indications of the learning effect from reflections and underpinnings indicated in the reports. We give some examples: (1) a policy report that mentions results of a scientific long-term water policy study as a starting point of its own study ('Scenario studies show that climate change will have an impact on the hydrological water system.'); (2) a policy document mentioning a contextual development or event as a reason to adapt a policy or a scenario ('Event x raised awareness that a new scenario/approach is needed.'); (3) a research study stating that previous results showed 'X', but 'Y' is unclear and will be studied. We therefore analysed the evolution of the scenario content and use, the studies' subjects, and the science-policy interaction, and used this information, in combination with our experience and the experience of our colleagues, to estimate the 'Learning success'.

3. Historical perspective on scenario use in water management studies

3.1. The emergence of concepts

The emergence of the concept of anthropogenic global warming has been characterised by different milestones (e.g. Peterson et al., 2008; Weart, 2010). In the mid-19th century, Tyndall suggested that atmospheric changes could explain ice ages (Tyndall, 1861). Arrhenius was the first to quantify the contribution of CO2 to the greenhouse effect (Arrhenius, 1896). In the 1950s, progress in the understanding of climate cycles resulted in the Milankovitch theory, explaining cycles at glacial-interglacial time scales (Milankovitch, 1930). After 1950, tools became available for measuring greenhouse gases. Keeling (1960) showed a faster CO2 increase than Arrhenius' estimate. Together with available data on the global temperature, this led to the idea that increasing CO2 could result in marked climate change (Revelle et al., 1965). In the 1970s, climate models were developed and used to study the combined effect of cooling through aerosols and warming through CO2. After warming trends reported in the 1940s, a multidecade cooling was observed (Mitchell, 1963). Although scientific articles described both potential future warming and cooling, the media (e.g. Gwynne, 1975) mainly covered a future cooler world (Peterson et al., 2008).
In the mid-1970s, the discussion in the media became dichotomous: the climate could become warmer or cooler (Mathews, 1976).

The scenario concept originates from the 1950s and is ascribed to Herman Kahn, at that time working at the RAND Corporation. He demonstrated with scenarios that US military planning was based on 'wishful thinking' instead of 'reasonable expectations' (Bradfield et al., 2005). In the 1970s, scenarios were used to explore the sustainability of natural resources. 'The Limits to Growth' of the Club of Rome is a well-known example (Meadows et al., 1972). Using scenarios and the World3 computer model, the study showed that a long-term perspective can identify problems in current policies. In business development, Shell Oil is considered the first to use scenario planning (Van der Heijden, 1996; Wack, 1985).

3.2. Towards first scenarios in water management

After a millennium of adaptation in response to (flood) events, the Dutch shifted to anticipatory water management in the course of the twentieth century. The 1916 storm flood along the Zuiderzee initiated the implementation of existing plans for the Afsluitdijk, a large defence structure separating the Zuiderzee from the sea. The 1953 storm surge, which killed 1,835 people and affected 750,000, triggered a paradigm shift. Policymakers learned that the deterministic approach was inadequate. From the perspective that 'this should never happen again', they stated that the probability of occurrence of such an event should be very small. Accordingly, an a priori accepted exceedance probability and corresponding water level were determined, resulting in design conditions for the Delta Works (Delta Committee, 1960), the large defence structures in the southwest delta. This was the first use of future conditions. A relative sea level rise based on extrapolation of measurements was included in the design of the defence structures because of their lifetime (100-200 years) (Rijkswaterstaat, 2008). However, a potentially accelerated sea level rise due to climate change was not considered. This probabilistic approach was adopted for all primary flood defences.

Along with the Delta Works, the Dutch government decided to develop a national policy on water management and to document this in a National Policy Memorandum on Water Management (PWM). As safety was ensured with the Delta Works and the Afsluitdijk, the 1st PWM focused on fresh water supply (Rijkswaterstaat, 1968). Although climate change and sea level rise were mentioned, assessments considered only an increase in water demand. Uncertainties about future developments were acknowledged, but no bandwidth was given. The document stated that 'the influence of these developments (climate change and upstream water use) on the total water availability is considered to be small. It is however important to keep monitoring these developments.' (Rijkswaterstaat, 1968, p. 137).

In the 1980s, scenarios became mainstream in futures research (Moss et al., 2010). In the Netherlands, too, scenario analysis emerged. This was probably supported by the cooperation with the RAND Corporation on the PAWN study (Policy Analysis for the Water management of the Netherlands) (RAND Corporation, 1983; Rijkswaterstaat, 1985), which provided the scientific support for the 2nd PWM (Rijkswaterstaat, 1984). In the 2nd PWM, the government stated that revision of the 1st PWM was needed due to 'societal developments, changes in insight and stakeholders of the water system.
For example, the prognoses for the future water demands for agriculture and drinking and industry water need to be revised and the importance of sectors like industry, shipping and nature has been acknowledged' (Rijkswaterstaat, 1984, p. 7). The 2nd PWM emphasised improving water management from a cost-benefit perspective. This was a paradigm shift: instead of ensuring water for all users, policy was now only implemented if the benefits were larger than the costs. Trends in water use for agriculture, drinking water and industry were considered in the policy analysis. The PAWN study mentions that 'at places where the uncertainty in the results has an impact on the conclusions, either a sensitivity analysis is executed or different scenarios are described.' (Rijkswaterstaat, 1985, p. 138). The study concluded that even in the case of the 'maximum trend scenario' for irrigation, wherein many farmers would use sprinklers, no large interventions were needed. These conclusions were adopted in the 2nd PWM.

3.3. Climate change scenarios and impact analysis on the water system (1988-1998)

By the end of the 1980s, experiments with Global Climate Models (GCMs) indicated that the signal of anthropogenic warming would soon emerge from natural variability (Hansen, 1988; Moss et al., 2010). The Intergovernmental Panel on Climate Change (IPCC) published its first assessment, including four scenarios, in 1990 (IPCC, 1990). The 'business as usual' (BaU) scenario assumed no or few policies to limit greenhouse gas emissions and was presented with a lower, best and upper estimate. The other three 'accelerated policy' scenarios described future climates after emission reduction. In the second assessment report, the BaU scenario was elaborated in the IS92 scenarios (IPCC, 1995). Dutch researchers developed the global model IMAGE for impact assessment and policy development regarding greenhouse gases (Rotmans, 1990; Alcamo et al., 1999).

In this period, the first studies on climate and water appeared in the Netherlands. In a coastal defence study, three sea level rise scenarios were considered: the 'policy' scenario, describing the sea level after global implementation of climate change mitigation policies; the 'anticipatory' scenario, describing the best guess; and the 'unfavourable' scenario, describing the best guess plus one standard deviation (De Ronde and Vogel, 1988). Based on these scenarios, the subsequent ISOS (Impact of Sea level rise On Society) study quantified impacts and identified policy options (Rijkswaterstaat and Delft Hydraulics, 1988). The study focused on safety against flooding, using scenarios on sea level rise, river discharges, wind and tidal conditions. The ISOS study was the first to include changes in river discharges in the scenarios. Socio-economic developments were excluded because of their uncertainty.

Now that safety and water supply were managed well, the government shifted its focus to water quality because 'pollution, together with overexploitation of water and an unbalanced spatial planning have resulted in an unsustainable water system' (Rijkswaterstaat, 1988, p. 5). Accordingly, the 3rd PWM, entitled 'Water for now and the future', focused on ecological and chemical water quality, provided that safety was guaranteed. The Brundtland report (Brundtland, 1987), which put sustainability high on the international political and public agenda, clearly inspired this quality focus.
Policymakers defined future targets based on past conditions, and identified policy options to reach these target conditions under different scenarios. The scenarios included extrapolations of ongoing water demand trends and the intended result of environmental policy defined by the Ministry of the Environment. Although this ministry published three estimates, only the central estimate was considered.

While research studies extended their scope by using integrated scenarios, policymakers were focusing on safety issues. Triggered by the 1993 and 1995 flood events and the increased attention to climate change and sea level rise, the Dutch government installed the committee Tielrooy to analyse whether current water management was sufficiently prepared for future climate change and sea level rise. This committee adopted three of the KNMI1999 scenarios, which were similar to the KNMI1997 scenarios, but ignored the 'dry' scenario, because this scenario contained complementary signals compared to the other scenarios (wetter and warmer, drier and warmer, drier and colder). Socio-economic developments were considered only in a qualitative sense. In the final report, guiding principles to prepare for climate change were explicitly put forward: 'anticipate instead of react, create more room for water, and do not only discharge, but also store water' (CW21, 2000). As an alternative to confining water in narrow zones between dikes, creating more room for water was an upcoming paradigm in river management, aiming at decreasing water levels in times of peak discharges while enhancing nature's quality at the same time (Dienst Landelijk Gebied, 1999; Silva et al., 2000). Regarding coastal zone management, the government decided in 2000 to double the amount of sand for beach nourishment in response to new insights on long-term morphological developments (Rijkswaterstaat and IMAU, 2000).

In 2003, several governmental organisations agreed in a so-called National Water Agreement (NWA) to define and implement strategies for coping with climate change and sea level rise by 2015, and to explore the necessary strategies for 2050 (Ministerie van Verkeer en Waterstaat, 2003). Water boards should adopt the guiding principles of the committee Tielrooy, and 'at least use their central estimate scenario for 2050 with an outlook to 2100 to develop measures'. Until this period, policymakers neglected 'drought' as a possible effect of climate change. In 2002, the government studied the balance between fresh water demand and supply (RIZA, 2005). The dry summer of 2003 was a welcome surprise for getting the subject on the political agenda. KNMI updated the 1999 scenarios and re-introduced a 'dry' scenario in a revised version based on RCM results (Beersma, 2001). Land use changes were also included in the analysis.

3.4. New climate scenarios and adaptation policy in legislation (2006 to present)

Based on extended and improved information from, among others, the IPCC's fourth assessment (IPCC, 2007), KNMI developed new climate scenarios: the KNMI'06 scenarios (Van den Hurk et al., 2007; Katsman et al., 2008). As the uncertainty due to emission scenarios was smaller than the uncertainty due to climate models, temperature was used as the discriminating factor. A second relevant factor was the circulation regime. This resulted in two scenarios with a moderate temperature increase (+1°C) and two with a strong temperature increase (+2°C), which were further distinguished by a strong or weak change of atmospheric circulation over Europe.
For sea level rise a bandwidth was given to cover the large variety in the sea level rises predicted by different climate models for different global warming scenarios. The four KNMI'06 scenarios posed a problem for the water managers, as they precluded the selection of a central estimate, as was prescribed in the NWA of 2003, and the adequacy of designed policy options needed to be reconsidered. The NWA was updated in 2008, and prescribed for different water-related problems the use of only one of the KNMI'06 scenarios (Ministerie van Verkeer en Waterstaat, 2008). In 2009, KNMI reflected on the KNMI'06 report based on new scientific understanding and recent observations (Klein Tank and Lenderink, 2009). Although KNMI did not see the need for defining new scenarios, the scenarios with the moderate temperature changes were now considered less plausible than those with the larger changes. Consequently, the guidelines in the NWA (Ministerie van Verkeer en Waterstaat, 2008) were again outdated. For example, for studies on drought the NWA prescribed the use of the 'moderate dry' scenario, while according to the KNMI update the 'stronger dry' scenario would be more plausible for this kind of situation.

In 2007, the government established the second Delta Committee to identify actions to prevent future disasters (Kabat et al., 2009; Delta Committee, 2008), as the expected future climate change and sea level rise 'can no longer be ignored' (Delta Committee, 2008, p. 5). Next to the KNMI'06 scenarios, the committee considered a high-end scenario consisting of a plausible upper limit of sea level rise in 2100 and 2200 for a robustness test of policies and investments (Katsman et al., 2011; Vellinga et al., 2008). The high-end scenario taught policymakers that the Netherlands can overcome sea level rise and climate change, but that the water system has to be adapted. The advice resulted in a Delta Act and is presently being elaborated in the so-called Delta Programme. Climate change and sea level rise were now on the political and public agenda.

In the 5th PWM (Rijkswaterstaat, 2009), climate change and sea level rise played an important role. The report had a separate chapter about dealing with uncertainties in climate change. The four KNMI'06 scenarios were described in detail, while socio-economic trends and future targets were described qualitatively. Again a scenario was prescribed for strategy development, meaning that the system should be prepared for coping with the situation described in a specific scenario. The report stated that 'For the choice of a scenario the societal risk is important. For safety issues the risk is larger than for drainage and waterlogging issues. In case of low flexibility and high societal risk, there is a preference for the upper limits of climate change.' (Rijkswaterstaat, 2009, p. 28). The report mentions the difficulties of including new scientific information: 'The availability of repeatedly new scenarios results in the risk that decisionmaking will be postponed due to the uncertainties... On the one hand it is strived to use most recent insights while on the other hand stable assumptions are needed for decisionmaking and implementation. New insights cannot result in new assumptions and evaluations.' (Rijkswaterstaat, 2009, p. 27). The report identified policy options to reach the described targets, and presented a planning scheme with research and decision milestones.
At the European level, the Flood Directive (2007/60/EC) came into force in 2007. This directive aims at mapping and reducing flood risk and, as one of the measures, mapping flood-prone areas categorised as low, medium (likely return period 100 years), and high probability. The Flood Directive refers to these categories as scenarios. The 5th PWM states that it will incorporate this Directive into Dutch legislation in the next planning period.

3.5. Dealing with uncertainties about the future: new approaches (2006 to present)

After 2000, awareness grew that uncertainty about the future will remain and cannot be eliminated (cf. Van Asselt, 2000). More research does not automatically reduce uncertainty but may even increase it. Taleb (2007) emphasized future uncertainty with the introduction of the 'Black Swans' concept: unforeseen occurrences (unknown unknowns) with a low probability of occurrence but a large impact. Although from a different field, the recent 'economic crisis' raised awareness that (unexpected) events influence our world view. New approaches for dealing with uncertainties emerged (e.g. Carter et al., 2007; Dessai and Hulme, 2004; Russill and Nyssa, 2009). Gladwell (2000) introduced the 'tipping points' concept to describe the catchiness of behaviour and ideas. Moser and Dilling (2007) used tipping points to conceptualise social change, and defined them as 'moments in time where a normally stable or only gradually changing phenomena suddenly takes a radical turn.' (Moser and Dilling, 2007, p. 492). In the Netherlands, discussions on scenario updates led to a new approach, using the system's vulnerability to define Adaptation Tipping Points (ATP), indicating whether, and under what conditions, current water management strategies will continue to be effective under different climate changes (Kwadijk et al., 2010). In case of new scenarios, only the timing of an ATP needs to be updated. Events and surprises were recognised as triggers for adaptation, societal change and learning: not only the future endpoint, but also the pathway to this point is important. Therefore, a method to explore Adaptation Pathways was developed. By exploring pathways with transient scenarios, and including the dynamic interaction between the water system and society, policymakers can identify robust and flexible pathways or identify lock-ins (Haasnoot et al., 2011, in press; Offermans et al., 2011).

At the policy level, new concepts also emerged. Recently, both the Scientific Council for Government Policy and the Advisory Council for the Ministry of Transport and Water Management advised considering uncertainty explicitly (Van Asselt et al., 2010; Raad voor Verkeer en Waterstaat, 2009). The latter states that 'we should not only be prepared for expected but uncertain future climates, but also for unknown uncertainties, so-called Black Swans.' Accordingly, policy development should incorporate proactive adaptation by using scenarios for the characterisation of uncertainties, and indicators to monitor the necessity of policy revision. The council also states that 'policy based on an extreme scenario is liable to prove unduly expensive or unnecessary' (p. 53). This statement is in contrast with the second Delta Committee. The scientific council requested attention for normative foresights including a variety of values and perspectives.
The chair of the Delta Programme mentioned that 'One of the biggest challenges is dealing with uncertainties in the future climate, but also in population, economy and society. This requires a new way of planning, which we call adaptive delta planning. It seeks to maximise flexibility; keeping options open and avoiding lock-in' (Kuijken, 2011). These were starting points for a new approach to scenario design (Bruggeman et al., 2011). By analysing what makes policies for safety and water supply vulnerable, four climate and land use scenarios with small and large impact were established.

Originating from the 1990s, but becoming practice in the past years, is the paradigm shift occurring in the Netherlands from strategies of defence against water with hard engineering structures to a 'softer' approach using the natural dynamics of the system itself (cf. Inman, 2010). The changing approach involves restoration of wetlands, beaches and natural floodplains, and is referred to as 'ecological engineering', 'building with nature' or 'green adaptation' (e.g. Aarninkhof et al., 2010; Waterman, 2008; Van Koningsveld and Mulder, 2004). These approaches are novel ways of dealing with uncertainty: instead of fighting unpredictable future events, adapting to what is happening (Inman, 2010).

4. Key findings

4.1. Did the scenarios enable robust decision-making?

The central issue related to this question is whether the scenarios sufficiently represented relevant knowable uncertainties to enable robust decisionmaking on water policies. We observed that scenarios in policy analysis shifted from describing future water demand to water availability after the 3rd PWM. For the 1st PWM, policymakers expected no relevant changes in water availability. Research studies focused mainly on water availability scenarios in terms of climate change, sea level rise and river discharges. Thus, few studies included all relevant knowable uncertainties for long-term water management.

Whether the relevant uncertainties were sufficiently represented can be assessed from the number of scenarios, their value range, their temporal and dynamic nature, and the number of alternatives. Over the past decades, the number of scenarios has increased from one to multiple, thereby increasing the represented uncertainty range. All research studies included several scenarios; at first only climate scenarios, while later studies also included socio-economic developments. The first policy documents considered a single scenario only, while policy studies in the past 15 years used three to four scenarios. Still, the guidelines for climate adaptation following from these policy documents recommended using only one scenario for the design of water policies (Ministerie van Verkeer en Waterstaat, 2003, 2008). Hence, although policymakers recognised uncertainty about the future with several scenarios, they persisted in focusing on a 'best estimate' of the future climate in terms of a best prediction, until KNMI (deliberately) presented four scenarios in 2006 (Van den Hurk et al., 2007). Thereafter, policymakers selected one of these four scenarios as the 'best scenario' for strategy development for a specific problem such as safety or water supply. Thus, in practice the range of the uncertainties was not fully considered.

Although an increasing number of scenarios was introduced, most scenarios remained extrapolations of trends. This is reflected by the scenario names.
The first four policy documents merely used 'business-as-usual' scenarios called 'trend', 'autonomous developments' and 'prognoses'. Few policy studies included a 'maximum trend' or 'worst case' scenario. Only a few background studies tried to include alternatives, such as the 'discontinuity' scenario for the 4th PWM. In contrast, research studies explored more alternatives by considering several scenarios such as 'worst case', 'lower/central/upper' estimates, 'dry' and 'cooling' scenarios.

The dynamic and temporal nature of the scenarios was limited to defining a few projection horizons, in most cases the years 2050 and 2100. Scenarios described for these years were projections of climate and external context, resulting in a snapshot of the future situation beyond the control of the water managers. Likewise, socio-economic drivers of water demand were considered as independent 'policy driven' or 'autonomous developments', which were gradual extrapolations of trends into the future. Adaptation options were then formulated and evaluated against external conditions at one future point. Scenario analysis for water management was thus a one-way pressure-impact analysis without response from society or water management, unlike global models such as IMAGE (Rotmans, 1990). As a result, the water policy studies have ignored the dynamic path into the future with natural (year-to-year) variability, extreme events, the potentially large role of societal response to climate events, and the water management response to climate-associated events or changing socio-economic perspectives. It is only in recent scientific studies that this interaction is recognised, and that scenarios are being completed with these new relevant dimensions of time series, dynamic interaction and surprises (Haasnoot et al., in press).

The range of the values used in the scenarios is an additional indicator for the sufficient representation of uncertainty (see Figs. 2 and 3 for climate scenarios and the supplementary information for socio-economic developments). The 1st and 2nd PWM used one value based on trends for water demand, but extended the range due to climate variability by analysing years with different net precipitation and discharge. Three studies translated socio-economic developments into land use maps. The projection year of these scenarios extended from 2015 to 2050 to 2100, resulting in an increase of the considered acreage change and the bandwidth for urban areas and nature, but not for agriculture. Regarding the climate scenarios, the bandwidth of the emission and global temperature changes in the IPCC scenarios has become larger. Previous climate scenarios for the Netherlands had ranges for the global temperature similar to the IPCC scenarios, but recent scenarios differ from the IPCC assessments. The bandwidth for global temperature rise used in the Netherlands (Fig. 2) is remarkably smaller than that of the IPCC scenarios at that time. This is caused by the fact that the KNMI scenarios represent approximately 80% of the total range of the output of the climate models, while the IPCC scenarios presented the complete range. However, it is uncertain whether water managers and the general public in the Netherlands are aware of this difference, and they may only see the smaller uncertainty range. Over the years, KNMI's scenario values for summer precipitation have changed considerably, in contrast to the winter values.
The introduction of the 'dry' scenarios reflects the awareness of larger uncertainty about the future summer climate, as not only the magnitude but also the direction of the change differed between the scenarios. The difference in projections of sea level rise between the IPCC and the Dutch scenarios is striking (Fig. 2). While the IPCC scenarios show a trend towards narrower ranges and smaller values for sea level rise, KNMI kept the same range, and the values were larger than those of the IPCC. These differences can mainly be explained by the different uncertainties included in the scenarios (e.g. the uncertainty in the contribution of ice sheets). In the AR4 study, part of the uncertainties related to ice sheets was not included in the sea level scenario values, but only described in the report. These uncertainties were, however, included in the national KNMI scenarios, together with recent (scenario and field) studies which were not available at the time of the AR4 (Katsman et al., 2011). In addition, regional differences due to variation in ocean temperature, the distribution of melt water over the oceans, and, in some studies, tectonic subsidence contribute to differences between the scenario studies. For example, in the 1990s studies values were derived from the IPCC estimates, supplemented with the natural trend and subsidence of the Netherlands (Van Asselt et al., 9582). The Delta Committee included a tectonic subsidence of 10 cm/century (Vellinga et al., 2008), while the studies in the 1990s included a subsidence of 5 cm/century. The high-end sea level rise explored by the second Delta Committee was discussed thoroughly among researchers and policymakers. The values were larger than in the KNMI'06 scenarios, because the Delta Committee aimed at defining an 'upper plausible' limit of sea level rise by including a wider range of uncertainties and mechanisms underlying sea level rise for the Netherlands. Remarkably, this upper level is not that much higher than the upper ends of the uncertainty ranges put forward in 1990 in the national studies.

Fig. 2. In the DC study, the global temperature range included for the sea level rise was larger (dashed line) than for the climate parameters such as precipitation (solid line). In the AR4 report, sea level rise values were presented for the scenarios (solid line), and additional uncertain sea level rise was described in the report (dashed line).

4.2. Did the scenarios enable learning?

Generally, scenario analysis in water policy studies enabled four different lessons: (1) insight into the impacts of climate change and socio-economic developments, as a result of several national but also global studies (e.g. the IPCC reports and the ISOS and NRP studies); (2) insight into the need for and effectiveness of policies, as in the 2nd PWM or the ATP study; (3) insight into the need for adaptation of targets and/or policies as a result of comparing scenarios with monitoring results (e.g. 2nd and 3rd PWM); and (4) awareness about possible impacts of climate and socio-economic developments. For example, the second Delta Committee widely communicated its results through readable reports and YouTube videos accessible to the general public. This received a lot of media attention, and raised awareness of the importance of developing water management strategies to prepare for the future.
Furthermore, their 'worst case' scenario deliberately provoked much discussion among water managers in the Netherlands, which enhanced the exchange of ideas and thus involved a large degree of learning, according to the chair of the committee (Veerman, 2010). Flood and drought events corresponding with the scenarios, but also the public debate about issues such as climate change and the credit crisis, accelerated the influence of study results on policy implementation.

Both scenario analysis in water management and the science-policy interaction have clearly evolved in the past twenty years. In retrospect we can distinguish five evolutions that reflect the learning process of scientists and policymakers:

1. From flood protection to integrated water management: This shift was supported by lessons on the effectiveness of policies in scenario analysis. After the major flooding of 1953, water management focused on flood protection. However, in the course of time, and with the step-wise completion of the Delta Works, attention was given to other water-related problems. In the PWMs, the focus changed from water supply for economic purposes, via a cost-benefit analysis for maintaining water availability, to water quality and nature, eventually introducing the concept of 'integrated water management', which the 5th PWM extended with spatial planning issues. The scientific studies also show a learning process through an evolution in the studied subjects. The first research studies focused on safety against coastal flooding, which was later extended to the large rivers and regional water systems, and finally to impact assessments of water services.

2. Towards integrated scenarios: This shift was initiated by the awareness that both water availability and water demand are relevant for water policymaking, as well as by the global and European shift to integrated studies. Scenario studies also showed the relevance of integrated studies for decisionmaking. Although coming from different starting points, both scientific and policy studies moved towards integrated scenarios. Scientific studies first used climate scenarios. By the end of the 1990s, socio-economic developments were considered increasingly relevant. After only evaluating land use change trends and 'autonomous' socio-economic developments, integrated scenarios comprising both climate and socio-economic components were defined to explore different water management styles. The scenario content in the PWMs changed in correspondence with the purpose of the PWMs, from water demand trends to climate scenarios, while at present integrated scenarios are considered. Still, the integrated scenarios are not yet fully employed for impact assessment or policy development. Furthermore, the influence of societal perspectives (e.g. on policy targets) remains to be fully incorporated in policymaking.

3. From predicting to exploring the future: While policymakers experienced that the future turned out differently than envisioned, and some events occurred as complete surprises, evidence grew that we cannot predict the future. Initially, prognoses only applied to possible changes in water demand. Estimates of future flood magnitudes, as required for the probabilistic flood protection approach, were based on autonomous developments or expert judgement. These 'predict and act' studies slowly shifted to an 'explore and anticipate' approach, for which several scenarios were used.
Still, the initial use of 'best guess' or 'central estimate' climate scenarios reflects the desire to predict future conditions, although now associated with bands of uncertainty. With the IPCC-SRES and KNMI'06 scenarios, the recognition that the future is uncertain and that there is no 'most likely' future has increasingly settled in water management. Accordingly, research and policy studies aimed not only at improving the understanding of future developments such as climate change and at reducing uncertainties, but also at developing methods for dealing with uncertainties about the future. This observed shift corresponds with observations of futurists (e.g. Slaughter, 2002; Van 't Klooster, 2008). Both approaches, also referred to as forecasting and foresight, are still used next to each other. In water management, too, the predictive approach is still used when it comes to short-term actions such as flood forecasting and determining the (long-term) design discharge. For short-term drought management, both forecasts and scenarios (foresights) are used. Some analysts propose the use of probabilistic scenarios; we have not observed these in the studies reviewed, but their use could be initiated by the EU Flood Directive's approach, which prescribes the use of scenarios with floods of low, medium and high probability.

4. Interaction between science, policy and events: Most uncertainties about the future were first investigated by scientists, and later incorporated in policy, especially if events seemed to support the trends indicated by the scenarios. For example, the 3rd and 4th PWM documents mentioned potentially relevant impacts of climate based on IPCC results and scientific research in the preceding decades. In recent years, the turn-over rate from scientific studies to water management has sped up. Scientific studies involve stakeholders, and novel approaches in scenario analysis now emerge in water management practice briefly after being introduced in the scientific world.

5. From fighting water to accommodating and adapting to water: Since the 1960s, awareness has risen about the potential effects of climate change as a result of scenario studies and flood events. This awareness triggered a shift from focusing on 'hard' defensive infrastructures for flood protection to 'softer' measures for integrated water management, using natural processes and accommodating water (e.g. 4th PWM). Thus, instead of static infrastructures with a long lifetime, easily adaptable policies suited to changing, unpredictable boundary conditions were chosen.

Conclusions and recommendations

This review describes the use of scenarios in water management studies in the Netherlands over the past 60 years. To identify what we have learnt from this experience, we analysed whether the scenarios enabled robust decisionmaking and learning. The opportunities for robust decisionmaking resulting from scenarios increased, but are still not fully exploited, especially in policymaking. Although the number of scenarios increased, often one scenario was appointed to provide the design conditions for strategy development. Rarely were all relevant uncertainties included. Especially in the policy documents, only uncertainties in water demand or availability were considered, while none included social (perspective-based) uncertainty. The number of alternative futures increased, but scenarios mainly remained based on extrapolation of trends.
Almost all scenarios used were snapshots at two or three time horizons, thereby ignoring the pathways towards the endpoint and disregarding the possibility that events may drastically change such pathways. All scenarios were surprise-free. The 'Decision robustness' can thus be improved. Differences in value ranges between different scenario studies can often be explained by reading the details and communicating with the developers, which indicates that communication on assumptions is important for appropriate scenario use.

The scenarios enabled learning about possible impacts of developments, the need for and effectiveness of policies, and the need for adaptation of policies. In addition, the scenarios raised awareness about potential future problems. The historical perspective shows a clear science-policy interaction. For example, the policy documents took up climate change and sea level rise, first used in research studies, as important developments to consider in strategy development; sometimes with a little help from a flood or drought event. We observed several paradigm shifts reflecting the learning process of scientists and policymakers: (a) from flood control to integrated water management, (b) from predicting to exploring the future with integrated scenarios, and (c) from fighting water to accommodating and adapting to water.

Dealing with uncertainties appears to be a struggle, given the paradox between the desire to explore potential futures using several different scenarios, and the preference of water managers to design policies based on a single scenario that is not frequently updated. However, water managers need to face the fact that the future is inherently uncertain, and scenarios are always likely to be updated by new scenarios, as they result from a process of design and construction at a specific moment and location (Hulme and Dessai, 2008b). These uncertainties should not be used as a constraint on developing adaptation measures for water management (cf. Dessai et al., 2009; Hulme and Dessai, 2008b).

We provide six recommendations for improving water policy development under uncertainty:

1. For sustainable decisionmaking, water managers should consider several scenarios to explore the relevant range of the uncertainties, rather than selecting the most likely future or prescribing a 'design' scenario.

2. New approaches are available which, together with scenario analysis, can support the development of sustainable measures. Several methods involve many computational experiments to analyse the effects of uncertain parameters (e.g. 'Exploratory Modeling', Bankes, 1993), to search for robust decisions (Lempert et al., 2006; Lempert and Bankes, 2003) or for optimal solutions ('Info Gap' theory, Ben-Haim, 2001). Walker et al. (2001) describe a planning process with different types of actions (e.g. 'mitigating actions', 'hedging actions') and signposts to monitor whether adaptation is needed. Adaptation tipping points (Kwadijk et al., 2010) and exploring adaptation pathways with transient scenarios can also be of assistance.

3. Scenario developers should clearly communicate the assumptions, purpose and limitations of scenarios, and the conditions under which the scenarios were made (process and time limits).

4. Tailored scenarios are needed to ensure relevant scenarios and appropriate use. To develop tailored scenarios, water managers should assess the system's vulnerability and communicate this to scenario developers.

5. To improve scenarios and their use, evaluation of past scenarios remains useful.
For this purpose, evaluation of 'Decision robustness' and 'Learning success' deserves further elaboration in terms of more explicit criteria concerning, for example, comparison with a study's objectives, stakeholder involvement, pathway analysis, and more precise addressing of the learning effect (who learned what, and how?).

6. Instead of responding to flood and drought events, policymakers could identify triggers (Walker et al., 2001) and adaptation pathways. The triggers give signals when it is time to make a decision, and the adaptation pathways allow for identifying robust options and lock-ins.

Summarizing, exploring the future with several scenarios, analysing the vulnerability of the system, and good communication with scenario developers may help water managers to deal with uncertainties and make sustainable decisions.

M. Haasnoot is an environmental scientist specialised in water management and environmental modelling. She works at Deltares and is affiliated to the universities of Utrecht and Twente. Over the past 15 years she was involved in projects on impact assessment of climate and humans on water systems and nature, using hydrological and ecological models and integrated scenarios (i.e. land use and climate change). Currently, her research focuses on water policymaking under uncertainty by exploring adaptation pathways with metamodels and transient scenarios.

H. Middelkoop is professor in physical geography at the University of Utrecht. Over the past decades he has been involved in numerous projects on the impacts of climate and humans on fluvial systems. He has focused on the hydrological impacts of climate on the Rhine river and the implications for water-related services. Furthermore, he has a record of research studies on the development of river floodplains over a range of time scales, as well as the fate of contaminants within floodplain areas.
2018-04-03T01:04:13.100Z
2012-05-01T00:00:00.000
{ "year": 2012, "sha1": "7d02ed2cd2af66bf0cc0f5432d164773b600ad61", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.envsci.2012.03.002", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "fda562170233e3b2c83932c6f3d5a8fe31b7b87c", "s2fieldsofstudy": [ "Environmental Science", "History", "Political Science" ], "extfieldsofstudy": [ "Economics", "Medicine" ] }
252783284
pes2o/s2orc
v3-fos-license
Prognostic significance of modified lung immune prognostic index in osteosarcoma patients

Purpose: Osteosarcoma is the most common primary malignancy of bone, with a dismal prognosis for patients with pulmonary metastases. Evaluation of osteosarcoma prognosis would facilitate prognosis consultation as well as the development of personalized treatment decisions. However, there are few effective prognostic predictors at present. The Lung Immune Prognostic Index (LIPI) is a novel prognostic factor in pulmonary cancers; however, the prognostic significance of LIPI in osteosarcoma has not yet been well clarified. In this study, we first explore the prognostic role of LIPI and further modify this predictive model in osteosarcoma.

Patients and methods: A retrospective study was conducted at the Musculoskeletal Tumor Center of West China Hospital between January 2016 and January 2021. Hematological factors and clinical features of osteosarcoma patients were collected and analyzed. The area under the curve (AUC) and the optimal cut-off of each single hematological factor were calculated.

Results: In this study, lactate dehydrogenase (LDH), derived neutrophil to lymphocyte ratio (dNLR), and hydroxybutyrate dehydrogenase (HBDH) had higher AUC values. LIPI was composed of LDH and dNLR and was further modified by combining HBDH, forming the osteosarcoma immune prognostic index (OIPI). OIPI divided the 223 osteosarcoma patients into four groups: none, light, moderate, and severe (p < 0.0001). OIPI had a higher AUC value than LIPI and other hematological indexes in the time-dependent ROC (t-ROC) curve. According to the univariate and multivariate analyses, pathological fracture, metastasis, NLR, platelet-lymphocyte ratio (PLR), and OIPI were associated with the prognosis, and metastasis and OIPI were independent prognostic factors for osteosarcoma patients. An OIPI-based nomogram was also established and could predict the 3-year and 5-year overall survival. In addition, OIPI was found to be correlated with metastasis and pathological fracture in osteosarcoma.

Conclusion: This study is the first to explore the prognostic significance of LIPI in osteosarcoma patients. In addition, we developed a modified LIPI, the OIPI, for osteosarcoma patients. Both LIPI and OIPI could predict the overall survival of osteosarcoma patients well, while OIPI may be more suitable for osteosarcoma patients. In particular, OIPI may have the ability to identify some high-risk patients among clinically low-risk patients.

Introduction

Osteosarcoma is the predominant primary malignant bone tumor and mainly affects adolescents and the elderly. The current standard treatment of osteosarcoma includes radical resection and neoadjuvant chemotherapy (Anderson, 2016). With the application of chemotherapy in cancer therapy, the 5-year overall survival (OS) has been improved to 50%-70% (Bielack et al., 2002). However, due to drug resistance, distant metastasis and/or local recurrence, the outcome of osteosarcoma patients remains dismal (Yan et al., 2016). Therefore, identifying significant factors correlated with the prognosis of osteosarcoma patients is urgently needed. Previous studies have reported the prognostic significance of several biomarkers in osteosarcoma, each with its own advantages and disadvantages.
Traditional prognostic factors, including Enneking stage, tumor size, metastasis, and pathological fracture, are instructive in making treatment decisions, but they are thought to have limited power for prognosis prediction because they cover only a single aspect of clinical or pathological features (Yang et al., 2020). New prognostic factors such as micro-RNAs, long non-coding RNAs, and gene signatures are significant in predicting the prognosis and outcome of osteosarcoma patients. However, the high expense and inconvenience of these novel factors limit their further clinical application (Liu et al., 2015a; Wang et al., 2015a; Li et al., 2015). Therefore, a simple, accurate, and inexpensive prognostic predictive factor for osteosarcoma patients is urgently required.

Extensive evidence shows that cancer-related inflammation plays an important role in the progression of malignant tumors (Candido and Hagemann, 2013; Diakos et al., 2014). Targeting the inflammation pathway has been confirmed as a novel treatment method for prolonging OS (Aggarwal et al., 2009). Owing to the diverse roles of inflammation in malignant tumor progression, several biomarkers, including the neutrophil-lymphocyte ratio (NLR), platelet-lymphocyte ratio (PLR), lymphocyte-monocyte ratio (LMR), serum lactate dehydrogenase (LDH), and derived neutrophil-to-lymphocyte ratio (dNLR), have been reported to be valid in predicting OS and disease-free survival in various cancers (Koh et al., 2015; Pan et al., 2015; Gu et al., 2016; Li et al., 2017). LDH plays a crucial role in tumor metastasis and proliferation and is associated with the prognosis of osteosarcoma (Augoff et al., 2015; Marais et al., 2015; Yu et al., 2017; Yin et al., 2018; Gong et al., 2019). HBDH is an isoenzyme of LDH, and the value of HBDH can reflect the activity of LDH. However, the prognostic effect of HBDH in osteosarcoma patients remains unclear. Defined as absolute neutrophil count / (white blood cell count − absolute neutrophil count), dNLR is another novel inflammation biomarker for measuring inflammatory status in cancers (Capone et al., 2018; Mezquita et al., 2021; Yang et al., 2021). According to Mezquita et al. (2018), the combination of baseline LDH and dNLR, named the Lung Immune Prognostic Index (LIPI), is a novel index for predicting the benefit from immune checkpoint inhibitors and for predicting OS or progression-free survival (PFS) in lung cancer (Kazandjian et al., 2019; Sonehara et al., 2020). The role of LIPI has also been explored in extra-pulmonary cancers (Feng et al., 2021; Obayashi et al., 2022). However, as far as we know, the prognostic predictive ability of LIPI remains unclear in osteosarcoma. In this retrospective study, we explore the prognostic significance of LIPI in osteosarcoma. Additionally, we establish a modified LIPI, the osteosarcoma immune prognostic index (OIPI), for osteosarcoma patients.

Patients

From January 2016 to January 2021, all cases of osteosarcoma at the Musculoskeletal Tumor Center of West China Hospital were reviewed. Patients meeting the following criteria were included: 1) patients with high-grade osteosarcoma diagnosed by histopathology; 2) patients who presented complete hematological test results before neoadjuvant chemotherapy; 3) patients who received standard treatment at West China Hospital.
We excluded: 1) patients who had received neoadjuvant chemotherapy before their first visit to our hospital; 2) patients with hematological diseases; 3) patients with other malignancies; 4) patients who did not receive standard treatment (patients who were misdiagnosed and inappropriately treated or failed to complete postoperative chemotherapy). Eventually, 223 patients were included in this study, and each of them was followed up regularly until death or January 2022. During follow-up, patients were recommended to attend outpatient visits every 3 months in the first postoperative year, every 4 months in the second year, every 5 months in the third year, every 6 months in the fourth and fifth years, and yearly thereafter. This study was approved by the ethics committee of West China Hospital, and written informed consent was obtained from all participants.

In addition, age, gender, tumor site, pathological fracture status, and tumor metastasis status were collected from the patients' medical records. OS was calculated from the date of diagnosis to the date of death or last follow-up. In the overall cohort, the optimal cut-off value for each hematological marker was calculated based on the time-dependent receiver operating characteristic (ROC) curve, and each marker was converted into a binary variable according to its cut-off value.

Construction and evaluation of the nomogram

After the above-mentioned screening process, the prognostic factors were used to construct a nomogram for predicting OS. For each patient, the total point value was equal to the sum of the points of all factors. The link between the total points and the probability of OS was shown at the bottom of the nomogram. The discrimination ability and accuracy of the nomogram were evaluated by Harrell's concordance index (C-index) and the calibration curve, respectively. The diagonal acts as a reference line and represents perfect prediction. Decision curve analysis (DCA) was used to evaluate the clinical application of the nomogram by estimating the net benefit at different threshold probabilities. The clinical impact curve was also drawn to predict the probability of intervention reduction per 100 patients. In addition, the constructed nomogram was used to predict the overall survival of the validation cohort to assess the stability of its predictive ability.

Exploration of the relationship between osteosarcoma immune prognostic index and clinical characteristics

In all 223 patients, the association between the OIPI and traditional clinical characteristics, such as tumor site, pathological fracture, and tumor metastasis status, was further explored by Spearman correlation analysis.

Statistical analysis

The Kolmogorov-Smirnov test was used to assess whether continuous variables were normally distributed, and the Mann-Whitney U test or Spearman correlation analysis was used to assess differences between continuous variables according to the results. Categorical variables were evaluated using the chi-square test or Fisher's exact test based on the number of individuals in each group. All statistical analyses were conducted using R software, version 4.1.0 (Institute for Statistics and Mathematics, Vienna, Austria). p-values < 0.05 were considered to indicate statistical significance.
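As a concrete illustration of the cut-off procedure described above, consider the minimal Python sketch below. The study itself used R; a genuine time-dependent ROC analysis would account for censoring (e.g., with R packages such as timeROC or survivalROC), whereas this simplified version ignores censoring, treats 3-year survival status as a binary outcome, and picks the threshold that maximizes Youden's J. All values are hypothetical.

```python
# Minimal sketch, not the study's actual pipeline: optimal cut-off for a
# hematological marker via Youden's J on a fixed-horizon (3-year) outcome.
# Hypothetical data; censoring is ignored for simplicity.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
hbdh = rng.normal(170, 30, size=223)                          # hypothetical HBDH (IU/L)
died_within_3y = (hbdh + rng.normal(0, 40, size=223)) > 190   # hypothetical outcome

fpr, tpr, thresholds = roc_curve(died_within_3y, hbdh)
j = tpr - fpr                          # Youden's J = sensitivity + specificity - 1
best = int(np.argmax(j))
print(f"AUC = {auc(fpr, tpr):.3f}, optimal cut-off = {thresholds[best]:.1f} IU/L")
```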
Patient characteristics and optimal cut-off values of hematological factors

Patient characteristics are shown in Table 1. A total of 223 patients were enrolled in this study, including 131 males and 92 females. The age of the patients ranged from 7 to 67 years, with a mean age of 22 years. Tumors were mainly located in the extremities (96.0%); only 9 tumors (4.0%) were located outside the extremities. Pathological fracture at diagnosis was found in 25 (11.2%) patients, and metastasis at diagnosis was found in …

Establishment of osteosarcoma immune prognostic index and survival analysis of various hematological factors

As shown, several hematological markers were associated with survival outcomes in osteosarcoma, with the exception of the LMR (Figure 2A). The low NLR group showed a better survival outcome than the high NLR group (p = 0.002), and the low PLR group showed a better survival outcome than the high PLR group (p = 0.0016) (Figures 2B,C). In the current study, we constructed the LIPI from LDH and dNLR, referring to previous research (Mezquita et al., 2018). LIPI divided the patients into three groups: 52 patients with good LIPI, 109 patients with moderate LIPI, and 62 patients with poor LIPI (Figure 2D). As expected, compared with the other hematological indexes, LIPI showed better predictive ability for OS (Figure 3A). However, we found that HBDH was also an effective prognostic factor (AUC = 0.688, cut-off = 164 IU/L) and performed better in evaluating OS than the other single hematological factors (Figure 3A). Thus, we combined the LIPI with HBDH and developed a new biomarker for osteosarcoma patients, the OIPI. OIPI divided the 223 osteosarcoma patients into four groups: 45 patients with none OIPI, 72 patients with light OIPI, 65 patients with moderate OIPI, and 41 patients with severe OIPI. OIPI showed prognostic predictive power that was even stronger than that of LIPI (Figure 3A). To further investigate the distinction between LIPI and OIPI in predicting OS for osteosarcoma patients, we drew a Sankey diagram with R software. As shown in Figure 3B, patients in the good LIPI group were divided into the none and light OIPI groups, while the patients in the severe OIPI group all came from the poor LIPI group. As can be seen, some patients (those who survived) in the poor LIPI group were shunted to the moderate OIPI group rather than the severe OIPI group, indicating that OIPI is more precise than LIPI in identifying osteosarcoma patients with a poor prognosis.
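The group comparisons above rest on Kaplan-Meier estimates and log-rank tests. A hedged sketch of that kind of comparison in Python with the lifelines package follows (the actual analysis was performed in R; the durations and event flags below are invented):

```python
# Illustrative only: Kaplan-Meier fit and log-rank test for a hypothetical
# low- vs high-NLR split. All durations (months) and event flags are invented.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
t_low, e_low = rng.exponential(60, 120), rng.integers(0, 2, 120)    # low-NLR group
t_high, e_high = rng.exponential(35, 103), rng.integers(0, 2, 103)  # high-NLR group

kmf = KaplanMeierFitter()
kmf.fit(t_low, event_observed=e_low, label="low NLR")
print("median OS (low NLR):", kmf.median_survival_time_)

res = logrank_test(t_low, t_high, event_observed_A=e_low, event_observed_B=e_high)
print(f"log-rank p = {res.p_value:.4f}")
```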
Construction and validation of the osteosarcoma immune prognostic index-based nomogram

In order to investigate the clinical application of OIPI, we developed a nomogram combining OIPI with clinical characteristics in patients with osteosarcoma. The hematological indexes (OIPI, PLR, and NLR) and the clinical characteristics (metastasis and pathological fracture) were included in this nomogram to predict the 1-, 3-, and 5-year OS probability for osteosarcoma patients. Cox proportional hazards regression assigned a score based on the hazard ratio of each covariate, and the sum of the scores across covariates gave the nomogram total score (Figure 5A). According to the calibration curves, the 3-year and 5-year OS predictions were consistent with the diagonal line, meaning that this nomogram could accurately predict 3-year and 5-year OS, with a C-index of 0.76 (Figure 5B). Moreover, we explored the clinical benefits of the nomogram through DCA and the clinical impact curve; the results demonstrated that the combined model conferred a clinical net benefit.

[Figure 1: ROC analysis of different hematological biomarkers. (A-F) The AUC and best cut-off values of dNLR, LDH, HBDH, LMR, NLR, and PLR, respectively; the vertical axis represents sensitivity and the horizontal axis 1-specificity.]

[Figure 2: Predictive ability of different hematological biomarkers on OS in 223 patients with osteosarcoma. (A-E) Prognostic predictive effect of different inflammatory biomarkers on OS; cumulative hazard functions were plotted by the Kaplan-Meier method, with p values from two-sided log-rank tests; the differences in survival probability between the four LIPI or OIPI groups were significant.]

[Figure 3: Comparison of different hematological biomarkers in predicting overall survival.]

The predictive ability of osteosarcoma immune prognostic index compared with clinical characteristics

To compare the predictive ability of OIPI with that of clinical characteristics, including gender, age, tumor site, pathological fracture, and metastasis, we plotted time-dependent ROC curves. As shown in Figure 6, the predictive effect of OIPI was significantly higher than that of the clinical characteristics.

Association between osteosarcoma immune prognostic index and pathological fracture and metastasis

Finally, we explored the relationship between OIPI and clinical characteristics, including pathological fracture and metastasis, by Spearman correlation analysis. As demonstrated in Figure 7, OIPI was correlated with metastasis (p = 0.00684) and pathological fracture (p = 0.0346).

Discussion

In this study, we developed the OIPI from the combination of LDH, dNLR, and HBDH. OIPI stratified the 223 osteosarcoma patients into four groups: none, light, moderate, and severe. For example, a patient with dNLR > 2.01, LDH > 160 IU/L, and HBDH > 164 IU/L was classified as severe OIPI. OIPI showed better prognostic predictive ability than the other hematological indexes and clinical features. Our results also revealed that metastasis and OIPI were independent risk factors for prognosis in osteosarcoma patients. The significant prognostic risk factors were used to construct a nomogram which could validly predict the 3-year and 5-year OS of osteosarcoma patients. Moreover, OIPI was closely related to metastasis and pathological fracture in osteosarcoma patients. Therefore, our findings indicate that OIPI could act as a useful tool to predict the prognosis of patients with osteosarcoma.
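To make the grouping just described concrete, here is a minimal sketch of the OIPI assignment implied by the quoted thresholds (dNLR > 2.01, LDH > 160 IU/L, HBDH > 164 IU/L). The mapping of 0/1/2/3 elevated factors to none/light/moderate/severe is our assumption by analogy with LIPI (0/1/2 elevated factors for good/moderate/poor); the text does not spell the rule out explicitly.

```python
# Sketch of the OIPI grouping using the cut-offs quoted above. The mapping of
# the number of elevated factors (0/1/2/3) to none/light/moderate/severe is
# an assumption by analogy with LIPI, not a rule stated in the paper.
OIPI_LABELS = ["none", "light", "moderate", "severe"]

def dnlr(anc: float, wbc: float) -> float:
    """Derived NLR: absolute neutrophil count / (WBC - absolute neutrophil count)."""
    return anc / (wbc - anc)

def oipi_group(dnlr_value: float, ldh: float, hbdh: float) -> str:
    n_elevated = sum([dnlr_value > 2.01, ldh > 160.0, hbdh > 164.0])
    return OIPI_LABELS[n_elevated]

# A patient with all three factors elevated is classified as severe OIPI.
print(oipi_group(dnlr(6.0, 8.0), ldh=210.0, hbdh=180.0))  # dNLR = 3.0 -> "severe"
```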
Osteosarcoma is the leading cause of tumor-associated mortality in adolescents and children (Ritter and Bielack, 2010). With the advancement of comprehensive treatment, the OS rate has increased to 60%-70% in non-metastatic osteosarcoma patients (Bielack et al., 2002). Despite these advances in treatment, apparent OS heterogeneity is still observed in osteosarcoma patients. Currently, traditional clinical features, including the Enneking staging system, metastasis status, tumor site, histological type, and tumor grade, are the main prognostic evaluation factors (Yang et al., 2020). However, these factors have gradually revealed their inaccuracy and inappropriateness in clinical application, and discrepancies often occur between these factors and clinical outcomes (Wang et al., 2015b). Recently, several new prognostic factors, including micro-RNAs, long non-coding RNAs (lncRNAs), and gene signatures, have been reported to be effective in the prognosis prediction of osteosarcoma patients (Liu et al., 2015a; Wang et al., 2015a; Li et al., 2015; Li et al., 2021). Most of these biomarkers have predictive ability; for example, our previous study demonstrated that the metabolic-related gene pairs signature (MRGP) could reliably predict OS, with a high AUC of 0.9, in osteosarcoma patients (Li et al., 2021). Unfortunately, in osteosarcoma, the vast majority of these genes have not been validated in independent cohorts and are still far from clinical application. In addition, most of these biomarkers do not have uniform detection methods; for example, the expression levels of miRNAs and lncRNAs can be affected by the extraction and processing modes (Mathew et al., 2020; Zhong et al., 2020). Indeed, inconsistencies in miRNA and lncRNA expression results are frequently reported (Mathew et al., 2020; Zhong et al., 2020). More importantly, the high cost and inconvenience of detecting these factors limit further clinical practice.

In contrast, hematological parameters are derived from blood test results and are low-cost, simple, and convenient to detect. A large number of studies have confirmed the prognostic value of hematological parameters in patients with cancers; for example, elevated LDH and alkaline phosphatase (ALP) imply a poor prognosis in patients with osteosarcoma (Koh et al., 2015; Marais et al., 2015; Pan et al., 2015; Gu et al., 2016; Zumárraga et al., 2016; Li et al., 2017). However, due to the complexity of the tumor microenvironment, a single hematological parameter is not sufficient to fully reflect an individual's inflammatory status. There is still a large gap in the predictive ability of these single hematological biomarkers compared with metastasis status. In addition, the predictive stability of these single parameters is insufficient, and they show varying clinical significance in different studies, such as the LMR (Liu et al., 2015b; Song et al., 2021) (Figure 2A).

[Figure 6: Comparison of the predictive effect between OIPI and clinical characteristics on OS; a larger AUC in the t-ROC indicates better predictive ability.]

With the growing recognition of the link between the inflammatory response and prognosis, it is vital to develop a comprehensive index to evaluate inflammatory status and predict the long-term survival rate. Some attempts have been made to integrate significant inflammatory factors in order to evaluate patients' clinical outcomes, such as the establishment of LIPI in lung cancer (Mezquita et al., 2018).
LIPI is a comprehensive inflammatory index composed of dNLR and LDH (Mezquita et al., 2018). LIPI reflects inflammatory status and has recently been widely reported as a novel prognostic factor in lung cancer and extra-pulmonary cancers (Mezquita et al., 2018; Kazandjian et al., 2019; Sonehara et al., 2020; Auclin et al., 2021; Feng et al., 2021; Veccia et al., 2021; Xie et al., 2021; Obayashi et al., 2022). More encouragingly, studies have shown that LIPI can not only predict survival but also predict the response to immunotherapy (Mezquita et al., 2018; Auclin et al., 2021; Feng et al., 2021). However, to the best of our knowledge, the prognostic predictive effect of LIPI has never been investigated in osteosarcoma. Based on its significant clinical implications in both lung and extra-pulmonary cancers, we hypothesized that LIPI would also be of interest in predicting the prognosis of patients with osteosarcoma. As expected, our results suggested that LIPI had good ability to predict the OS of osteosarcoma patients (Figure 3A). The median OS of patients with good LIPI was significantly longer than that of patients with moderate or poor LIPI, which is consistent with the results reported by Sonehara et al. (2020) and Feng et al. (2021).

In addition, during the analysis, we found that HBDH, an LDH isoenzyme, equally showed prognostic significance in osteosarcoma patients (Figures 1B, 3A) and had good predictive ability, with the highest AUC value (0.688) among the single hematological parameters (Figure 3A). Given the excellent performance of HBDH in osteosarcoma, we introduced this metric into LIPI and constructed the OIPI. We therefore hypothesized that OIPI may be more suitable than LIPI for patients with osteosarcoma. In this study, OIPI divided the 223 patients into four groups, of which 45 patients had none OIPI, 72 patients had light OIPI, 65 patients had moderate OIPI, and 41 patients had severe OIPI (Figure 2E) (p < 0.001). Compared with traditional prognostic factors such as metastasis, OIPI divided osteosarcoma patients more evenly, suggesting that OIPI may be able to identify high-risk patients whose poor prognosis is not identifiable from metastatic features (poor prognosis in the initial absence of metastasis) (Figure 6). Our findings also showed that OIPI performed better than other hematological factors such as LDH, dNLR, NLR, and PLR in predicting OS in osteosarcoma patients (Figure 3A). Most importantly, OIPI does have higher predictive power than LIPI, as expected (Figure 3A). Compared with LIPI, OIPI is more accurate in identifying patients with a poor prognosis. Our results revealed that some of the patients who survived in the poor LIPI group were redistributed into the moderate OIPI group instead of the severe OIPI group, while all patients who died in the poor LIPI group were distributed into the severe OIPI group (Figure 3B). This supports the hypothesis that OIPI is more likely to identify osteosarcoma patients who have a truly poor prognosis. Moreover, the combination of dNLR, LDH, and HBDH can further reduce potential bias, as each individual indicator may be affected by various factors. Our results suggest that OIPI is indeed more suitable for osteosarcoma patients than LIPI. On the other hand, OIPI has the advantage of being low cost and is as easily accessible as other hematological factors. Therefore, we believe that OIPI may be more suitable for clinical application than other hematological factors.
Cancer-related inflammation has been recognized as the seventh hallmark of cancer (Mantovani et al., 2008). Inflammation predisposes to tumor development and promotes various stages of tumor initiation, growth, progression, and metastasis (Greten and Grivennikov, 2019). By engaging in dynamic and extensive interactions with cancer cells and the surrounding stroma, inflammatory cells participate in the formation of the inflammatory tumor microenvironment (Greten and Grivennikov, 2019). The dual role of neutrophils in inhibiting or promoting cancer cell growth and metastatic spread remains controversial, but in general, neutrophils are associated with nodal metastasis, tumor grade, and tumor stage, owing to their high intra-tumoral density in solid tumors (Masucci et al., 2019). In contrast, lymphocytes in solid tumors are thought to participate in antitumor immunity by secreting cytokines and inducing apoptosis of tumor cells, and many studies have evaluated their predictive value in different immunotherapies and chemotherapies (Teixidó et al., 2015; Ingold Heppner et al., 2016; Tas and Erturk, 2017). Platelets protect circulating tumor cells from lethal attack by the immune system or other proapoptotic stimuli and provide signals to establish a pro-metastatic niche environment, ultimately promoting tumor growth and metastasis (Haemmerle et al., 2018). As a classical prognostic factor, LDH reflects systemic cancer burden and predicts the outcomes of numerous cancers; elevated LDH is correlated with a poor prognosis in osteosarcoma patients (Walenta and Mueller-Klieser, 2004). dNLR is a more responsive indicator of systemic inflammatory status than NLR, as dNLR includes monocytes and other granulocytes. The predictive potential of dNLR has been demonstrated in a variety of cancers (Capone et al., 2018; Mezquita et al., 2021; Yang et al., 2021). Li et al. (2020) reported that a higher level of dNLR was associated with reduced OS in patients with non-colorectal gastrointestinal cancer. To our knowledge, this study is the first to explore this biomarker in osteosarcoma. Our results suggested that elevated dNLR (> 2.01) was also correlated with a poor outcome in osteosarcoma patients (Figures 1A, 2A). As the basic components of OIPI, elevated LDH, dNLR, and HBDH are all associated with poor outcomes in osteosarcoma.

[Figure 7: Association between OIPI and clinical characteristics including metastasis and pathological fracture. (A,B) Spearman's rank analysis showed that OIPI was related to metastasis and pathological fracture.]

It must be acknowledged that our study has some limitations. First, this was a single-center, retrospective study, which may have introduced selection bias. Second, this study did not fully explore the predictive potential of OIPI. To our knowledge, two studies with large sample sizes have affirmed the prognostic value of LIPI in predicting response to immunotherapy in non-small cell lung cancer. Therefore, it is reasonable to assume that OIPI may be able to predict the response to immunotherapy in osteosarcoma. As the first study to explore the prognostic ability of LIPI and OIPI in osteosarcoma, the current work lays a foundation for evaluating LIPI and OIPI in predicting the response to immunotherapy in osteosarcoma.
Finally, the prognostic value of HBDH in osteosarcoma still needs further validation. This study preliminarily explored the prognostic value of HBDH, an isoenzyme of LDH, which is a classical marker for predicting the prognosis of cancer patients. Surprisingly, HBDH performed better than LDH in our cohort. However, studies on the prognostic value of HBDH in cancer patients are very scarce; in osteosarcoma, only our study has reported it. Further studies are therefore needed to clarify the predictive power of HBDH in patients with osteosarcoma, or with cancer more broadly.

Conclusion

This study is the first to construct an OIPI, based on LIPI and practical hematological markers, that may be more suitable for osteosarcoma patients. Our results revealed that both LIPI and OIPI could predict the overall survival of osteosarcoma patients well, and OIPI had better predictive ability than the other hematological parameters. In particular, OIPI may have the ability to identify some high-risk patients among clinically low-risk patients. Further studies are needed to validate our conclusions, especially the value of LIPI versus OIPI in predicting response to immunotherapy in osteosarcoma patients.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.

Ethics statement

The studies involving human participants were reviewed and approved by the Ethics Committee of West China Hospital, Sichuan University. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
Prevalence of Symptoms of Anxiety Among Residents of Kabul During Pandemic of COVID-19: A Report from Capital of Afghanistan

Purpose: This study was conducted to evaluate the prevalence of symptoms of anxiety among residents of Kabul during the present COVID-19 pandemic.

Methods: This descriptive, cross-sectional, community-based survey was conducted in Kabul, Afghanistan, during the COVID-19 pandemic. Data were collected from July 11, 2020, to July 16, 2020. A bilingual (Dari and English) questionnaire was used for data collection. The first section of the questionnaire collected sociodemographic information about the respondents, and the second comprised a self-report standardized scale, the Generalized Anxiety Disorder-7 (GAD-7), to assess symptoms of anxiety. The survey form was distributed through online platforms. All residents of Kabul who used social apps such as WhatsApp and Facebook were eligible to participate in the study; participation was voluntary and non-commercial.

Results: Altogether 1135 complete responses were received. The majority of respondents were male and aged 18-34 years. Almost 18% were healthcare workers. Overall, 28.2% of the respondents reported symptoms of moderate to severe anxiety, 38.8% reported symptoms of mild anxiety, and nearly one third reported no symptoms. Female participants reported significantly higher levels of anxiety compared with males (39.7% versus 25.6%; p = 0.0001). No significant association was noted between anxiety levels and age groups, occupations, or healthcare-worker status.

Conclusion: The findings suggest that a gender-specific psychosocial protocol needs to be integrated into public health emergency plans to fight the current pandemic.

Introduction

Coronavirus Disease 2019 (COVID-19) was first diagnosed in Wuhan, China, in late December 2019 and has spread rapidly throughout the world. 1 The accelerated increase in cases of COVID-19 has posed various challenges to global public health and to the research and medical communities. 2 According to the World Health Organization (WHO), a total of 176,156,662 confirmed cases of COVID-19, including 3,815,486 deaths, had been reported globally as of 16 June 2021. 3 The first confirmed case of COVID-19 in Afghanistan was announced on February 24, 2020. 4 As of May 28, 2021, there were a total of 69,130 confirmed cases and 2,881 deaths due to COVID-19 in the country. 5 However, these figures do not seem to tally with the actual speed of disease transmission. According to a survey report of the Ministry of Public Health of Afghanistan, around 10 million people might have been infected. 6 Factors such as limited resources and testing capacity and the lack of a national database for deaths could have contributed to this underestimation of the confirmed cases and deaths of COVID-19 in Afghanistan. 7 Soon after the emergence of the outbreak, the government of Afghanistan adopted several public health measures, such as compulsory quarantine for people returning from Iran, closure of wedding halls, schools, and universities, and the shutdown of non-essential services, to mitigate the risk of spread of the infection. 8 Although Afghanistan was facing an ongoing food security crisis aggravated by an economic downturn caused by COVID-19, people were confined at home as much as possible in order to slow the rate of infection. 9
During previous outbreaks of infectious diseases, there was generalized fear among the public and increased fear-related behaviors and anxiety. 10 The current pandemic has also raised many uncertainties, with the possibility of a fatal outcome. Studies have reported on the levels of distress, depression, anxiety, and insomnia in general populations. 11 Anxiety is defined as a feeling of tension and worry accompanied by physical changes such as an increase in blood pressure and/or pulse rate, sweating, trembling, and dizziness. 12 Anxiety may weaken the immune system if triggered above the normal level, which could increase the risk of infection. Moreover, anxious reactions such as rushing to stores, healthcare centers, and pharmacies could disrupt social order, and as a result healthcare service provision might be affected. 13 Increasing evidence suggests that practices such as effective self-care and mental health provision need to be integrated into preparedness plans so as to reduce the burden of adverse mental health conditions associated with COVID-19. 14 In Afghanistan, the ongoing political conflicts have already created major challenges to various aspects of people's lives. Uncertainty and the rapid spread of COVID-19 can further aggravate the situation and make residents feel stressed, anxious, and upset, among other emotional reactions. 15 Therefore, this survey was conducted to investigate the prevalence of anxiety symptoms among residents of Kabul in order to make appropriate recommendations to policymakers with regard to mental health management during the pandemic.

Study Design and Participants

Due to existing lockdown restrictions, this cross-sectional, population-based study was conducted online using a bilingual (Dari and English) questionnaire. The first section of the questionnaire gathered sociodemographic information about the respondents, and the second consisted of a self-report standardized scale, the Generalized Anxiety Disorder-7 (GAD-7), which assessed symptoms of anxiety. All residents using WhatsApp and Facebook were requested to take part in the survey. The online questionnaire was voluntary and non-commercial.

Sampling Technique

For the purpose of this study, a non-probability convenience sampling method was used. Participants were asked to provide their informed consent before answering the questions. Respondents were able to withdraw from the survey at any time without any consequence.

Inclusion and Exclusion Criteria

Participants who were 18 years old or more and had access to the internet were included in the survey, whereas those who did not provide their consent to take part in the study and those who filled out the survey form incompletely were excluded.

Data Collection

Data were collected anonymously. All respondents provided information about their demography and filled out a questionnaire designed to assess the symptoms of anxiety. To ensure the quality of the data, the purpose of the survey was explained to participants, and they were encouraged to answer the questions carefully. After excluding five incomplete questionnaires, a total of 1,135 complete survey forms were included in the final analysis.
Ethical Consideration

Ethical approval for this study was obtained from …

The employment section comprised five categories: (1) office and management; (2) business and free occupations; (3) teachers or students, in schools or universities; (4) healthcare workers, including doctors, nurses, technicians, and support staff; and (5) an 'others' category including freelancers, retirees, those engaged in social activities, and related fields.

Generalized Anxiety Disorder Scale

We used the Generalized Anxiety Disorder-7 (GAD-7) scale to investigate the symptoms of anxiety among study participants. The GAD-7 has been used extensively to detect and screen for symptoms of anxiety. It is a valid instrument with a desirable level of internal consistency in reported results. 16 Moreover, it can be completed in only 3 minutes, with easy scoring. For this study, the Dari translation of the scale was used. To ensure the Dari translation was appropriate for data collection, it was sent for critical review to three experts who were proficient in both English and Dari. Their comments and corrections were applied to the Afghanistan version of the scale. Furthermore, the scale was pilot-tested on 30 randomly selected people to check whether the questions were comprehensible. The pilot-testing participants were satisfied with the content of the GAD-7 scale. The GAD-7 contains seven questions, each representing one core symptom of anxiety, and investigates the frequency of the symptoms during the last two weeks. Participants were required to choose one option from a 4-point Likert rating scale ranging from 0 to 3, where 0 indicated that the symptom did not occur at all and 3 indicated that the symptom occurred almost every day. Participants were thus given a score of 0-3 for each question; the minimum total score was 0 and the maximum was 21. 16 We considered a total score of 10 points or more to indicate the presence of anxiety in the study population. To classify mild, moderate, and severe levels of anxiety, cut-off values of 5, 10, and 15 were applied. 17

Statistics

Descriptive statistics were used to describe the demographic variables and the frequency distribution of participants according to different levels of anxiety. The prevalence and severity of anxiety were classified based on age, sex, and employment. The chi-square test was used to investigate possible associations between dependent and independent variables. Statistical Package for Social Sciences (SPSS) version 26 was used to analyze the data. A p-value of < 0.05 was considered statistically significant at a 95% confidence interval.
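For illustration, here is a short sketch of the GAD-7 scoring and severity banding described above, followed by the chi-square test used for the sex-by-anxiety association. The survey itself was analyzed in SPSS; the 2x2 counts below are only approximate reconstructions from the reported percentages, not the study's raw data.

```python
# Illustrative sketch only: GAD-7 total scoring with the cut-offs 5/10/15,
# plus a chi-square test on an approximate 2x2 table.
from scipy.stats import chi2_contingency

def gad7_severity(item_scores):
    """item_scores: seven integers, each 0-3, one per GAD-7 item."""
    assert len(item_scores) == 7 and all(0 <= s <= 3 for s in item_scores)
    total = sum(item_scores)          # total score range: 0-21
    if total >= 15:
        return total, "severe"
    if total >= 10:
        return total, "moderate"      # a total of 10+ was counted as anxiety
    if total >= 5:
        return total, "mild"
    return total, "none"

print(gad7_severity([2, 1, 3, 2, 1, 2, 1]))  # -> (12, 'moderate')

# Approximate 2x2 table reconstructed from the reported percentages
# (males: 25.6% of ~925; females: 39.7% of ~210): moderate-severe vs not.
table = [[237, 688], [83, 127]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```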
Results

Table 1 shows the sociodemographic information of the respondents. Of the 1,135 participants, the majority were male and aged 18-34 years (81.5% and 83.2%, respectively). Less than half (44.8%) of the participants were either teachers or students, and almost 18% were healthcare workers. Table 2 shows the various levels of anxiety symptoms among study participants. Overall, 28.2% of the participants reported symptoms of moderate to severe anxiety, 38.8% reported symptoms of mild anxiety, and nearly one third (33%) reported no symptoms. Table 3 shows the association of age, sex, and employment with the prevalence of symptoms of anxiety. Female participants reported significantly higher anxiety symptoms compared with males (39.7% versus 25.6%; p = 0.0001). No significant association was found in the prevalence of anxiety symptoms between healthcare workers and non-healthcare workers. Likewise, age and the other occupation groups did not show any significant associations with the prevalence of anxiety symptoms.

Discussion

This is the first report on the prevalence of anxiety symptoms among the general public in Kabul, Afghanistan, during the COVID-19 pandemic. The findings of this survey demonstrate that more than a quarter of people in Kabul experienced moderate to severe anxiety and that anxiety was more prevalent and severe among women than men. Studies conducted in China, 18 the Philippines, 11 and Saudi Arabia 19 reported similar findings, i.e., about one-fourth of participants reported suffering from moderate to severe anxiety, and females were affected more than males. Another study conducted in India during the pandemic found that 25.1% of subjects experienced moderate to severe depression, 28% were affected by anxiety, and 11% were affected by stress. 20 A systematic review and meta-analysis reported an overall prevalence of anxiety of 28-38% in the populations under study and found that female gender, increased risk of contact with COVID-19 patients, lower socio-economic status, loneliness, and spending more time watching COVID-19-related news were common risk factors for adverse mental health effects. 21 These findings further support our results. Contrary to our findings, a study conducted in China reported no significant difference in the prevalence of anxiety symptoms between male and female participants. 22 Furthermore, studies conducted in Iran 23 and India 20 reported higher levels of anxiety symptoms in male participants than in female participants.

Healthcare workers have been at the center of the fight against COVID-19. Extended working hours, increased risk of contracting the infection, shortage of personal protective equipment (particularly in situations similar to Afghanistan's 24), loneliness, exhaustion, and separation from families and friends place them at greater risk of adverse mental health conditions. 11 However, there was no difference between healthcare workers and non-healthcare workers in terms of the prevalence of anxiety symptoms in this study. The findings of a systematic review conducted by Luo et al. (2020) are similar to those of our study. 21 This may be due to healthcare workers' strong sense of duty and ability to adapt to crises. 11 Analysis of our survey data showed that symptoms of anxiety were equally distributed among different age categories and various occupational groups. This may suggest that people of all ages and occupational groups were equally concerned about COVID-19 and its consequences. This finding conforms to the results of studies carried out in Turkey 25 and India, 20 which reported no significant association between age groups and anxiety symptoms. Likewise, a recently published study in China did not report any significant association between the prevalence of anxiety symptoms and different occupational groups. 22 However, a study conducted in Jordan reported higher anxiety scores at older ages, 26 whereas studies conducted in Iran 13 and China 22 reported that younger people tended to experience anxiety symptoms more than older participants. Due to the unavailability of data on the status of anxiety symptoms in the general population before the COVID-19 pandemic, we were not able to compare the occurrence and severity of anxiety symptoms before and during the pandemic.
However, a study from Switzerland reported that the overall attendance of psychiatric patients during the COVID-19 pandemic decreased by 17.5% compared with the pre-pandemic period and that the majority of those still attending were patients with more severe conditions. 27

Limitations

This study had some limitations. Firstly, our sample was from one province and is not representative of all of Afghanistan. Future studies should improve the study design by recruiting more participants from different provinces. Secondly, this study investigated only the symptoms of anxiety among participants; other mental health conditions such as depression and post-traumatic stress disorder were not included. Researchers may consider more psychological impacts and factors associated with COVID-19 in future studies. Thirdly, the results would have been more concrete, and perhaps more useful, if a prospective study on the same study group had been conducted. Unfortunately, we were not able to collect personal information from the respondents due to ethical restrictions; thus, we could not perform a prospective study on the same population. Fourthly, the proportion of one particular group, i.e., teachers or students, was larger than that of the other groups. This prevents the findings of our study from being generalized to the entire population, particularly to those with lower levels of education. Finally, self-reported results of anxiety may not always match diagnoses made by mental health experts. Nevertheless, the findings of this study provide an important insight into the mental health status of people in Kabul that could help policymakers design more effective plans for controlling the negative consequences of the current pandemic at various levels.

Conclusion

This study highlights that more than one-fourth of the population of Kabul was suffering from symptoms of moderate to severe anxiety and that symptoms were significantly higher in female participants than in their male counterparts. Therefore, it seems necessary to integrate a gender-specific psychosocial protocol into public health emergency plans in the fight against the current crisis.
Targeting a Surface Cavity of α1-Antitrypsin to Prevent Conformational Disease

Conformational diseases are caused by a structural rearrangement within a protein that results in aberrant intermolecular linkage and tissue deposition. This is typified by the polymers that form with the Z deficiency variant of α1-antitrypsin (Glu-342 → Lys). These polymers are retained within hepatocytes to form inclusions that are associated with hepatitis, cirrhosis, and hepatocellular carcinoma. We have assessed a surface hydrophobic cavity in α1-antitrypsin as a potential target for rational drug design in order to prevent polymer formation and the associated liver disease. The introduction of either Thr-114 → Phe or Gly-117 → Phe on strand 2 of β-sheet A within this cavity significantly raised the melting temperature and retarded polymer formation. Conversely, Leu-100 → Phe on helix D accelerated polymer formation, but this effect was abrogated by the addition of Thr-114 → Phe. None of these mutations affected the inhibitory activity of α1-antitrypsin. The importance of these observations was underscored by the finding that the Thr-114 → Phe mutation reduced polymer formation and increased the secretion of Z α1-antitrypsin from a Xenopus oocyte expression system. Moreover, cysteine mutants within the hydrophobic pocket were able to bind a range of fluorophores, illustrating the accessibility of the cavity to external agents. These results demonstrate the importance of this cavity as a site for drug design to ameliorate polymerization and prevent the associated conformational disease.

Conformational diseases arise when a protein undergoes a change in size or a fluctuation in shape that results in self-association and tissue deposition (1). This process is now recognized to underlie a whole range of diseases including the amyloidoses, prion encephalopathies, glutamine repeat diseases, and Alzheimer's and Parkinson's disease (2). The paradigm for the conformational diseases was provided by the serpinopathies, which result from mutations in members of the serine proteinase inhibitor or serpin superfamily. The most well characterized of these is the severe plasma deficiency that is associated with the Z allele of α1-antitrypsin (3). α1-Antitrypsin is the most abundant circulating proteinase inhibitor and the archetypal member of the serpin superfamily (4, 5). Most individuals have two M α1-antitrypsin alleles, but approximately 1 in 2000 are homozygous for the Z variant. The Z mutation results from a Glu → Lys substitution at amino acid 342 (6) and leads to retention of α1-antitrypsin as inclusion bodies within the hepatocyte. These inclusions predispose to neonatal hepatitis, juvenile cirrhosis, and adult hepatocellular carcinoma (7-9). The resulting secretory defect accounts for the low circulating plasma level of α1-antitrypsin, which is only 15% of normal in the Z homozygote. This plasma deficiency exposes the lungs to uncontrolled proteolytic attack, which in turn causes early-onset panacinar emphysema, particularly in Z α1-antitrypsin homozygotes who smoke (10). The structure of α1-antitrypsin is based on a five-stranded β-sheet A and a mobile reactive center loop (11-13). Our previous studies have shown that the Z mutation promotes opening of β-sheet A to facilitate a sequential interaction between the reactive center loop of one molecule and β-sheet A of a second, resulting in polymer formation (3, 14-16).
These polymers tangle within the rough endoplasmic reticulum of hepatocytes to form the periodic acid-Schiff-positive inclusions that are associated with liver disease (3, 17). The significance of the reactive loop-β-sheet linkage was underscored by two other α1-antitrypsin variants, Siiyama (Ser-53 → Phe) and Mmalton (deletion of Phe-52), which also result in hepatic inclusions and severe plasma deficiency of α1-antitrypsin. Both of these mutants spontaneously form polymers in vivo (18, 19). Moreover, this linkage accounts for the mild plasma deficiency observed with both S (Glu-264 → Val) and I (Arg-39 → Cys) α1-antitrypsin (20, 21). Further support for polymer formation as the mechanism responsible for the retention of mutant α1-antitrypsin within hepatocytes came from studies utilizing the Xenopus oocyte expression system: point mutations that attenuated polymerization of Z α1-antitrypsin in vitro (14, 22) increased the secretion of Z α1-antitrypsin in vivo (23).

Our understanding of the mechanism underlying polymerization has allowed the design of strategies to prevent polymer formation (3, 16). To date, however, these have been based on peptides that bind to β-sheet A and as a consequence inactivate α1-antitrypsin as a proteinase inhibitor. A more useful strategy would be to identify cavities in α1-antitrypsin that can bind peptides, or their mimetics, and block polymerization without a loss of inhibitory activity. Our high-resolution crystal structure of α1-antitrypsin revealed a large hydrophobic cavity bounded by strand 2 of β-sheet A and helices D and E (Fig. 1). The cavity is present in monomeric α1-antitrypsin but is obliterated during polymerization (24). This cavity could provide an ideal target for drug design to prevent polymer formation and the associated liver disease (11, 13, 24). We have used site-directed mutagenesis to explore the role of this surface cavity in the conformational transitions of α1-antitrypsin in vitro and in vivo.

EXPERIMENTAL PROCEDURES

Mutagenesis, Expression, and Purification of Recombinant α1-Antitrypsin

Pittsburgh α1-antitrypsin (Met-358 → Arg) was used as the wild type protein. Replacement of the P1 methionine with arginine renders the protein a potent inhibitor of thrombin rather than neutrophil elastase; it otherwise has the same biophysical properties and rate of polymerization as Met-358 α1-antitrypsin (15, 25). The α1-antitrypsin mutants (Leu-100 → Phe, Thr-114 → Phe, Gly-117 → Phe, Leu-100 → Phe/Thr-114 → Phe, Leu-100 → Cys/Cys-232 → Ser, and Thr-114 → Cys/Cys-232 → Ser) were prepared by site-directed mutagenesis, and the sequences were confirmed by automated DNA sequence analysis. The α1-antitrypsin variants were cloned into the pET16b plasmid and transformed into BL21(DE3) Escherichia coli, and expression was induced with 0.4 mM isopropyl-β-D-thiogalactopyranoside. Recombinant α1-antitrypsin was extracted from the crude E. coli extract by Q-, then zinc chelating-, and finally glutathione-Sepharose chromatography (11). However, this method was not efficient for purification of the cysteine cavity mutants. Therefore, the α1-antitrypsin cavity mutants were also expressed in the pQE31 vector, which contains an amino-terminal MRSHHHHHH tag. Recombinant proteins were then purified from the soluble fraction of the E. coli lysate by HiTrap Ni-chelating and Q-Sepharose column chromatography as detailed previously (26). The proteins were dialyzed into 50 mM Tris, 50 mM KCl, pH 7.4, and purity was confirmed by 12% (w/v) SDS-PAGE.
Characterization of Recombinant Wild Type and Mutant α1-Antitrypsin

The recombinant proteins were characterized by non-denaturing and 0-8 M transverse urea gradient PAGE. Inhibitory activity was determined by incubating bovine α-chymotrypsin (5 pmol) of known active-site concentration (27) with increasing concentrations of α1-antitrypsin (estimated active-site concentration of 0.1 μM) in a total volume of 100 μl of reaction buffer (0.03 M sodium phosphate, 0.16 M NaCl, 0.1% (w/v) PEG 4000, pH 7.4). The reaction proceeded for 10 min at room temperature, and residual proteolytic activity was determined by the addition of the substrate succinyl-L-alanyl-L-alanyl-L-prolyl-L-phenylalanyl-p-nitroanilide to a final concentration of 0.16 mM (18). The change in A405 over 3 min was observed. Active-site values were obtained by plotting residual proteolytic activity against the volume of α1-antitrypsin and extrapolating to the x intercept (28). Binary complexes were formed by incubating a 50-100-fold molar excess of the antithrombin 12-mer peptide (P14-P3; Ac-Ser-Glu-Ala-Ala-Ala-Ser-Thr-Ala-Val-Val-Ile-Ala-OH) or the α1-antitrypsin 6-mer peptide (P7-P2; Ac-Phe-Leu-Glu-Ala-Ile-Gly-OH) with each α1-antitrypsin variant at 0.5 mg/ml in 50 mM Tris, 50 mM KCl, pH 7.4, at 37°C for up to 48 h. Samples at different time points were assessed on a 7.5% (w/v) non-denaturing gel containing 8 M urea (16). All proteins were visualized by Coomassie Blue or silver staining. The melting temperature and far-ultraviolet (250-195 nm) CD spectrum were obtained for each α1-antitrypsin mutant as described previously (14).

Assessment of Polymerization of Recombinant Wild Type and Mutant α1-Antitrypsin

Polymer formation was assessed by incubating each of the recombinant α1-antitrypsin variants at 0.1 mg/ml in 50 mM Tris, 50 mM KCl, pH 7.4, at 52°C. The samples were then separated by 7.5% (w/v) non-denaturing PAGE, and the protein was visualized by silver staining. Loss of intensity of the monomeric protein band was determined by densitometry (Quantity One, Bio-Rad). The half-life for polymer formation was calculated from the semi-log plot of the ln fractional loss against time in seconds.

Assessment of α1-Antitrypsin Secretion from the Xenopus Oocyte

The cavity mutants Leu-100 → Phe, Thr-114 → Phe, and Gly-117 → Phe were inserted into the sp64T plasmid containing either M or Z α1-antitrypsin by site-directed mutagenesis, and the sequences were confirmed as before. The in vitro transcription and assessment of α1-antitrypsin secretion from the Xenopus oocyte were undertaken as described previously (23).

Fluorophore Labeling of Recombinant Wild Type and Mutant α1-Antitrypsin

The cysteine cavity mutants were labeled with a 20-fold molar excess of fluorophore, including tetramethylrhodamine-5-iodoacetamide (5-TMRIA), at both 20 and 37°C, for up to 72 h at pH 7.4, according to the manufacturer's instructions (Molecular Probes Inc., Eugene, OR). The reaction was terminated by the addition of 1 μl of 14.3 M β-mercaptoethanol, and the labeled protein was separated from excess label on a NAP-10 gel filtration column equilibrated in 50 mM Tris, 50 mM KCl, pH 7.4.
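Before the results, a hedged sketch of the half-life calculation just described: densitometry gives the fraction of monomer remaining, whose natural log is fitted against time; assuming first-order loss, t1/2 = ln 2 / k. The densitometry values below are hypothetical, not data from the paper.

```python
# Minimal sketch (hypothetical data): first-order fit of monomer loss from
# densitometry of the native-PAGE monomer band, as described above.
import numpy as np

time_s = np.array([0.0, 3600, 7200, 14400, 28800])           # incubation time (s)
monomer_fraction = np.array([1.00, 0.71, 0.52, 0.26, 0.07])  # hypothetical densitometry

slope, intercept = np.polyfit(time_s, np.log(monomer_fraction), 1)
k = -slope                                 # first-order rate constant (s^-1)
print(f"k = {k:.2e} s^-1, t1/2 = {np.log(2) / k / 3600:.1f} h")
```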
RESULTS

Three residues of α1-antitrypsin were selected for site-directed mutagenesis in order to explore the role of the cavity in polymer formation. Leu-100 on hD and Thr-114 on s2A have side chains that point into the hydrophobic pocket, whereas Gly-117 is located at the base of the cavity on s2A (Fig. 1B). Introducing large phenylalanine residues, or cysteine residues that could be labeled with bulky fluorophores, at these sites was predicted to fill the cavity and mimic the effect of binding a small-molecule inhibitor. The one naturally occurring cysteine, at position 232 of α1-antitrypsin, was replaced by a serine residue to ensure that only the newly introduced cavity cysteine was available for labeling. The Cys-232 → Ser mutation has no effect on the inhibitory activity or polymerization of recombinant α1-antitrypsin (15, 29, 30).

Purification of Cavity Mutants

Wild type, Leu-100 → Cys/Cys-232 → Ser, Thr-114 → Cys/Cys-232 → Ser, and Gly-117 → Phe α1-antitrypsin were purified from the supernatant following lysis of the E. coli (11). However, this approach relies on glutathione affinity chromatography, and the α1-antitrypsin cavity mutants Leu-100 → Cys/Cys-232 → Ser and Thr-114 → Cys/Cys-232 → Ser were unable to bind to the glutathione resin. Wild type, Leu-100 → Cys/Cys-232 → Ser, Thr-114 → Cys/Cys-232 → Ser, Leu-100 → Phe, Thr-114 → Phe, Gly-117 → Phe, and Leu-100 → Phe/Thr-114 → Phe α1-antitrypsin were therefore cloned into an expression vector containing the MRSHHHHHH tag at the amino terminus. They were expressed in E. coli and purified to homogeneity by Ni-chelating and Q-Sepharose column chromatography. Subsequent experiments were performed with purified recombinant α1-antitrypsin variants containing an amino-terminal His tag. All the recombinant proteins migrated as a characteristic doublet on 7.5% (w/v) non-denaturing PAGE and as a single band on 12% (w/v) SDS-PAGE, apart from Leu-100 → Phe/Thr-114 → Phe α1-antitrypsin, which contained a minor contaminant (data not shown). All the mutants had an unfolding profile that was similar to wild type α1-antitrypsin on a 0-8 M transverse urea gradient gel, with the exception of Gly-117 → Phe, which unfolded at a higher urea concentration (approximately 4 M), indicating increased stability (data not shown). Finally, the far-UV CD spectrum of each variant was similar to that of wild type α1-antitrypsin, confirming that the mutations did not cause a significant perturbation in the overall structure of the molecule (data not shown).

Thermal Stability of Recombinant Wild Type and Mutant α1-Antitrypsin

The melting temperature (Tm) of wild type and mutant α1-antitrypsin was examined by circular dichroic spectroscopy. The amino-terminal histidine tag increased the melting temperature of all the recombinant α1-antitrypsin variants by 6°C when compared with the proteins purified from E. coli lysate that lacked the histidine tag (data not shown). All the mutations introduced into strand 2A elevated the Tm of the protein. In particular, Gly-117 → Phe α1-antitrypsin had a melting temperature that was more than 8°C higher than that of the wild type protein. Helix D mutations had differing effects according to their size: Leu-100 → Phe lowered the melting temperature by 3.5°C, whereas a cysteine residue at the same position did not significantly alter the Tm when compared with the wild type control. Furthermore, the addition of a second phenylalanine residue within the cavity at position 114 (Leu-100 → Phe/Thr-114 → Phe) resulted in an increase in thermal stability, reversing the effect of the Leu-100 → Phe mutation alone (Table I).
Thermal stability was also assessed by incubating recombinant wild type or cavity mutants of α1-antitrypsin between 30 and 100°C, at increments of 10°C, for 15 min, and assessing the samples by 7.5% (w/v) non-denaturing PAGE. Leu-100 → Cys/Cys-232 → Ser and Leu-100 → Phe α1-antitrypsin had thermal stabilities similar to that of the wild type protein. However, Gly-117 → Phe α1-antitrypsin was the most thermostable, as it remained monomeric following incubation at 60°C for 15 min, which was 10°C higher than wild type α1-antitrypsin. Thr-114 → Cys/Cys-232 → Ser, Thr-114 → Phe, and Leu-100 → Phe/Thr-114 → Phe α1-antitrypsin all had intermediate thermal stabilities (data not shown). Thus, the differences in melting temperatures were mirrored in the thermal stability of wild type and mutant α1-antitrypsin when assessed by heating and separation on non-denaturing PAGE.

Polymerization of Recombinant α1-Antitrypsin Variants

Polymerization was assessed at 0.1 mg/ml and 52°C for up to 7 days, as these conditions led to polymer formation of histidine-tagged recombinant α1-antitrypsin that could be visualized by non-denaturing PAGE. The rate of polymer formation was calculated from the loss of intensity of the monomeric protein band using densitometry scanning (Table I). Wild type α1-antitrypsin almost completely polymerized within 24 h when heated at 52°C (Fig. 3a). Leu-100 → Phe α1-antitrypsin accelerated polymer formation, in keeping with its lower melting temperature (Fig. 3b). However, replacing Leu-100 with a cysteine residue (Fig. 3c) or introducing another bulky phenylalanine residue at position 114 (Fig. 3d) within the cavity reversed this effect, as these mutants polymerized at a rate similar to wild type α1-antitrypsin (Table I). Interestingly, all the mutations introduced onto s2A, independent of size, slowed polymer formation, as would be predicted from their melting temperatures (Fig. 3, e-g) (Table I). The most thermostable mutant, Gly-117 → Phe α1-antitrypsin, dramatically impeded polymer formation, as polymers were evident only after incubating at 52°C for 72 h.

Binary Complex Formation between Recombinant Wild Type and Mutant α1-Antitrypsin and Exogenous Reactive Loop Peptides

A 12-mer peptide corresponding to the reactive center loop of antithrombin (P14-P3) was used to assess the patency of β-sheet A. Binary complexes were formed by incubating recombinant α1-antitrypsin (0.5 mg/ml) with a 50-100-fold molar excess of the amino-terminally acetylated 12-mer peptide at 37°C for 48 h. Samples were examined on 7.5% (w/v) non-denaturing PAGE containing 8 M urea. Recombinant wild type α1-antitrypsin and the cavity mutants all formed a binary complex with the 12-mer peptide with 1:1 stoichiometry (Fig. 4, a-g). Gly-117 → Phe α1-antitrypsin formed a binary complex with the peptide at a rate faster than wild type α1-antitrypsin (Table II and Fig. 4b), whereas binary complex formation was significantly retarded by Leu-100 → Phe α1-antitrypsin (Fig. 4c). The other cavity mutations all similarly slowed annealing of the peptide to β-sheet A. Neither recombinant wild type nor Leu-100 → Phe α1-antitrypsin was able to form a binary complex with the 6-mer peptide, corresponding to P7-P2 of the reactive loop of α1-antitrypsin, under the same conditions at 24 h (data not shown).

Secretion of Recombinant α1-Antitrypsin from the Xenopus Oocyte Expression System

The effects of the mutants were then assessed on the polymerization of the Z variant of α1-antitrypsin.
This mutant is too unstable to be expressed as a recombinant protein, and the mutants were therefore assessed for their effect on the secretion of Z α1-antitrypsin in vivo. 62% (S.E. ± 4%) of the wild type protein was secreted from the oocytes compared with 10% (± 2%) of Z α1-antitrypsin (p = 0.0001, Student's t test with Welch correction). Gly-117→Phe and Leu-100→Phe had little effect on the secretion of Z α1-antitrypsin (17 ± 5 and 18 ± 5%, respectively). However, Thr-114→Phe more than doubled the secretion of Z α1-antitrypsin, to 23 ± 4% (p = 0.0018 compared with Z α1-antitrypsin). The results are the mean of 5-9 separate experiments.

FIG. 1. A, 2-Å crystal structure of monomeric α1-antitrypsin illustrating the mobile reactive loop (red) and β-sheet A (green). The hydrophobic surface cavity of interest (arrow) is bounded by strand 2 of β-sheet A (s2A), helix D (hD), and helix E (hE). This area is obliterated during conformational transitions that involve reactive loop insertion into β-sheet A, as demonstrated by the cleaved conformation (24). B, a model of the interior of the hydrophobic cavity displaying the position of the amino acid side chains (blue). The residues Leu-100 on hD and Thr-114 and Gly-117 on s2A were chosen as sites to introduce cavity-filling mutations.

Fluorophore Labeling of Recombinant α1-Antitrypsin Cysteine Cavity Mutants-The accessibility of the cavity was examined by labeling the cysteine variants Leu-100→Cys/Cys-232→Ser and Thr-114→Cys/Cys-232→Ser α1-antitrypsin with a number of fluorophores having side chains of different lengths (Table III). The labeling reactions were performed in the dark at either 20 or 37°C for up to 72 h. Incubation of Leu-100→Cys/Cys-232→Ser α1-antitrypsin with 5-TMRIA at 20°C resulted in 14, 24, 20, and 20% labeling when incubated for 12, 24, 48, and 72 h, respectively. Likewise, incubation of Thr-114→Cys/Cys-232→Ser α1-antitrypsin with 5-TMRIA at 20°C resulted in 15, 20, 26, and 24% labeling when incubated for 12, 24, 48, and 72 h, respectively. These results imply that maximal labeling of the cavity cysteine residues with 5-TMRIA was achieved within 24 h. Other fluorescent probes were assessed for their ability to label the cysteine variants in an attempt to improve the labeling efficiency (Table III). The addition of the reducing agent tris-(2-carboxyethyl)phosphine and raising the reaction temperature both increased the amount of protein labeled with 5-IAF but promoted labeling of other susceptible residues (histidines, methionines, and lysines), as evidenced by multiple bands on non-denaturing PAGE (data not shown). This experimental artifact could be overcome by raising the pH to 8.5 and limiting the reaction time to 2 h at 37°C. With this method, 35% of Leu-100→Cys/Cys-232→Ser α1-antitrypsin and 28% of Thr-114→Cys/Cys-232→Ser α1-antitrypsin were labeled with 5-IAF (Table III). As only approximately a third of the protein was labeled with fluorophore, it was difficult to interpret the effect of these agents on the rate of polymerization.

a This represents the cysteine variants maximally labeled with 5-IAF in 50 mM Tris, pH 8.5, at 37°C for 2 h. The inhibitory activity was determined against bovine α-chymotrypsin, and the melting temperature was calculated from circular dichroic spectrum analysis at 222 nm. Polymer formation was assessed by incubating each α1-antitrypsin variant at 0.1 mg/ml and 52°C for 24 h. The t1/2 was calculated from the loss of intensity of the monomeric protein band using densitometry scanning. The results are the mean ± S.D. of three measurements.
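The footnote above notes that the polymerization half-life was calculated from the loss of intensity of the monomeric band by densitometry. A minimal sketch of that calculation is given below, assuming simple first-order loss of monomer; the time points and intensities are illustrative placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative densitometry readings (arbitrary units) of the monomeric
# band during a 52 degree C incubation; hypothetical numbers, not the
# paper's data.
t_hours = np.array([0.0, 2.0, 4.0, 8.0, 16.0, 24.0])
intensity = np.array([100.0, 81.0, 66.0, 43.0, 19.0, 8.0])

def first_order(t, i0, k):
    """Monomer intensity under simple first-order decay."""
    return i0 * np.exp(-k * t)

(i0_fit, k_fit), _ = curve_fit(first_order, t_hours, intensity, p0=(100.0, 0.1))
t_half = np.log(2) / k_fit  # half-life of the monomeric band
print(f"fitted k = {k_fit:.3f} per h, t1/2 = {t_half:.1f} h")
```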
Aliquots were removed and snap-frozen prior to assessment on 7.5% (w/v) non-denaturing PAGE. All lanes contain 4 μg of α1-antitrypsin. N represents native α1-antitrypsin and P polymers of α1-antitrypsin.

DISCUSSION

Our high resolution crystal structure of recombinant M α1-antitrypsin demonstrated a hydrophobic cavity bounded by s2A, hD, and hE that was present in the monomeric structure but predicted to reduce in size by >70% during polymer formation (24). As such, this cavity provides a target for rational drug design to prevent polymerization and ameliorate the associated disease. In order to explore the role of the cavity in polymer formation, three residues whose side chains border the cavity were selected for site-directed mutagenesis (Fig. 1B). Introducing large phenylalanine residues at Leu-100 on hD and at Thr-114 and Gly-117 on s2A is likely to fill the cavity. A detailed assessment was undertaken to determine the effect of these mutations on polymer formation, in order to establish whether the hydrophobic cavity would be a suitable target for rational drug design. All the cavity mutants had a normal far UV circular dichroic spectrum, were active as proteinase inhibitors, and formed SDS-stable complexes. They varied in specific activity when assessed against bovine α-chymotrypsin, indicating differing stoichiometries of inhibition, but overall the data show that the point mutations did not lead to a significant change in the structure of α1-antitrypsin. All of the mutations introduced onto s2A elevated the melting temperature and significantly slowed the rate at which α1-antitrypsin formed loop-sheet polymers, particularly the introduction of bulky phenylalanine residues at either position 114 or 117. These observations are in keeping with our previous studies (14). Moreover, the addition of a phenylalanine at position 114 restored the rate of Leu-100→Phe α1-antitrypsin polymerization to that of the wild type protein. Furthermore, the s2A mutations all increased the thermal stability of α1-antitrypsin, in accordance with their effect on polymerization. These data provide strong evidence that filling the cavity with mutations on s2A stabilizes α1-antitrypsin and retards polymer formation without compromising inhibitory function in vitro.

Polymerization results from the sequential insertion of the reactive center loop of one molecule into β-sheet A of another (15). Point mutations that favor polymerization are predicted to open β-sheet A to facilitate incorporation of an exogenous reactive center loop (31). The accessibility of β-sheet A was assessed in the mutants by measuring the rate at which they annealed an exogenous reactive loop peptide to form a binary complex (32). Gly-117→Phe α1-antitrypsin formed a binary complex with the peptide at a rate faster than the wild type protein. This was unexpected, as the mutant significantly retarded polymer formation. Moreover, the most polymerogenic mutant, Leu-100→Phe α1-antitrypsin, formed a binary complex with the peptide at one of the slowest rates (Table II and Fig. 4).

TABLE III. The labeling efficiency of cysteine cavity mutants of α1-antitrypsin with different fluorophores. The fluorophores were incubated with recombinant α1-antitrypsin at 20 or 37°C for between 2 and 16 h in the dark. Labeled protein was separated from excess fluorophore by gel filtration, and the labeling efficiency was calculated spectrophotometrically. ND, not done. The abbreviations used are as follows: FL IA, N-(4,4-difluoro-5,7-dimethyl-4-bora-3a,4a-diaza-s-indacene-3-propionyl)-N′-iodoacetylethylenediamine; FL C1-IA, N-(4,4-difluoro-5,7-dimethyl-4-bora-3a,4a-diaza-s-indacene)-iodoacetylethylenediamine; TCEP, tris-(2-carboxyethyl)phosphine.
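The Table III legend above states that labeling efficiency was calculated spectrophotometrically. A minimal sketch of one standard way to do this, via the Beer-Lambert law with a correction for dye absorbance at 280 nm, is shown below; the extinction coefficients and correction factor are generic placeholders, not values from this study.

```python
def labeling_efficiency(a280, a_dye, eps_protein, eps_dye, cf280):
    """Mol dye per mol protein from a UV-vis spectrum (Beer-Lambert).

    a280        absorbance at 280 nm (protein + dye contribution)
    a_dye       absorbance at the dye's excitation maximum
    eps_protein protein molar extinction coefficient at 280 nm (M^-1 cm^-1)
    eps_dye     dye molar extinction coefficient at its maximum (M^-1 cm^-1)
    cf280       fraction of the dye's peak absorbance that appears at 280 nm
    """
    protein_conc = (a280 - a_dye * cf280) / eps_protein
    dye_conc = a_dye / eps_dye
    return dye_conc / protein_conc

# Illustrative numbers only (not measurements from this study):
print(f"{labeling_efficiency(0.52, 0.21, 24000, 78000, 0.30):.0%} labeled")
```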
Such dichotomy between polymer formation and the speed at which a mutant of α1-antitrypsin accepts exogenous reactive loop peptides has also been observed for the naturally occurring Z and Mmalton deficiency variants (19). Indeed, the explanation for Z α1-antitrypsin has become apparent recently (16). The Z mutation lies at the head of strand 5 of β-sheet A and the base of the reactive center loop, where it opens β-sheet A to allow partial incorporation of its own reactive loop. This fills the upper portion of β-sheet A and consequently retards admission of an exogenous reactive loop 12-mer peptide. However, the lower part of β-sheet A remains patent, and this permits the Z variant to accept either an exogenous loop to form polymers or a 6-mer peptide that is homologous to the P7-P2 residues of the reactive loop (16). The effect of the Leu-100→Phe mutation is likely to be similar to that of Z α1-antitrypsin. Leu-100→Phe retards insertion of the exogenous reactive loop peptide by restricting the opening of the top of β-sheet A. The lower part of the A sheet must remain accessible to account for the accelerated rate of polymerization. Nevertheless, unlike Z α1-antitrypsin, it does not accept the 6-mer peptide, which is likely to be due to the more distal location of the Leu-100→Phe mutation in β-sheet A. Conversely, the Gly-117→Phe mutation almost certainly closes the lower portion of β-sheet A, which explains its slowing effect on polymer formation and leaves the top of the sheet readily available for insertion of an exogenous reactive loop peptide.

Surface cavities contribute to the metastability of α1-antitrypsin that is essential for its inhibitory function (33). Although filling of these cavities increases thermal stability, it is often associated with a loss of inhibitory activity (34-37). Our results show that filling specific surface cavities can stabilize the molecule, attenuate polymerization, and yet still retain inhibitory activity. We have shown that the Z variant of α1-antitrypsin adopts a different conformation from the wild type protein (16). Thus, the Z mutation itself may distort the structure of the hydrophobic pocket that has been selected as a target for drug design. This cannot be assessed in vitro, as Z α1-antitrypsin is too unstable to be prepared as a recombinant protein in E. coli. To overcome this problem, the effect of the mutants on Z α1-antitrypsin was characterized in vivo using the Xenopus oocyte expression system. This system reproduces the way in which hepatocytes handle mutants of α1-antitrypsin (19, 23, 38). The importance of these studies is highlighted by the Gly-117→Phe mutation, which markedly slowed the polymerization of wild type M α1-antitrypsin in vitro without affecting the secretion of Z α1-antitrypsin in vivo. However, the Thr-114→Phe mutant, which also slowed the polymerization of wild type M α1-antitrypsin in vitro, significantly increased the secretion of Z α1-antitrypsin from the Xenopus oocyte expression system. Thus, the cavity is likely to have a different conformation in Z α1-antitrypsin than in the wild type protein, or to have a different structure when glycosylated.
Neither Leu-100→Cys/Cys-232→Ser nor Thr-114→Cys/Cys-232→Ser α1-antitrypsin bound to the glutathione-Sepharose column, implying that the cysteine residues were buried within the cavity. The accessibility of the cavity to small mimetics was assessed by fluorophore binding to Leu-100→Cys/Cys-232→Ser and Thr-114→Cys/Cys-232→Ser α1-antitrypsin. Both of these mutants bound a range of fluorophores, indicating that the cavity was accessible to external agents. The fluorophores had no effect on polymer formation, which implies that more than 30% of molecules must have their cavities filled if a mimetic is to impede polymerization. Taken together, our data show that the conformational change in the hydrophobic cavity bounded by s2A, hD, and hE is important in the polymerization of α1-antitrypsin. Inhibiting polymer formation is an important therapeutic goal, and several approaches have been explored, including chemical chaperones that stabilize protein folding (39, 40) and the use of small peptides that bind to β-sheet A (16). Chaperone and β-strand blockers have also been used in other conformational pathologies such as Alzheimer's and Huntington's disease (41-43). Although the peptide approach is promising, it is problematic, as blocking α1-antitrypsin polymerization through binding to β-sheet A invariably results in the inactivation of α1-antitrypsin. The surface cavity bounded by s2A, hD, and hE is ideal for rational drug design as it is accessible to external agents that can block polymerization without an accompanying loss of inhibitory activity. Thus, inhibition of α1-antitrypsin polymerization within hepatocytes will prevent the liver disease associated with Z α1-antitrypsin. Moreover, an increase in the amount of circulating active α1-antitrypsin may offer a treatment for the associated emphysema.
Platelet-rich plasma and platelet-derived lipid factors induce different and similar gene expression responses for selected genes related to wound healing in rat dermal wound environment

Although platelet-rich plasma (PRP) is the plasma fraction that contains higher levels of platelet-sequestered proteins such as growth factors and chemokines, it is also abundant in bioactive lipids, whose role in wound healing has not been well characterized. This study provides a preliminary evaluation of the effect of the lipid component of PRP on selected genes related to wound healing. Sprague-Dawley rats were classified into four groups after induction of full-thickness excisional wounds: the lipid fraction (LF) group (lipid extract from PRP), the PRP group, the dimethyl sulfoxide (DMSO) group, and the sham group. Subsequently, the relevant groups were topically treated with the test preparations. Healing wounds were collected on the 3rd, 7th, and 14th days, and the expression levels of 12 genes were determined using qPCR. LF treatment induced a gene expression signature distinct from that induced by PRP treatment, although there is some overlap between the LF- and PRP-responsive genes. All eight genes differentially expressed in response to LF (Cxcl15, Cxcl11, Egfr, Tgfb1, IL10, Tgfa, Mmp1, and Mmp7) were significantly down-regulated on the 3rd, 7th, or 14th day. Also, the comparison between the LF- and PRP-treatment groups showed that the LF significantly decreased the expression of Cxcl11, Mmp7, and Tgfa mRNA on day 7 of healing. This study revealed that PRP and its LF induced different and similar gene expression responses of the skin during the repair of full-thickness excisional wounds. Identifying the mRNA response to LF treatment at the whole-transcriptome level could provide a more comprehensive understanding of the role of platelet-derived lipid factors in wound healing processes.

INTRODUCTION

Reports of experiments with various designs support the use of platelet-rich plasma (PRP) to enhance wound healing [1-3]. Its beneficial effects have been solely attributed to platelet-derived growth and bactericidal factors [4]. Platelets, as the main component of PRP, contain more than 1100 different proteins, which can participate in tissue repair and wound healing. Furthermore, stimulated platelets are characterized by a highly active lipid metabolism as well as enzymatic systems producing various bioactive lipids [5,6]. There is mounting evidence demonstrating that some bioactive lipids in platelets play important roles in skin health as components of structural lipids, precursors of bioactive mediators, signalling molecules, and regulators of gene expression [6-8]. Based on the observations that the chronic wound microenvironment involves increased levels of several proteinases, which could have deleterious effects on the ability of various peptide growth factors to function within this environment, and that lipids are very resistant to hydrolytic enzymes [9,10], it has been proposed that the beneficial effect of PRP on wound healing may derive from its lipid component [11]. Hoeferlin et al. [11] tested this in an in vitro model and demonstrated a direct role for the peptide-free lipid fraction (LF) of PRP in biological mechanisms related to wound healing. Our previous in vivo study also showed that the lipid component of PRP enhanced the healing capacity of skin wounds through positive effects, although not as much as PRP [12].
Healing of damaged tissue is a very complex process in which networks of cellular and biochemical interactions take place. Today, we know that nearly 100 genes are highly regulated in the dermal wound microenvironment following wound damage [13,14]. Interest in studies of differentially expressed mRNAs in healing-impaired wounds has also increased in recent years, because it is hoped that such studies may provide important clues for understanding the molecular mechanisms that control wound repair [15-18]. Given our previous histological findings showing that the lipid fraction has a different wound healing capacity compared to PRP, it is natural to expect that the modulation of gene expression by LF may differ from that by PRP in the wound microenvironment. However, its role in modulating gene expression in the skin wound environment during healing has not been evaluated. Therefore, this study aims to evaluate the effect of LF on the expression of wound healing genes in a rat model with a full-thickness skin defect by analysing the expression of 12 genes selected from previous studies related to PRP treatment.

MATERIALS AND METHODS

Experimental Protocol and Wound Creation: This experimental study was carried out with 20 adult female Sprague-Dawley rats (weighing 200-240 g). All animals were kept under the same environmental conditions, i.e., at a room temperature of 21-24°C with an artificial light cycle (lights on 08:00-24:00 h), and were left for one week for adaptation. The study was carried out with the approval of the Bezmialem Vakıf University Experimental Animal Studies Local Ethics Committee, Istanbul, Turkey (no. 110/2017). Before the experimental procedures, the rats were anesthetized with ketamine (50-100 mg/kg) and xylazine (10 mg/kg) by intraperitoneal injection. The dorsal skin of the animals was shaved and disinfected using 70% ethanol. Then, full-thickness, equidistant, 12-mm-diameter dorsal skin excisions were created. The animals were divided into 4 groups (5 rats per group) as follows: the PRP group (PRP-treated), the LF group (LF-treated), the DMSO group (DMSO-applied, serving as the control for the LF-treated group), and the sham group (not treated with any agent). In addition to the experimental groups, 8 rats were used to obtain the PRP and LF samples.

Collection of Blood Samples: To collect blood, the rat's chest was opened surgically and blood (5-7 mL) was collected via cardiac puncture using an 18-20 G needle. The blood samples were placed in tubes containing 3.2% sodium citrate.

Preparation of PRP: The upper layer of blood (PRP) obtained after centrifugation (400 × g for 10 minutes) was transferred to a tube. The PRP sample was then centrifuged at 800 × g for 10 minutes, and platelet-poor plasma (PPP) was removed from the upper layer. Platelet counts were determined using a Cell-DYN C1600 (Abbott Pharmaceutical Co., Ltd., Lake Bluff, IL, USA), and PPP was added back to each sample to give 1 × 10^6 platelets/µL.

Extraction of LF from PRP: Some of the PRP samples (1 × 10^6 platelets/µL) were used to obtain the LF. After activation with 1 U human thrombin and 10 mM CaCl2, the samples were incubated at 37°C for 30 minutes. After incubation, the mixture was centrifuged (1200 × g for 15 minutes), and its supernatant was removed and mixed with absolute ethanol at a 1:5 ratio, followed by stirring with agitation until homogeneous. The supernatant obtained after a further centrifugation of the mixture (12,000 × g for 20 minutes at 4°C) was then collected and evaporated to dryness.
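In the PRP preparation above, the PPP back-dilution is simple conservation of platelet number; a minimal sketch of the volume calculation is shown below (the starting volume and count are illustrative placeholders, not measurements from the study).

```python
def ppp_volume_to_add(v_prp_ml, count_per_ul, target_per_ul=1e6):
    """Volume of platelet-poor plasma (mL) to add so that the platelet
    count falls to the target; platelets are conserved, so
    C1 * V1 = C2 * (V1 + V_add)."""
    if count_per_ul <= target_per_ul:
        return 0.0  # already at or below the target; no dilution needed
    return v_prp_ml * (count_per_ul / target_per_ul - 1.0)

# e.g. 1.5 mL of PRP counted at 1.35e6 platelets/uL (illustrative numbers):
print(f"add {ppp_volume_to_add(1.5, 1.35e6):.2f} mL of PPP")
```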
The dried lipids, dissolved in 25% DMSO, were adjusted by UV spectrophotometry at 208 nm so that the total amount of lipid was at an equal concentration in all samples. Prepared samples were kept at 4°C.

Treatment and Biopsy Excision: The wounds were treated with an equal volume (50 µL) of PRP, LF, or DMSO on days 0 (the day of wound creation), 3, and 7 after wounding, and were left open and undressed. Biopsy samples were taken from the wounds on days 3, 7, and 14, cleaned with isotonic NaCl solution, and stored at -80°C.

Tissue Handling, RNA Manipulation, and Real-Time Quantitative PCR (RT-qPCR): Total RNA was extracted from the biopsy samples using a Direct-zol RNA MiniPrep kit (Zymo Research Corporation, Irvine, CA, USA). The quantity and purity of the RNA aliquots were assessed from the A260/A280 and A260/A230 ratios on a NanoDrop 2000c UV-Vis spectrophotometer (Wilmington, USA). cDNA was reverse-transcribed from the RNA using a High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, CA, USA) following the supplied protocol. The expression levels of the 12 selected RNAs were determined using an RT-qPCR detection system (Bio-Rad Laboratories, USA). RT-qPCR was conducted using Master Mixes (Qiagen, Hilden, Germany) according to the manufacturer's protocols. Amplification conditions for all reactions included an initial denaturation at 95°C for 10 min, followed by 40 cycles of denaturation at 95°C for 15 s, annealing at 58°C for 20 s, and extension at 72°C for 20 s (see Table 1 for primer sequences). The GAPDH and Actb housekeeping genes were used as internal controls. The differences in ΔCt values for each gene between the rat groups were analysed using SPSS software, version 18.0 (SPSS Inc., Chicago, IL, USA), with Mann-Whitney U and Kruskal-Wallis tests. Statistical significance was set at p < 0.0042 to compensate for multiple-testing error (0.05/12, the number of analysed genes), and a fold-change (FC) difference was accepted when it was at least fivefold on the log2 scale, i.e., FC < 2^-5 = 0.031 or FC > 2^5 = 32.

RESULTS

In this study, mRNA expression changes of 12 genes selected from previous studies related to PRP treatment were determined in the LF- and PRP-treated groups at 3, 7, and 14 days after wounding, and were compared between the treatments and with their control groups (DMSO or sham). On day 3 after wound damage, the fold-regulation analysis indicated that 5 mRNAs in both LF-treated and PRP-treated wounds were differentially expressed above the fivefold (log2) level in comparison to control wounds. However, the gene expression profile was not the same in the two treatment groups. All of the genes displaying at least a fivefold difference at 3 days in the LF-treated group (Cxcl11, Cxcl15, Egfr, IL10, and Tgfb1) were down-regulated, while in PRP-treated wounds 3 genes (Angpt1, Col1a1, and Col3a1) were upregulated and 2 genes (Cxcl15 and Egfr) were downregulated. Comparison of the ΔCt values measured for the treatment and control groups showed that all differentially expressed genes in both treatment groups were significantly different at p < 0.0042 (with multiple-testing correction) (Table 2). When the mRNA expression changes were assessed between the LF- and PRP-treatment groups on day 3 after wounding, no significant differences were observed between the two groups (Table 3, Fig. 1, Fig. 2).
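As a concrete illustration of the screening rule described in the methods above, the sketch below applies the 2^-ΔΔCt fold-change calculation, the fivefold (log2) cutoff, and the Bonferroni-corrected Mann-Whitney test to placeholder Ct values; the numbers are illustrative, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

ALPHA = 0.05 / 12      # Bonferroni correction for the 12 genes analysed
LOG2_CUTOFF = 5        # |log2 FC| >= 5, i.e. FC < 0.031 or FC > 32

def delta_ct(ct_gene, ct_reference):
    """Normalize a gene's Ct to the housekeeping reference."""
    return np.asarray(ct_gene) - np.asarray(ct_reference)

# Illustrative Ct values for one gene in treated vs. control wounds:
dct_treated = delta_ct([9.1, 9.4, 8.8, 9.6, 9.0, 9.2],
                       [1.2, 1.1, 1.0, 1.3, 1.2, 1.1])
dct_control = delta_ct([3.5, 3.9, 3.6, 3.8, 3.4, 3.7],
                       [1.1, 1.2, 1.0, 1.1, 1.3, 1.2])

ddct = dct_treated.mean() - dct_control.mean()
fold_change = 2.0 ** (-ddct)
_, p_value = mannwhitneyu(dct_treated, dct_control, alternative="two-sided")

is_hit = abs(np.log2(fold_change)) >= LOG2_CUTOFF and p_value < ALPHA
print(f"FC = {fold_change:.3f}, p = {p_value:.4f}, hit: {is_hit}")
```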
Fold-change values (FC = 2^-ΔΔCt) were obtained from triplicate measurements; p values were determined using the ΔCt values of both groups. Statistical significance was accepted for a fivefold (log2) FC (FC < 2^-5 = 0.031 or FC > 2^5 = 32) and p < 0.0042. Up- and downregulated genes are indicated in bold; statistically significant p values are presented in bold italic.

On the 7th day after wounding, 9 mRNAs in the LF-treated group (Ctgf, Cxcl11, Cxcl15, Egfr, IL10, Mmp1, Mmp7, Tgfa, and Tgfb1) and 5 mRNAs in the PRP-treated group (Angpt1, Col1a1, Cxcl15, Egfr, and Tgfb1) were differentially expressed 5-fold or more in comparison to control wounds. All of the differentially expressed genes were downregulated in the LF-treated group, while 2 of the 5 mRNAs (Angpt1 and Col1a1) were downregulated in the PRP-treated group. Comparison of the ΔCt values obtained on the 7th day indicated that all differentially expressed genes passed the Bonferroni-corrected significance threshold of p < 0.0042 (Table 2). On the other hand, when the LF- and PRP-treatment groups were compared with each other, the fold-change values of 4 mRNAs (Cxcl11, Mmp1, Mmp7, and Tgfa) were downregulated and only one mRNA (Egfr) was upregulated (FC < 0.031 or FC > 32). However, the downregulated Mmp1 and upregulated Egfr genes did not reach the p < 0.0042 level on comparison of the ΔCt values (Table 3, Fig. 1, Fig. 2).

On the 14th day after wounding, although the same number of genes was differentially expressed in the wounds of both treatment groups compared to their control wounds, the gene expression profiles were not the same. The fold-regulation analysis showed that 9 mRNAs in both LF-treated and PRP-treated wounds were differentially expressed at a level greater than 5-fold in comparison to control wounds. All 9 of these mRNAs (Ctgf, Cxcl11, Cxcl15, Egfr, IL10, Mmp1, Mmp7, Tgfa, and Tgfb1) were downregulated in the LF-treated group, while one mRNA (Col1a1) was upregulated and 8 mRNAs (Cxcl11, Cxcl15, Egfr, IL10, Mmp1, Mmp7, Tgfa, and Tgfb1) were downregulated in the PRP-treated group (FC < 0.031 or FC > 32, and p < 0.0042). The downregulated Mmp1 did not change significantly in the PRP-treated group (p > 0.0042) (Table 2). When compared with each other, there were no significant differences between the LF- and PRP-treatment groups on the 14th day after wounding (Table 3, Fig. 1, Fig. 2).

DISCUSSION

The present study provides the first report of a preliminary comparative evaluation, against PRP, of the effect of the lipid component on the expression of 12 selected genes. Not surprisingly, using stringent criteria (a 5-fold log2 cutoff and p < 0.0042), our data revealed that the LF and PRP treatments induced distinct and overlapping expression patterns for the evaluated genes in the wound microenvironment over a period of 14 days. Interestingly, all of the genes differentially expressed in response to the LF treatment were significantly downregulated at either 3, 7, or 14 days. In addition, the significantly overlapping LF- and PRP-responsive genes were also downregulated by both treatments (Fig. 1).
On day 3 after wound damage, three downregulated genes (Cxcl11, IL10, and Tgfb1) were identified as LF-responsive genes. In addition, one of the remarkable findings at 3 days was that Col1a1, Col3a1, and Angpt1 mRNA levels were upregulated in the wounds exposed to PRP, but not in those given the LF treatment. These results, relating to the early stages of wound healing, show that the injury environment exposed to the LF had reduced expression of IL10, known to be an anti-inflammatory cytokine, and TGF-beta, a growth factor involved in various stages of wound healing. In the normal healing process, the expression of Cxcl11 (an angiostatic chemokine) is quiescent or low on the third day of skin healing, which comprises the inflammatory phase in rats [20]. The most striking alterations in differentially expressed genes with the LF treatment during excisional wound repair were observed at day 7. On this day, corresponding to the inflammation and cell proliferation phases, 9 out of a total of 12 genes were downregulated in the LF-treatment group, and 3 of these were also found to be downregulated in the PRP group, meaning that the Tgfa, Ctgf, Mmp1, Mmp7, IL10, and Cxcl11 genes were differentially expressed specifically in response to the LF treatment. The presence of matrix metalloproteinases among the LF-responsive genes suggests that LF-induced downregulation may not be limited to the suppression of growth factor- and cytokine-related genes (Figure 1B). We also found that the LF significantly decreased the expression of Cxcl11, Mmp7, and Tgfa mRNAs compared with PRP-treated wounds. On day 14 post wounding, two genes (Ctgf and Mmp1) were identified as responsive to the LF treatment. Therefore, the gene expression changes observed after treatment with the two different preparations may indicate that they modulate common and distinct pathways and mechanisms in mediating healing during the repair of full-thickness excisional wounds. Basically, the dermal wound microenvironment may exhibit different gene expression responses to the specific combination of growth factors with bioactive lipids than to bioactive lipids alone. Since our previous histological findings with the same experimental setup showed that the lipid component has a lower healing capacity, we may raise the question of whether the downregulation of these associated genes by LF has an impact on wound healing. Simply, taking into account the generally accepted functions of the associated genes in the various stages of the wound healing process, together with our results on the downregulation of several transcripts, it can be expected that topically applied LF may have a more negative effect on wound healing than PRP treatment by inhibiting several genes critical for healing. One possible explanation is simply that the cellular components and growth factor content of PRP may exert a better effect on wound healing through synergistic promoting effects on gene expression. In addition, the presence of high levels of certain lipids in LF may also have a negative effect on wound healing compared with PRP by differentially modulating or inhibiting the expression of the genes associated with healing, since Hoeferlin et al. proposed that it may also impair the normal progression of wound repair by manipulating different dynamic processes of healing [11]. Therefore, further research is needed to explain the observed gene expression differences between platelet-derived lipid factor- and PRP-treated wounds.
In conclusion, this study showed that PRP and the lipid component of PRP induced both distinct and overlapping expression patterns for the evaluated genes in the skin wound environment, suggesting that they could modulate differential and common pathways and mechanisms in mediating wound healing.
Experimental Investigation of the Impact of CO, C2H6, and H2 on the Explosion Characteristics of CH4

Gas explosions are destructive disasters in coal mines. Coal mine gas is a multi-component gas mixture, with methane (CH4) being the dominant constituent. Understanding the process and mechanism of mine gas explosions is of critical importance to the safety of mining operations. In this work, three flammable gases (CO, C2H6, and H2) which are commonly present in coal mines were selected to explore how they affect a methane explosion. The explosion characteristics of the flammable gases were investigated in a 20 L spherical closed vessel. Experiments on binary (CH4/CO, CH4/C2H6, and CH4/H2) and multicomponent (CH4/CO/C2H6/H2) mixtures indicated that the explosion of such mixtures is more dangerous and destructive than that of methane alone in air, as measured by the explosion pressure. Furthermore, a self-promoting microcirculation reaction network is proposed to help analyze the chemical reactions involved in the multicomponent (CH4/CO/C2H6/H2) gas explosion. This work will contribute to a better understanding of the explosion mechanism of gas mixtures in coal mines and provide a useful reference for determining safety limits in practice.

INTRODUCTION

Gas explosions are ruinous disasters in coal mines.1-4 Explosions of fuel-air mixtures are characterized by specific parameters, such as the explosive limits, maximum explosion pressure, maximum rate of pressure rise, and time to reach the maximum explosion pressure. These parameters reflect explosion intensity and destructiveness. A number of experimental studies on methane (CH4) explosion characteristics can be found in the literature of the last decades.5-9 Coward and Jones10 and Zabetakis11 investigated the flammability characteristics of combustible gases and vapors under a variety of environmental conditions. It is worth noting that the experimental results depend on certain factors of the investigated process, such as the size and type of the explosion chamber, the energy and type of the ignition source, the initial pressure and temperature, and the flammable mixture flow.12,13 Explosion characteristic tests of flammable gases were conducted from the mid-1980s through the 1990s at the Pittsburgh Research Laboratory (PRL) in chambers of different volumes (8, 20, 25, 120, and 500 L).14-17 The experimental data reported include the lower explosive limit (LEL), the upper explosive limit (UEL), peak explosion pressures, and the maximum rate of pressure rise. The tests were performed at ambient temperature and pressure under both quiescent and turbulent conditions. Initial temperature and pressure have a tremendous effect on explosion parameters.18-22 The experimental results show that the explosive limits of methane/natural gas can be significantly extended at high temperatures and high pressures, and the UEL is more sensitive than the LEL as pressure and temperature increase.23 The peak explosion pressure is slightly reduced at high-temperature conditions and gradually increases with the initial pressure.23,24 Moreover, many scholars have investigated the influence of initial ignition energy and initial turbulence on the explosion behavior of methane/air mixtures.16,17,25,26 These studies proved that the level of initial ignition energy significantly impacts the flame and explosion characteristics and also extends the explosive limits of methane.27 Gas flow turbulence increases the maximum explosion pressure and burning velocity.28
Because of the great catastrophes caused by coal dust explosions, many researchers have made great efforts to explore the mechanism of CH4/coal dust mixtures in recent years. These researchers found that the presence of coal dust with methane not only increases the explosion pressure but also accelerates the time of the explosion.29 The explosion risk of a hybrid CH4/coal dust mixture is much higher than that of a pure coal dust explosion.30 Furthermore, coal mine gas is a multicomponent gas mixture, with methane being the dominant constituent. However, most reported studies treated the mine gas as pure methane without considering carbon monoxide, hydrocarbons, hydrogen, and other flammable gases; as a consequence, the results may significantly deviate from the reality in coal mines. Hydrogen (H2), although appearing in small amounts in coal mine gas, has a wide explosion range with a low minimum ignition energy, thus posing a high explosion risk.31-34 Several studies have reported the effects of H2 addition on the explosion characteristics of hydrocarbon fuel streams or natural gas, in particular an increase in the laminar burning velocity35-38 and also a decrease in the laminar flame thickness.36 Jackson et al.39 carried out a combined experimental and numerical investigation of the effects of H2 addition to lean premixed CH4 flames. The results indicate a significant enhancement of the lean flammability limits for relatively small amounts of H2. The effects of hydrogen concentration on spherically propagating laminar hydrogen/methane/air flames were studied at different equivalence ratios at atmospheric pressure by Okafor et al.40 The results showed that an increase in hydrogen concentration in the binary fuel led to an increase in the laminar burning velocity. Yu et al.41 investigated the effects of hydrogen addition on the propagation characteristics of methane/air premixed flames at different equivalence ratios in a venting duct. The results indicated that the tendency toward flame instability increased with the fraction of hydrogen, and the premixed hydrogen/methane flame underwent a complex shape change with increasing hydrogen fraction. Using a standard 20 L spherical explosion vessel, the explosion characteristics of H2/CH4/air and CH4/coal dust/air mixtures were investigated by Li et al.42 The results showed that the presence of molecular hydrogen significantly increases the maximum explosion pressure and the rate of pressure rise of H2/CH4/air mixtures. Some research has been performed on the explosion characteristics of binary mixtures such as CO/CH4, C2H4/CH4, and C2H6/CH4,43-45 or of ternary systems such as H2/CH4/CO in the context of the nitrogenous fertilizer industry.46 Experimental results revealed that the flammability limits of a gas mixture are related to many factors, such as the inherent properties of the flammable gases, the state of the mixture (temperature, pressure, and composition), the ignition energy, the size and geometry of the container, and the flame spread direction. Zheng et al.47 proposed a BP neural network model to predict the minimum and maximum explosive limits of a flammable gas mixture containing H2, CH4, and CO based on experimental data. It is worth noting that the flammable gas mixtures investigated belong to rich-H2/lean-CH4 fuels, which are quite different from coal mine gas.
With respect to the explosion characteristics of coal mine gas, existing results are still largely based on single- or binary-component flammable gases. Beyond binary mixtures, the few studies that exist were conducted with gas compositions that are of interest to the chemical industry but not necessarily relevant to coal mines. In coal mines, the composition of the flammable gases varies according to the conditions in the coal mine. Besides CH4, CO is the main product of low-temperature oxidation of coal, just like C2H6, C3H8, C2H4, and other hydrocarbon gases, or the product of degradation and cracking of coal after the temperature reaches a certain threshold.48,49 C2H6 is a gaseous hydrocarbon produced during the low-temperature oxidation of coal and accounts for a large proportion, especially in the coalbed gas of kerosene-symbiotic mines.49 H2 is mainly the product of degradation and cracking of coal after the temperature reaches the threshold value.49 The H2 content is generally small, but H2 poses the highest explosion risk among coal mine gases. Therefore, the content and composition of the flammable gases are not fixed but vary with time and location. As mentioned above, the explosion characteristics of multicomponent flammable gases including CH4, CO, C2H6, and H2 have been reported in the literature. However, existing studies are either focused on binary mixtures, for example, CH4/C2H4,43 CH4/CO,43,44 and CH4/C2H6,45 or the mixture composition is very different from that of typical gases in coal mines; for example, Zheng et al.47 studied a CH4/H2/CO mixture relating to rich-H2/lean-CH4 fuel. Against this background, the purpose of the present study is to help bridge this knowledge gap by experimentally investigating the effect of multicomponent flammable gases on the explosion characteristics of methane in the context of coal mines. Therefore, three flammable gases (CO, C2H6, and H2) were selected and their individual explosion properties were studied, prior to the exploration of the binary (CH4/CO, CH4/C2H6, and CH4/H2) and multicomponent (CH4/CO/C2H6/H2) mixtures. The novelty of our work thus concerns the explosion characteristics of CH4/CO/C2H6/H2 mixtures, with compositions relevant to the situations in coal mines. Quantitative results on the explosive limits and pressures of such mixtures have not been reported before. Furthermore, a self-promoting microcirculation reaction network is proposed to help analyze the chemical reactions involved in the multicomponent (CH4/CO/C2H6/H2) gas explosion. The reaction network suggests that chain initiation and chain-branching reactions in CH4/CO/C2H6/H2/air mixtures could happen more easily and faster than in CH4/air mixtures, which could support further study of the explosion reaction mechanism of multicomponent flammable gases.

RESULTS AND DISCUSSION

2.1. Parameters for Assessing Explosion Intensity. The explosion experiments at particular gas concentrations generate pressure-time curves, which normally show the pressure increasing to a peak value before it falls. This peak pressure is designated the maximum explosion pressure (Pmax). Pmax is specific to the concentration of the flammable gas, and thus we can obtain the extreme explosion pressure (Max Pmax), that is, Max Pmax = Max{Pmax,1, Pmax,2, Pmax,3, ..., Pmax,n}, where 1, 2, 3, ..., n index the different gas concentrations investigated in the experiments. The corresponding gas concentration at Max Pmax is the most dangerous concentration (Cm), which is normally slightly greater than the stoichiometric concentration for the gas-air reaction. Another measure used is the explosion risk degree (F), proposed by Kondou and Rock:50 F = 1 - (L/U)^0.5, where U and L are the UEL and LEL, respectively, of a particular flammable gas.
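To make the definitions above concrete, a minimal sketch of extracting Pmax from recorded pressure-time traces and taking Max Pmax over a concentration sweep might look like the following; the trace values and concentrations are placeholders, not measurements from this study.

```python
import numpy as np

def p_max(trace_mpa):
    """Peak explosion pressure of one pressure-time trace."""
    return float(np.max(trace_mpa))

# Hypothetical traces keyed by CH4 concentration (vol %), pressures in MPa:
traces = {
    8.0:  np.array([0.10, 0.35, 0.62, 0.71, 0.55]),
    10.0: np.array([0.10, 0.41, 0.74, 0.83, 0.60]),
    12.0: np.array([0.10, 0.33, 0.58, 0.66, 0.50]),
}

pmax_by_conc = {c: p_max(t) for c, t in traces.items()}
max_pmax = max(pmax_by_conc.values())
c_m = max(pmax_by_conc, key=pmax_by_conc.get)  # most dangerous concentration
print(f"Max Pmax = {max_pmax:.2f} MPa at Cm = {c_m}% CH4")
```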
2.2. Explosive Limits of CH4, CO, C2H6, and H2. The explosive limits of CH4, CO, C2H6, and H2 were measured and are given in Table 1, which also shows that the explosion risk degree of CO, C2H6, and H2 is higher than that of methane. Therefore, it is necessary and significant to study how the presence of a small amount of CO, C2H6, or H2 affects the explosion characteristics of methane in the context of coal mines.

2.3. Impact of CO, C2H6, and H2 on Methane Explosion Characteristics. In typical coal mine gases, the CO concentration is no more than 3.0% and the C2H6 and H2 concentrations are less than 2.0%. Therefore, CO at four different concentrations (0.5, 1.0, 2.0, and 3.0%) and C2H6 and H2 at four different concentrations (0.5, 1.0, 1.5, and 2.0%) were added to CH4/air mixtures for experimentation. The explosive limits of flammable gases have been well investigated by Coward, Hughes and Raybould, and Zabetakis.5,10,11 Le Chatelier's formula is widely used to determine the explosive limits of flammable gas mixtures. For hydrocarbon-air mixtures, the prediction of Le Chatelier's formula is relatively accurate, but for gas mixtures containing H2 or CO it does not fit.10,43,47,51,52 The explosive limits of the binary mixed gases CH4 + CO and CH4 + C2H4 were determined by Deng et al.43 The results show that there is a certain gap between the values calculated by Le Chatelier's formula and the experimental data. Furthermore, the UELs of binary fuel mixtures of hydrogen with methane, ethylene, and propane in air were determined experimentally at elevated temperatures by Wierzba and Ale.52 It was shown that the experimental limits of hydrogen-methane mixtures deviate slightly from those calculated using Le Chatelier's rule, and that the UELs of hydrogen-ethylene mixtures deviate significantly from those calculated using Le Chatelier's rule, over the range of temperatures tested and at a residence time of 10 min. It was suggested that the narrowing of the UEL is due to surface reactions on the stainless steel wall during the waiting time, which tend to change the mixture composition just prior to spark ignition. The explosive limits of these mixtures exposed to longer residence times do not obey Le Chatelier's rule. Hence, the explosive limits of CH4/CO, CH4/C2H6, and CH4/H2 were measured. The impacts of CO, C2H6, and H2 on the explosive limits of methane are presented in Table 2. The experimental data show that the addition of CO, C2H6, or H2 tends to reduce the LEL of CH4 to varying degrees. This is especially the case for C2H6: when the added C2H6 reaches 2.0%, the LEL of CH4 is significantly reduced from 5.05 to 2.25%. Adding CO or C2H6 decreases the UEL of CH4, whereas adding H2 first increases the UEL of CH4 before reducing it. Tables 3-5 and Figures 1-3 illustrate the impact of CO, C2H6, and H2 on Pmax for different CH4 concentrations, whereas the results on Max Pmax and Cm are shown in Table 6. It should be noted that the Cm of pure CH4 is ~10%.
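For reference, the sketch below implements Le Chatelier's mixing rule discussed above, together with the Kondou-Rock risk degree F. The single-gas limits used are common textbook values at ambient conditions, not the values measured in this study's Table 1.

```python
def le_chatelier(fractions, limits):
    """Mixture explosive limit (vol %) from Le Chatelier's rule:
    L_mix = 100 / sum(y_i / L_i), with y_i the fuel-basis percentages
    (summing to 100) and L_i the corresponding single-gas limits."""
    return 100.0 / sum(y / l for y, l in zip(fractions, limits))

def risk_degree(lel, uel):
    """Explosion risk degree F = 1 - (L/U)**0.5 (Kondou and Rock)."""
    return 1.0 - (lel / uel) ** 0.5

# Textbook ambient limits (vol %), used here only as illustrative inputs:
LEL = {"CH4": 5.0, "CO": 12.5, "C2H6": 3.0, "H2": 4.0}
UEL = {"CH4": 15.0, "CO": 74.0, "C2H6": 12.4, "H2": 75.0}

for gas in LEL:
    print(f"{gas}: F = {risk_degree(LEL[gas], UEL[gas]):.2f}")

# A 90/10 CH4/H2 fuel blend (fuel-basis percentages):
lel_mix = le_chatelier([90.0, 10.0], [LEL["CH4"], LEL["H2"]])
print(f"Le Chatelier LEL of 90/10 CH4/H2: {lel_mix:.2f} vol %")
```

With these inputs, F comes out higher for CO, C2H6, and H2 than for CH4, consistent with the qualitative claim in the text.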
After adding CO, C2H6, or H2, the Cm value of CH4 moves toward the LEL to different degrees as the amount of the added gas increases; this effect is particularly pronounced for C2H6. With the same amount of the other gas added, the impact on Cm of the binary-component mixtures differs between the three gases, with C2H6 having the strongest effect (Table 6). The Pmax value of the binary mixtures (CH4/CO, CH4/C2H6, CH4/H2) is higher than that of pure CH4 when the concentration of CH4 is between the LEL and Cm; on the other hand, when the CH4 concentration is between Cm and the UEL, Pmax of the binary mixtures is lower than that of pure CH4. Furthermore, after adding CO, C2H6, or H2, the Max Pmax value of the binary-component mixtures is higher than that of pure CH4, and increasing the amount of the added CO, C2H6, or H2 leads to a more significant rise of Max Pmax; the magnitude of this effect also differs between the three gases (Table 6). In summary, the explosion intensity and destructive power of the binary-component mixtures (CH4/CO, CH4/C2H6, and CH4/H2) are significantly higher than those of pure CH4.

Normally, explosions in low-gaseous mines are oxygen-rich explosions, and 5.0% has been viewed as the critical value for the CH4 concentration, as it is approximately the LEL. However, our experimental results show that the LEL of CH4 is significantly reduced in the presence of CO, C2H6, or H2, exposing an increased risk even at a low CH4 concentration. Meanwhile, the most dangerous concentration (Cm) of CH4, which produces the highest explosion overpressure, is also reduced in the presence of CO, C2H6, or H2. Therefore, the effects of CO, C2H6, and H2 on the explosive limits and Cm of CH4 must be fully considered, and the alarm threshold for CH4 needs to be lowered accordingly in coal mine gas monitoring and early alarm systems.

2.4. Impact of CO/C2H6/H2 Mixtures on Methane Explosion Characteristics. Usually CO, C2H6, H2, and CH4 coexist in coal mines. To gain further understanding of the explosion behavior of multicomponent flammable gases, CO, C2H6, and H2 were added to CH4/air mixtures at four different ratios, as given in Table 7. The results are shown in Tables 7 and 8 and Figure 4. When CO, C2H6, and H2 were added together to CH4/air mixtures, both the LEL and UEL of CH4 decreased. Note that the Cm value of pure CH4 is approximately 10%; this Cm of CH4 shifts toward the LEL with increasing amounts of the added CO/C2H6/H2. With regard to the maximum explosion pressure, when the CH4 concentration is between the LEL and 9.0%, Pmax of the CH4/CO/C2H6/H2 mixtures is higher than that of pure CH4, whereas for CH4 concentrations between 11.0% and the UEL, Pmax of the CH4/CO/C2H6/H2 mixtures is lower than that of pure CH4. Max Pmax of the mixtures likewise rises as the amount of added CO/C2H6/H2 increases.

The reaction mechanism of multicomponent flammable gases in coal mines is complicated. Even a slight change in the concentration of the other flammable gases (CO, C2H6, and H2) can make a significant impact on the explosion characteristics of CH4. Moreover, if a gas explosion occurs, a certain amount of H2, CO, and other flammable gases may be produced. This is likely to trigger a second explosion, which would be more dangerous than the first one.53
Therefore, the likelihood and risk of a second gas explosion should be fully assessed in the emergency rescue system, in particular with regard to small changes in the other flammable gases. Furthermore, the explosion characteristic parameters of multicomponent flammable gases, such as the explosive limits, the maximum explosion pressure, and Cm, may not be obtained by simply superimposing the values from single- or binary-component flammable gases.

Figure 1. Impact of CO addition on Pmax for different concentrations of CH4. Figure 2. Impact of C2H6 addition on Pmax for different concentrations of CH4.

2.5. Theoretical Analysis of the Impact of CO, C2H6, and H2 on the Explosion Characteristics of CH4. From the perspective of chemical reaction kinetics, we propose a self-promoting microcirculation reaction network for the multicomponent flammable gases (Figure 5). Reaction 1 would be the primary chain initiation reaction in the initial reaction period, as reaction 1 is more easily triggered than reaction 2 according to the reaction activation energies.54 As the reaction progresses, the temperature rises, which makes reaction 2 the main chain initiation reaction. Reactions 3 and 4 are the main chain-branching reactions, and reactions 5-11 are the main elementary reactions in the multicomponent flammable gas reaction system. Either reaction 1 or reaction 2 provides H• radicals, which develop a radical pool of OH•, O•, and H• through the chain reactions 3 and 4. Reactions 1-4 are of great importance in the oxidation mechanisms of hydrocarbons in that they provide the essential chain-branching and propagating steps as well as the radical pool for fast reactions. Moreover, reactions 5-11 produce new H2, H•, and OH• radicals, which further accelerate the rates of reactions 1-11. Thus, as illustrated in Figure 5, the above reactions may lead to a self-promoting microcirculation system and a positive feedback mechanism. As the reaction progresses, the reaction rate, the heat release, and the pressure increase constantly until the explosion pressure reaches its maximum value. It is important to realize that any high-temperature hydrocarbon mechanism involves H2 and CO oxidation kinetics and that most, if not all, of the CO2 that is formed results from reaction 9. However, experimental evidence indicates that the oxidation of CO to CO2 comes late in the reaction scheme7 because reaction 9 is slower than reaction 6 or 11. Hence, the chain initiation of H2 can produce highly active H• and OH• radicals, and CO may be mainly involved in the later stages of the CH4 reaction, which further accelerates the consumption of the main reactant CH4. In reaction 2, M is the usual third body. CO may participate in molecular collisions as the usual third body to help produce H• radicals. Therefore, CO may increase the collision frequency and make the chain initiation reaction of CH4/CO/C2H6/H2/air mixtures easier than that of CH4/air mixtures. It is worth noting that the chain initiation reaction of methane/air mixtures is difficult and slow. However, in the presence of OH, O, and H radicals, the reactions 5-7 that involve methane are all fast. Furthermore, the positive feedback mechanism of the self-promoting microcirculation may make the reaction rate of CH4/CO/C2H6/H2/air mixtures faster than that of CH4/air mixtures, resulting in more heat release and a higher explosion pressure.
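To illustrate why the lower-barrier initiation channel dominates early and loses ground as the temperature rises, the sketch below compares two Arrhenius rate constants, k(T) = A·exp(-Ea/RT). The A and Ea values are deliberately generic placeholders, not evaluated parameters for reactions 1 and 2.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius(a, ea_kj, t_kelvin):
    """Arrhenius rate constant k = A * exp(-Ea / (R T))."""
    return a * math.exp(-ea_kj * 1000.0 / (R * t_kelvin))

# Generic placeholder parameters: channel 1 has the lower barrier but a
# smaller pre-exponential factor; channel 2 has the higher barrier.
A1, EA1 = 1.0e10, 120.0   # "reaction 1"-like channel (illustrative)
A2, EA2 = 1.0e13, 200.0   # "reaction 2"-like channel (illustrative)

for t in (900.0, 1500.0, 2100.0):
    k1, k2 = arrhenius(A1, EA1, t), arrhenius(A2, EA2, t)
    print(f"T = {t:6.0f} K: k1/k2 = {k1 / k2:.3g}")
```

With these placeholder numbers, the low-barrier channel is faster by orders of magnitude at 900 K but is overtaken well before 2100 K, mirroring the hand-off from reaction 1 to reaction 2 described above.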
C2H6 oxidizes much more slowly than hydrogen, and very small quantities of hydrogen will increase the rate of CO oxidation substantially.7 Therefore, the influence of C2H6 on the explosion process is likely smaller than that of CO and H2. Chain initiation and chain-branching reactions initiated by H2 in multicomponent flammable gas mixtures are easier and faster than those in the methane/air mixture. CO and C2H6 may also accelerate the chain initiation reaction as third bodies. Thus, the rate of CH4 oxidation is substantially faster than that in the pure methane reaction system. The above theoretical analysis gives further support to the observations in the experiments that binary (CH4/CO, CH4/C2H6, and CH4/H2) and multicomponent (CH4/CO/C2H6/H2) mixtures are more dangerous, and the resulting explosion is more destructive, than that of pure CH4.

CONCLUSIONS

Three representative gases, CO, C2H6, and H2, were selected to investigate the impact of their presence on the explosion characteristics of CH4. As measured by the explosion pressures Pmax and Max Pmax, the explosion strength and explosion destructive power are higher for binary (CH4/CO, CH4/C2H6, and CH4/H2) and multicomponent (CH4/CO/C2H6/H2) mixtures than for pure CH4. Because of the decrease of the LEL and Cm of CH4 in the presence of CO, C2H6, and H2, the impact of other flammable gases on the explosion characteristics of CH4 must be fully considered, and the alarm threshold for CH4 needs to be lowered accordingly in coal mine gas monitoring and early alarm systems. Meanwhile, even a slight change in the concentration of the other flammable gases (CO, C2H6, and H2) can make a significant impact on the explosion characteristics of CH4. The experimental results indicate that the characteristic explosion parameters of multicomponent flammable gases, such as the explosive limits, the maximum explosion pressure, and Cm, may not be obtained by simply superimposing the values from single- or binary-component flammable gases. Experiment is still the primary way to obtain these parameters. The experimental data will also potentially provide guidance for the further study of the reaction mechanism of multicomponent gas explosions. Furthermore, a self-promoting microcirculation reaction network of the multicomponent flammable gases (CH4/CO/C2H6/H2) was proposed, combining theoretical analysis with the experimental data. This reaction network reflects the impact of CO, C2H6, and H2 on the explosion characteristics of CH4 and helps to reasonably infer the explosion reaction mechanism of multicomponent flammable gases. For multicomponent flammable gases, the dynamics of the reaction and the interactions between components can become quite complex. Investigation of the microscopic explosion reaction mechanism of multicomponent flammable gases, and of the influence of temperature, pressure, ignition energy, and turbulence on their explosion characteristics, will be conducted in future work. Furthermore, the scale of the experiments is comparatively small in relation to large industrial scales, and advanced computational tools combined with experiments should indeed be welcomed.

EXPERIMENTAL METHODS

Experiments were performed in a 20 L spherical closed vessel consisting of an explosion vessel, a gas distribution system, an ignition system, and a measurement system, as shown in Figure 6.
The explosion vessel (designed and produced by the Chongqing Branch of the China Coal Research Institute, China), which can withstand a maximum pressure of 3.0 MPa, is made of stainless steel and is nearly spherical. The approximate dimensions are 34 cm in height, 30 cm in diameter, and 19,900 cm3 in effective volume. The gas distribution system is composed of bottles of pure CH4, CO, C2H6, and H2, an air compressor, a vacuum pump, and a pressure gauge. The flammable gases used in this experiment were provided by the Shanghai Pujing Gas company. The purity of each flammable gas was higher than 99.99%. The partial pressure method was used for mixture preparation, with a high-accuracy sensitive pressure transducer. The ignition source for the experimental setup was a detonating pyrotechnic ignition device (supplied by Liuyang Wenchi Electric Ignition Co., China) with a calorimetric energy of 5 J. The ignition position is at the center of the vessel. For the measurement of the static pressure, an NTS-2A precise digital pressure gauge (produced by NTS Co., Japan) was used. The measurement of the dynamic explosion pressure was achieved using a CY-DB 1303-type pressure sensor (produced by the Baoji Huarui Sensor Institute, China) and a multifunction explosive reaction controller (produced by the Chongqing Branch of the China Coal Research Institute). The explosion characteristics were determined at ambient temperature and pressure. The electric igniter was placed at the center of the reactor, and the explosion vessel was evacuated and purged with fresh air three times. Then, the required mixture of flammable gases and air was injected into the vessel using the partial pressure method, waiting for at least 5 min to allow the gases to mix fully in the reactor. Afterward, the igniter was fired by the ignition controller, and the pressure data were recorded and saved to the computer. Both the data acquisition instrument and the ignition controller were connected to a synchronizer trigger to ensure the synchronization of ignition and data acquisition. A minimum of three experiments were performed for each initial condition of the flammable mixtures. The maximum explosion pressure listed in the tables is the maximum value among the three experiments.
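The partial pressure method described above amounts to filling each component to a cumulative pressure proportional to its target mole fraction, assuming ideal-gas behavior; a minimal sketch of the fill schedule is given below, with an illustrative composition that is not one of the study's test points.

```python
def fill_schedule(fractions_pct, p_total_kpa=101.325):
    """Cumulative absolute pressures for filling an evacuated vessel so
    that each component ends at its target mole fraction (ideal gas
    assumed). `fractions_pct` maps component -> vol % of the mixture."""
    assert abs(sum(fractions_pct.values()) - 100.0) < 1e-6
    schedule, cumulative = [], 0.0
    for gas, pct in fractions_pct.items():
        cumulative += p_total_kpa * pct / 100.0
        schedule.append((gas, cumulative))
    return schedule

# Illustrative mixture: 9.0% CH4 + 1.0% H2 in air, filled to 1 atm.
for gas, p in fill_schedule({"CH4": 9.0, "H2": 1.0, "air": 90.0}):
    print(f"fill {gas} until total pressure = {p:.2f} kPa")
```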
A case of combined 21-hydroxylase deficiency and CHARGE syndrome featuring micropenis and cryptorchidism

Abstract
Background: 21-hydroxylase deficiency (21-OHD) is caused by variants in the CYP21A2 gene. In males, the excess androgens produce varying degrees of penile enlargement with small testes. CHARGE syndrome (CS) has a broad spectrum of symptoms; in males, genital features such as micropenis and cryptorchidism are found in 48% of CS cases. There are no reports of patients with combined 21-OHD and CS; it is therefore unknown whether the external genitalia would show penile enlargement or micropenis, with or without cryptorchidism.
Case: A boy, born to nonconsanguineous parents at 37 weeks and 5 days of gestation, was admitted to our hospital due to congenital cleft lip, cleft palate, micropenis, cryptorchidism, and a ventricular septal defect. He developed severe hyponatremia and hyperkalemia on day 10. He was diagnosed with 21-OHD and CS. His external genitalia demonstrated both cryptorchidism and micropenis, but not penile enlargement.
Methods: DNA was extracted from peripheral leukocytes using standard procedures. Sanger sequencing of CYP21A2 was performed. Exome sequencing was performed, followed by Sanger sequencing around the candidate variant in CHD7.
Results: Genetic screening of the CYP21A2 gene detected the compound heterozygous variants c.293-13A/C>G (IVS2-13A/C>G) and c.518T>A (p.I172N) on chromosome 6p21.3. His mother was heterozygous for c.293-13A/C>G, and his father was heterozygous for c.518T>A. In addition, a de novo splice acceptor variant, c.7165-4A>G, in chromodomain helicase DNA binding protein-7 (CHD7) on chromosome 8q12 was detected, and the patient was diagnosed with 21-OHD and CS.
Conclusion: Although these two disorders exhibit different modes of inheritance and their co-morbidity is extremely rare, we encountered one male patient who suffered from both 21-OHD and CS.

INTRODUCTION
21-hydroxylase deficiency (21-OHD) (OMIM 201910) is caused by variants in the CYP21A2 gene on chromosome 6p21.3, inherited in an autosomal recessive manner. 21-OHD is subclassified as follows: (a) the salt wasting form (SW), (b) the simple virilizing form (SV), and (c) the nonclassical form (NC). The symptoms of SV in boys are progressive penile enlargement, small testes, and rapid growth; NC is essentially asymptomatic. The typical symptoms of SW, the most severe type of 21-OHD, include vomiting, failure to thrive, and skin pigmentation during the neonatal period (Nimkarn, Gangishetti, Yau, & New, 2016). Laboratory findings associated with SW include hyponatremia, hyperkalemia, hypoglycemia, increased adrenocorticotrophic hormone (ACTH), and decreased cortisol. Dysfunction of 21-hydroxylase in the adrenal glands induces excess androgen and 17-hydroxyprogesterone (17-OHP) (Nimkarn et al., 2016). In females, the excess androgens result in symptoms such as varying degrees of clitoral enlargement, labioscrotal fold fusion, and formation of a urogenital sinus; in males, they produce varying degrees of penile enlargement with small testes (Nimkarn et al., 2016). CHARGE syndrome (CS) (OMIM 214800) has a broad spectrum of symptoms, such as coloboma, heart defects, atresia of choanae, retarded growth and development, genital abnormalities, ear anomalies, and/or hearing loss (van Ravenswaaij-Arts & Martin, 2017). The causative gene is mainly chromodomain helicase DNA binding protein-7 (CHD7), located at 8q12.1.
CS usually occurs in an autosomal dominant manner. Family histories of CS are rare, and 97% of CHD7 variants are de novo (Sanlaville & Verloes, 2007). In males, genital features such as micropenis and cryptorchidism were found in 48% of CS cases (Shoji et al., 2014). To our knowledge, there are no reports of patients with combined 21-OHD and CS; it is therefore unknown whether the external genitalia would show penile enlargement or micropenis, with or without cryptorchidism. Although these two disorders exhibit different modes of inheritance and their comorbidity is extremely rare, we encountered one male patient who suffered from both 21-OHD and CS.

Case report
The male patient was born at 37 weeks and 5 days of gestation to nonconsanguineous healthy parents. His birth weight was 2,712 g (−0.45 SD) and his birth length was 46.8 cm (−0.52 SD). At birth, a cleft lip, cleft palate, micropenis, cryptorchidism, and a ventricular septal defect (VSD) were detected. Seven days after birth, heart failure developed due to the VSD and diuretic agents were started; furthermore, the patient exhibited severe hyponatremia, hyperkalemia, and hypoglycemia. Neonatal mass screening revealed that his 17-OHP was elevated, to 8.9 ng/ml (reference value <3.5 ng/ml) at 1 day after birth and 18.3 ng/ml at 5 days after birth, and he was clinically diagnosed with 21-OHD. Because his elder brother had previously been diagnosed with 21-OHD, with the compound heterozygous variants c.293-13A/C>G (IVS2-13A/C>G) and c.518T>A (p.I172N) in CYP21A2 (Figure 1a), genetic screening was performed for this newborn. His mother was heterozygous for c.293-13A/C>G, and his father was heterozygous for c.518T>A. Hydrocortisone and fludrocortisone were started for 21-OHD on day 10, due to the hyponatremia and hyperkalemia. At that time, his ACTH level was 67.8 pg/ml; his cortisol level was not measured. He failed an automated auditory brainstem response (AABR) test in both ears on day 7; however, advanced examination of his ears was not performed because of his generally unstable condition. At 2 months of age, he underwent VSD closure, foramen ovale closure, and arterial ligation, after which his general condition stabilized. For genetic diagnosis of 21-OHD, DNA was extracted from peripheral leukocytes using standard procedures. Sanger sequencing showed the compound heterozygous variants c.293-13A/C>G and c.518T>A in CYP21A2, the same as in his elder brother (Figure 1b). He was referred to our hospital at age 3 months because of a change of residence. Bronchoscopy performed at age 7 months showed a flattened epiglottis, pharyngeal softening, and split laryngeal softening. AABR was repeated at age 4 months and showed bilateral hearing loss; he therefore started using a hearing aid, and a bilateral ear anomaly was detected from 6 months of age. He was discharged from our hospital at 6 months. Thereafter, he was repeatedly admitted to our hospital with respiratory infections and endocarditis. Oral ingestion was difficult because of his flattened epiglottis, pharyngeal softening, and split laryngeal softening, so nasal feeding was needed. He frequently developed aspiration pneumonia due to gastro-esophageal reflux, and at 1 year of age he underwent a Nissen's fundoplication. At 15 months of age, his height was −2.0 SD and his weight −1.3 SD, resulting in a diagnosis of short stature. At 2 years of age, he underwent surgery for cryptorchidism.
Developmental delays were observed: he stood with support at 15 months of age and walked alone at 18 months. The cleft palate, micropenis, cryptorchidism, VSD, and bilateral hearing loss indicated CS, and genetic testing for CS was performed at 5 years of age. Exome sequencing was performed, followed by Sanger sequencing around the candidate variant in CHD7. He was diagnosed with CS based on a de novo splice acceptor variant (NM_017780.3:c.7165-4A>G) (Katoh-Fukui et al., 2018) (Figure 1c). Of note, our patient carried disease-causing variants in two distinct genes, resulting in both 21-OHD and CS. This study was approved by the Institutional Review Board Committees of the National Research Institute for Child Health and Development, National Hospital Organization Kyoto Medical Center, and Kurume University. Written informed consent was obtained from the family of the patient before reporting this case.

DISCUSSION
Our case was genetically diagnosed with 21-OHD and CS. In Japan, the prevalence of 21-OHD is 1 in 18,000 (Tsuji et al., 2015), and the prevalence of CS is at least 1 in 10,000 (Issekutz, Graham, Prasad, Smith, & Blake, 2005); the likelihood of a patient having both diseases is therefore extremely low, and the relevance of these two diseases to each other is unknown. Previous reports of comorbidities associated with 21-OHD include only females with Turner's syndrome (Larizza et al., 1994; Montemayor-Jauregui, Ulloa-Gregori, & Flores-Briseno, 1985) and ornithine transcarbamylase (OTC) deficiency (Kim et al., 2013); in those cases, too, the relevance of the co-occurring disease was unknown. On the other hand, among previous reports of diseases coinciding with CS, there was one male case with micropenis who also demonstrated Marfan syndrome with an FBN1 gene variant (Chiu, Thakuria, & Agrawal, 2010). Combinations of either 21-OHD or CS with other genetic or chromosomal diseases are thought to be rare. Male patients with 21-OHD often show increased penile length due to excess adrenal androgen exposure (El-Maouche, Arlt, & Merke, 2017). Meanwhile, cryptorchidism and micropenis are often observed in patients with CS because of insufficient gonadotropin hormones and androgen (Wheeler, Quigley, Sadeghi-Nejad, & Weaver, 2000). Our patient, suffering from both 21-OHD and CS, showed cryptorchidism and micropenis. These findings indicate that his androgen level after 10 weeks of gestation was presumably below normal, even though he suffered from 21-OHD. Similar case reports are required to validate these observations, and female cases with both 21-OHD and CS may offer interesting examples of how the external genitalia can be affected. The three forms of 21-OHD are SW, SV, and NC; we diagnosed this case as SW from the hyponatremia, hyperkalemia, hypoglycemia, and increased ACTH. Poor feeding, weight loss, failure to thrive, vomiting, dehydration, and hypotension may not have been apparent because of the cardiac failure and the treatment with intravenous infusion, diuretic medication, hydrocortisone, and fludrocortisone. Penile enlargement was not seen. The symptoms of CS (coloboma, heart defects, atresia of choanae, retarded growth and development, genital abnormalities, ear anomalies, and/or hearing loss) were fully present in this case. This case thus had symptoms of both 21-OHD and CS; however, the micropenis and cryptorchidism appeared as features of CS.
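The inheritance logic in this report reduces to a simple check of parental genotypes: the two CYP21A2 variants must lie in trans (one inherited from each carrier parent) for the autosomal recessive 21-OHD to manifest, while the CHD7 variant, absent in both parents, is de novo and acts dominantly. Below is a minimal sketch of that reasoning; the genotype encoding and helper names are illustrative, not from the study.

```python
# Minimal sketch of the trio-based inheritance checks used in this report.
# Genotypes are encoded as sets of variant IDs carried by each individual.

def in_trans(child, mother, father, v1, v2):
    """Two variants in a recessive gene are consistent with a trans
    (compound heterozygous) configuration if the child carries both and
    each parent carries exactly one of them."""
    if not (v1 in child and v2 in child):
        return False
    v1_maternal = v1 in mother and v2 not in mother and v2 in father and v1 not in father
    v2_maternal = v2 in mother and v1 not in mother and v1 in father and v2 not in father
    return v1_maternal or v2_maternal

def is_de_novo(child, mother, father, variant):
    """A variant is apparently de novo if the child carries it and
    neither parent does."""
    return variant in child and variant not in mother and variant not in father

child  = {"CYP21A2:c.293-13A/C>G", "CYP21A2:c.518T>A", "CHD7:c.7165-4A>G"}
mother = {"CYP21A2:c.293-13A/C>G"}
father = {"CYP21A2:c.518T>A"}

print(in_trans(child, mother, father,
               "CYP21A2:c.293-13A/C>G", "CYP21A2:c.518T>A"))  # True
print(is_de_novo(child, mother, father, "CHD7:c.7165-4A>G"))  # True
```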
Intra-articular delivery of adipose derived stromal cells attenuates osteoarthritis progression in an experimental rabbit model

Introduction: Cell therapy is a rapidly growing area of research for the treatment of osteoarthritis (OA). This work aimed to investigate the efficacy of intra-articular adipose-derived stromal cell (ASC) injection on the healing of cartilage, synovial membrane and menisci in an experimental rabbit model.
Methods: OA was induced surgically through bilateral anterior cruciate ligament transection (ACLT), yielding a mild grade of OA eight weeks after ACLT. A total of 2 × 10⁶ or 6 × 10⁶ autologous ASCs, isolated from inguinal fat, expanded in vitro and suspended in 4% rabbit serum albumin (RSA), were delivered into the hind limbs; 4% RSA alone was used as the control. Local bio-distribution of the cells was verified by injecting chloro-methyl-benzamido-1,1'-dioctadecyl-3,3,3'3'-tetra-methyl-indo-carbocyanine per-chlorate (CM-Dil) labeled ASCs into the hind limbs. Cartilage and synovial histological sections were scored with Laverty's scoring system to assess the severity of the pathology. Protein expression of extracellular matrix molecules (collagens I and II) and of catabolic (metalloproteinase-1 and -3) and inflammatory (tumor necrosis factor-α) markers was detected by immunohistochemistry. Assessments were carried out at 16 and 24 weeks.
Results: Labeled ASCs were detected, unexpectedly, in the synovial membrane and medial meniscus but not in cartilage tissue at 3 and 20 days after ASC treatment. Intra-articular ASC administration decreased OA progression and contributed to healing in the treated animals in comparison to the OA and 4% RSA groups.
Conclusions: Our data reveal a healing capacity of ASCs, promoting cartilage and meniscal repair and attenuating inflammatory events in the synovial membrane, thereby inhibiting OA progression. On the basis of the local bio-distribution findings, the benefits obtained by ASC treatment may be due to a trophic mechanism of action, through the release of growth factors and cytokines.

Introduction
Osteoarthritis (OA) is one of the most common and widespread rheumatic diseases among adults, with a significant negative impact on patient quality of life [1]. OA affects the whole joint and is characterized by inflammation, bone remodeling and progressive destruction of the articular cartilage components, with consequent functional disability [2]. In OA, the earliest changes in cartilaginous tissue appear at the joint surface, where mechanical forces are greatest. Chondrocytes in OA cartilage, especially those arranged in clonal clusters, express cytokine and chemokine receptors and increase their production of matrix proteins and matrix-degrading enzymes, leading to a modulation of inflammatory and catabolic responses [3]. The altered homeostasis of the extracellular matrix (ECM) macromolecules in cartilage tissue during OA leads to increased enzymatic activity of metalloproteinases (MMPs) [4] and enhances the synthesis of pro-inflammatory molecules [5,6], with consequent pain and joint instability. The currently available treatments for OA are effective only in the short term, and there is a largely unmet medical need for durable disease-modifying treatments [7]. Established therapies for OA mainly comprise preventive measures such as weight control and exercise, or pharmacologic approaches which usually consist of analgesic therapy, including acetaminophen, salicylates and non-steroidal anti-inflammatory drugs [8-10].
The feasibility of using mesenchymal stem cells (MSCs) from bone marrow or other tissue sites, based on their capacity to influence and regulate different stages of cartilage repair, is a challenge of considerable appeal to clinicians. In particular, studies using animal models have shown promising results following MSC therapy for the treatment of musculoskeletal injuries [11]. In recent years, several studies have indicated that the therapeutic properties of MSCs are due not only to their capacity to differentiate but also to their ability to release growth factors that regulate the immune response in a paracrine manner [12,13]. Other multipotent cell types have been found in different compartments, including adipose tissue; these are known as adipose derived stromal cells (ASCs) [14]. ASCs can easily be obtained from liposuction waste in large quantities with little donor-site morbidity and are known to differentiate along selected lineage pathways in response to specific growth factors and environmental cues [15-18]. ASCs are valid candidates to promote cartilage and meniscal healing owing to their ability to release biologically active chondrogenic factors, such as transforming growth factor-β1 (TGF-β1) and bone morphogenetic protein 4 (BMP-4) [19], as well as anti-fibrotic and anti-apoptotic growth factors [20,21]. Moreover, some studies have provided insights into the role of ASCs in suppressing immunoreactions, as for MSCs [22,23], suggesting their possible use in decreasing local inflammation in several musculoskeletal diseases. Different musculoskeletal treatments with ASCs have already been reported, with encouraging results for the regeneration of cartilage and bone, even if the mechanism of action is not yet clearly defined [24]. To date, however, functional data on the role of ASCs in the care of osteoarthritis are still scarce [25-28]. The main aim of the current study was to explore the efficacy of an intra-articular injection of ASCs in preventing cartilage and meniscal damage and attenuating inflammation in the synovial membrane following the onset of OA in a rabbit model. In general, our findings revealed a positive effect of ASCs in promoting cartilage and meniscal healing and counteracting inflammatory processes in the synovial membrane.

Rabbits
Adult male New Zealand rabbits (age: 12 months, body weight: 4 ± 0.5 kg) were used. European and Italian laws on animal experimentation were strictly followed throughout the study. OA was induced surgically by bilateral anterior cruciate ligament transection (ACLT) [29,30]. Adipose tissue was harvested from the inguinal zone for ASC isolation. A total of 2 × 10⁶ or 6 × 10⁶ ASCs were re-suspended in 4% rabbit serum albumin (RSA) (Sigma Aldrich, St. Louis, MO, USA) and administered by intra-articular injection into the hind limbs after OA onset, eight weeks from ACLT. Four percent RSA alone was used as the control. A small number of animals was designated to monitor the fate of the cells at 3 and 20 days after ASC injection. Animals were sacrificed at short-term (16 weeks) and long-term (24 weeks) follow-ups from ASC administration to investigate the effect on different compartments of the knee joint; to this end, femoral condyles, meniscal tissue and synovial tissue were harvested. Table 1 shows the groups involved in the experimental study.

Osteoarthritis model
For the ACLT procedure, a 2 cm skin and capsular incision was made, and the right and left ACLs were exposed through a medial para-patellar cut.
To achieve optimal visualization of the ACL, the patella was displaced laterally and the knee was placed in full flexion. To avoid spontaneous reattachment, the ACLT was combined with the removal of a small fragment of tissue between the two ligament stumps. The incision was sutured in routine fashion. After each operation, antibiotic (Flumequine, Sigma) and analgesic (Ketoprofene, Rhone-Poulenc-Rorer, Sanofi Aventis, Strasbourg, France) therapy was administered immediately after surgery and for two days thereafter. All surgical procedures were performed under general anesthesia and sterile conditions.

ASC isolation and growth
Adipose tissue was harvested from the rabbit inguinal zone and treated with 0.4 U/ml NB4 collagenase standard grade (Serva Electrophoresis GmbH, Heidelberg, Germany) to isolate ASCs. The stromal vascular fraction (SVF) containing the ASCs was re-suspended in α-MEM (Gibco, Carlsbad, CA, USA) supplemented with 1 U/ml heparin (Sigma, St Louis, MO, USA), 2% platelet growth factor enriched plasma (PGFEP) [31] and 0.05 g/ml penicillin G (Gibco). Initially, ASCs were plated at a density of 4,000 cells/cm² and cultured for a few days. Cells were then harvested and seeded at a density of 2,000 cells/cm² for expansion. Viability was evaluated at SVF isolation and during expansion by the Trypan Blue dye exclusion method. ASCs were selected on the basis of their ability to adhere to plastic, to form colonies and to differentiate into chondrogenic and osteogenic lineages [15,32]. The number of population doublings (PD) was calculated to verify rabbit ASC growth during the culture period (see the sketch below).

ASC administration
A total of 2 × 10⁶ (cell density: 6 × 10⁶ cells/ml) or 6 × 10⁶ (cell density: 18 × 10⁶ cells/ml) autologous ASCs at passage 1 in 4% RSA were prepared under sterile conditions in 1 ml syringes and delivered by intra-articular injection into the hind limbs at OA onset (eight weeks). The needle was inserted into the knee joint posterior to the lateral edge of the patella, at the junction of the femur and tibia, to avoid damage to the articular cartilage. The sample was injected into the joint capsule and the knee was flexed; the rabbit was held in this position for a few minutes before recovery. The post-operative course and long-term adverse events were monitored.

Local bio-distribution of ASCs
The fate of the ASCs was monitored by evaluating their local bio-distribution at 3 and 20 days after ASC administration. ASCs were labeled in vitro with 6 μM chloro-methyl-benzamido-1,1'-dioctadecyl-3,3,3'3'-tetra-methyl-indo-carbocyanine per-chlorate (CM-Dil) dye (Molecular Probes, Carlsbad, CA, USA) [33], as indicated by the manufacturer. A total of 6 × 10⁶ labeled ASCs were injected into the hind limbs, and un-labeled cells into the contra-lateral ones as the control. Cells were monitored in vitro in parallel with the in vivo experiments (3 and 20 days) to evaluate ASC viability, doubling time and differentiation potential. Animals were sacrificed and the different tissues (femoral condyle, tibial plateau, synovial membrane, menisci, ligament and articular capsule) were processed for histology by paraffin embedding. Sections were analyzed by Eclipse 90i epi-fluorescence microscopy (Nikon, Melville, NY, USA) using 4',6-diamidino-2-phenylindole (DAPI) and TRITC filters to evaluate the nuclear component and the labeled ASCs, respectively.
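The population-doubling bookkeeping referenced above follows the standard formula PD = log₂(N_harvested / N_seeded), accumulated across passages. A minimal sketch, with illustrative cell counts rather than the study's data:

```python
# Minimal sketch: cumulative population doublings across culture passages.
# Cell counts below are illustrative, not the study's data.
from math import log2

def population_doublings(seeded, harvested):
    """PD for one passage: log2 of the fold expansion."""
    return log2(harvested / seeded)

# (cells seeded, cells harvested) per passage, e.g. per cm2 of flask area
passages = [(2_000, 8_500), (2_000, 6_500)]

cumulative = 0.0
for i, (seeded, harvested) in enumerate(passages, start=1):
    pd = population_doublings(seeded, harvested)
    cumulative += pd
    print(f"passage {i}: PD = {pd:.2f}, cumulative PD = {cumulative:.2f}")
```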
Macroscopic imaging and histopathology
Macroscopic assessment of the knee joints of the animals that underwent ACLT was performed with India ink staining (Higgins Waterproof Drawing Ink, Eberhard Faber, Lewisburg, TN, USA) to assess cartilage lesions. Macroscopic assessments were also performed in the groups treated with 4% RSA and ASCs at 16 and 24 weeks [34]. Histo-morphometric evaluations were performed with Qwin v 2.4.4 image analysis software (Leica Imaging Systems, Cambridge, UK) on osteo-chondral specimens embedded in methacrylate. Quantitative measurements of cartilage thickness (CT) and fibrillation index (FI) in the ASC-treated and 4% RSA groups were carried out at both experimental times. All analyses were performed by two blinded investigators according to the indications provided by Papaioannou and Pastoureau [35,36]. For histological analysis, synovial membrane, menisci and femoral condyles were placed in 10% neutral buffered formalin, and osteo-chondral specimens were decalcified for three weeks at room temperature (RT). Specimens were paraffin embedded and thin sections (5 μm) were taken. Hematoxylin/Eosin and Safranin-O/Fast Green (Sigma) stainings were used to assess general morphology and proteoglycan/collagen content in the synovial, meniscal and cartilaginous tissues, respectively. Semi-quantitative analyses using appropriate scoring systems were used to evaluate the cartilaginous and synovial tissues [37]. In particular, cartilage tissue was assessed with Laverty's scoring system, which takes four parameters into account: Safranin-O/Fast Green staining, cartilage structure, chondrocyte density and cluster formation. It ranges from 0 to 24, where 0 indicates healthy cartilage and 24 severe cartilage lesions. Synovial tissue was assessed with a semi-quantitative scoring system that comprises the histological features of synoviopathy in OA, including the synoviocytes (proliferation, hypertrophy), the inflammatory state and the synovial stroma (hyperplasia, proliferation of blood vessels, proliferation of fibroblasts, cartilage/bone detritus). It ranges from 0 to 30, where 0 indicates a normal white, semi-translucent smooth tissue and 30 indicates severe proliferation, hypertrophy, inflammation and hyper-vascularity. All evaluations were performed by two blinded researchers with an Eclipse 90i microscope (Nikon); a sketch of the score bookkeeping appears after this section.

Immunohistochemistry
The analyses were carried out to evaluate type I and II collagens, MMP-1, MMP-3 and TNF-α in cartilage, synovial and meniscal specimens. Appropriate un-masking procedures using specific treatments, including hyaluronidase and pronase for the collagens and citrate buffer solution for the MMPs and TNF-α, followed by blocking steps, were carried out. Fixed samples were incubated at RT with mouse monoclonal antibodies directed against type I collagen (2 μg/ml) (Sigma), type II collagen (2 μg/ml) (Hybridoma Bank, Department of Biological Sciences, University of Iowa, Iowa City, IA, USA), MMP-1 (5 μg/ml) (Chemicon, Temecula, CA, USA), MMP-3 (5 μg/ml) (Chemicon) and TNF-α (2 μg/ml) (Santa Cruz Biotechnology, Inc., Santa Cruz, CA, USA), respectively. A biotinylated secondary antibody and alkaline phosphatase-labeled streptavidin (Biocare Medical, Walnut Creek, CA, USA) were used, and the reactions were developed using Fast Red substrate (Biocare Medical). Negative controls were performed either by omitting the primary antibodies or by using an isotype-matched control.
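The scoring systems described above lend themselves to simple bookkeeping with range checks. A minimal sketch; the split of Laverty's 0-24 total into four equal 0-6 subscores is an assumption made here purely for illustration (the paper states only the four parameters and the overall range), and the example values are invented:

```python
# Minimal sketch: range-checked bookkeeping for the cartilage score.
# Assumption (illustration only): the 0-24 Laverty total is split evenly
# into four 0-6 subscores; the paper states only the parameters and the
# overall range. Example values are invented.
from dataclasses import dataclass, fields

@dataclass
class LavertyCartilageScore:
    safranin_o_fast_green: int  # 0-6
    cartilage_structure: int    # 0-6
    chondrocyte_density: int    # 0-6
    cluster_formation: int      # 0-6

    def total(self) -> int:
        for f in fields(self):
            v = getattr(self, f.name)
            if not 0 <= v <= 6:
                raise ValueError(f"{f.name}={v} outside 0-6")
        return sum(getattr(self, f.name) for f in fields(self))

# Two blinded readers score the same section; report the mean total.
reader_a = LavertyCartilageScore(3, 4, 2, 3).total()
reader_b = LavertyCartilageScore(2, 4, 3, 3).total()
print((reader_a + reader_b) / 2)  # 0 = healthy cartilage, 24 = severe lesions
```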
Six microscopic fields (100× magnification) covering the anterior, central and posterior regions of the cartilage tissue were used for semi-quantitative analysis of the immunohistochemistry. A semi-quantitative method that expresses immunohistochemistry values as a percentage of positive cells (collagen I, MMPs, TNF-α) or of positive extracellular matrix area (collagen II) was used for a complete assessment of protein expression, with a maximum score of 100%. The analysis was performed by two blinded investigators using red/green/blue (RGB) analysis with NIS-Elements software and an Eclipse 90i microscope (Nikon).

Statistical analysis
Statistical analysis was carried out using the Statistical Package for the Social Sciences (SPSS Inc., Chicago, IL, USA) software, version 15.0. Data for the Laverty scores, CT, FI, number of clusters and immunohistochemical quantifications were expressed in terms of the 95% confidence interval (CI) of the mean and/or the mean ± standard deviation (SD). A general linear model (GLM) with Sidak correction for multiple comparisons was used to assess the influence of the type of treatment and of follow-up on the different parameters listed above. The Wilcoxon signed rank test was used for paired comparisons, to evaluate the effects exerted in each group at the different experimental times.

Development of a mild grade of osteoarthritis
All the animals developed a mild grade of OA at eight weeks after ACLT, without any re-attachment of the ligament. Macroscopic analysis gave evidence of varying degrees of OA features in the OA group, with evident black patches where India ink was taken up by softened and fibrillated cartilage (Figure 1B) [38]. Histological findings revealed a series of OA changes, including cartilage fibrillation and erosion, peri-articular osteophytosis and a moderate reduction in metachromatic staining in the cartilage tissue (Figure 1D) [30,38]. In the synovial tissue, thickening of the lining layer and the presence of inflammatory cells were observed by histological analysis at eight weeks (Figure 1F). Investigation of the menisci revealed fibrillation in the area of femoral attachment and an increased presence of cell clusters on histological analysis (Figure 1H).

Local bio-distribution of CM-Dil labeled ASCs in the OA knee joint
The CM-Dil dye provided uniform labeling of the cells in vitro (90%), as assessed by fluorescence microscopy. The staining did not affect ASC viability (95%), cell doubling (2.09 PD at passage 1 and 1.7 PD at passage 3) or differentiation potential into chondrogenic and osteogenic lineages (Figure 2). As regards the bio-distribution in vivo, ASCs were clearly detected in the synovial membrane and medial meniscus at 3 and 20 days (Figure 3). In particular, ASCs were present in the lining layer of the synovial membrane and diffusely distributed through the thickness of the fibro-cartilaginous and vascular areas of the medial meniscus. No cell engraftment was observed in the anterior cruciate ligament, in the articular cartilage of the medial or lateral femoral condyle, or in the tibial plateau at any of the experimental times evaluated (data not shown).

ASC treatment prevents cartilage destruction
Glossy, white cartilage with no noticeable macroscopic evidence of degeneration was observed in the ASC-treated groups at both experimental times.
In contrast, cartilage softening and fibrillation were observed in the OA and 4% RSA groups, particularly in the medial regions (data not shown). Histo-morphometric analyses provided further information on the structural status of the cartilage tissue. Among the pathological changes recorded during OA, an increase of FI was observed in the OA group compared to 4% RSA at both 16 and 24 weeks (P < 0.01). A significant reduction of FI was noticed in both ASC-treated groups at 16 weeks compared to 4% RSA at both experimental times (P < 0.01), indicating a protective role of ASCs in reducing fibrillation processes (Figure 4A). The intra-articular delivery of 2 × 10⁶ ASCs produced an increase of CT values with respect to the OA and 4% RSA groups at 16 and 24 weeks (P < 0.01) (Figure 4B). Based on visual assessment of the histological staining, proteoglycan loss, fibrillation and delamination processes were evident. A progression of the pathology was also noticed in the 4% RSA group at 16 and 24 weeks compared to OA (P < 0.01). In general, ASC administration produced a decrease in Laverty's scores compared to 4% RSA (P < 0.01) at both experimental times.

ASC treatment inhibits the thickening of the lining layer in the synovium and reduces cluster formation in the menisci
After ACLT, the synovial tissue in the OA group displayed thickening of the lining layer at eight weeks, with evidence of infiltration by inflammatory cells. A progression of the pathology was detected in the 4% RSA group at 16 and 24 weeks, with an increase of Laverty's score compared to the OA group (P < 0.01). ASC administration (2 × 10⁶ and 6 × 10⁶) significantly decreased Laverty's score at 16 and 24 weeks compared to the 4% RSA and OA groups (P < 0.01). The protective effects exerted by the ASCs were most evident in reducing the thickness of the lining layer and the infiltration of inflammatory cells in the sub-synovium. In particular, the best results were noticed in the 6 × 10⁶ ASC-treated group at 16 weeks compared to 24 weeks (P < 0.01) (Figure 6A, B). A histological analysis of the menisci was performed to determine whether ASC treatment had an effect on the meniscal compartment. A significant increase of cell clusters was noticed in the 4% RSA group at 16 and 24 weeks compared to the OA group (P < 0.01). The 2 × 10⁶ and 6 × 10⁶ ASC-treated groups displayed a well-organized tissue with a low number of cell clusters at 16 and 24 weeks compared to the OA and 4% RSA groups at both experimental times (P < 0.01) (Figure 7A, B).

ASC treatment reduces matrix-degrading enzymes and TNF-α in the cartilage matrix
Since the degradation of the cartilage matrix represents a key event in the development of OA, we tested the effect of ASC treatment on catabolic and inflammatory molecules involved in OA onset. We first investigated the typical hyaline marker, collagen type II, detecting a decrease of this molecule in the 4% RSA group at 24 weeks with respect to the OA group (P < 0.01). ASC treatment gave evidence of a chondro-protective effect, promoting the expression of a large amount of type II collagen in the cartilage tissue with respect to the OA and 4% RSA groups (P < 0.01); in particular, high percentages of positive areas were detected in the 2 × 10⁶ and 6 × 10⁶ ASC-treated groups at 16 and 24 weeks. A reduced expression of collagen type II was noticed in the 4% RSA, 2 × 10⁶ and 6 × 10⁶ ASC groups at 24 weeks compared to 16 weeks (P < 0.01) (Figure 8A).
Collagen type I, a fibro-cartilaginous marker, showed intense positivity at the cellular level in the OA group, particularly at the superficial level of the cartilage matrix. The 4% RSA group displayed an increased expression of type I collagen, particularly at 24 weeks, compared to the OA group (P < 0.01). A reduction of collagen type I was detected in the ASC-treated groups at the short- and long-term follow-ups with respect to the 4% RSA group (P < 0.01) (Figure 8B). A moderate expression of MMP-1 was noticed in the OA group, particularly in the superficial layer of the cartilage, and an increased expression of this protein was observed in the 4% RSA group. By contrast, the ASC-treated groups showed low expression of MMP-1 compared to the OA and 4% RSA groups at 16 and 24 weeks (P < 0.01) (Figure 9A). The OA group displayed a moderate expression of TNF-α, which increased progressively in the 4% RSA group at 16 and 24 weeks (P < 0.01). A reduction of TNF-α expression was detected in both ASC-treated groups at 16 and 24 weeks with respect to the OA and 4% RSA groups (P < 0.01) (Figure 9B).

ASC treatment inhibits MMP-1 and TNF-α expression in the synovial membrane and menisci
The effect of ASC treatment on the synovial membrane and menisci was investigated by immunohistochemical analyses. The OA group displayed mild positivity for MMP-1, particularly in the lining layer of the synovial membrane, at eight weeks. A moderate expression of MMP-1 was noticed in the 4% RSA group at the short- and long-term follow-ups compared to the OA group (P < 0.01). ASC treatment reduced the expression of MMP-1 at both experimental times with respect to the 4% RSA group (P < 0.01) (Figure 10A). As concerns TNF-α, the OA group showed mild positivity in the synovial lining at eight weeks, while intense positivity for TNF-α was observed in the 4% RSA group at 16 and 24 weeks with respect to the OA group (P < 0.01). ASC treatment decreased the expression of TNF-α compared to the 4% RSA group at 16 and 24 weeks (P < 0.01) (Figure 10B). A series of investigations also focused on the status of the medial meniscus. The OA group showed moderate positivity for MMP-1 at the cellular level, particularly in the superficial layer of the meniscal compartment, at eight weeks. An increase of MMP-1 expression was noticed in the 4% RSA group at 24 weeks compared to the OA group (P < 0.01). By contrast, a reduction of MMP-1 was observed in the ASC-treated groups at 16 weeks compared to the OA group (P < 0.01). A slight increase in the expression of this protein was detected in all the groups analyzed at 24 weeks compared to 16 weeks (P < 0.01) (Figure 11A). The OA group showed moderate positivity for TNF-α at the cellular level at eight weeks. The 4% RSA group displayed strong expression of TNF-α at 16 and 24 weeks with respect to the OA group (P < 0.01), while only slight positivity was detectable in the ASC-treated groups at both experimental times compared to the 4% RSA group (P < 0.01) (Figure 11B).

Discussion
The use of ASCs in regenerative medicine is a rapidly growing area of research, and some evidence of therapeutic success using these cells for osteochondral defects has been reported [39,40]. Recently, this cell therapy has also been used as a therapeutic tool in the treatment of OA, and beneficial effects of ASCs have been reported in some experimental animal models of this pathology [25-28]. However, most of these studies were limited to macroscopic and histological observations of the articular cartilage after ASC treatment.
No information was provided on the effects of these cells on the catabolic and inflammatory processes which occur during OA in the synovial membrane and menisci. The present study, using a rabbit model of OA, was designed to determine the role of ASCs in the OA setting and their behavior in the inflammatory environment within the affected joint. The ACLT model used here is widely validated for investigating OA, because it induces biomechanical and pathological changes similar to those seen in humans [29]. Our group has previously reported that eight weeks after ACLT, cartilage damage in rabbits occurs mainly in the medial femoral condyle, showing a wide spectrum of OA changes, including fibrillation and delamination processes, altered cellular arrangement and proteoglycan depletion [30]. The investigations performed on the synovial membrane and menisci revealed that at eight weeks the ACLT procedure leads to a thickening of the lining layer in the synovial membrane, associated with the presence of some inflammatory elements, and to an increase in cell cluster formation in the menisci. Direct intra-articular injection of cells is technically the simplest approach to the use of cells for OA therapy [13,41]. In the current study, an intra-articular injection of ASCs was delivered into the hind limbs of the rabbits after OA induction. No animals showed swelling at the injection sites, signs of distress, or hyperalgesia after ASC administration. There were significant overall effects of ASC treatment in the cartilaginous, synovial and meniscal tissues at different levels. Our investigations gave evidence of a beneficial effect in the 2 × 10⁶ ASC group, particularly at 16 weeks, which showed a well-organized tissue with a low Laverty's score and an increased cartilage thickness compared to the OA group. Both cell doses provided good results, with a high expression of type II collagen in the cartilage matrix at 16 and 24 weeks. A positive contribution of the intra-articular delivery of both ASC concentrations was also noticed in the meniscal compartment at both experimental times, leading to a decrease in the number of cell clusters in the fibro-cartilaginous area. Cell treatment inhibits the progression of OA, reducing the fibrillation index, Laverty's score and type I collagen in cartilage, and favors the anabolic processes leading to the formation of new tissue. Moreover, ASC administration inhibits the thickening of the lining layer in the synovial membrane, most evidently at the short-term follow-up. Since the release of cartilage matrix proteins into the articular environment contributes to cartilage damage through the production of inflammatory cytokines, chemokines and MMPs [3], we tested the effects exerted by ASCs in this respect. Clear benefits of ASC treatment were observed in the reduction of inflammation in cartilage tissue, in terms of a decreased pattern of expression of TNF-α at both experimental times [42]. In close correlation with the reduction of TNF-α, a decreased level of MMP-1, responsible for proteoglycan degradation, was noticed in the cartilage tissue of the ASC-treated groups at 16 and 24 weeks [42,43]. A time-dependent effect of ASC treatment was noticed for MMP and TNF-α expression, with the 2 × 10⁶ ASC group in particular providing the best results at the short-term follow-up. Cell treatment inhibits the progression of OA, leading to a reduction of TNF-α and MMPs in the menisci and synovial membrane at both experimental times.
In general, the 4% RSA group displayed OA progression in the different compartments, as indicated by an increased expression of catabolic and inflammatory markers. Both ASC doses gave evidence of healing potential in the cartilage, synovial membrane and menisci, even if the lower cell concentration was more effective for cartilage repair at 16 weeks. The mechanisms underlying the superior effects of 2 × 10⁶ ASCs in cartilage are not fully understood; they could be related to the release of cytokines and other growth factors by ASCs, which at low and high concentrations can exert contrasting biological effects on the immune system and/or on catabolic and inflammatory events. A time-dependent effect of ASCs was observed, particularly in the analyses performed on cartilage tissue, with the best findings at the short-term follow-up. This could be explained by the lack of repair of the anterior cruciate ligament, which could slow and/or inhibit some signaling pathways involved in the repair processes. The investigations performed on the local bio-distribution of ASCs using a fluorescent tracking dye (CM-Dil) open some perspectives on the understanding of the mechanism of action of ASCs. The key advantage of CM-Dil and its derivatives is that they represent a nontoxic fluorescent tracking system, ready within a few hours, able to label ASCs without altering their multipotential nature, as observed in in vitro differentiation protocols, while avoiding genetic manipulation of the cells [33]. Nevertheless, this system has some disadvantages, such as the loss of fluorescence signal over time during cell replication, which makes long-term cell tracking difficult, and the possible transfer of the fluorescent dye to other cells. After injection of the labeled cells, no engraftment was noticed in the anterior cruciate ligament or in the cartilaginous tissue of the tibial plateau and femoral condyles. The homing of these cells was unexpectedly detected in the synovium and medial meniscus, probably due to the expression of specific receptors or ligands able to facilitate trafficking, adhesion and infiltration of ASCs at these sites, and/or to the presence of a vascular fraction which could promote ASC migration. The distribution of ASCs in the lining layer of the synovial tissue could also be due to the release of chemokines by macrophages located within the lining layer [44]. Different hypotheses could explain the chondro-protective and healing effects exerted by ASCs. Cell administration could enhance anabolic signaling pathways and inhibit catabolic ones; these processes could reasonably be induced by growth factors and cytokines released by ASCs rather than by their differentiation potential, since the bio-distribution data showed the localization of ASCs in the synovial and meniscal tissues. Another possible mechanism is the inhibition of the release of catabolic and inflammatory molecules by macrophages in the synovium or by chondrocytes in cartilage; experimental evidence supports this hypothesis, with a decrease of catabolic and inflammatory molecules observed predominantly in cartilage. Other possibilities concern the regulation of the immune system by ASCs, as already observed for MSCs [37,45]. Several authors have shown that the inflammatory environment is an important parameter to consider, since it appears to influence the behavior of ASCs by enhancing their immunosuppressive potential [46-49].
This last consideration draws attention to the pattern of molecules secreted by ASCs, which could have important implications for the resolution of inflammatory diseases; further analysis of the molecules secreted by ASCs could help identify key elements involved in the repair processes. In conclusion, the findings of this study demonstrate that an intra-articular injection of ASCs exerts a chondro-protective role, promoting a series of anabolic processes that allow the maintenance of a good collagen and proteoglycan network while inhibiting the catabolic events responsible for degeneration in the cartilaginous, synovial and meniscal tissues. ASC therapy could therefore represent a novel therapeutic tool for the treatment of osteoarthritis.

Conclusions
The current study was the first to focus on the effect of ASCs in the OA setting and on their behavior in the inflammatory environment within the cartilaginous, synovial and meniscal tissues in a rabbit model of osteoarthritis. Our data demonstrated a healing effect of ASCs on cartilage and menisci and an inhibition of OA progression in the synovial membrane. The bio-distribution of ASCs in the medial meniscus and synovium provides some insight into their possible role, suggesting a paracrine mechanism of action.

Competing interests
The authors declare that they have no competing interests.
Costs and Benefits of the Paris Climate Targets

The temperature targets in the Paris Agreement cannot be met without very rapid reduction of greenhouse gas emissions and removal of carbon dioxide from the atmosphere. The latter requires large, perhaps prohibitively large, subsidies. The central estimate of the costs of climate policy, unrealistically assuming least-cost implementation, is 3.8-5.6% of GDP in 2100. The central estimate of the benefits of climate policy, unrealistically assuming constant vulnerability, is 2.8-3.2% of GDP. The uncertainty about the benefits is larger than the uncertainty about the costs. The Paris targets do not pass the cost-benefit test unless risk aversion is high and the discount rate low.

Introduction
International targets for climate policy are political. The upper limit of the temperature target of the 2015 Paris Agreement under the United Nations Framework Convention on Climate Change (UNFCCC) can be traced back to an old and flawed report by an advisory council (WBGU, 1995; cf. Tol, 2007), but the lower limit cannot even claim dubious support. Some of the countries that have adopted the Paris Agreement require cost-benefit analysis of policy decisions, but this requirement does not extend to international treaties. The Intergovernmental Panel on Climate Change (IPCC) does not offer cost-benefit analysis either; indeed, it has shied away from reviewing the relevant academic literature on this matter (Tol, 2022a). This paper therefore reports a cost-benefit analysis of the targets in the Paris climate agreement. This is surely not the first cost-benefit analysis of climate policy; that honour goes to Nordhaus (1982). Since then, there have been many attempts to balance the costs and benefits of greenhouse gas emission reduction (e.g., Nordhaus, 1992, Peck and Teisberg, 1994, Tol, 1999, Keller et al., 2004, Tol, 2012, Millner, 2013, Crost and Traeger, 2014, Barrage, 2020, Van Den Bremer and Van Der Ploeg, 2021). All of these papers support greenhouse gas emission reduction, but few (Hänsel et al., 2020) advocate 100% reduction of carbon dioxide emissions, which is needed to stabilize its atmospheric concentration and so halt anthropogenic climate change. The reason lies in the structure of cost-benefit analysis, which equates the marginal costs of emission reduction to its marginal benefits; cost-benefit analysis rarely recommends a corner solution. Although the emergence of negative carbon energy and direct air capture implies that a 100% emission reduction is not a corner solution in cost, it is in benefit: if the climate were no longer changing, the impact of climate change would be zero, and the marginal impact of emissions would be close to zero. Stringent emission reduction thus reduces and eventually removes the justification for even more stringent emission reduction (a stylized sketch of this logic follows below). There are two exceptions to this. First, some modellers assume a backstop technology (Nordhaus, 2014) which, at a finite carbon tax, would fully and irreversibly decarbonize the economy, even if that carbon tax were subsequently withdrawn. This is a strong assumption. The other exception is the assumption that climate, rather than climate change, damages the economy (Burke et al., 2015), in which case any stable climate warmer than preindustrial would have marginal damages substantially above zero. This assumption too is hard to support (Newell et al., 2021). This paper contributes a cost-benefit analysis of the two temperature targets of the Paris Agreement.
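The corner-solution argument can be made concrete with a stylized example: a marginal benefit of abatement that falls to zero as abatement approaches 100%, crossed against a rising marginal cost, yields an interior optimum strictly below full abatement. The functional forms and parameters below are illustrative assumptions, not estimates from the paper.

```python
# Stylized sketch of why cost-benefit analysis rarely recommends 100%
# abatement: marginal benefit falls to zero as the climate stabilizes,
# so it crosses a rising marginal cost strictly below full abatement.
# Functional forms and parameters are illustrative, not from the paper.

def marginal_benefit(a, b0=100.0):
    """$/tCO2 avoided; declines linearly to zero at full abatement."""
    return b0 * (1.0 - a)

def marginal_cost(a, c0=40.0):
    """$/tCO2; rises steeply with the abatement share a in [0, 1)."""
    return c0 * a / (1.0 - a)

# Find the interior optimum MB(a) = MC(a) by bisection.
lo, hi = 0.0, 0.999
for _ in range(60):
    mid = (lo + hi) / 2
    if marginal_benefit(mid) > marginal_cost(mid):
        lo = mid
    else:
        hi = mid
print(f"optimal abatement share: {lo:.3f}")  # well below 1.0
```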
Instead of finding the optimal temperature, I assess whether these targets pass the cost-benefit test. I do this on the basis of (i) the latest IPCC estimates of the costs of emission reduction (Rogelj et al., 2018, Riahi et al., 2022) and (ii) a new meta-analysis of the impact of climate change (Tol, 2022b; cf. Howard and Sterner, 2017, Nordhaus and Moffat, 2017). The paper proceeds as follows. Section 2 assesses just how ambitious the Paris targets are. Section 3 reviews the costs of climate policy and Section 4 its benefits. Section 5 reports a cursory cost-benefit analysis. Section 6 concludes.

The scale of ambition
Figure 1 shows global carbon dioxide emissions from fossil fuel combustion for the period 1965-2021, the period for which we have good data. Emissions rose by 2.1% per year on average, but growth slowed to 0.7% in the most recent decade. An annual emission reduction of 15% for 28 years would reduce global emissions to close to zero; global net zero emissions by 2050 is needed to meet the 1.5°C Paris target. Note that China aims at net zero emissions by 2060, and India by 2070; global net zero by 2050 therefore means net negative emissions in the OECD. Figure 1 also shows the components of the Kaya identity. Population growth was 1.5% per year over the full period but slowed to 1.1% in the last decade (see Table 1). Per capita income grew by 1.7% per year between 1965 and 2021; this slowed to 1.5% after 2011. Assuming that these two components are largely beyond the remit of climate policy, emission reduction has to come from improvements in energy efficiency and carbon intensity. Over the full period, energy intensity, the amount of primary energy needed to generate one dollar of value added, fell by 0.9% per year, accelerating to 1.1% in the last ten years. Carbon intensity, the amount of carbon dioxide emitted per unit of primary energy used, fell by 0.3% per year over the whole period and by 0.8% in the last decade. If emissions are to fall by 15% per year while the economy continues to grow by 2.5% per year, the sum of the energy and carbon intensity declines has to reach 17.5% per year, up from 1.9% in the last ten years, the period of most intense climate policy (this arithmetic is sketched below). At first sight, the scale of ambition in international climate policy is momentous. Renewables are one of the drivers of the slower growth of emissions, but integrating non-dispatchable electricity becomes more expensive as its share in power generation grows. Electricity is probably the easiest sector to decarbonize; it is more difficult for transport, heating, industry, and agriculture. That is, an order-of-magnitude increase in the decarbonization rate requires more, much more, than a tenfold increase in the policy effort: the low-hanging fruit has already been picked. Furthermore, the energy sector is characterised by long-lived capital. Many of the buildings, power plants, steel mills and chemical plants we use today will still be around in 2050, and even some of the machinery and vehicles (Davis et al., 2010, Tong et al., 2019). That is why the target is net zero; gross zero would require capital destruction at a large scale, with bankruptcies, lay-offs, and claims for compensation. Net zero emissions require afforestation (large plantations of rapidly growing trees), negative carbon energy (electricity generated from biomass with carbon capture and storage), and direct air capture (removing carbon dioxide with artificial photosynthesis). Scale and speed are the problems with afforestation.
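The Kaya arithmetic above (growth rates of population, per capita income, energy intensity and carbon intensity approximately add up to the growth rate of emissions) can be checked and extended in a few lines. A minimal sketch using the rates quoted in the text; the additive approximation is exact only in continuous (log) growth rates.

```python
# Minimal sketch of the Kaya identity in (approximately) additive growth
# rates: g(CO2) ~= g(population) + g(GDP per capita)
#                  + g(energy/GDP) + g(CO2/energy).
# Rates (% per year) are the last-decade figures quoted in the text.

last_decade = {
    "population": 1.1,
    "income_per_capita": 1.5,
    "energy_intensity": -1.1,   # primary energy per $ of GDP
    "carbon_intensity": -0.8,   # CO2 per unit of primary energy
}
print(round(sum(last_decade.values()), 2))  # ~0.7, the observed emissions growth

def required_intensity_decline(emission_growth, gdp_growth):
    """Combined energy+carbon intensity change needed (% per year)."""
    return emission_growth - gdp_growth

# -15 %/yr emissions with 2.5 %/yr GDP growth -> -17.5 %/yr intensities.
print(required_intensity_decline(-15.0, 2.5))
```

The sketch reproduces both numbers in the text: the last-decade rates sum to roughly the observed 0.7% emissions growth, and a 15% annual cut against 2.5% growth demands a 17.5% annual intensity decline.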
Agricultural land is already converting back to nature in Europe and North America. This can be accelerated, but not by much, and only at the expense of diverse forests, including slow-growing species. Scale is also the problem with bioenergy: cheap biofuel requires large, heavily mechanized monoculture plantations, and the acreage needed to supply the required energy is infeasibly large (Wise et al., 2009). Direct air capture is a proven technology at a small scale (House et al., 2011); both scaling up and the safe storage of large volumes of carbon dioxide are problematic.

The costs of emission reduction
Emission reduction costs money (Weyant, 1993, Clarke et al., 2014). Models agree that a complete decarbonization of the economy can be achieved at a reasonable cost if policies are smart, comprehensive and gradual and if targets are sensible. Models disagree on how much emission reduction would cost; estimates vary by an order of magnitude or more. Riahi et al. (2022) report that the global average carbon tax needed to meet the 1.5°C temperature target ranges between $30/tCO₂ and $1,100/tCO₂ in 2030 and between $110/tCO₂ and $14,000/tCO₂ in 2100. That target would cost somewhere between 0.5% and 6.0% of GDP in 2030, and up to 10% in 2100. Barker et al. (2007) and Clarke et al. (2009) found that the 2°C target is infeasible for physical, technical, economic or political reasons. Modellers have met the political demand for more stringent targets by expanding the options for negative emissions (Clarke et al., 2014, Riahi et al., 2022). As the market for carbon dioxide is typically saturated, negative emissions require a carbon subsidy (and deserve one, as the emissions they offset are a negative externality). Tol (2019a) finds that the central estimate of these subsidies amounts to 4% of world income by the end of the century, with one model putting it at almost 17%. These are global averages: net positive emissions in Asia after 2050 would have to be offset by net negative emissions in Europe and North America. The cost estimates cited above typically assume cost-effective implementation of climate policy. Under ideal conditions, first-best regulation is straightforward: the costs of emission reduction should be equated, at the margin, across all sources of emissions (Baumol and Oates, 1971). Governments routinely violate this principle, with different implicit and even explicit carbon prices for different sectors and for differently sized companies within sectors. Although climate change is a single externality, emitters are often subject to multiple regulations on their greenhouse gas emissions (Boehringer et al., 2008, Boehringer and Rosendahl, 2010). Regulations are often aimed at a poor proxy for emissions (e.g., car ownership) rather than at emissions directly (Proost and Van Dender, 2001), and instrument choice may be suboptimal (Webster et al., 2010). Conditions are not ideal: optimal policy deviates from the principle of equal marginal costs to accommodate market power (Buchanan, 1969), multiple externalities (Ruebbelke, 2003, Parry and Small, 2005), and prior tax distortions (Babiker et al., 2003). Such deviations are subtle and context-specific, and rarely observed in actual policy design. All this makes actual climate policy far more expensive than what is assumed in models. While the above problems are well known to those who study them, there is another issue that has attracted little or no attention. The cost of greenhouse gas emission reduction is typically reported as a drop in GDP.
Although GDP is not a welfare measure, this is fine at first sight, as Gross Domestic Product is theoretically equal to Gross Domestic Income, and income is roughly proportional to consumption, which is in turn closely related to utility. However, climate policy changes not only the size of GDP but also its composition. In particular, stringent temperature targets require removing carbon dioxide from the atmosphere. This is an economic activity and so contributes to GDP. As a defensive expenditure that only serves to prevent welfare loss, however, carbon dioxide removal does not count towards the Indicator of Sustainable Economic Welfare (Nordhaus and Tobin, 1972) or the Environmental Net Product (Hartwick, 1990). The number cited above, 4% of GDP, is based on net negative emissions; assuming a continued use of fossil fuels, gross negative emissions, and so subsidies/defensive expenditures, would be larger. Unfortunately, the AR6 scenario database has yet to be released. The scenario database for the IPCC Special Report on 1.5°C (Rogelj et al., 2018) is available, allowing for a closer inspection of results than the graphical summaries in IPCC reports. Figure 2 shows the efficacy of carbon pricing according to eight different models. Tax efficacy is here defined as the percentage reduction in carbon dioxide emissions in 2030 divided by the assumed carbon price in the 2020s. Tax efficacy varies over two orders of magnitude, from 0.04 %/($/tCO₂) for MESSAGE to 1.15 %/($/tCO₂) for GCAM. Figure 2 also shows tax efficacy as measured by four ex post studies that econometrically estimate the impact of carbon taxes on emissions (Sen and Vollebergh, 2018, Best et al., 2020, Metcalf and Stock, 2020, Rafaty et al., 2020; see Tol, 2022a, for a discussion of ex post studies of a broader set of climate policies). The results are diverse. Three studies find significant effects, but one does not; one study is well in line with six of the eight ex ante studies, one finds a larger tax efficacy, and two find a lower tax efficacy. The comparison between ex ante and ex post estimates is not one-to-one, as the former assume first-best policy implementation; for example, the carbon taxes studied by Metcalf and Stock are imposed on hard-to-abate sectors outside the EU ETS. The two sets of studies agree, however, that current estimates are not exactly firm.

The benefits of emission reduction
Tol (2022b) reviews 39 papers with 61 published estimates of the total economic impact of climate change. Figure 3 shows the histogram of published estimates. Estimates are published for a range of climate change scenarios. Tol (2022b) fits seven alternative impact functions to these estimates and uses the fit as weights in a weighted-average impact function; this model average is used here to scale all estimates to 2.5°C warming. The histogram includes all 61 estimates, but weighted such that each of the 39 papers contributes 1/39 to the total frequency (this weighting is sketched below). Some 60% of estimates show moderate damages, between 0 and 2% of GDP (Pearce et al., 1996, Arent et al., 2014). The central estimate of the welfare change caused by a century of climate change is comparable to the welfare loss caused by losing a year of economic growth. Tol (2022b) finds that these ex ante estimates are not inconsistent with ex post econometric studies of the impact of weather shocks on economic growth, at least for those studies that relate economic growth to temperature change.
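The paper-level weighting described above (each of the 39 papers contributes equally, so an estimate from a paper with k estimates carries weight 1/(39k)) is easy to reproduce. A minimal sketch with invented numbers, not the actual 61 estimates:

```python
# Minimal sketch: weight estimates so each paper contributes equally.
# An estimate from a paper with k estimates gets weight 1/(n_papers * k).
# The (paper, impact) pairs below are invented, not the 61 real estimates.
from collections import Counter

estimates = [  # (paper_id, impact of 2.5 C warming, % of GDP)
    ("A", -1.4), ("A", -0.5), ("B", 0.3),
    ("C", -2.1), ("C", -6.0), ("C", -1.0),
]

n_papers = len({p for p, _ in estimates})
per_paper = Counter(p for p, _ in estimates)

weights = [1.0 / (n_papers * per_paper[p]) for p, _ in estimates]
weighted_mean = sum(w * x for w, (_, x) in zip(weights, estimates))
print(round(sum(weights), 6))   # weights sum to 1
print(round(weighted_mean, 3))  # paper-weighted mean impact
```

This keeps prolific papers from dominating the histogram: paper C contributes the same total weight as single-estimate paper B.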
Econometric studies that relate economic growth to temperature levels show much larger impacts, positive or negative, but suffer from both econometric problems (Newell et al., 2021) and conceptual ones, notably the implication that climate change would have a permanent effect on economic growth, a form of climate determinism that contradicts all empirical evidence. The uncertainty about the central estimates is rather large, however, and benefits cannot be excluded, even for high warming. About 12% of estimates show benefits rather than damages. These benefits are due to reduced costs of heating in winter, reduced cold-related mortality and morbidity, and carbon dioxide fertilization, which makes plants grow faster and more resistant to drought. Negative impacts, such as summer cooling costs, infectious diseases, and sea level rise, dominate the central estimate. The uncertainty about the welfare impact of climate change is not just large, it is also right-skewed. Around 38% of estimates show damages larger than the loss of one year of economic growth. Negative surprises are more likely than positive surprises of similar magnitude. Feedbacks that accelerate climate change are more prevalent than feedbacks that dampen warming, and the impacts of climate change are more than linear in climate change. Figure 3 illustrates this: the most pessimistic estimate is twice as large as the most optimistic one. Estimates are not only uncertain but incomplete too. Some impacts, on violent conflict for example, are omitted altogether because they resist quantification. Other impacts are dropped because they do not fit the method: higher-order impacts in the enumerative method, non-market impacts in computable general equilibrium models. Assumptions about adaptation are stylised, either overly optimistic (rational agents with perfect expectations in markets without distortions) or overly pessimistic (dumb farmers doggedly repeating the actions of their forebears). Valuation of non-market impacts is problematic too, as benefit transfer, the extrapolation of observed (or rather inferred) values to unobserved situations, has proven difficult (Brouwer, 2000) yet is key to predicting how future people would value risks to health and nature. Comparing the sectoral coverage of the various estimates, Tol (2022b) finds an average underestimate of 63%. The benefits of climate policy are the avoided impacts of climate change. The impact function described above predicts economic damages for alternative temperature trajectories, with and without climate policy, or with different intensities of climate policy. The difference between those impact trajectories constitutes the estimate of the benefit of climate policy.

A cost-benefit analysis

Section 3 reviews the costs of greenhouse gas emission reduction, and Section 4 its benefits. I here put the two together in a cursory benefit-cost analysis of the temperature targets of the Paris Agreement. Temperature trajectories are built with the carbon cycle model of Maier-Reimer and Hasselmann (1987) and the climate model of Schneider and Thompson (1981), as parameterized in Tol (2019b); a schematic of this kind of model chain is sketched below. In the baseline scenario, the global annual mean surface air temperature reaches 4.8 °C in 2100. This is hot, probably too hot, and so overestimates the benefits of climate policy. Figure 4 summarizes the key findings. The top (bottom) panel shows the costs and benefits of meeting the 2 °C (1.5 °C) target.
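For intuition, here is a schematic of the kind of reduced-form model chain referred to above: an impulse-response carbon cycle in the spirit of Maier-Reimer and Hasselmann (1987) feeding a simple energy-balance temperature model in the spirit of Schneider and Thompson (1981). The pool fractions, time scales, climate sensitivity, and emissions path below are illustrative placeholders, not the parameterization of Tol (2019b).

```python
import numpy as np

# Schematic reduced-form model chain: an impulse-response carbon cycle feeding
# a one-box energy-balance temperature model. All parameters are illustrative.

YEARS = np.arange(2020, 2101)
emissions = np.linspace(40.0, 60.0, YEARS.size)   # GtCO2/yr, hypothetical baseline

# Fraction of a CO2 pulse remaining airborne after t years (Bern-style fit).
POOLS = np.array([0.217, 0.259, 0.338, 0.186])    # permanent + three decaying pools
TAUS = np.array([np.inf, 172.9, 18.51, 1.186])    # e-folding times in years

def concentration(emissions, c0=410.0, ppm_per_gtco2=1.0 / 7.8):
    """Atmospheric CO2 (ppm) from annual emissions via the impulse response."""
    conc = np.full(emissions.size, c0)
    for i, e in enumerate(emissions):
        t = np.arange(emissions.size - i)
        irf = (POOLS * np.exp(-t[:, None] / TAUS)).sum(axis=1)
        conc[i:] += e * ppm_per_gtco2 * irf
    return conc

def temperature(conc, c_pre=280.0, ecs=3.0, lag=40.0, t0=1.2):
    """One-box relaxation toward the equilibrium warming implied by CO2 forcing."""
    t = np.full(conc.size, t0)                    # deg C above pre-industrial
    for i in range(1, conc.size):
        t_eq = ecs * np.log(conc[i] / c_pre) / np.log(2.0)
        t[i] = t[i - 1] + (t_eq - t[i - 1]) / lag
    return t

warming = temperature(concentration(emissions))
print(f"illustrative warming in 2100: {warming[-1]:.1f} C above pre-industrial")
```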
The costs of the less ambitious target are just below 4% of GDP in 2100, rising to just above 5.5% of GDP for the more ambitious target. This is the average across models and scenarios in the IPCC 1.5 °C Special Report database (Rogelj et al., 2018). The range shown is plus and minus the standard error across models. Recall that these results assume first-best policy implementation. Even simple policy imperfections, such as a failure to equate carbon prices between countries, would readily double the costs of climate policy (e.g., Boehringer et al., 2009). Figure 4 also shows the benefits, here defined as the difference between the SSP5-8.5 scenario and the respective policy scenarios. The baseline scenario is unrealistically hot (Srikrishnan et al., 2022), which strengthens the case for emission reduction. Nevertheless, the benefits of climate policy are smaller than its costs: some 2.8% of GDP for the 2 °C target and about 3.1% for 1.5 °C. The range shown is again plus or minus what may be considered a standard error (see Tol, 2022b, for its derivation). If I instead use the SSP3-7.0 scenario as the baseline, the world would warm by 3.9 °C rather than 4.8 °C by 2100. The benefits of climate policy would then be 1.8% of GDP for the 2 °C target and 2.2% for the 1.5 °C target. The central estimate of the benefits is always smaller than the central estimate of the costs. Ignoring the uncertainty for the moment, the present costs exceed the present benefits regardless of the discount rate; the net present benefits are negative (a minimal discounting sketch follows below). Figure 5 shows the net benefits. It reaffirms that the central estimate of the costs is larger than the central estimate of the benefits: the central estimate is always negative. The confidence interval is rather large, however. From 2070 onwards, net benefits cannot be excluded. Without conducting a formal benefit-cost analysis, this confirms what is known from the literature: stringent climate policy can be justified with a high rate of risk aversion and a low discount rate.

Discussion and conclusion

This paper reviews the costs and benefits of climate policy and assesses the economic justification of the climate targets in the Paris Agreement. Assuming first-best policy implementation and the deployment of negative emission technologies yet to be demonstrated at scale, meeting the 2.0 °C (1.5 °C) target would cost just under 4.0% (over 5.5%) of GDP in the year 2100, with a considerable range of uncertainty. The benefits of these climate policies are smaller, just under (over) 3.0% of GDP in 2100, but the uncertainty about the benefits is considerably larger than the uncertainty about the costs. The central estimate is that the costs exceed the benefits throughout the 21st century, but from 2070 onward net benefits cannot be excluded. Note that the above benefits of climate policy are inflated by the choice of an unrealistically warm baseline scenario, and its costs deflated by the assumption of first-best policy implementation. The Paris climate targets therefore only pass the cost-benefit test if the discount rate is low and the rate of risk aversion high. The main finding is not new; indeed, it has withstood the test of time. Instead of trying to refine cost-benefit analyses of climate policy, research should therefore focus elsewhere. The number of ex post estimates of the costs and efficacy of climate policy is growing rapidly. These cannot replace ex ante studies, but should inform model parameterizations and perhaps encourage retirements too.
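Returning to the discounting point made in the cost-benefit section above, the following sketch shows the net-present-value comparison in miniature. The cost and benefit paths are stylised placeholders loosely shaped like the numbers reported above (costs ramping to ~4% of GDP in 2100, benefits to ~2.8%), not actual model output.

```python
import numpy as np

# Stylised net-present-value comparison in the spirit of Figure 5.

years = np.arange(2025, 2101)
costs = np.linspace(0.5, 4.0, years.size)      # % of GDP, hypothetical path
benefits = np.linspace(0.0, 2.8, years.size)   # % of GDP (avoided damages), hypothetical

def npv(flow, rate, base_year=2025):
    discount = (1.0 + rate) ** -(years - base_year)
    return float((flow * discount).sum())

# Costs exceed benefits in every year here, so the net present benefit is
# negative at any discount rate -- the qualitative point made in the text.
for r in (0.01, 0.03, 0.05):
    print(f"r = {r:.0%}: net present benefit = {npv(benefits - costs, r):+.1f} %GDP-years")
```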
Empirical studies of the impact of climate change are likewise growing rapidly in number. This information needs to be consolidated and absorbed into Integrated Assessment Models. The biggest policy challenge lies in dealing with the inevitable fall-out if the 1.5 °C target is missed, perhaps later this decade, and the 2.0 °C target becomes undeniably infeasible. The environmental movement will have to come to terms with a catastrophe that was foretold but did not materialize. These topics are perhaps better left to political scientists and social psychologists. Besides new and presumably better numbers in cost-benefit analysis, economists should focus on evaluating the many policy initiatives to reduce greenhouse gas emissions and the less numerous attempts to reduce vulnerability to climate change, as well as on the drivers of emissions and vulnerability that have little or nothing to do with climate policy.
2022-09-05T06:44:00.660Z
2022-09-02T00:00:00.000
{ "year": 2022, "sha1": "ab344f6c120055a8a281342130acfb7139a9167f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ab344f6c120055a8a281342130acfb7139a9167f", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
135458821
pes2o/s2orc
v3-fos-license
Antiperiodontitis Effects of Magnolia biondii Extract on Ligature-Induced Periodontitis in Rats

Over the past decades, periodontitis has become a growing health problem and has been linked to various diseases. Many studies have shown that certain extracts and compounds are useful in the prevention and treatment of periodontitis. This study focuses on the inhibition of gingival damage and alveolar bone loss. The aim of this study was to evaluate the protective effects of Magnolia biondii extract (MBE) against ligature-induced periodontitis in rats. A ligature was placed around the molar teeth for 8 weeks, and MBE was administered for 8 weeks. Gingival tissue damage and alveolar bone loss were measured by microcomputed tomography (micro-CT) analysis and histopathological examination. Serum interleukin-1β (IL-1β), tumor necrosis factor-α (TNF-α), cyclooxygenase-2 (COX-2), and receptor activator of nuclear factor–κB ligand (RANKL) levels were investigated using commercial kits to confirm the antiperiodontitis effects of MBE. We confirmed that ligature-induced periodontitis resulted in gingival tissue damage and alveolar bone loss. However, treatment for 8 weeks with MBE protected against periodontal tissue damage and downregulated serum inflammatory cytokines and RANKL levels. These results suggest that MBE exerts antiperiodontitis effects by inhibiting gingival tissue destruction and alveolar bone loss through regulation of inflammatory cytokines in periodontitis-induced rats.

Introduction

Periodontitis is a chronic inflammatory disease that gradually destroys the periodontium, including the gums, cementum, periodontal ligament, and alveolar bone surrounding and supporting the teeth [1,2]. It has been reported that periodontal disease affects 20-50% of the global population and that 47.2% of the US population over 30 years of age suffers from some degree of periodontitis [2,3]. The high prevalence of periodontal diseases has made them a serious worldwide health problem. Periodontal disease also increases the risk of type II diabetes, hypertension, cardiovascular disease, and metabolic syndrome [3,4]. Moreover, people have recently been exposed to an increased risk of oral squamous cell cancer due to low fruit and vegetable intake, lack of vitamin intake, and abuse of alcohol and tobacco, and new cases are reported every year [5]. Also, chronic inflammation causes odontoma, which is considered the most common odontogenic tumor of the oral cavity; it affects tooth tissues such as enamel, dentin, pulp, and cementum [6]. Prevention and proper management of periodontal disease are therefore important. Corticosteroids and NSAIDs have been used to treat periodontal diseases. However, adverse effects have been reported, such as gastrointestinal bleeding, a reduction in platelet function and, with long-term use, a weakened immune defense system. Safe and effective materials without such side effects are therefore needed [7]. Previous studies demonstrated that the main cause of periodontitis is bacterial plaque accumulation [1]. Plaque accumulation causes an inflammatory process and activates the host immune response in periodontal tissue through the secretion of proinflammatory cytokines such as tumor necrosis factor-alpha (TNF-α) and interleukin-1β (IL-1β). These cytokines then stimulate the production of secondary mediators such as cyclooxygenase-2 (COX-2) [8].
These factors have a central role in the destruction of periodontal tissue by causing periodontal pocket formation, connective tissue damage, and alveolar bone resorption [9]. Alveolar bone loss is a typical feature of periodontitis and depends on the balance between osteoclast-mediated bone resorption and osteoblast-mediated bone formation [10]. It has been demonstrated that bone resorption is induced by osteoclasts and is stimulated by the receptor activator of nuclear factor-κB ligand (RANKL) [11]. Magnolia biondii (MB) is a plant species belonging to Magnolia flos (MF) and has traditionally been used to treat nasal congestion with headache, sinusitis, and allergic rhinitis [12,13]. It is listed in the Chinese and Korean Pharmacopoeias and is known to have a wide range of pharmacological properties, including anti-inflammatory, antiallergic, antiproliferative, antifungal, and antimicrobial activities [13,14]. Therefore, we hypothesized that the anti-inflammatory activity of MB extract (MBE) would inhibit gingival destruction and that inhibition of osteoclast differentiation would reduce alveolar bone resorption. To test this hypothesis, we examined the antiperiodontitis effects of MBE by investigating its anti-inflammatory activities in ligature-induced periodontitis in rats.

Sample Preparation

The MB extract (MBE) was provided by Nutrapharmtec (Seongnam, South Korea). The dried MB was extracted with aqueous ethyl alcohol and filtered. The extracts were concentrated in a vacuum evaporator, and the concentrate was sterilized and cooled. The residue was dried, and a powder was obtained. MBE was dissolved in saline and subsequently used for the in vivo study.

Animals

Twenty-five male 6-week-old Crl;CD(SD) rats were purchased from Orient-bio Co. (Seongnam, South Korea). After acclimation for 7 days, healthy animals were selected and used for experimentation. The animals were given free access to food and water. This study was conducted under a 12-h light-dark cycle (8:00 AM to 8:00 PM) at 23 ± 3 °C, 55 ± 15% humidity, and illumination at 150 to 300 lux. The animal experiments were carried out in accordance with the national guidelines for the care and use of laboratory animals and were approved by the Animal Ethics Committee (permission number: KNOTUS-IACUC-17-KE-333) of KNOTUS Inc. (Guri, Korea). We monitored changes in body weight once a week and observed changes in feed and water intake. To improve animal well-being, we provided a sanitary environment to prevent disease and ensured proper breeding and management.

Ligature-Induced Periodontal Disease and Drug Administration

Animals were anesthetized with Zoletil 50 (Virbac, France) and xylazine (Rompun®, Germany) by intraperitoneal injection. After anesthesia, the rats' mouths were kept open to facilitate access to the posterior teeth of the mandible. A 4-0 silk ligature was placed around the right second molar of the mandible for 8 weeks to induce periodontitis. After induction of periodontitis, the rats were divided into five groups: (1) nonligature control + vehicle, (2) ligature control + vehicle, (3) ligature + doxycycline 20 mg/kg, (4) ligature + MBE 100 mg/kg, and (5) ligature + MBE 400 mg/kg. The drugs were dissolved in distilled water and orally administered once a day for 8 weeks. The total volume of daily gavage was 10 mL.

Microcomputed Tomography (Micro-CT) Analysis

All rats were anesthetized, and their mandibular jaws were scanned using a micro-CT (SCANCO Medical, Switzerland) at 8 weeks and 16 weeks.
The cement-enamel junction (CEJ)-alveolar bone crest (ABC) distance and furcation involvement in the periodontitis-induced area were measured on the images to confirm alveolar bone loss and tissue damage. Distances were analyzed using the built-in instrument software. The CEJ-ABC distance was expressed as the mean of the distances from the left and right CEJ to the ABC in the mandibular second molar regions. Furcation involvement was also analyzed on sliced micro-CT images of the second molar regions using the software. The device was set to 70 kV with 114 µA and an integration time of 200 ms per projection. It yielded a series of ~420 consecutive 25 µm slices covering the region from the incisor teeth to the mandible. The images were produced with a voxel size of 25 µm.

Measurement of Gingival Index and Tooth Mobility

Animals were checked weekly for ligation status, gingival bleeding, and the degree of erosion after induction of periodontal disease, according to the following criteria: Score 0, normal gingiva; Score 1, mild inflammation, slight edema, minor change in color, and absence of bleeding on probing; Score 2, moderate inflammation, edema, glazing, redness, and bleeding on probing; and Score 3, severe inflammation, extreme redness, presence of ulcers, edema, and severe bleeding. The Gingival Index was used to assess the degree of inflammation in the gingiva. Tooth mobility was scored according to the following scale: Score 0, no mobility; Score 1, slight mobility (vestibular-palatal); Score 2, severe mobility (vestibular-palatal and mesial-distal); and Score 3, severe mobility (vertical, the tooth moves in and out of the socket).

Histological Analysis and Inflammation Score of Periodontal Tissues

The rats were sacrificed at the end of the experiment at 16 weeks, and the mandibular molars, alveolar bone (AB), and surrounding soft tissues were immediately dissected. The tissues were fixed in 4% formaldehyde (pH 7.5) overnight at 4 °C and then transferred to a decalcifying solution of 0.5 M EDTA-Na (pH 7.5-8.0) for 4 weeks. The tissues were embedded in paraffin, and serial mesiodistal sections (5 µm) were stained with hematoxylin-eosin. Histopathological changes in the stained tissues were observed using an optical microscope (Olympus BX53, Japan). An inflammation scoring system was used to determine periodontal status.

Serum Analysis

After 16 weeks of treatment, rats were anesthetized, and blood was collected. Blood samples were centrifuged at 2000 × g for 15 min at 4 °C for serum collection. The separated serum was stored at −80 °C until analysis. Serum levels of IL-1β, TNF-α (Invitrogen, USA), COX-2 (CUSABIO, USA), and RANKL (LSBio, USA) were determined using commercial kits according to the manufacturers' instructions.

Statistical Analysis

Data are expressed as mean ± standard error and were analyzed with SPSS Statistics 22.0 (SPSS Inc., Chicago, IL, USA). The treatment groups were compared using Student's t-test and one-way analysis of variance followed by multiple comparisons with Dunnett's post hoc test using Origin 7.0 software (Microcal, MA, USA). Differences were considered statistically significant at p < 0.05 and p < 0.01.

Micro-CT Analysis

The periodontitis-induced rats were administered the drugs for eight weeks, and the CEJ-ABC distance and furcation involvement before (week 8) and after (week 16) treatment were compared using micro-CT images. Representative micro-CT images of periodontal tissue for all groups are shown in Figure 1.
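As a minimal sketch of the group comparison described under Statistical Analysis, the snippet below runs a one-way ANOVA followed by Dunnett's test against the ligature control in Python rather than SPSS/Origin (an illustrative substitution; scipy.stats.dunnett requires SciPy 1.11 or later). All values are fabricated placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# One-way ANOVA followed by Dunnett's test against the ligature control,
# mirroring the Statistical Analysis section. Values are fabricated
# placeholders (e.g., CEJ-ABC distances in mm); requires SciPy >= 1.11.

rng = np.random.default_rng(0)
ligature_control = rng.normal(0.098, 0.010, 5)
doxycycline = rng.normal(0.066, 0.010, 5)
mbe_100 = rng.normal(0.085, 0.010, 5)
mbe_400 = rng.normal(0.070, 0.010, 5)

f_stat, p_anova = stats.f_oneway(ligature_control, doxycycline, mbe_100, mbe_400)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

dunnett = stats.dunnett(doxycycline, mbe_100, mbe_400, control=ligature_control)
for name, p in zip(["doxycycline", "MBE 100 mg/kg", "MBE 400 mg/kg"], dunnett.pvalue):
    print(f"{name} vs ligature control: p = {p:.4f}")
```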
As shown in Table 1, the CEJ-ABC distance and furcation involvement in the ligature control group were significantly higher at eight weeks, at 0.098 mm and 0.041 mm, respectively. In the nonligature group, the CEJ-ABC distance had decreased by 0.007 mm, and the furcation involvement had increased by 0.009 mm. We confirmed that the CEJ-ABC distance and furcation involvement in the ligature control group gradually increased over eight weeks and that these were significantly higher than in the nonligature group. However, these levels were dramatically lowered by doxycycline administration, to 0.066 mm and 0.030 mm, respectively (a decrease in CEJ-ABC distance of 0.039 mm and in furcation involvement of 0.024 mm). In addition, treatment with 100 mg/kg and 400 mg/kg MBE reduced the CEJ-ABC distance and furcation involvement in a dose-dependent manner. Administration of 400 mg/kg MBE effectively lowered the CEJ-ABC distance, similar to the effect seen with doxycycline administration.

Effects of MBE on Gingival Index (GI) and Tooth Mobility (TM) Measurement in Periodontitis Rats

The gingival index and tooth mobility were measured in the rats, and the results are shown in Figure 2. The mean Gingival Index in the ligature control group was 2, which was significantly higher than in the nonligature control group. Doxycycline treatment reduced the Gingival Index to 1.2. Treatment with 100 mg/kg MBE also decreased the Gingival Index, and treatment with 400 mg/kg MBE significantly lowered it to 1.4. In the ligature control group, tooth mobility was measured as 2, which was significantly higher than in the nonligature group.
Doxycycline treatment dramatically reduced tooth mobility to 0.6 compared with the ligature control group. MBE treatment produced a statistically significant dose-dependent decrease; the results with 400 mg/kg MBE were similar to those of doxycycline administration.

Effects of MBE on Histological Analysis and Inflammation Score

At the end of the experiment, periodontal tissues were analyzed according to the inflammation score shown in Table 2. The ligature control rats exhibited gingival epithelium erosion and moderate inflammatory cell infiltration. In contrast, periodontal tissues in the nonligature control group showed no lesions. Pathologic analysis of damaged periodontal tissues demonstrated that the administration of doxycycline significantly improved the degree of hyperplasia, inflammation, and periodontal ligament damage in the periodontal epithelium as compared with the ligature control group (Figure 3). Treatment with 100 mg/kg MBE decreased alveolar bone damage and inflammatory erosion and improved periodontal conditions relative to the ligature control group. Treatment with 400 mg/kg MBE diminished ligature-induced bone loss, histological changes, and inflammatory cell infiltration as compared with the ligature control group.

(Figure 3 caption: sections scored according to periodontal status; stained sections examined at 100× magnification by light microscopy (Olympus BX53, Japan); data presented as mean ± SEM; ** p < 0.01 vs the ligature control group; ## p < 0.01 vs the nonligature control group; n = 5/group.)

Effects of MBE on Serum Analysis in Periodontitis Rats

The results of the serum analysis in periodontitis rats are presented in Figure 4.
Serum levels of IL-1β, TNF-α, COX-2, and RANKL were significantly lower in the nonligature rats than in the ligature control rats. Serum IL-1β levels in the doxycycline treatment group were reduced, and rats treated with 400 mg/kg MBE also showed significantly decreased IL-1β levels compared with the ligature control group. Rats treated with 100 and 400 mg/kg MBE showed decreased TNF-α levels compared with the ligature control group. The increased serum COX-2 levels induced by ligation were significantly reduced by doxycycline administration, and MBE treatment at 100 and 400 mg/kg produced a dose-dependent decrease. RANKL concentrations in the groups treated with 100 and 400 mg/kg MBE were significantly reduced compared with the ligature control group. In addition, doxycycline administration significantly reduced the elevated RANKL levels.

Discussion

The present study is the first to evaluate the antiperiodontitis effects of MBE in ligature-induced periodontitis in rats. Ligature placement for eight weeks resulted in severe gingival tissue damage and alveolar bone loss. In this study, we treated the periodontitis rats with MBE and confirmed its protective effects through inhibition of serum cytokines and RANKL. Numerous studies have used the rat ligature-induced periodontitis model to investigate preventive measures for periodontitis. Rat molars are anatomically similar in structure to human teeth, and ligature-induced periodontitis can imitate the progression of human periodontal disease [15]. The model typically exhibits the gingival tissue inflammation and alveolar bone loss seen in human periodontitis [1,16]. Therefore, we administered MBE to periodontitis rats in this study and confirmed its antiperiodontitis effects through regulation of the various inflammatory factors involved. Ligature placement around the teeth imitates the accumulation of plaque and leads to ulceration of the sulcular epithelium, facilitating connective tissue damage [17].
To investigate gingival tissue destruction and alveolar bone loss, we measured the CEJ-ABC distance and furcation involvement using micro-CT analysis. The CEJ-ABC distance is used as a parameter for measuring periodontal breakdown [18]. Furcation involvement is known to be affected by the presence of periodontal disease: a larger value indicates more extensive alveolar bone loss [19]. In this study, we confirmed that ligation around the teeth causes significant gingival tissue damage and alveolar bone loss and increases the CEJ-ABC distance and furcation involvement. Additionally, the GI and TM were measured to assess gingival tissue destruction and alveolar bone loss. The GI was significantly increased by inflammation and edema in the ligature control group, and tooth mobility was also higher due to gingival tissue damage, including alveolar bone loss. However, MBE treatment diminished the gingival index and significantly lowered tooth mobility compared with the ligature control group. These findings demonstrate that MBE administration directly inhibited the progression of periodontitis by reducing gingival tissue destruction and alveolar bone loss. Furthermore, to investigate the antiperiodontitis effects of MBE, we performed histopathological examination and quantified our findings. The ligature control group showed erosion and ulceration of the gingival epithelium and moderate inflammatory cell infiltration [19]. Moreover, it was confirmed that the periodontal ligament and the alveolar bone were destroyed as periodontitis progressed. In the present study, treatment with 400 mg/kg MBE resulted in only mild hyperplasia of the gingival epithelium and inflammation, without alveolar bone loss, when compared with the ligature control group. The accumulation of plaque due to ligation causes gingival tissue inflammation, and the associated immune response is activated. The main cause of periodontitis, bacterial plaque, triggers the production of key cytokines, such as IL-1β and TNF-α, from macrophages. IL-1β, which has a wide range of biological activities, is known to be strongly associated with the Gingival Index and pocket depth, and levels of IL-1β were significantly lower in healthy gingival tissue than in inflamed tissue in periodontitis patients [20]. TNF-α causes tissue destruction and an erosive reaction in periodontitis, and increased TNF-α levels are known to promote cartilage collagen degradation and bone resorption [21]. COX-2 acts as a mediator in inflammatory pathways. The inflammatory response is strongly activated upon release of proinflammatory cytokines such as IL-1β and TNF-α [22,23]. These are also known to play an important role in the initiation and progression of periodontitis and can upregulate RANKL expression in periodontal cells and increase osteoclast formation [11,24]. These cytokines are produced by lymphocytes and stromal/osteoblastic cells and are known to exhibit high activity in humans with periodontal disease. In addition, RANKL is essential for osteoclast precursor differentiation and plays an important role in periodontal bone resorption [11,25]. In this study, ligation-induced rats showed increased serum levels of IL-1β, TNF-α, COX-2, and RANKL. By contrast, MBE treatment decreased these serum indicators as compared with the ligature control group.
These results suggest that MBE alleviates the progression of periodontitis by reducing gingival tissue destruction and alveolar bone resorption factors; the relevant mechanism of action of MBE in this study is shown in Figure 5.

Conclusions

Periodontitis caused by ligation with Porphyromonas gingivalis is triggered by the inflammatory cytokine pathway, which destroys gingival tissue. In addition, cytokines activate osteoclasts to promote alveolar bone resorption, and the resulting alveolar bone destruction causes tooth loss. Our results indicate that MBE treatment decreased the CEJ-ABC distance and furcation involvement and also reduced the gingival index and tooth mobility in periodontitis-induced rats. Histopathological examination also confirmed that MBE administration reduced the level of inflammation in periodontal tissue. Furthermore, MBE significantly inhibited gingival destruction and alveolar bone resorption, as evidenced by decreased serum levels of inflammatory cytokines and RANKL. However, this study is limited in that it used an in vivo model, and further experiments on the mechanism in periodontal tissue are needed. We will confirm the efficacy by identifying the related mechanism and conducting further studies in patients with periodontitis. Such confirmation would suggest that MBE could be developed as a health functional food and medicine for preventing or treating periodontal disease.
2019-04-28T13:03:21.086Z
2019-04-01T00:00:00.000
{ "year": 2019, "sha1": "5aedad84b1a31ffa163e463e6a38e8858b3d3e00", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/11/4/934/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5aedad84b1a31ffa163e463e6a38e8858b3d3e00", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1326836
pes2o/s2orc
v3-fos-license
Recurrence and survival after pathologic complete response to preoperative therapy followed by surgery for gastric or gastrooesophageal adenocarcinoma

Background: To characterise recurrence patterns and survival following pathologic complete response (pCR) in patients who received preoperative therapy for localised gastric or gastrooesophageal junction (GEJ) adenocarcinoma. Methods: A retrospective review of a prospective database identified patients with pCR after preoperative chemotherapy for gastric or preoperative chemoradiation for GEJ (Siewert II/III) adenocarcinoma. Recurrence patterns, overall survival, recurrence-free survival, and disease-specific survival were analysed. Results: From 1985 to 2009, 714 patients received preoperative therapy for localised gastric/GEJ adenocarcinoma, and 609 (85%) underwent a subsequent R0 resection. There were 60 patients (8.4%) with a pCR. Median follow-up was 46 months. Recurrence at 5 years was significantly lower for pCR vs non-pCR patients (27% and 51%, respectively, P = 0.01). The probability of recurrence for patients with pCR was similar to that of non-pCR patients with pathologic stage I or II disease. Although the overall pattern of local/regional (LR) vs distant recurrence was comparable (43% LR vs 57% distant) between the pCR and non-pCR groups, there was a significantly higher incidence of central nervous system (CNS) first recurrences in pCR patients (36 vs 4%, P = 0.01). Conclusion: Patients with gastric or GEJ adenocarcinoma who achieve a pCR following preoperative therapy still have a significant risk of recurrence and cancer-specific death following resection. One third of the recurrences in the pCR group were symptomatic CNS recurrences. Increased awareness of the risk of CNS metastases and selective brain imaging in patients who achieve a pCR following preoperative therapy for gastric/GEJ adenocarcinoma is warranted.

Together, gastric and oesophageal adenocarcinoma are the second most common malignancies of the gastrointestinal tract in the United States and worldwide (Kamangar et al, 2006; Jemal et al, 2010). The US age-adjusted incidence and mortality of gastric/gastrooesophageal junction (GEJ) adenocarcinoma are 7.3 and 5.04 per 100 000 persons, respectively, and the incidence of gastric cancer is rising in the United States in the young age bracket (Anderson et al, 2010; Shah and Ajani, 2010). The majority of patients presenting with resectable gastric/GEJ adenocarcinoma will have locally advanced disease (defined as penetration of the subserosa by the primary tumour (T3), regional nodal involvement (N+), or both (Edge et al, 2010)), for which the chance of cure with surgery alone is poor (Hundahl et al, 2000; Wu et al, 2009; Anderson et al, 2010). For these locally advanced cases, several random assignment studies have established additional therapy as the standard of care, including perioperative chemotherapy with or without radiation therapy (RT) (Cunningham et al, 2006; Stahl et al, 2009) or postoperative chemoradiation (Macdonald et al, 2001), highlighting the importance of multidisciplinary care for these patients.
It is well established that pathologic complete response (pCR) following preoperative therapy is associated with improved survival in several malignancies, including breast adenocarcinoma after preoperative chemotherapy ± RT (Wolmark et al, 2001; Symmans et al, 2007; Adams et al, 2010; Chavez-Macgregor et al, 2010), oesophageal cancer after preoperative chemoradiotherapy (Berger et al, 2005; Rohatgi et al, 2005; Chao et al, 2009; Donahue et al, 2009; Park et al, 2010), lung cancer after preoperative chemoradiotherapy (Mamon et al, 2005; Chen et al, 2007), and rectal cancer after preoperative chemoradiotherapy (Maas et al, 2010). Notably, the 15-27% rate of pCR after chemoradiotherapy in rectal cancer (Pucciarelli et al, 2004) has led some groups to omit surgery and undertake intensive follow-up in select patients who achieve a clinical complete response with no detectable residual tumour after chemoradiotherapy (Habr-Gama et al, 2004). The timing and pattern of recurrence and overall patient survival in patients with gastric/GEJ adenocarcinoma achieving a pCR after preoperative multimodality treatment are not well characterised, with the current description limited to a single series of 24 patients with gastric cancer who achieved a pCR following chemoradiation (Reed et al, 2008). Herein, we report our experience in patients with gastric/GEJ adenocarcinoma who received preoperative chemotherapy ± RT followed by complete (R0) resection and achieved a pCR. We compare their survival and recurrence to similarly treated patients who did not achieve a pCR.

Patients and pretreatment evaluation

Patients with gastric/GEJ adenocarcinoma who received preoperative therapy were identified from a prospective surgical database at Memorial Sloan-Kettering Cancer Center (MSKCC) covering 1985 to 2009. The MSKCC Institutional Review Board approved the study design. Patients with a diagnosis of distal oesophageal carcinoma (Siewert I) were excluded. Pretreatment evaluation usually consisted of a computed tomography (CT) scan of the abdomen and pelvis, endoscopic ultrasound (EUS), diagnostic laparoscopy with cytologic washings, and selective use of positron emission tomography (PET). We collected patient demographics including age, gender, race, and body mass index; preoperative tumour characteristics including tumour location, pretreatment EUS T-stage, and tumour histology; and preoperative chemotherapy regimen and use of RT. Preoperative radiation was delivered as multifield, external-beam megavoltage radiation using high-energy linear accelerators (6 or 15 MV). Treatment generally included five daily fractions of 1.8 Gy per week over a 5.5-week course, with a total radiation dose of 50.4 Gy. The superior field border extended ~5 cm cranial to the tumour, and the inferior border extended caudally to include the coeliac lymph node (LN) region. The anterior, posterior, and lateral field borders were ~2 cm beyond the tumour, as defined by pretreatment imaging. The locoregional LNs were included in the radiation field. After preoperative treatment, patients underwent gastrectomy or oesophagogastrectomy with two-field (for oesophagogastrectomy) or D2 (for gastrectomy) lymphadenectomy and splenic preservation whenever possible. A curative (R0) resection was defined as the removal of all visible disease and associated nodal basins with negative microscopic surgical margins on final pathologic review.
Pathologic staging is reported according to the American Joint Committee on Cancer staging guidelines (7th edn) for gastric adenocarcinoma (Edge et al, 2010). Surgical treatment characteristics collected included operative and pathologic details, including extent of gastrectomy/oesophagogastrectomy, type of LN dissection, and resected specimen pathologic analysis (T-stage, N-stage, number of LNs examined, and pathologic treatment effect). Follow-up laboratory and imaging studies and additional postoperative treatment were at the discretion of the treating physician(s) or as directed by a patient-enrolled protocol (Kelsen et al, 1992, 1997; Bains et al, 2002; Brenner et al, 2004, 2006; Anderson et al, 2007; Schwartz et al, 2009). Patients were generally followed every 2-3 months for the first 2 years, and every 6-12 months thereafter. Recurrence was confirmed radiographically and/or pathologically and described as local/regional (including peritoneal) or distant. Date of recurrence was defined as the first notation in the medical record indicating the recurrence. Disease status at last follow-up and cause of death were determined from the medical record, death certificates, and follow-up correspondence.

Pathology

Pathologic complete response was defined as fibrosis or fibroinflammation within an entirely submitted and evaluated gross lesion without microscopic evidence of carcinoma, and histologically negative nodes. Non-pCR was defined as any evidence of viable carcinoma, either at the primary site or in the resected regional LNs. The pathologic stage of residual carcinoma in the non-pCR group was based on the deepest focus of viable malignant epithelium in the gastric and oesophageal wall and/or any carcinoma found in the LN analysis. Pathologic treatment effect was analysed and quantified on a graded, per cent scale as previously described (Mansour et al, 2007). Positive LNs were defined as the presence of any viable tumour cells within LNs.

Statistical analysis

Statistical analysis was performed using the R package, version 2.10 (http://www.r-project.org). Patient, tumour, and treatment variables were compared between the pCR and non-pCR groups using the χ2 test and the Wilcoxon rank-sum test with continuity correction for categorical and continuous variables, respectively. Recurrence-free survival (RFS) was compared between the pCR and non-pCR groups using the log-rank test. Recurrence location was compared using the χ2 test. Kaplan-Meier methods were used to estimate overall survival (OS), disease-specific survival (DSS), and RFS probability in the pCR and non-pCR groups, compared using the log-rank test (Kaplan and Meier, 1958); a schematic example is given below. Death without a recurrence was considered a competing cause of failure (Prentice et al, 1978; Satagopan et al, 2004). Estimated cumulative incidence of recurrence was computed using the subdistribution method and compared using Gray's test (Gray, 1988).

RESULTS

From 1985 to 2009, 2676 patients underwent surgical treatment for gastric or GEJ (Siewert II/III) adenocarcinoma at MSKCC. In all, 714 of these patients (27%) received preoperative chemotherapy ± RT. One hundred and five patients (15%) had either positive surgical margins after resection (64 patients, 9%) or presence of metastatic disease at surgical exploration/resection (41 patients, 6%) and were excluded from subsequent analysis.
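As a sketch of the survival comparison described in the Statistical analysis section, the snippet below fits a Kaplan-Meier curve and runs a log-rank test using the Python lifelines package rather than R (an illustrative substitution). The follow-up times and event indicators are simulated placeholders, not patient data, and Gray's test for competing risks is not reproduced here.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Kaplan-Meier estimate and log-rank comparison of recurrence-free survival,
# mirroring the Statistical analysis section. Times (months) and event flags
# are simulated placeholders, not patient data.

rng = np.random.default_rng(1)
t_pcr, e_pcr = rng.exponential(120, 60), rng.random(60) < 0.4
t_non, e_non = rng.exponential(60, 549), rng.random(549) < 0.6

km = KaplanMeierFitter()
km.fit(t_pcr, event_observed=e_pcr, label="pCR")
print(f"pCR group, estimated RFS at 60 months: {km.predict(60.0):.2f}")

res = logrank_test(t_pcr, t_non, event_observed_A=e_pcr, event_observed_B=e_non)
print(f"log-rank p-value: {res.p_value:.4f}")
```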
The final study population was 609 patients with gastric/GEJ adenocarcinoma treated with preoperative therapy (280 (46%) chemoradiotherapy, 329 (54%) chemotherapy alone) followed by complete (R0) resection. Sixty patients (8.4% of all preoperative treatment patients; 10% of preoperative treatment patients who underwent R0 resection) demonstrated no residual tumour on final pathology and are defined as the pCR group; the remainder (n = 549, 90% of all preoperative treatment patients who underwent R0 resection) had residual evidence of malignancy and are defined as the non-pCR group (Figure 1). Table 1 lists the characteristics of the pCR and non-pCR patients. There were no differences between the pCR and non-pCR groups with respect to patient age, race, pretreatment EUS T-stage, or histology (Lauren classification or differentiation). Most patients had advanced T-stage tumours (≥80% T3 for both pCR and non-pCR patients). Forty-seven patients who received chemoradiation (17%) achieved a pCR, and 12 (26%) of these patients recurred. Thirteen patients who received chemotherapy alone (4%) achieved a pCR, and 2 (17%) of these patients recurred. Patients who received taxane-based therapy more commonly also received concurrent radiotherapy and therefore more commonly achieved a pCR. Table 2 lists the pathologic T- and N-stages and extent of surgical nodal resection. There were no differences in the extent of LN dissection (D1, D2, or D3) between the pCR and non-pCR groups. The non-pCR patients had a mean of 3.0 (range 1-8) positive LNs and a mean pathologic treatment effect of 45% (range 10-95). The pCR group had a higher mean number of LNs examined (29 vs 23, P = 0.04). Median follow-up for all surviving patients was 46 months (interquartile range = 16-90). Of the 549 patients with a non-pCR, 153 (28%) received postoperative (adjuvant) chemotherapy, and of the 60 patients with a pCR, one patient received additional postoperative chemotherapy. Overall survival, DSS, and RFS were significantly greater in the pCR group than in the non-pCR group (Figure 2). The timing and pattern of recurrences are summarised in Table 3. For patients achieving a pCR, there was no difference in recurrence between patients who received chemoradiotherapy vs chemotherapy alone (26% and 15%, respectively, P = 0.2). While the non-pCR group had a higher risk of recurrence at 1, 3, and 5 years (5-year recurrence = 27 vs 51% for pCR and non-pCR, respectively, P = 0.02), the pattern of recurrence was similar. There was no difference in the distribution of local/regional vs distant recurrences (43% vs 57%, respectively) between the pCR and non-pCR groups. However, we did observe a significantly higher incidence of first recurrences in the central nervous system (CNS) in pCR (36%) compared with non-pCR (4%) patients (P = 0.01). All of the patients with a CNS recurrence in the pCR group presented symptomatically (four with seizures and one with localising neurologic symptoms), and similarly 8 of the 10 patients (4%) in the non-pCR group who initially recurred in the CNS presented symptomatically (seven with seizures and one with localising neurologic symptoms). Figure 3 summarises the probability of recurrence by final pathologic stage, treating death from other causes as a competing risk. When compared with pCR patients, the probability of recurrence is significantly higher only for pathologically stage III (pIII) non-pCR patients (5-year cumulative incidence of recurrence = 74 vs 27%, P < 0.001).
Among the pIII patients, although the majority were stage III by virtue of residual nodal involvement, the three node-negative (i.e., T4N0) pIII patients also had a high risk of recurrence, each developing recurrence within 1 year of resection. There is no significant difference in the probability of recurrence between pCR patients and stage pI or stage pII non-pCR patients (5-year cumulative incidence of recurrence = 39% and 25%, respectively; P = 0.49 for pCR vs non-pCR stage I and P = 0.36 for pCR vs non-pCR stage II). Table 4 provides clinical characteristics of those patients with pCR who developed recurrence (n = 14, 23%). Five patients (36% of pCR recurrences; 8% of all pCR patients) developed CNS recurrence as their first site of recurrence, with a mean time to recurrence of 12.6 ± 7.7 months (range 5-24 months). Treatment after CNS recurrence consisted of whole-brain RT in two patients, surgery (craniotomy) in one patient, and no treatment in two patients. All five pCR patients with CNS recurrences died of their disease, with a mean time from recurrence to death of 9.6 months (range = 2-26 months). Of note, 6 of the 14 pCR patients who recurred (43%) had local/regional recurrence (anastomotic or regional nodal), all of whom received preoperative chemoradiation.

DISCUSSION

In the past 10 years, results of several randomised controlled trials have established multimodality therapy as the standard of care for locally advanced gastric/GEJ adenocarcinoma (Macdonald et al, 2001; Cunningham et al, 2006; Stahl et al, 2009; Schuhmacher et al, 2010). Reflective of the multidisciplinary approach to locally advanced gastric and GEJ adenocarcinoma, we describe the outcome of patients who received preoperative chemotherapy or chemoradiation followed by complete surgical resection and who achieved a pCR to preoperative therapy. We found a 10% pCR rate in patients with gastric/GEJ adenocarcinoma treated with preoperative chemotherapy ± RT followed by R0 resection (17% with prior chemoradiation and 4% with chemotherapy alone). Importantly, despite achieving a pCR with preoperative therapy and independent of the type of therapy, the risk of recurrence remains significant: indistinguishable from that of patients who were downstaged to pathologic stage I or II following preoperative therapy. There is a substantial rate of CNS first recurrences (8% of all pCR patients and 36% of the pCR patients who developed a recurrence) in this cohort of patients, with each CNS recurrence presenting with life-threatening neurologic symptoms. The biology of tumours that completely regress with preoperative therapy is likely to be distinct from that of tumours that did not achieve a pCR (Ajani, 2005; Berger et al, 2005) and is reflected in RFS and OS. As demonstrated in other malignancies (Wolmark et al, 2001; Berger et al, 2005; Mamon et al, 2005; Rohatgi et al, 2005; Chen et al, 2007; Chao et al, 2009; Donahue et al, 2009; Adams et al, 2010; Maas et al, 2010; Park et al, 2010), patients with gastric/GEJ adenocarcinoma who achieve a pCR following preoperative therapy have significant improvements in 5-year OS (60 vs 35%), DSS (67 vs 43%), and RFS (69 vs 45%) when compared with the group who did not achieve a pCR. However, despite achieving a pCR, we noted a significant risk of recurrence in this cohort of patients.
Specifically, as shown in Figure 3, there are no differences in the probability of recurrence between the pCR and post-treatment stage I and II patients. The distribution of local/regional (43%) vs distant recurrence (57%) in the pCR and non-pCR groups is identical. However, there is a significantly higher rate of CNS first recurrences in the pCR (36%) compared with the non-pCR (4%) cohort. The increased risk of developing CNS metastases in patients achieving a pCR is likely due to the diminished CNS penetration of all of the chemotherapeutic agents used in the treatment of gastric/GEJ cancer (Chabner and Longo, 2011). Non-pCR patients, in contrast, are more likely to have persistent micrometastatic disease in the systemic circulation, and are therefore more likely to have a non-CNS site of first recurrence. Although CNS recurrences may be more prevalent in patients with prolonged survival, we would highlight that in our cohort, three of the five CNS recurrences in the pCR group developed early (i.e., <13 months) in the postoperative period, making our findings more noteworthy. It is well established that CNS metastases occur in ~50% of patients with locally advanced non-small-cell lung cancer (NSCLC) (Mamon et al, 2005). In patients with NSCLC treated with preoperative chemoradiation who have a pCR at the time of resection, there remains a 43% rate of CNS metastases as the site of first failure, representing 71% of all isolated recurrences (Chen et al, 2007). This observation has led to the use of prophylactic cranial irradiation in patients with stage III NSCLC treated with preoperative chemoradiotherapy and curative surgery, a strategy that has significantly reduced the risk of CNS metastases (18.0 vs 7.7%, unadjusted odds ratio = 2.52, P = 0.004). However, this strategy has not improved OS or DFS (Gore et al, 2011). The rate of CNS recurrence in NSCLC is substantially higher than the 8% rate of CNS as the site of first failure in patients with gastric/GEJ adenocarcinoma who achieved a pCR following preoperative therapy, suggesting a limited value of prophylactic whole-brain radiation in this select population. Interestingly, patients with a pCR had higher numbers of LNs examined in the pathologic specimen when compared with non-pCR patients (29 vs 23). In rectal cancer, it has been suggested that interactions between tumour and host immune cells may differ between pCR and non-pCR tumours (Ogino et al, 2010). Increased LN count, and in particular increased negative LN count, has been found to be associated with increased survival in colorectal cancer (Chang et al, 2007). Patients who achieve a pCR may elicit a stronger immune response, resulting in more numerous and larger regional LNs, suggesting a possible biologic/immunologic difference in the host response to these tumours (Johnson et al, 2006; Ogino et al, 2010). The low frequency of pCR and the varied overall histologic response rates to preoperative therapy highlight the importance of ongoing research to identify response to therapy early. We and others have examined FDG-PET/CT for this purpose (Lordick et al, 2007; Shah, 2007; Ott et al, 2008; Wieder and Weber, 2009). A presently accruing study at our institution is examining the ability of FDG-PET to discriminate responders from non-responders to preoperative chemotherapy for locally advanced gastric cancer and to salvage non-responding patients with alternate chemotherapy (Shah, 2011).
This retrospective evaluation reflects the current multidisciplinary approach to patients with gastric cancer, in which proximal gastric tumours (Siewert type II or III) may receive either chemotherapy alone or combined-modality chemoradiation before surgical resection. Our data are not intended to compare and contrast the merits of these two distinct treatment approaches, but rather are focused on the risk and pattern of recurrence in patients who achieve a significant and complete pathologic response to preoperative therapy. Notably, we did not observe a difference in recurrence rate between those receiving chemoradiotherapy (26%) and those receiving chemotherapy alone (15%). This may, in part, be due to the low overall rate of pCR to chemotherapy alone (4%), corresponding to our limited statistical power to compare these two groups.

(Table 3: timing and patterns of recurrence in patients undergoing preoperative chemotherapy ± radiation therapy for gastric and gastrooesophageal junction adenocarcinoma, followed by R0 resection. Figure 3: cumulative incidences and probabilities of recurrence by stage, treating death from other causes as a competing risk, in the same population. Abbreviations: pCR = pathologic complete response; non-pCR = non-pathologic complete response.)

We acknowledge that our observations are based on a small number of total events (i.e., 14 recurrences in 60 pCR patients, with 5 CNS recurrences). However, to our knowledge this represents the largest reported series describing patients with a pCR after preoperative treatment and surgical resection for gastric/GEJ adenocarcinoma. All of the patients with CNS metastases presented with symptomatic seizures or neurologic symptoms. Early detection of brain metastases may identify these patients before they experience seizures or symptoms and allow for early treatment (stereotactic RT and/or surgery). These data support an increased awareness of the risk of the CNS as the first site of recurrence in this cohort of patients. Considering that all CNS metastases developed within 2 years of follow-up (range 5-24 months), selective surveillance brain imaging (contrast-enhanced CT or MRI) to identify CNS disease before the onset of symptoms during the first 2 years of follow-up would be reasonable. Additionally, we noted that despite achieving a pCR, there was a 43% incidence of local/regional recurrence. Four patients (7% of all pCR patients and 29% of all pCR recurrences) developed a local recurrence as the site of first recurrence. Thus, pCR does not obviate the need for continued local-regional surveillance of this patient cohort. In summary, pCR following preoperative chemotherapy ± RT and surgical resection for gastric/GEJ adenocarcinoma occurs in a minority of patients. When compared with non-pCR patients, a pCR results in improved survival; however, there remains a significant rate of recurrence. Patients who achieve a pCR after preoperative therapy have a similar risk of recurrence to those with post-treatment pathological stage I and II tumours. In addition, there is a significantly higher incidence of symptomatic CNS first recurrences in pCR patients.
These findings have important clinical implications: care providers should be cognizant of the risk of symptomatic CNS recurrences in this select cohort of patients and should consider selective brain imaging for early identification and treatment of CNS metastases.
2017-11-08T17:26:24.310Z
2011-05-24T00:00:00.000
{ "year": 2011, "sha1": "dcdef54f87c9a1f2de3ca4a7cb0db0010fe594c5", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/bjc2011175.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "dcdef54f87c9a1f2de3ca4a7cb0db0010fe594c5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245854245
pes2o/s2orc
v3-fos-license
Comparison of Acquired Activated Protein C Resistance, Using the CAT and ST-Genesia® Analysers and Three Thrombin Generation Methods, in APS and SLE Patients Background: Acquired activated protein C resistance (APCr) has been identified in antiphospholipid syndrome (APS) and systemic lupus erythematosus (SLE). Objective: To assess agreement between the ST-Genesia® and CAT analysers in identifying APCr prevalence in APS/SLE patients, using three thrombin generation (TG) methods. Methods: APCr was assessed with the ST-Genesia® using STG-ThromboScreen and with the CAT using recombinant human activated protein C and Protac® in 105 APS patients, 53 SLE patients and 36 thrombotic controls. Agreement was expressed in % and by Cohen's kappa coefficient. Results: APCr values were consistently lower with the ST-Genesia® compared to the CAT, using either method, in both APS and SLE patients. Agreement between the two analysers in identifying patients with APCr was poor in APS (≤65.9%, κ ≤ 0.20) and fair in SLE (≤68.5%, κ ≥ 0.29), regardless of TG method; no agreement was observed in thrombotic controls. APCr with both the ST-Genesia® and the CAT using Protac®, but not the CAT using rhAPC, was significantly greater in triple antiphospholipid antibody (aPL) APS patients compared to double/single aPL patients (p < 0.04) and in thrombotic SLE patients compared to non-thrombotic SLE patients (p < 0.05). Notably, the ST-Genesia®, unlike the CAT with either method, identified significantly greater APCr in pregnancy morbidity APS patients (median % inhibition 36.9%, 95% CI 21.9–49.0%) compared to thrombotic APS patients (45.7%, 39.6–55.5%) (p = 0.03). Conclusion: Despite the broadly similar methodology used by the CAT and the ST-Genesia®, agreement in APCr was poor/fair, and results are not interchangeable. This may reflect differences in the TG method, the use of different reagents, and analyser data handling. Introduction The anticoagulant protein C pathway plays a central role in the regulation of coagulation and in the active cross-talk between the inflammation and coagulation systems. The physiological proteolytic activation of protein C by thrombin occurs on the endothelial surface and involves two membrane receptors, thrombomodulin (TM) and the endothelial protein C receptor. The binding of thrombin to TM shields the procoagulant exosite I of thrombin and facilitates protein C activation [1]. Activated protein C (APC) exerts its anticoagulant effects by proteolytic inactivation of factor Va and factor VIIIa, with protein S acting as a cofactor in these reactions [1]. In addition to its anticoagulant properties, activated protein C also exerts cytoprotective and anti-inflammatory effects, including inhibition of leukocyte chemotaxis, reduction in expression of pro-inflammatory cytokines, and expression of adhesion molecules [2]. Resistance to the anticoagulant actions of activated protein C (referred to as activated protein C resistance, APCr), either heritable (e.g., caused by factor V Leiden) [3,4] or acquired, has been shown to be associated with hypercoagulability and an increased risk of thromboembolic events [5][6][7]. Acquired APCr was shown to be associated with thrombosis in antiphospholipid antibody (aPL) positive patients when assessed using the thrombin generation (TG) system, which provides a global assessment of coagulation function; measuring TG in the presence and absence of APC enables assessment of the function of the protein C system [8].
Using the calibrated automated thrombogram (CAT), APCr to both exogenous APC and to activation of endogenous plasma protein C using Protac® was shown to be greater and associated with a more severe thrombotic phenotype in antiphospholipid syndrome (APS) patients with previous venous thromboembolism [9]. Similarly, increased APCr, using the same system and independently of criteria aPL, was also demonstrated in systemic lupus erythematosus (SLE) patients [10]. In spite of the numerous attempts to standardise tests with the CAT system, the methodology is characterised by high inter- and often intra-laboratory variation, a lack of standardisation and clinical validation, and an absence of quality controls [11][12][13][14][15][16]. More recently, it was demonstrated that with the use of standardised methodology, commercial reference plasma and quality controls for validation of each run of measurement, the ETP-based APCr assay can be reproducible, sensitive and validated with excellent inter-experiment precision [17]. The ST-Genesia® (Stago, France) is a new, fully automated TG analyser with customised reagents sensitive to procoagulant and anticoagulant protein deficiencies. In comparison to the CAT system, it offers normalisation of each TG parameter to a reference plasma for each test performed and has been designed to offer enhanced reproducibility and standardisation with the use of dedicated calibrators and controls, with the aim of reducing inter-laboratory and inter-assay variability [18][19][20][21]. However, the calibration used to obtain thrombin concentration from the fluorescent signal differs between the two TG systems. The aims of this prospective cross-sectional study were (a) to evaluate the prevalence of APCr in APS patients and SLE patients compared to non-APS/SLE thrombotic controls using the ST-Genesia® system in the presence/absence of TM, and (b) to compare APCr with the CAT system (using recombinant human APC (rhAPC) and Protac®) to establish whether both systems can detect the APCr observed in APS and SLE patients. Patients and Samples All patients recruited in this study fulfilled the relevant international disease classification criteria for either APS [22] or SLE [23]. Disease activity using the British Isles Lupus Assessment Group (BILAG)-2004 index [24] and the SLE disease activity index-2000 (SLEDAI-2K) [25] was recorded for all patients with SLE. BILAG categories were converted into numbers according to the 2010 coding scheme [26]. Patients with APS and thrombotic controls had been receiving warfarin anticoagulation for at least three months since the thromboembolic event prior to being recruited. In this cross-sectional study, we tested the following patients for APCr: 105 APS patients with no other autoimmune diseases (83 with thrombosis, venous and/or arterial, and 23 with only pregnancy morbidity, PM), 53 SLE patients (16 with APS and 37 with no thrombosis), and 36 non-APS thrombotic controls. Seventy-five healthy normal controls were also recruited and used to establish cut-off values for the assays. APCr with the CAT system was previously reported for 30 thrombotic APS patients and 20 non-thrombotic controls [9] and for 53 SLE patients [10]. Written informed consent was obtained from all subjects in accordance with the Declaration of Helsinki. Ethical approval was granted by the Research Ethics Committee NREC (reference: 13/EM/0150) and from the Research and Development office at UCLH (reference: 13/0030).
Patients (APS, SLE and thrombotic controls) were excluded if they had heritable thrombophilia (factor V Leiden or the G20210A prothrombin gene mutation, or antithrombin, protein S or protein C deficiency), a history of malignancy or myeloproliferative neoplasms. Patients and NC were also excluded if they were receiving estrogen preparations (combined oral contraceptives or hormone replacement therapy) or were pregnant. All samples were collected between 2017 and 2020 and were stored for a maximum of three years prior to their use. Clinical data were collected from medical records and included demographics, general disease characteristics over time, history of thrombotic events and medication. Antiphospholipid antibodies had been routinely assessed in the hospital laboratory, with diagnostic procedures and assessment of aPL profile and status at the time of sampling performed in accordance with international consensus criteria and national guidelines [22,27,28]. A positive aPL profile was defined as the presence of at least one aPL type, confirmed by repeat assessment at least 12 weeks apart, with antibody levels (β2 glycoprotein I antibodies and cardiolipin antibodies) exceeding the 99th percentile of the laboratory reference range and/or a positive lupus anticoagulant [22]. Venous blood was collected using a 21-gauge butterfly needle, with minimal venous stasis, into 5 mL Vacutainer® tubes (Becton Dickinson, Plymouth, UK) containing 0.105 M citrate. Platelet poor plasma was prepared within two hours of collection by double centrifugation at ambient temperature (2000× g for 15 min) and stored in aliquots at −80 °C. Immediately prior to analysis, the samples were thawed in a water bath at 37 °C. APCr using the ST-Genesia® analyser: TG was investigated according to the manufacturer's recommendations using the STG-ThromboScreen reagent, in the absence (−) and presence (+) of TM (Stago, Asnières sur Seine, France). The reagent contains a mixture of phospholipids and human TF at a medium picomolar concentration, referred to below as an "intermediate picomolar TF concentration" (the concentration is not disclosed by the manufacturer). Each batch of both reagents is adjusted by the manufacturer to obtain the desired TG profile (manufacturer's undisclosed data). The TM concentration in the reagent is sufficient to inhibit 50% of the ETP obtained in normal pooled plasma in the absence of TM (the final TM concentration is not disclosed by Stago). The assay contained three levels of quality control (low, normal and high TM resistance) and a reference plasma for parameter normalisation. For both reagents, TG was triggered by the CaCl2 contained in a combined reagent with the fluorogenic substrate. Using pooled normal plasma, the intra-assay coefficients of variation for the ST-Genesia® system in the presence of TM were 1.1%, 0.9% and 0.9%, and the inter-assay coefficients of variation were 2.1%, 3.0% and 4.9%, for lag time, ETP and peak thrombin respectively. APCr using the CAT analyser: Resistance to exogenous APC was determined using 5 nM recombinant human (rh) APC, and resistance to activation of endogenous protein C using 0.1 units/mL Protac®, an enzyme that converts protein C into APC (Pentapharm AG, Basle, Switzerland), with the CAT machine and the PPP reagent (5 pM tissue factor) as previously described [9,10].
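Because the ETP is defined operationally as the area under the thrombin generation curve, a minimal sketch of the computation is given below; the sampling interval and the toy curve are invented for illustration, and this is not the proprietary routine of either analyser.

```python
import math

def endogenous_thrombin_potential(thrombin_nM, dt_min):
    """Approximate the ETP (nM*min) as the area under a sampled thrombin
    concentration-time curve, using the trapezoidal rule."""
    return sum(0.5 * (a + b) * dt_min
               for a, b in zip(thrombin_nM, thrombin_nM[1:]))

# Toy thrombin generation curve: lag, burst, decay (all values invented).
dt = 0.5                                                  # minutes per sample
curve = [300 * math.exp(-((i * dt - 12) / 5) ** 2) for i in range(120)]
print(f"ETP ~ {endogenous_thrombin_potential(curve, dt):.0f} nM*min")
```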
APCr assays using the CAT and the ST-Genesia® systems in samples from patients on warfarin anticoagulation were performed by mixing patient plasma 50:50 with pooled normal plasma to correct any factor deficiencies induced by anticoagulation, as described by us and others [9,29,30]. APCr was expressed as % inhibition of the endogenous thrombin potential (ETP, in nmol/L·min: the area under the thrombin concentration-time curve), where the ETP is the amount of thrombin formed in vitro in a clotting reaction and reflects the in vivo capacity of an individual to generate thrombin. The % inhibition was calculated as (ETP in the absence of TM minus ETP in the presence of TM)/(ETP in the absence of TM) × 100, and analogously for the CAT assays with rhAPC and Protac®, as previously described [9,30]. Greater APCr is defined as % inhibition of ETP below the ninety-ninth centile of the 75 NC: rhAPC (56%), Protac® (63%), −/+TM (49%). APCr increases as the % inhibition of ETP decreases. Statistical Analysis Results are expressed as median with 95% confidence intervals (CI). Comparisons were made using the Mann-Whitney test or the Wilcoxon signed-rank test, as appropriate. Statistical comparisons of the results obtained with the different experimental conditions within and between different patient groups were performed using paired t-tests. The degree of agreement between methods was assessed categorically according to the presence or absence of APCr below the ninety-ninth centile in NC, using the kappa (κ) coefficient, where κ < 0 indicates no agreement; <0.20 poor; 0.21–0.40 fair; 0.41–0.60 moderate; 0.61–0.80 good; and 0.81–1.00 very good agreement [31]. The Bland-Altman method was used to evaluate the agreement between methods by constructing 95% limits of agreement. Fisher's exact test was used to study associations. A p-value of <0.05 was considered significant. Statistical analysis was performed using GraphPad 8.0. Patients Characteristics, clinical features and medication for patients with APS and thrombotic controls are presented in Table S1, and for patients with SLE in Table S2. There were no major differences between the patient groups in terms of demographics, SLE clinical features and disease activity at the time of sample collection. According to the APS classification (categories I, IIa, IIb and IIc, based on Miyakis et al., 2006) [22], 64/105 APS patients were category I (more than one laboratory criterion present; 45 of whom were double and 19 triple aPL positive); 16 were category IIa (LA alone), 11 were category IIb (presence of aCL alone) and 14 were category IIc (presence of aβ2GPI alone). Of the 36 SLE patients with aPL, 29 were category I (16 double and 13 triple aPL positive), four were category IIa, two were category IIb, and one was category IIc (Table S2). APCr with ST-Genesia®: Percent (%) inhibition of ETP with the ST-Genesia® in the presence of TM was significantly lower in APS and SLE patients compared to thrombotic controls (p < 0.0001 for both). No differences were observed between APS and SLE patients (Figure 1), with no significant differences in % inhibition of ETP between methods and reagents in any of the patient groups. Using the ST-Genesia®, APCr values (below the established normal cut-off) were identified in 53.8% of APS patients, 50% of SLE patients and 8.3% of thrombotic controls.
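The % inhibition formula above translates directly into code. The sketch below applies it and flags APCr against the normal-control cut-offs quoted in the text (56%, 63% and 49%); these cut-offs are specific to this study's 75 normal controls and would have to be re-derived in any other laboratory.

```python
# Cut-offs (ninety-ninth centile of the 75 normal controls) quoted above.
CUTOFFS = {"rhAPC": 56.0, "Protac": 63.0, "TM": 49.0}

def percent_inhibition(etp_without, etp_with):
    """% inhibition of ETP = (ETP absent - ETP present) / ETP absent x 100."""
    return (etp_without - etp_with) / etp_without * 100.0

def has_apcr(etp_without, etp_with, method):
    """APCr is called when % inhibition falls below the method's cut-off
    (a lower % inhibition means greater APCr)."""
    return percent_inhibition(etp_without, etp_with) < CUTOFFS[method]

# Hypothetical patient: ETP 1500 nM*min without TM, 1000 nM*min with TM.
print(percent_inhibition(1500, 1000))   # 33.3% inhibition
print(has_apcr(1500, 1000, "TM"))       # True: below the 49% cut-off
```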
Using the CAT analyser, APCr with rhAPC was identified in 57.5% of APS patients, 59.3% of SLE patients, and 16.7% of thrombotic controls; APCr with Protac® was identified in 63.2% of APS patients, 70.4% of SLE patients, and 13.9% of thrombotic controls (Table 1). Subgroup analysis of the patients receiving warfarin anticoagulation revealed that APCr was significantly different between thrombotic controls and thrombotic APS patients (p < 0.0001 for all three methods), as it was between thrombotic controls and thrombotic SLE patients (ST-Genesia®: p = 0.03; CAT rhAPC: p = 0.003; Protac®: p < 0.0001). However, agreement between the ST-Genesia® and the CAT analyser (using either rhAPC or Protac®) in identifying patients with APCr was only poor and fair for APS and SLE patients, respectively (Figure 2, Table 1). In thrombotic controls, there was no agreement between the two analysers (Figure 2, Table 1), but the number of patients with APCr below the established cut-off was small, and most of these had borderline results. Bland-Altman analysis showed consistently lower APCr values with the ST-Genesia® compared to the CAT analyser (for both rhAPC and Protac®), with a small degree of bias, but no particular trends, and with varying APCr levels for all patient groups (Figure S1). Patients with APS: Patients with APS were further stratified (a) according to clinical phenotype, into thrombotic (venous and/or arterial) and PM patients, and (b) according to aPL status (single, double, or triple aPL positive). Percent inhibition of ETP with the ST-Genesia® was significantly lower in PM compared to thrombotic APS patients (p = 0.03), but the difference failed to reach significance with the CAT analyser with either rhAPC or Protac® (Figure 3); agreement between the three methods was poor to fair. In APS patients with PM, % inhibition of ETP with the ST-Genesia® was significantly lower compared to the CAT analyser with rhAPC (p = 0.03) but not with Protac® (Figure 3A). No differences were observed in median APCr between the three methods in thrombotic APS patients (Figure 3A, Table 2) or between venous and arterial APS patients (data not shown). [Table 2 note: APCr prevalence is presented as n (%) for the number of patients identified with APCr with each method, as well as median and 95% confidence intervals. For the agreement, n represents the number of patients whose results were in agreement. aPL: antiphospholipid antibodies.] Agreement in APCr between the ST-Genesia® and the CAT with rhAPC was fair in thrombotic and poor in PM APS patients, while with Protac® it was poor for both clinical subgroups (Figure S2, Table 2). APCr was identified in 66.6% of triple aPL positive APS patients with the ST-Genesia®, compared to 80% with the CAT analyser with rhAPC and 93.3% with Protac®. Comparable prevalence of APCr for double and single aPL positive APS patients with the two analysers was observed (Table 2). In triple aPL positive APS patients, APCr values with the ST-Genesia® and the CAT analyser with Protac® were significantly greater compared to double and single aPL positive APS patients (Figure 3B, Table 2). Moderate agreement was observed in the triple aPL positive APS patients between the ST-Genesia® using TM and the CAT using rhAPC or Protac®, but not in double or single aPL positive APS patients, which showed fair and poor agreement with either method (Figure S2, Table 2).
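As a concrete illustration of how the categorical agreement in Table 1 can be quantified, the sketch below computes Cohen's kappa for two analysers' binary APCr calls and maps it onto the verbal scale given in the Statistical Analysis section; the paired calls are invented, not study data.

```python
def cohens_kappa(calls_a, calls_b):
    """Cohen's kappa for two paired binary classifications (True = APCr)."""
    n = len(calls_a)
    observed = sum(a == b for a, b in zip(calls_a, calls_b)) / n
    p_a = sum(calls_a) / n                         # positive rate, analyser A
    p_b = sum(calls_b) / n                         # positive rate, analyser B
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)   # chance agreement
    return (observed - expected) / (1 - expected)

def verbal_scale(kappa):
    """Verbal categories as used in the Statistical Analysis section."""
    if kappa < 0:
        return "no agreement"
    if kappa < 0.20:
        return "poor"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "good"
    return "very good"

# Invented paired APCr calls for the ST-Genesia (a) and the CAT (b).
a = [True, True, False, True, False, False, True, False]
b = [True, False, False, True, True, False, False, False]
k = cohens_kappa(a, b)
print(round(k, 2), verbal_scale(k))   # 0.25 fair
```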
Bland-Altman analysis showed a trend towards lower APCr values with the ST-Genesia® when compared to the CAT with rhAPC in both thrombotic and PM APS patients (Figure S3A,B) and in all aPL positive APS groups (Figure S3). Patients with SLE: Patients with SLE were further stratified according to aPL status (positive and negative) and thrombotic history (with and without thrombosis). There were no differences in APCr values with either of the analysers between aPL positive and negative SLE patients (Figure 4A). However, % inhibition of ETP with the ST-Genesia® and the CAT analyser with Protac® was significantly lower in those with thrombosis compared to those without (TM: p = 0.04; Protac®: p = 0.05). There were no differences in APCr values assessed with any of the three methods between thrombotic APS patients and SLE patients with APS. Lower % inhibition of ETP with the ST-Genesia® and the CAT analyser with either rhAPC or Protac® was identified in 45–53% of aPL positive and negative patients. In SLE patients, only poor to fair agreement in APCr was seen between the two analysers and between the three methods (Figure S4, Table 2). Bland-Altman analysis showed a trend towards higher APCr values with the CAT analyser (rhAPC or Protac®) compared to values obtained with the ST-Genesia® for all the different groups of SLE patients tested (Figure S5). Discussion This study reports on a novel comparison of two TG analysers, the ST-Genesia® and the CAT, using three different TG methods to assess APCr in patients with APS, SLE, and in thrombotic controls. Percent inhibition of ETP with the ST-Genesia® in the presence of TM was significantly lower in the overall APS and SLE groups compared to thrombotic controls. Subgroup analysis of anticoagulated patients also revealed that APCr with all three methods was significantly greater in thrombotic APS and thrombotic SLE patients when compared to thrombotic controls. We demonstrated that, regardless of the TG method used, agreement in identifying APCr between the two analysers was poor in APS patients (≤65.9%, κ coefficient ≤ 0.20) and fair in SLE patients (≤68.5%, κ coefficient ≥ 0.29). No agreement was observed in thrombotic control patients, probably due to the small number of patients with APCr, most of whom had only borderline abnormality. When APS and SLE patients were further stratified according to clinical phenotype and aPL status, we observed that APCr with both the ST-Genesia® and the CAT using Protac®, but not with the CAT using rhAPC, was significantly greater in triple aPL APS patients compared to double/single aPL patients, and in thrombotic SLE patients compared to non-thrombotic SLE patients. A novel observation of our study was that the ST-Genesia® identified significantly lower % inhibition of ETP in PM compared to thrombotic APS patients, which was not identified using the CAT analyser with either of the two methods used. APCr values were consistently higher with the CAT analyser compared to the ST-Genesia® in all patient groups, and results were not interchangeable. APCr using TG can be assessed either by investigating the downstream effects of APC, using exogenous APC, or by assessing the integrity of the mechanism of endogenous protein C activation, using either TM or Protac®, which can highlight differences in the development of APCr [32]. Previous APCr studies in APS patients mainly employed exogenous APC and the CAT analyser and demonstrated a clear association between increased APCr and thrombotic events [30,33].
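For readers who want to reproduce the Bland-Altman comparisons reported above, the sketch below computes the bias and 95% limits of agreement from paired % inhibition values; the arrays are placeholders, not study data.

```python
import statistics

def bland_altman(values_a, values_b):
    """Return (bias, lower_loa, upper_loa) for paired measurements.

    Bias is the mean difference; the 95% limits of agreement are
    bias +/- 1.96 standard deviations of the differences."""
    diffs = [a - b for a, b in zip(values_a, values_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Placeholder paired % inhibition values (ST-Genesia vs CAT with rhAPC).
genesia = [36.9, 42.1, 55.0, 48.3, 61.2]
cat_rhapc = [45.7, 47.9, 58.8, 53.0, 66.4]
print(bland_altman(genesia, cat_rhapc))   # negative bias: ST-Genesia lower
```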
Our group extended the APCr studies in APS patients with the CAT analyser by using Protac® to assess activation of endogenous protein C, showing that APCr with Protac®, compared to rhAPC, was associated with a severe thrombotic phenotype in venous thrombosis APS patients [9]. Previous studies have also confirmed that APCr is frequently present in SLE using the CAT analyser and exogenous APC [33][34][35][36]. More recently, we expanded on this work using both Protac® and exogenous APC: APCr was observed in SLE independently of aPL positivity, while patients with thrombosis tended to exhibit APCr to both reagents [10]. APCr assessed with the CAT analyser is not in widespread use due to limitations including high inter-laboratory variability, poor standardisation, lack of appropriate quality control materials [12,13,37], and differences in the concentrations of APC, tissue factor and phospholipid vesicles. These problems make comparisons between different studies, and implementation into routine practice, difficult [32,38]. While efforts have been made to improve the performance of the CAT analyser by the introduction of a reference plasma for normalisation of results, which reduces the inter-laboratory and inter-assay variability [11,12,37,39], proper standardisation of the method and its implementation in routine daily care remain an issue. Recent work by Douxfils et al. showed that by implementing a validated and standardised method, using commercially available reference plasma and quality control samples [17], and normalising APCr [40], steps could be made towards implementing APCr TG in routine practice as a predictive biomarker [10–13,17,30,32–40]. More recently, the ISTH provided further guidance and recommendations for (pre)analytical steps when standardising the TG assay, aiming to harmonise differences between methods and laboratories [41]. In contrast to the CAT analyser, the ST-Genesia® is a new analyser for the assessment of TG, with a fully automated and standardised system aimed at introducing TG into the clinical routine. This analyser uses dedicated reagents, calibrators and internal quality controls and has been shown to achieve improved inter-experimental precision with the use of a reference plasma [19]. It has also shown good inter-assay precision with the use of internal quality controls [42]. Our study showed that % inhibition of ETP with the ST-Genesia® was significantly lower in both APS and SLE patients (mixed on and off treatment) when compared to thrombotic controls. This was not affected by anticoagulation treatment or by mixing patient plasma 50:50 with pooled normal plasma, as subgroup analysis revealed that APCr for both thrombotic APS and thrombotic SLE patients on warfarin anticoagulation remained significantly higher compared to thrombotic controls. These results suggest a prothrombotic phenotype in thrombotic APS and SLE patients, in agreement with previous studies [43]. Previous assessment of the ST-Genesia® and the CAT analyser showed good agreement between most, but not all, of the TG parameters measured [18]. One study showed limited bias between the ST-Genesia® and the CAT in anticoagulated samples [42], but a different study identified significant differences in lag time, time to peak and ETP in healthy controls [44]. In addition, in patients with cirrhosis, although the ST-Genesia® correctly identified patients with hypercoagulability who had been identified with the CAT analyser, others were missed [45].
Similarly, in patients undergoing a liver transplant, the CAT and the ST-Genesia® provided very different results [20], suggesting that the two systems are not comparable. In agreement with the above studies, we found poor agreement between the two analysers in APS, fair agreement in SLE, and no agreement in thrombotic control patients, with a small bias in APCr values regardless of the TG method used. Agreement between the analysers remained low, regardless of the method, even after APS and SLE patients were subcategorised according to clinical phenotype and aPL status. Our findings suggest that APCr evaluation with the two analysers is not comparable despite the similar methodology used. Furthermore, differences in APCr were also identified between the three methods used in both APS and SLE patients, highlighting differences in the mechanisms leading to APCr. APCr with both the ST-Genesia® and the CAT using Protac®, but not the CAT using rhAPC, was significantly greater in triple antiphospholipid antibody (aPL) APS patients compared to double/single aPL patients, and in thrombotic SLE patients compared to non-thrombotic SLE patients. Both of these methods assess the integrity of the mechanism of protein C activation, in contrast to rhAPC, which assesses the ability of the plasma to resist the anticoagulant action of exogenously provided APC. These results suggest that assessing the integrity of the endogenous protein C activation mechanism might be more sensitive in detecting differences in APCr between different clinical phenotypes and between different aPL subtypes, as both TG methods are based on activation of endogenous protein C. This might indicate that in these patients the endogenous mechanism of activation of protein C is defective, and it suggests that the use of similar TG methods might result in a higher degree of agreement between the two analysers. This could potentially be a useful tool for identifying patients at higher risk of thrombosis and for further delineating the differences between clinical subtypes, and it is of possible clinical significance that warrants further investigation. A novel observation of our study is that the ST-Genesia® identified a significantly lower % inhibition of ETP (greater APCr) in PM compared to thrombotic APS patients, a difference that failed to reach significance with the CAT analyser with either of the two methods. The TG methods are clearly not interchangeable. In our previous studies, we have shown that there was not always complete agreement between the presence of antibodies against protein C and APCr measured by the CAT system with Protac® or rhAPC, indicating that differences in the TG methodology might lead to different final results [9,10]. The discrepancies between methods observed in the current study could be explained by different patients having varying populations of substances that interfere with the TM catalysis of protein C activation, antibodies that block Protac® cleavage of protein C, and antibodies that block APC function. It could also suggest that not only differences in the method used, but also differences in the reagent composition and analyser data handling, might have affected outcomes.
This could be due to a number of reasons: the concentration of TF used by the ST-Genesia® is unknown (the manufacturer states that it contains a medium tissue factor concentration) and might be different to that used with the CAT analyser (approximately 5 pM); the use of different reagents for the assessment of APCr (TM, rhAPC and Protac®) might also have played a role; other contributing factors might be the reaction conditions and the fact that, although both analysers rely on the same principle, they use different calibration procedures and different software for analysis of the results. As stated above, for patients on warfarin anticoagulation, APCr assays were performed using samples mixed 50:50 with pooled normal plasma to correct for possible factor deficiencies introduced by anticoagulation. However, this is also a limitation of our study, as the addition of pooled normal plasma could dilute the effect of certain antibodies, including aPL, and could introduce other variables into the system that could explain some of the differences between our study and others [46]. Further studies are required to establish the exact reasons for the differences in results between the two analysers in these patients, and disclosure of the TF and TM concentrations by the manufacturer might be critical for this. In conclusion, despite the ST-Genesia® and the CAT analyser having a broadly similar methodology, agreement was only poor to fair in patients with APS and SLE, with the results not interchangeable and with no clear indication of which analyser or method gives a true reflection of hypercoagulability and greater APCr in these patients. The ST-Genesia® offers some clear advantages over the CAT analyser, including full automation and easier tracking of reagents and results for accreditation and documentation purposes, which could be a benefit for a clinical laboratory or a clinical trial. However, neither method can be used as the gold standard. Measurement of TG with the addition of TM might provide a more sensitive assessment of coagulation capacity and could aid in highlighting differences in APCr between different clinical phenotypes in APS and SLE patients, as TM is more physiological than Protac®. Additional studies are needed using both analysers to establish the effectiveness of each analyser in predicting clinical thrombotic events and, therefore, its potential use for better management of these patients. Each laboratory would be advised to establish its own reference range and performance criteria for each analyser. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm11010069/s1, Table S1. Demographic, clinical characteristics and treatment of thrombotic controls and APS patients, Table S2. Demographic, clinical characteristics and treatment of patients with SLE, Figure S1. Bland-Altman graphs for the agreement in APCr between ST-Genesia® and the CAT analyser with rhAPC (left panel) and Protac® (right panel) in A. APS patients, B. SLE patients and C. thrombotic controls, Figure S2. Agreement in APCr between the ST-Genesia® in the presence of TM and the CAT analyser with rhAPC (left panel) and Protac® (right panel) in APS patients stratified according to clinical phenotype and aPL status, Figure S3.
Bland-Altman graphs for the agreement in APCr between ST-Genesia® with TM and the CAT analyser with rhAPC or Protac® in APS patients stratified according to clinical phenotype and aPL status, Figure S4. Agreement in APCr between the ST-Genesia® and the CAT analyser with rhAPC (left panel) and Protac® (right panel) in SLE patients stratified according to aPL status and thrombotic history, Figure S5. Bland-Altman graphs for the agreement in APCr values between ST-Genesia® with TM and the CAT analyser with rhAPC and Protac® in patients with SLE stratified according to aPL status and thrombotic history. Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki, and approved by the Research Ethics Committee NREC (reference: 13/EM/0150) and by the Research and Development office at UCLH (reference: 13/0030). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data presented in this study are available upon reasonable request from the corresponding author.
2021-12-25T16:09:29.452Z
2021-12-23T00:00:00.000
{ "year": 2021, "sha1": "260bbf3101e64df8816435856e83d63b0412625b", "oa_license": "CCBY", "oa_url": null, "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "992692928e03f6c8451ba13ef38ddc398e098d9c", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
16238882
pes2o/s2orc
v3-fos-license
Geometry of $Q$-recurrent maps Given a critically periodic quadratic map with no secondary renormalizations, we introduce the notion of $Q$-recurrent quadratic polynomials. We show that the pieces of the principal nest of a $Q$-recurrent map $f_c$ converge in shape to the Julia set of $Q$. We use this fact to compute analytic invariants of the nest of $f_c$, to give a complete characterization of complex quadratic Fibonacci maps and to obtain a new auto-similarity result on the Mandelbrot set. Introduction The principal nest of a quadratic polynomial is a collection of pieces of the puzzle with good shrinking properties (see [L3]). In [P] we proposed a supplementary construction, the frame of the nest, to help classify different nest types. Based on that classification, we introduce in this paper the family of $Q$-recurrent maps. In essence, these are polynomials for which the nest frames are combinatorially modeled on the puzzle of a chosen critically periodic quadratic polynomial $Q$. We will generalize to the family of $Q$-recurrent maps some results of M. Lyubich and L. Wenstrom that concern the real Fibonacci parameter $c_{fib}$. It is shown in [L1] that the central pieces of the nest of $f_{c_{fib}}$ are asymptotically similar to the Julia set of $z^2 - 1$. This fact is expanded in [W], where the shape of pieces is exploited to compute the exact rate of growth of the principal nest moduli. Wenstrom translates these results to the parameter plane in order to obtain the corresponding shape of parapieces and the rate of paramoduli growth around $c_{fib}$ on the Mandelbrot set $M$. The corresponding results for $Q$-recurrent maps are as follows: Theorem. Let $c$ be the center of a prime component of $M$ and $Q$ its associated quadratic polynomial. Then the nest pieces of a $Q$-recurrent map converge in shape to the Julia set $K_Q$. The asymptotic shape of pieces is used to derive analytic information about the nest. In the simplest case, $Q(z) = z^2 - 1$, the family of $(z^2-1)$-recurrent maps has an interesting description. Theorem. A quadratic polynomial is $(z^2-1)$-recurrent if and only if it is complex Fibonacci. Moreover, the principal moduli of any map in this family grow linearly with growth rate $\frac{\ln 2}{3}$. The corresponding result is different when the critical point of $Q$ has period $\geq 3$. Theorem. For $Q(z) \neq z^2 - 1$ the principal moduli of a $Q$-recurrent map grow exponentially with a rate that depends on the period of $Q$. When the shape results are translated to the Mandelbrot set, the above statements find parametric counterparts. Theorem. The paranest pieces around a $Q$-recurrent parameter $c$ converge in shape to $K_Q$. The growth of paramoduli around $c$ is as in the corresponding dynamical plane. The family of $Q$-recurrent parameters forms a dyadic Cantor set of Hausdorff dimension 0. As an application of parapiece shape, an easy diagonal argument yields a powerful auto-similarity result on $M$. Theorem. Let $c_1, c_2 \in \partial M$ be two parameters such that $f_{c_2}$ has no indifferent periodic orbits that are rational or linearizable. Then there exists a sequence of parapieces $\{\Upsilon_1, \Upsilon_2, \ldots\}$ converging to $c_1$ as compact sets, but such that $\Upsilon_n$ converges to $K_{c_2}$ in shape. 1.1. Paper structure. The puzzle of Yoccoz, the principal nest and their parametric counterparts are defined in Section 2 as a means to introduce notation. The adjacency graphs introduced in Subsection 2.4 are used in Section 3 to define the frame of a principal nest, and in Section 4 to describe $Q$-recurrent behavior.
In Section 5 we present the classification of complex Fibonacci quadratic polynomials and the results on the shape of pieces and growth of moduli for the nest of $Q$-recurrent maps. The corresponding results on parametric pieces of the Mandelbrot set $M$ are presented in Section 6. Theorem 6.3 introduces a new similarity phenomenon between different locations of $\partial M$. An appendix summarizes the tools borrowed from complex variables. This includes an extension of the Grötzsch inequality and brief discussions of Carathéodory topology, Koebe's distortion lemma and the Teichmüller space of a surface. 1.2. Acknowledgments. Many thanks are due to Mikhail Lyubich and John Milnor for their helpful suggestions. Some of the pictures were created with the PC program mandel.exe by Wolf Jung [J]. Basic notions 2.1. Basic complex dynamics. In order to fix notation, let us start by defining the basic notions of complex dynamics that will be used; the reader is referred to [DH1] and [M1] for details on this introductory material. We focus attention on the quadratic family $Q := \{ f_c : z \mapsto z^2 + c \mid c \in \mathbb{C} \}$. For every $c$, the compact sets $K_c := \{ z \mid \text{the sequence } \{ f_c^{\circ n}(z) \} \text{ is bounded} \}$ and $J_c := \partial K_c$ are called the filled Julia set and the Julia set respectively. Depending on whether the orbit of the critical point 0 is bounded or not, $J_c$ and $K_c$ are connected or totally disconnected. The Mandelbrot set is defined as $M := \{ c \mid c \in K_c \}$; that is, the set of parameters with bounded critical orbit. A component of $\operatorname{int} M$ that contains a parameter with an attracting periodic orbit will be called a hyperbolic component. The boundary of a hyperbolic component can either be real analytic, or fail to be so at one cusp point. The latter kind are called primitive components. In particular, the hyperbolic component $\heartsuit$ associated to $z \mapsto z^2$ is bounded by a cardioid known as the main cardioid. $M$ contains infinitely many small homeomorphic copies of itself, accumulating densely around $\partial M$. In fact, every hyperbolic component $H$ other than the main one is the base of one such small copy $M'$. $H$ is called prime if it is not contained in any other small copy. To simplify later statements, prime components are further subdivided into immediate (non-primitive components that share a boundary point with $\heartsuit$) and maximal (primitive components away from $\partial\heartsuit$). 2.2. External rays, wakes and limbs. Since $f_c^{-1}(\infty) = \{ \infty \}$, the point $\infty$ is a fixed critical point, and a classical result of Böttcher yields a change of coordinates that conjugates $f_c$ to $z \mapsto z^2$ in a neighborhood of $\infty$. With the requirement that the derivative at $\infty$ is 1, this conjugating map is denoted $\varphi_c : N_c \longrightarrow \mathbb{C} \setminus \overline{\mathbb{D}}_R$, where $\mathbb{D}_R$ is the disk of radius $R \geq 1$ and $N_c$ is the maximal domain of univalence for $\varphi_c$. It can be shown that $N_c = \mathbb{C} \setminus K_c$ and $R = 1$ whenever $c \in M$. Otherwise, $N_c$ is the exterior of a figure-8 curve that is real analytic and symmetric with respect to 0. In this case, $R > 1$ and $K_c$ is contained in the two bounded regions determined by the figure-8 curve. Consider the system of radial lines and concentric circles in $\mathbb{C} \setminus \mathbb{D}_R$ that characterizes polar coordinates. The pull-back of these curves by $\varphi_c$ creates a collection of external rays $\{ r_\theta \mid \theta \in [0,1) \}$ and equipotential curves $\{ e_s \}$ in $N_c$; here $s \in (R, \infty)$ is called the radius of $e_s$. These form two orthogonal foliations that behave nicely under the dynamics: $f_c(r_\theta) = r_{2\theta}$, $f_c(e_s) = e_{(s^2)}$. When $c \in M$, we say that a ray $r_\theta$ lands at $z \in J_c$ if $z$ is the only point of accumulation of $r_\theta$ on $J_c$.
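As an illustrative aside (not from the source), the equipotential through a point outside $K_c$ can be located numerically via the Green's function $G_c(z) = \lim_n 2^{-n} \log |f_c^{\circ n}(z)|$: the point $z$ lies on the equipotential of radius $e^{G_c(z)}$. A minimal sketch:

```python
import math

def green_function(c, z, max_iter=200, escape=1e8):
    """Approximate G_c(z) = lim 2^{-n} log|f_c^n(z)| for z outside K_c.

    exp(G_c(z)) is the radius s of the equipotential e_s through z."""
    for n in range(max_iter):
        if abs(z) > escape:
            return math.log(abs(z)) / 2 ** n
        z = z * z + c
    return 0.0  # orbit stayed bounded: z is (numerically) in K_c

c = -1.0  # the basilica parameter, used purely for illustration
print(math.exp(green_function(c, 3.0)))  # equipotential radius through z = 3
print(green_function(c, 0.1))            # 0.0: the orbit of 0.1 is bounded
```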
In [DH1] it is shown that $\Phi_M : \mathbb{C} \setminus M \longrightarrow \mathbb{C} \setminus \overline{\mathbb{D}}$ is a conformal homeomorphism tangent to the identity at $\infty$. This yields connectivity of $M$ and allows us to define parametric external rays and parametric equipotentials as in the dynamical case. Since there is little risk of confusion, we will use the same notation ($r_\theta$, $e_s$) to denote these curves, and say that a parametric ray lands at a point $c \in \partial M$ if $c$ is the only point of accumulation of the ray on $M$. For the rest of this work, all rays considered, whether in the dynamical or the parameter plane, will have rational angles. These are enough to work out our combinatorial constructions and satisfy rather neat properties. Proposition 2.1. ([M1], ch. 18) Both in the parametric and the dynamical situations, if $\theta \in \mathbb{Q}$ the external ray $r_\theta$ lands. In the dynamical case, the landing point is (pre-)periodic with the period and preperiod determined by the binary expansion of $\theta$. A point in $J_c$ (respectively $\partial M$) can be the landing point of at most a finite number of rays (respectively parametric rays). If this number is larger than 1, each component of the plane split by the landing rays will intersect $J_c$ (respectively $\partial M$). Unless $c = \frac{1}{4}$, $f_c$ has two distinct fixed points. If $c \in M$, these can be distinguished since one of them is always the landing point of the ray $r_0$. We call this fixed point $\beta$. The second fixed point is called $\alpha$ and can be attracting, indifferent or repelling, depending on whether the parameter $c$ belongs to $\heartsuit$, $\partial\heartsuit$, or $\mathbb{C} \setminus \overline{\heartsuit}$. The map $\psi_0 : \heartsuit \longrightarrow \mathbb{D}$ given by $c \mapsto f_c'(\alpha_c)$ is the Riemann map of $\heartsuit$ normalized by $\psi_0(0) = 0$ and $\psi_0'(0) > 0$. Since the cardioid is a real analytic curve except at $\frac{1}{4}$, $\psi_0$ extends to $\overline{\heartsuit}$. Definition 2.1. The closure of the component of $\mathbb{C} \setminus (r_{t^-(\eta)} \cup c_\eta \cup r_{t^+(\eta)})$ that does not contain $\heartsuit$ is called the $\eta$-wake of $M$ and is denoted $W_\eta$. The $\eta$-limb is defined as $L_\eta = M \cap W_\eta$. Definition 2.2. Say that $\eta = \frac{p}{q}$, written in lowest terms. Then $P_{p/q}$ will denote the unique set of angles whose behavior under doubling is a cyclic permutation with combinatorial rotation number $\frac{p}{q}$. If $P_{p/q} = \{ t_1, \ldots, t_q \}$, then for any parameter $c \in L_{p/q}$ the corresponding point $\alpha$ splits $K_c$ into $q$ parts, separated by the $q$ rays $\{ r_{t_1}, \ldots, r_{t_q} \}$ landing at $\alpha$. The two rays whose angles span the shortest arc separate the critical point 0 from the critical value $c$. These two angles turn out to be $t^-(\frac{p}{q})$ and $t^+(\frac{p}{q})$. 2.3. Yoccoz puzzles. The Yoccoz puzzle is well defined for parameters $c \in L_{p/q}$, for any $\frac{p}{q} \in \mathbb{Q} \cap [0,1)$ with $(p,q) = 1$. If 0 is not a preimage of $\alpha$, the puzzle is defined at infinitely many depths, and we will restrict attention to these parameters. Since we describe properties of a general parameter, it is best to omit subscripts and write $f$ instead of $f_c$, $K$ instead of $K_c$, and so on. Let us fix the neighborhood $U$ of $K$ bounded by the equipotential of radius 2. The rays that land at $\alpha$ determine a partition of $U \setminus \{ r_{t_1}, \ldots, r_{t_q} \}$ into $q$ connected components. We will call the closures $Y^{(0)}_0, \ldots, Y^{(0)}_{q-1}$ of these components puzzle pieces of depth 0. At this stage the labeling is chosen so that $0 \in Y^{(0)}_0$, where the subindices are understood as residues modulo $q$. In particular, $Y^{(0)}_1$ contains the critical value $c$, and the angles of its bounding rays turn out to be $t^-(\frac{p}{q})$, $t^+(\frac{p}{q})$. The puzzle pieces $Y^{(n)}_i$ of higher depths are defined recursively as the closures of the connected components of the $f$-preimages of the interiors of the pieces of depth $n-1$; see Figure 1.
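Proposition 2.1 ties the period and preperiod of a rational ray's landing point to the binary expansion of its angle. The sketch below (an illustrative aid, not part of the source) computes both by iterating the doubling map on exact fractions.

```python
from fractions import Fraction

def doubling_orbit_type(theta: Fraction):
    """Return (preperiod, period) of a rational angle under t -> 2t (mod 1).

    Angles with odd denominator are periodic (preperiod 0); factors of 2
    in the denominator contribute preperiod. This mirrors reading the
    binary expansion of theta."""
    theta %= 1
    seen = {}
    t, step = theta, 0
    while t not in seen:
        seen[t] = step
        t = (2 * t) % 1
        step += 1
    return seen[t], step - seen[t]

print(doubling_orbit_type(Fraction(1, 3)))   # (0, 2): periodic angle
print(doubling_orbit_type(Fraction(1, 6)))   # (1, 2): preperiodic angle
```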
At each depth $n$, there is a unique piece which contains the critical point, and we will always choose the indices so that $0 \in Y^{(n)}_0$. Let us denote by $P_n$ the collection of pieces of level $n$. The resulting family $Y_c := \{ P_0, P_1, \ldots \}$ of puzzle pieces of all depths has the following two properties: P1 Any two puzzle pieces either are nested (with the piece of higher depth contained in the piece of lower depth), or have disjoint interiors. P2 The restriction of $f$ to any piece $Y^{(n)}_j$ is a 2 to 1 branched covering or a conformal homeomorphism onto a piece of depth $n-1$, depending on whether $j = 0$ or not. These properties characterize $Y_c$ as a Markov family and endow the puzzle partition with dynamical meaning. Note that the collection of ray angles at depth $n$ consists of all $n$-fold preimages of $\{ t_1, \ldots, t_q \}$ under angle doubling. The union of all pieces of depth $n$ is the region enclosed by the equipotential $e_{(2^{2^{-n}})}$. Note also that every piece $Y$ of depth $n$ is an $n$-fold preimage of some piece of level 0. By further iteration, $Y$ will map onto a region determined by the same rays as $Y^{(0)}_0$ and a possibly larger equipotential. This provides a 1 to 1 correspondence between puzzle pieces and preimages of 0. The distinguished point inside each piece is called the center of the piece. 2.4. Adjacency Graphs. Given a set of puzzle pieces $P \subset P_n$, define the dual graph $\Gamma(P)$ as a formal graph whose set of vertices is $P$ and whose edges join pairs of pieces that share an arc of external ray. Due to its finiteness, it is always possible to produce an isomorphic model of $\Gamma(P)$ sitting in the plane, without intersecting edges, and such that it respects the natural immersion of $\Gamma(P)$ in the plane. Definition 2.3. When $P = P_n$, we call $\Gamma_n := \Gamma(P_n)$ the puzzle graph of depth $n$. In this context, the vertices corresponding to the central piece $Y^{(n)}_0$ and the piece around the critical value $f_c(0)$ are denoted $\xi_n$ and $\eta_n$ respectively. Definition 2.4. The vertices $\xi_n$ and $\eta_n$ determine two partial orders on the vertex set of $\Gamma_n$ as follows: If $a, b \in V(\Gamma_n)$, we write $a \succ_{\eta_n} b$ when every path from $a$ to $\eta_n$ passes through $b$. We write $a \succ_{\xi_n} b$ when every path from $a$ to $\xi_n$ passes through $b$ or through its symmetric image with respect to the origin. The following are natural consequences of the definitions; see Figure 1 for reference. Proposition 2.2. The puzzle graphs of $f$ satisfy: G1 $\Gamma_n$ has 2-fold central symmetry around $\xi_n$. G2 $\Gamma_0$ is a $q$-gon whenever $c \in L_{p/q}$. For $n \geq 1$, $\Gamma_n$ consists of $2^n$ $q$-gons linked at their vertices in a tree-like structure; i.e., the only cycles on this graph are the $q$-gons themselves. G3 For $n \geq 1$, removing $\xi_n$ and its edges splits $\Gamma_n$ into 2 disjoint (possibly disconnected) isomorphic graphs. Reattaching $\xi_n$ to each, and adding the corresponding edges, defines the connected graphs $\mathrm{Puzz}^-_n$ and $\mathrm{Puzz}^+_n$ (here, $\eta_n \in \mathrm{Puzz}^-_n$). Then $\Gamma_n = \mathrm{Puzz}^-_n \cup \mathrm{Puzz}^+_n$, and $\mathrm{Puzz}^-_n$, $\mathrm{Puzz}^+_n$ are isomorphic to $\Gamma_{n-1}$ with $\mp\eta_n$ playing the role of $\xi_{n-1}$ in $\mathrm{Puzz}^\pm_n$. G4 For $n \geq 1$ there are two natural maps: $f_* : \Gamma_n \longrightarrow \Gamma_{n-1}$ induced by $f$, and $\iota_* : \Gamma_n \longrightarrow \Gamma_{n-1}$ induced by the inclusion among pieces of consecutive depths. $f_*$ is 2 to 1 except at $\xi_n$ and sends $\mathrm{Puzz}^\pm_n$ onto $\Gamma_{n-1}$. In turn, $\iota_*$ collapses the outermost $q$-gons into vertices. Figure 1. Puzzle of depth 2 and its corresponding graph. Splitting the graph at $\xi_2$ produces the graphs $\mathrm{Puzz}^-_2$ and $\mathrm{Puzz}^+_2$; both shaped like a bow tie and isomorphic to $\Gamma_1$. Definition 2.5.
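As a small combinatorial illustration (not from the source), the angles cutting out the depth-$n$ puzzle can be enumerated by pulling the depth-0 angles back $n$ times under doubling, since each angle $t$ has exactly the two preimages $t/2$ and $(t+1)/2$.

```python
from fractions import Fraction

def puzzle_angles(depth0_angles, n):
    """Angles of the rays bounding depth-n puzzle pieces: the n-fold
    preimages of the depth-0 angles under t -> 2t (mod 1)."""
    angles = {Fraction(a) % 1 for a in depth0_angles}
    for _ in range(n):
        angles = {t / 2 for t in angles} | {(t + 1) / 2 for t in angles}
    return sorted(angles)

# Depth-0 angles for the 1/2-limb (rays landing at alpha): 1/3 and 2/3.
print(puzzle_angles([Fraction(1, 3), Fraction(2, 3)], 1))
# Fractions 1/6, 1/3, 2/3, 5/6
```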
Let $\Gamma$ be a graph isomorphic to a subgraph of $\Gamma_n$ and $\Gamma'$ a graph isomorphic to a subgraph of $\Gamma_{n-1}$. A map $E : \Gamma \longrightarrow \Gamma'$ that satisfies G1 and G2 will be called admissible if it also respects order in the sense of G5. Proof of Proposition 2.2. Property G1 and the existence of $f_*$ and $\iota_*$ are immediate consequences of the structure of quadratic Julia sets. The configuration of $\Gamma_0$ is given by the rotation number around $\alpha$, and then the tree-like structure of $\Gamma_n$ ($n \geq 1$) follows from G3. Consider a centrally symmetric simple curve $\gamma \subset Y^{(n)}_0$ connecting two opposite points of the equipotential curve $e_{(2^{2^{-n}})}$ that bounds $Y^{(n)}_0$. Then $\gamma$ splits the simply connected region $\bigcup_{Y \in P_n} Y$ into 2 identical parts. Therefore, $\Gamma_n \setminus \xi_n$ is formed by 2 disjoint graphs, justifying the existence of $\mathrm{Puzz}^\pm_n$. However, $\partial Y^{(n)}_0$ may contain several segments of $e_{(2^{2^{-n}})}$; so $\gamma$, and consequently $\mathrm{Puzz}^\pm_n$, are not uniquely determined. This ambiguity is not consequential; Lemmas 3.4 and 3.5 describe the proper method of handling it. The fact that $f$ maps the central piece to a non-central one containing the critical value legitimizes the selection of $\mathrm{Puzz}^-_n$ as the unique graph containing $\eta_n$. By symmetry, every piece of $P_n$ except the central one has a symmetric partner, and they both map in a 1 to 1 fashion to the same piece of $P_{n-1}$. The isomorphisms in G3 follow. If two pieces $A, B$ of depth $n$ share a boundary ray, their images will too. Moreover, letting $A', B'$ be the pieces of depth $n-1$ containing $A$ and $B$, it is clear that $\partial A'$ and $\partial B'$ must share the same ray as $\partial A$ and $\partial B$. This shows that $f_*$ and $\iota_*$ effectively preserve edges and are well defined graph maps. Clearly $f_*$ is 2 to 1, so to complete the proof of G4 it is only necessary to justify the collapsing property of $\iota_*$, and by Property G3, it is sufficient to consider the case $\iota_* : \Gamma_1 \longrightarrow \Gamma_0$. Now, the non-critical piece $Y^{(0)}_j$ contains a unique piece $Y^{(1)}_j$ of $P_1$. However, the critical piece $Y^{(0)}_0$ contains a total of $q$ different pieces of depth 1: a smaller central piece $Y^{(1)}_0$ and $q-1$ lateral pieces $-Y^{(1)}_j$. The resulting graph, $\Gamma_1$, consists then of two $q$-gons joined at the vertex $\xi_1$. Under $\iota_*$, one of these $q$-gons collapses on the critical vertex $\xi_0$. To prove G5, let us construct the tree $\Gamma'_n$ with 2-fold central symmetry by collapsing every $q$-gon into a single vertex. The orders $\succ_{\xi'_n}$, $\succ_{\eta'_n}$ in $\Gamma'_n$ are induced by the orders in $\Gamma_n$. Then the corresponding map $f_*' : (\Gamma'_n, \succ_{\xi_n}) \longrightarrow (\Gamma'_{n-1}, \succ_{\eta_{n-1}})$ is a 2 to 1 map on trees that takes each half of $\Gamma'_n$ injectively into a sub-tree of $\Gamma'_{n-1}$ and respects order. Since vertices in a cycle are not ordered, $f_*$ respects order as well. 2.5. Parapuzzle. While the puzzle encodes the combinatorial behavior of the critical orbit for a specific map $f_c$, the parapuzzle dissects the parameter plane into regions of parameters that share similar behaviors: in every wake of $M$ we define a partition into pieces of increasing depths, with the property that all parameters inside a given parapiece share the same critical orbit pattern up to a specific depth. Definition 2.6. Consider a wake $W_{p/q}$ and let $n \geq 0$ be given. Call $W_n$ the wake $W_{p/q}$ truncated by the equipotential $e_{(2^{2^{-n}})}$, and consider the set of angles $P_n(\frac{p}{q}) = \{ t \mid 2^n t \in P_{p/q} \}$ (compare Subsection 2.2). The parapieces of $W_{p/q}$ at depth $n$ are the closures of the components of $W_n \setminus \{ r_t \mid t \in P_n(\frac{p}{q}) \}$. Note.
Even though the critical value $f_c(0)$ is simply $c$, it will be convenient to write $c \in \Delta$ when $\Delta$ is a parapiece, and $f_c(0) \in V$ when $V$ is a piece in the dynamical plane of $f_c$. In general, we will use the notation $\mathrm{OBJ}[c]$ to refer to dynamically defined objects $\mathrm{OBJ}$ associated to a specific parameter $c$. Definition 2.7. When the boundary of a dynamical piece $A$ is described by the same equipotential and ray angles as those of a parapiece $B$, this relation is denoted by $\partial A$ ⊜ $\partial B$. Definition 2.8. Let $c \in M$ be a parameter whose puzzle is defined up to depth $n$. Denote by $CV_n[c] \in P_n[c]$ the piece of depth $n$ that contains the critical value: $f_c(0) \in CV_n[c]$. A consequence of Formula 1 is the well known fact that follows. For a proof of the main statement, refer to [DH2] or [R]. For a proof of the winding number property, refer to [D2] and Proposition 3.3 of [L4]. Proposition 2.3. Let $\Delta$ be a parapiece of depth $n$ in some wake $W$. Then $CV_n[c]$ ⊜ $\Delta$ for every $c \in \Delta$. The family $\{ CV_n[c] \mid c \in \Delta \}$ is well defined, and it determines a holomorphic motion of the critical value pieces. The holomorphic motion has $c \mapsto f_c(0)$ as a section with winding number 1. The result on winding number can be interpreted as loosely saying that, as $c$ goes once around $\partial\Delta$, the critical value $f_c(0)$ goes once around $\partial CV_n$. However, this description is not entirely accurate since $\partial CV_n[c]$ changes with $c$. Let us mention the following examples of combinatorial properties that depend on the behavior of the first $n$ iterates of 0. The fact that these entities remain unchanged for $c \in \Delta$ follows from Proposition 2.3 and will be useful in the next sections. • The isomorphism type of $\Gamma_n[c]$. • The combinatorial boundary of every piece of depth $\leq n$. • The location within $P_n[c]$ of the first $n$ iterates of the critical orbit. From the general results of [L4], we can say more about the geometric objects associated to the above examples. Proposition 2.4. Each of the sets listed below moves holomorphically as $c$ varies in $\Delta$: • The boundary of every piece of depth $\leq n$. • The first $n$ iterates of the critical orbit. • The collection of $j$-fold preimages of $\alpha$ and $\beta$ ($j \leq n$). 2.6. The principal nest. The principal nest is well defined for parameters $c$ that belong neither to $\overline{\heartsuit}$ nor to an immediate $M$-copy. The first condition means that both fixed points are repelling (so the puzzle is defined), while the second condition characterizes those polynomials that do not admit an immediate renormalization as described below. We restrict further to parameters $c$ such that the orbit of 0 is recurrent, to ensure that the nest is infinite. These necessary conditions will justify themselves as we describe the nest. In order to explain the construction of the principal nest, a more detailed description of the puzzle partition at depth 1 is necessary (use Figure 2 for reference). As a note of warning, the pieces of depth 1 will be renamed to reflect certain properties of $P_1$; that is, we will override the use of the symbols $Y^{(1)}_j$. The puzzle partition $P_1$ consists of $2q-1$ pieces, of which $q-1$ are the restrictions to the lower equipotential of the pieces $Y^{(0)}_1, \ldots, Y^{(0)}_{q-1}$. Such pieces cluster around $\alpha$ and will be denoted $Y_1, Y_2, \ldots, Y_{q-1}$. The restriction of $Y^{(0)}_0$, however, is further divided into the union of the critical piece $Y^{(1)}_0$ and $q-1$ pieces $Z_1, Z_2, \ldots, Z_{q-1}$, which are symmetric to the corresponding $Y_j$ and cluster around $-\alpha$.
The indices are again determined by the rotation number of $\alpha$, so that $f(Z_j)$ is opposite to $Y_j$. Either the critical orbit never escapes $Y^{(1)}_0$ under $f^{\circ q}$, in which case $f$ admits an immediate renormalization in the sense of Douady and Hubbard; or else, we can find the least $k$ for which the orbit of 0 under $f^{\circ q}$ escapes from $Y^{(1)}_0$. We will assume that this is the case, so $f^{\circ kq}(0) \in Z_\nu$ for some $\nu$, and call $kq$ the first escape time. The initial nest piece $V^0_0$ is defined as the $(kq)$-fold pull-back of $Z_\nu$ along the critical orbit; that is, the unique piece that satisfies $0 \in V^0_0$ and $f^{\circ kq}(V^0_0) = Z_\nu$. In fact, $V^0_0$ can also be defined as the largest central piece that is compactly contained in $Y^{(1)}_0$, so that $Y^{(1)}_0 \setminus V^0_0$ is a non-degenerate annulus. The higher levels of the principal nest are defined inductively. Suppose that the pieces $V^0_0, V^1_0, \ldots, V^n_0$ have been already constructed. If the critical orbit never returns to $V^n_0$ then the nest is finite. Otherwise, there is a first return time $\ell_n$ such that $f^{\circ\ell_n}(0) \in V^n_0$; then we define $V^{n+1}_0$ as the critical piece that maps to $V^n_0$ under $f^{\circ\ell_n}$. The result is a family of strictly nested pieces centered around 0. Indeed, $V^1_0$ is a piece of depth $1 + kq + \ell_1$ and, in general, $V^n_0$ will be a piece of depth $1 + kq + \ell_1 + \cdots + \ell_n$. Since all pieces contain 0, Property P1 implies that $V^0_0 \Subset Y^{(1)}_0$; thus, the $f^{\circ\ell_1}$-pull-backs of these 2 pieces satisfy $V^1_0 \Subset X$, with $X$ a central piece of depth $1 + \ell_1$. Now, $0 \notin Z_\nu$, so $f^{\circ kq}(0)$ requires further iteration to reach a central piece; i.e., $\ell_1 > kq$. By construction, $V^0_0$ is a central piece of depth $1 + kq$, so Property P1 implies $V^1_0 \Subset X \subset V^0_0$. An analogous argument yields the strict nesting property for the nest pieces of higher depth. Definition 2.9. The principal annulus $V^{n-1}_0 \setminus V^n_0$ will be denoted $A_n$. It may happen that $\ell_{n+1} = \ell_n$; this means that not only does 0 return to $V^n_0$ under $f^{\circ\ell_n}$, but even deeper, to $V^{n+1}_0$, without further iteration. In this case we say that the return is central, and call a chain of consecutive central returns $\ell_n = \ell_{n+1} = \ldots = \ell_{n+s}$ a cascade of central returns. An infinite cascade means that the sequence $\{\ell_n\}$ is eventually constant, so the first return map is a renormalization of $f$; that is, a 2 to 1 branched cover of $V^n_0$ such that the orbit of the critical point is defined for all iterates. The return to $V^n_0$, however, can be non-central. In fact, it is possible to have several returns to $V^n_0$ before the critical orbit hits $V^{n+1}_0$ for the first time. When a return is non-central, the description of the nest at that level is completed by the introduction of the lateral pieces $V^n_k \subset V^{n-1}_0$. If we call $r_{n-1}(z)$ the first return time of $z$ back to $V^{n-1}_0$, we can define $V^n(z)$ as the unique puzzle piece that satisfies $z \in V^n(z)$ and $f^{\circ r_{n-1}(z)}(V^n(z)) = V^{n-1}_0$. In particular, it is clear that $V^n(0)$ is just the same as $V^n_0$, and that any 2 pieces created by this process are disjoint or equal. Definition 2.10. The collection of all pieces $V^n(z)$, with $z \in \mathcal{O} \cap V^{n-1}_0$, that actually contain a point of the critical orbit $\mathcal{O}$ is denoted $\mathcal{V}^n$ and referred to as the level $n$ of the nest. From this moment on, we will assume that the principal nest is infinite, with $|\mathcal{V}^n| < \infty$ at every level, and that $f$ is non-renormalizable, thus excluding the possibility of an infinite cascade of central returns. In this situation we say that $f$ is combinatorially recurrent. It follows from [L2] and [Ma] that $f$ acts minimally on the postcritical set. In this situation, we can name the pieces $\mathcal{V}^n = \{ V^n_0, V^n_1, \ldots, V^n_{m_n} \}$ in such a way that the first visit of the critical orbit to $V^n_i$ occurs before the first visit to $V^n_j$ whenever $i < j$.
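To make the return times $\ell_n$ concrete (purely as an illustrative aid, not a construction from the source), one can track when the critical orbit of a real quadratic polynomial re-enters a shrinking sequence of intervals around 0; the parameter and the radii below are arbitrary placeholders, and actual nest pieces are puzzle pieces, not round intervals.

```python
def first_return_times(c, radii, max_iter=10**6):
    """For each interval (-r, r) around the critical point 0, return the
    first time the critical orbit of z -> z^2 + c re-enters it.

    A 1-D toy for the return times l_n of the principal nest."""
    times = []
    for r in radii:
        z, t = c, 1              # f(0) = c is the first iterate
        while abs(z) >= r:
            z = z * z + c
            t += 1
            if t > max_iter:
                t = None         # no return found within the budget
                break
        times.append(t)
    return times

# Placeholder recurrent-looking parameter and shrinking interval radii.
print(first_return_times(-1.87, [0.5, 0.2, 0.08]))
```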
$\mathcal{V}^n = \{V^n_0, V^n_1, \ldots, V^n_{m_n}\}$ in such a way that the first visit of the critical orbit to $V^n_i$ occurs before the first visit to $V^n_j$ whenever $i < j$. Obviously, the value of $r_{n-1}(z)$ is independent of $z \in V^n_k$; thus we will denote it $r_{n,k}$.

Definition 2.11. For every level of the nest, define the map $g_n : \bigcup_k V^n_k \to V^{n-1}_0$, given on each $V^n_k$ by $g_n|_{V^n_k} \equiv f^{\circ r_{n,k}}$.

Figure 3. Relation between consecutive nest levels. The curved arrow represents the first return map $f^{\circ\ell_n}$, which is 2-to-1. The dotted arrows show a possible effect of this map on each nest piece of level $n+1$. Each $V^{n+1}_j$ may require a different number of additional iterates to return to this level and map onto $V^n_0$.

The map $g_n$ satisfies the properties of a generalized quadratic-like (gql) map, i.e.:
gql-1: each $V^n_k$ is compactly contained in $V^{n-1}_0$, and all the pieces of $\mathcal{V}^n$ are pairwise disjoint.
gql-2: $g_n|_{V^n_k}$ is a 2-to-1 branched cover or a conformal homeomorphism onto $V^{n-1}_0$, depending on whether $k = 0$ or not.
Note that $g_n$ usually is the result of a different number of iterates of $f$ when restricted to different $V^n_k$. Since we often refer to the map $g_n$ as acting on individual pieces, it is typographically convenient to introduce the following notation.

Definition 2.12. The map $g_n|_{V^n_k} = f^{\circ r_{n,k}}$ will be denoted $g_{n,k}$. Thus $g_{n,k}$ is a 2-to-1 branched cover or a homeomorphism depending on whether $k = 0$ or not.

2.7. Paranest. The paranest is well defined around parameters $c$ outside the main cardioid that are neither immediately renormalizable nor postcritically finite.

Definition 2.13. If $c$ is a parameter such that $f_c$ has a well defined nest up to level $n$ (for $n \geq 0$), the paranest piece $\Delta_n[c]$ is defined by the condition $\partial\Delta_n[c]$ ⊜ $\partial f_c(V^n_0)$, where $V^n_0$ is the central piece of level $n$ in the principal nest of $f_c$.

By the Douady–Hubbard theory, $\Delta_n[c]$ is a well defined region. The definition of the principal nest, together with Proposition 2.3, imply that when $c' \in \Delta_n[c]$, the principal nests of $f_c$ and $f_{c'}$ are identical until the first return $g_n(0)$ to $V^{n-1}_0$ (which creates $V^n_0$). In fact, the relevant pieces move holomorphically as $c'$ varies, and $\Delta_n[c]$ is the largest parameter region over which the initial set of $\ell_n$ iterates of 0 (recall that $g_n \equiv f^{\circ\ell_n}$) moves holomorphically without crossing piece boundaries. Following the presentation of [L4], the family of first return maps $g_n[c']$, for $c' \in \Delta_n[c]$, is a proper DH quadratic-like family with winding number 1. The last property follows from Proposition 2.3, since $g_n$ is the first return to a critical piece at this level.

Since the central nest pieces are strictly nested, the above definition implies that the pieces of the paranest are strictly nested as well. It follows that $\operatorname{int}\Delta_{n-1} \setminus \Delta_n$ is a non-degenerate annulus. One of our main concerns is to estimate its modulus or, as it is sometimes called, the paramodulus.

3. Frame system

Let $f_c$ have an infinite principal nest. We need a description of the combinatorial structure around nest pieces in order to record their positions relative to each other. In this Section we enhance the principal nest with the addition of a frame system. The notion of frame, introduced in [P], provides the necessary language to locate the lateral nest pieces and describe, as a consequence, the behavior of the critical orbit. The idea is to split the central nest pieces into smaller regions by a procedure that resembles the construction of the puzzle.

3.1. Frames. Figure 2 provides a useful reference for the construction of the initial frames $F_0$, $F_1$ and $F_2$. Some attention is necessary at these levels to ensure that the properties of Proposition 3.3 hold.
Starting with level 3, frames are defined recursively. Consider the puzzle partition at depth 1 and recall that $kq$ denotes the first escape of the critical orbit to $Z_\nu$. The initial frame $F_0$ is the collection of pieces $\{Y^{(1)}_0, Z_1, \ldots, Z_{q-1}\}$. The frame $F_1$ is the collection of $(f^{\circ kq})$-pull-backs of cells in $F_0$ along the orbit of 0. From the definition, the central piece $V^0_0$, which maps 2-to-1 onto $Z_\nu \in F_0$, is one of the cells of $F_1$. The pull-back of any other cell $A \in F_0$ consists of two symmetrically opposite cells, each mapping univalently onto $A$. We say that $F_1$ is a well defined unimodal pull-back of $F_0$. Let $\lambda$ be the first return time of 0 to a cell of $F_1$. By Lemma 3.1, the collection $F_2$ of pull-backs of cells in $F_1$ along the $(f^{\circ\lambda})$-orbit of 0 is well defined and 2-to-1.

Lemma 3.2. The frame $F_2$ satisfies: (1) $kq < \lambda \leq \ell_0$; (2) the nest piece $V^1_0$ is contained in the central cell of $F_2$.

Proof. The critical orbit can return to a cell of $F_1$ only after the first escape to $Z_\nu$. It follows that $kq < \lambda \leq \ell_0$, where the second inequality is true since $V^0_0 \in F_1$. Then the first return to $F_1$ occurs no later than the first return to $V^0_0$, and the first assertion follows. Now, $V^1_0$ is central. By the Markov properties of the puzzle, either $V^1_0$ is contained in the central cell $C$ of $F_2$ or vice versa. However, both $f^{\circ\ell_0}(V^1_0)$ and $f^{\circ\lambda}(C)$ belong to $F_1$. Since $\ell_0 \geq \lambda$, the first possibility is the one that holds. This proves property (2).

After introducing the first frames and linking them to the initial levels of the nest, we can give the complete definition of the frame system. The driving idea of this discussion is that the internal structure of a frame $F_{n+2}$, represented by the graph $\Gamma(F_{n+2})$, provides a decomposition of $J_f \cap V^n_0$ that describes the combinatorial type of the nest at level $n+1$.

Definition 3.1. For $n \geq 0$ consider the first return $g_n(0) \in V^n_0$ and define $F_{n+3}$ as the collection of $g_n$-pull-backs of cells in $F_{n+2}$ along the critical orbit.

The family $\mathcal{F}_c = \{F_0, F_1, \ldots\}$ is called a frame system for the principal nest of $f_c$, and each piece of a frame is called a cell. The dual graph $\Gamma(F_n)$ (see Subsection 2.4) is called the frame graph. As in the case of the puzzle graph, consider $\Gamma(F_n)$ with its natural embedding in the plane. Let us mention now some properties of frame systems (refer to [P]).

Proposition 3.3. The frame system satisfies:
(1) Frames exist at all levels.
(2) The central cell of $F_n$ contains the nest piece $V^{n-1}_0$.
(4) Suppose there is a non-central return; then, eventually all nest pieces are compactly contained in cells of the corresponding frame.

3.2. Frame labels. Our next objective is to introduce a labeling system for pieces of the frame. This will allow us to describe the relative position of pieces of the nest within a central piece of the previous level. Unlike the case of unimodal maps, where nest pieces are always located left or right of the critical point, the possible labels for vertices of $\Gamma(F_n)$ will depend on the combinatorics of the critical orbit. Only after determining the labeling does it become possible to describe the location of nest pieces in a systematic manner.

Observe that the structure of $F_{n+1}$ is determined by the structure of $F_n$ and the location of $g_n(0)$. A graphic way of seeing this is as follows. Say that the first return $g_{n-1}(0)$ to $V^{n-2}_0$ falls in a cell $X \in F_n$. Let $L_n$ and $R_n$ be two copies of $\Gamma(F_n)$ with disjoint embeddings in the plane. Now connect $L_n$ and $R_n$ with a curve $\gamma$ that does not intersect either graph.
Suppose that one extreme of $\gamma$ lands at the vertex of $L_n$ that corresponds to $X$ and the other extreme lands at the corresponding vertex of $R_n$, approaching it from the same access.

Figure 5. The curve $\gamma$ joins two copies of the same frame graph, approaching the selected vertex from the same direction. The new frame graph is obtained after $\gamma$ is contracted to a point.

Lemma 3.4. If $\gamma$ is collapsed by a homotopy of the whole ensemble, the resulting graph is isomorphic to $\Gamma(F_{n+1})$.

Lemma 3.5. The plane embedding of $\Gamma$ does not depend on the homotopy class of the curve $\gamma$ in Lemma 3.4.

Proof. Since we regard $\Gamma = \Gamma(F_n)$ as embedded in the sphere, the exterior of $\Gamma$ is simply connected, so there is a natural cyclic order of accesses to vertices (some vertices can be accessed from more than one direction). In this order, all accesses to $L_n$ are grouped together, followed by the accesses to $R_n$.

A label at level $n$ will be a chain of $n+1$ symbols taken from the alphabet $\{Z_0, Z_1, \ldots, Z_{q-1}, L, R\}$. Label $\Gamma(F_0)$ with the symbols 'Z$_0$' to 'Z$_{q-1}$', starting at the central cell and moving counterclockwise. Let $\sigma_0$ be the label of the cell that holds the first return of 0 to $F_0$ and, in general, let $\sigma_n$ denote the label of the cell in $\Gamma(F_n)$ that holds the first return of 0. In order to label $\Gamma(F_{n+1})$, assume that the number $q$ of pieces in $F_0$ is known, together with the label sequence $(q; \sigma_0, \ldots, \sigma_{n-1})$ that identifies the location of the first returns of 0 to levels $0, \ldots, n-1$ of the nest. In particular, all frames up to $\Gamma(F_n)$ have been successfully labeled. Duplicate in $L_n$ the labels of $\Gamma(F_n)$, but concatenate an extra 'L' at the beginning. Do a similar labeling on $R_n$ by concatenating an extra 'R' to the duplicated labels. Note that the labels of the two vertices corresponding to $X$ are 'L'$\sigma_n$ and 'R'$\sigma_n$. The labels on $\Gamma(F_{n+1})$ will be the same as those in the union of $L_n$ and $R_n$, except that we change the label of the identified vertex to become 'Z$_0$'$\sigma_n$. Clearly, $f$ induces a map $f_* : \Gamma(F_{n+1}) \to \Gamma(F_n)$ for $n \geq 2$, that acts by forgetting the leftmost symbol of each label.

It is important to mention that the resulting labeling of $\Gamma(F_n)$ does depend on the access to $\xi_n$ approached by $\gamma$. However, the final unlabeled graphs are equivalent as embedded in the plane. As was just mentioned, some vertices are accessible from $\infty$ in two or more directions. These are precisely the vertices whose label contains the symbol 'Z$_0$' (for $n \geq 1$). Since such a vertex represents a frame cell that maps (eventually) to a central frame cell, the tail of a label with 'Z$_0$' at position $j$ must be $\sigma_j$. On the other hand, for every $j$ there must be labels with a 'Z$_0$' in position $j$. It follows that the set of labels of $\Gamma(F_n)$ and the sequence $(q; \sigma_0, \ldots, \sigma_n)$ can be recovered from each other.

3.3. Frame system and nest together. The definition of frame system was conceived to satisfy the properties of Proposition 3.3. An extension of the argument used to prove those properties shows that every piece $V^n_j$ of the nest is contained in a frame cell of level $n+1$. Moreover, we would like to extend the definition of frames so that each $V^n_j$ can be partitioned by a pull-back of an adequate central frame. For this, recall first that $g_{n,j}(V^n_j) = V^{n-1}_0 \supset F_{n+1}$.

Definition 3.2. The frame $F_{n,k}$ is the collection of pieces inside $V^{n-2}_k$ obtained by the $g_{n-2,k}$-pullback of $F_{n-1}$. Elements of the frame $F_{n,k}$ are called cells, and we will write $F_{n,0}$ instead of $F_n$ when there is a need to stress that a property holds in $F_{n,k}$ for every $k$.
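To fix ideas, here is a small worked illustration of the labeling scheme of Subsection 3.2 in the simplest case $q = 2$; the particular choice $\sigma_0 = {}$'Z$_1$' is only for illustration (compare the label sequence of $P_{MCheb}$ in Subsection 5.4):
$$\Gamma(F_0): \{\,\text{'}Z_0\text{'},\ \text{'}Z_1\text{'}\,\}, \qquad \sigma_0 = \text{'}Z_1\text{'};$$
$$L_0: \{\,\text{'}LZ_0\text{'},\ \text{'}LZ_1\text{'}\,\}, \qquad R_0: \{\,\text{'}RZ_0\text{'},\ \text{'}RZ_1\text{'}\,\}.$$
Collapsing $\gamma$ identifies the two vertices labeled 'LZ$_1$' and 'RZ$_1$' into a single vertex relabeled 'Z$_0$Z$_1$', so
$$\Gamma(F_1): \{\,\text{'}LZ_0\text{'},\ \text{'}Z_0Z_1\text{'},\ \text{'}RZ_0\text{'}\,\},$$
a graph with $2 \cdot 2 - 1 = 3$ vertices, as expected for a 2-to-1 pull-back; forgetting the leftmost symbol sends 'LZ$_0$' and 'RZ$_0$' to 'Z$_0$' and 'Z$_0$Z$_1$' to 'Z$_1$'.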
If a puzzle piece $A$ is contained in a cell $B \in F_{n,k}$, denote $B$ by $\Phi_{n,k}(A)$. We have described already how to label $F_n$. The other frames $F_{n,k}$ ($k \geq 1$), mapping univalently onto $F_{n-1}$, have a natural labeling induced from that of $F_{n-1}$ by the corresponding $g_{n-2,k}$-pull-back.

Let us describe now the itinerary of a piece $V^n_j$. Since $V^n_j \subset V^{n-1}_0$, the map $g_{n-1}$ takes $V^n_j$ inside some piece $V^{n-1}_{k_1(j)} \subset V^{n-2}_0$. Then, $g_{n-1,k_1(j)}$ takes $g_{n-1}(V^n_j)$ inside a new piece $V^{n-1}_{k_2(j)}$ and so on, until the composition of returns of level $n-1$, $(g_{n-1,k_r(j)} \circ \cdots \circ g_{n-1,k_1(j)} \circ g_{n-1})|_{V^n_j}$, is exactly $g_{n,j} : V^n_j \to V^{n-1}_0$. Of course, $k_r$ is just 0, and we will write it accordingly. Some extra bookkeeping makes this description more precise. For the sake of typographical clarity, we will write $k_i$ instead of $k_i(j)$. For $i \leq r$, let $\Phi_{n+1,k_i}$ be the cell in $F_{n+1,k_i} \subset V^{n-1}_{k_i}$ that contains the corresponding image of $V^n_j$, and let $\lambda_{n+1,k_i}$ be its label.

Definition 3.3. The itinerary of $V^n_j$ is the list of piece–label pairs $\bigl(V^{n-1}_{k_i}, \lambda_{n+1,k_i}\bigr)$, for $i = 1, \ldots, r$, recorded up to the moment when $V^n_j$ maps onto $V^{n-1}_0$.

Note first of all that the last label, $\lambda_{n+1,0}$, will start with 'Z$_0$', due to the fact that $V^{n-1}_0$ is in the central cell of $F_n$. More importantly, these conditions restrict which itineraries are admissible. When the sequence of frame labelings is specified up to a given level $n$, together with the locations of the nest pieces and their (admissible) itineraries, we say that we have described the combinatorial type of the map at level $n$.

4. Q-recurrency

Lyubich and Milnor established in [LM] the uniqueness of the real quadratic Fibonacci map $f_{c_{fib}}$ and described in detail its asymptotic geometry. The real parameter $c_{fib} = -1.8705286321\ldots$ is determined by either of two equivalent conditions, (F1) and (F2): a metric condition on the closest returns of the critical orbit, and a combinatorial criss-cross condition on their relative positions. Additionally, the first returns to $Y^{(1)}_0$ and $V^0_0$ happen on the third and fifth iterates respectively. The critical behavior of $f_{c_{fib}}$ is the simplest among maps whose nest has no central returns: every level of the nest has a unique lateral piece, so in a way, every first return comes as close as possible to being central without actually being central. This means that $f_{c_{fib}}$ is not renormalizable in the classical sense, although its combinatorics can be described as an infinite cascade of Fibonacci renormalizations in the space of gql maps with one lateral piece.

The papers [L1] and [W] analyze an unexpected feature of the Fibonacci map. If the central pieces $V^n_0$ are rescaled to regions $\widetilde{V}^n$ of fixed size, each $g_n$ induces a map $G_n : \widetilde{V}^n \to \widetilde{V}^{n-1}$. On increasing levels, the criss-cross behavior that determines $c_{fib}$ in condition F2 approximates with exponential accuracy the pattern of the critical orbit of $P_{-1}(z) = z^2 - 1$ (i.e. $0 \to -1 \to 0 \to -1 \to \ldots$). In fact, $G_n \to P_{-1}$ locally uniformly in the $C^1$ norm. Also, since $\operatorname{diam}\widetilde{V}^n \asymp 1$, it is shown that the rescaled pieces converge in the Hausdorff metric to the filled Julia set of $P_{-1}$. In [W], Wenstrom translates this behavior to the Mandelbrot set and obtains pieces of the paranest around $c_{fib}$ that asymptotically resemble $K_{-1}$; see Figure 1 of [W]. As consequences of this control on shape, he computes the exact rate of linear growth of the principal moduli and proves hairiness around the parameter $c_{fib}$.

Let $c$ be the center of a prime hyperbolic component and $Q(z)$ its associated polynomial. The critical orbit is periodic (of least period $m$) and $Q^{\circ m}$ is the only renormalization of $Q$. An important consequence of this is that high enough depths of the puzzle of $Q$ will isolate in individual pieces each point of the critical orbit $O(Q) = \{0 \to c \to z_2 \to \cdots \to z_{m-1}\}$.
Let us assume that the fixed point $\alpha$ of $Q$ has combinatorial rotation number $\frac{p}{q}$. In what follows we will save notation by restricting the use of "$P_n$" to refer to the puzzle of $Q$, and "$V^n_j$", "$F_n$" for the nest and frames of Q-recurrent maps. Let us label $\Gamma(P_0)$, the graph of the puzzle of $Q$ at depth 0, with symbols 'Z$_0$' to 'Z$_{q-1}$', starting at the critical point piece and moving counterclockwise. Since $P_{n+1}$ is a 2-to-1 pull-back of $P_n$, the graph $\Gamma(P_{n+1})$ consists of two copies of $\Gamma(P_n)$ identified at the critical value vertex, and we can launch a labeling procedure identical to the frame labeling of Subsection 3.2. Note that $\Gamma(P_n)$ is symmetric, but a canonical orientation can be specified by dictating that the label on the critical value vertex begins with the symbol 'L'. For $Q \in L_{p/q}$ the puzzle label sequence begins $(q; \text{'}Z_p\text{'}, \ldots)$.

The above procedure creates a labeling of the puzzle of $Q$. Now consider any map $f$ in the $\frac{p}{q}$-limb, with first escape time $q$ and such that $f^{\circ q}(0) \in Z_p$. The map $f$ satisfies:
• The initial frame $F_0$ of $f$ consists of $q$ pieces and $\Gamma(F_0)$ is isomorphic to $\Gamma(P_0)$.
• The first return to $F_0$ is on the cell $Z_p$, which corresponds, under the above isomorphism, to the critical value piece of $P_0$. Therefore
• $\Gamma(F_1)$ is isomorphic to $\Gamma(P_1)$.
There is, in fact, a full family $\Delta$ of parameters $c$ such that $f_c$ satisfies the above condition. Since the puzzle of $Q$ is created by successive pull-backs of the configuration $P_0$, the labeling of the puzzle of $Q$ determines a weak admissible type in $\Delta$. Then Corollary 3.7 of [P] guarantees the existence of parameters $c \in \Delta$ such that the frame system of $f_c$ has the same structure as the puzzle of $Q$. Observe that $F_n$ is symmetric, so there are two choices for the homeomorphism identifying $\Gamma(F_n)$ with $\Gamma(P_n)$. Once a frame orientation is selected, we have an admissible label system.

Definition 4.1. A critically recurrent polynomial $f$ whose frame system has the same label sequence $(q; p, \sigma_1, \sigma_2, \ldots)$ as the puzzle of $Q$ is called Q-recurrent if it satisfies the following additional condition. For any $n \geq 0$ and $2 \leq k \leq m-1$, the $k$-th return to $V^n_0$ is the composition $(g_n \circ \cdots \circ g_{n+k-2} \circ g_{n+k-1})$.

Note. There is an annoying offset between nest levels and frame levels. Because of it, $V^n_0$ is contained in the central cell of $F_{n+1}$ and contains in turn the cells of $F_{n+2}$. The notation suffers slightly when discussing return maps to several consecutive levels; hopefully this complication is balanced by the advantage of matching every frame level with the corresponding depth of the puzzle of $Q$.

Proposition 4.1. For a Q-recurrent map, every sufficiently high level $n$ of the nest has exactly $m$ pieces $V^n_0, V^n_1, \ldots, V^n_{m-1}$. For any $0 \leq j \leq m-1$, $V^n_j$ is contained in the cell of $F_{n+1}$ corresponding to the piece in $P_n$ that contains $z_j$.

Proof. Choose $N$ big enough so that the puzzle $P_N$ isolates every point of $O(Q)$, and let $n \geq N$. We will call $L^n_j$ the piece of $P_n$ containing $z_j$. Consider the orbit of 0 under the composition $g_{n-2} \circ \cdots \circ g_{n+m-3}$. According to the label sequences, $g_{n+m-3}(0)$ falls in the cell of $F_{n+m-2}$ that corresponds to $L^{n+m-2}_1$. Next, $g_{n+m-4}\bigl(g_{n+m-3}(0)\bigr)$ falls in the cell of $F_{n+m-3}$ corresponding to $L^{n+m-3}_2$. Continue in this manner, with $g_{n+m-3-j} \circ \cdots \circ g_{n+m-3}(0)$ (where $0 \leq j \leq m-2$) falling in the cell of level $n+m-2-j$ that corresponds to $L^{n+m-2-j}_{j+1}$.
At every step, we jump out one nest level and create in the process (by adequate pull-backs) the nest pieces $V^{n+m-4}_1, V^{n+m-5}_2, \ldots, V^{n-2}_{m-1}$. Note that all these are lateral pieces, since they are contained in a frame cell that is not central. In fact, $V^{n+m-4-j}_{j+1}$ is in the cell of $F_{n+m-2-j}$ that corresponds to $L^{n+m-2-j}_{j+1}$; see Figure 6. The last map in this chain of compositions is $g_{n-2}$. It brings the critical orbit very nearly to the center, inside $V^{n+m-3}_0$. To see this, remember that the definition of Q-recurrency prescribes the composition of maps $g_{n-2} \circ \cdots \circ g_{n+m-3}$ as the corresponding first return.

In summary, if a point of $O(f)$ falls in a piece $V^n_j$ (for $j \leq m-1$), the next return falls inside $V^{n-1}_{j+1}$. If it falls on a piece $V^n_{m-1}$, the next return falls $m$ levels deeper, inside $V^{n+m-1}_0$, and is in fact the first return to this piece. Repeating this procedure at $m-1$ consecutive levels creates various pieces of different levels. Among these are the $m-1$ lateral nest pieces of level $n$, each corresponding to a point $z_j$ ($1 \leq j \leq m-1$) of the critical orbit of $Q$.

A first consequence of Proposition 4.1 is a formula for the itinerary of $V^n_j$ in the nest of level $n-1$ (Formula (4)). This hints at a similarity between the actions of $Q$ and $g_n$ that will be made precise in the next Section, where we develop the asymptotic properties of Q-recurrent maps and their principal nests.

5. Asymptotics of Q-recurrency

This Section presents the geometric properties of Q-recurrent maps. Their explicit relation to the combinatorics of the map $Q$ gives control over the shapes of nest pieces, and this will yield very precise estimates of the analytic invariants of the nest. For reference, let us state again the fundamental relation between levels of a Q-recurrent nest: the first return map of a high level is the composition of the returns of the previous $m$ levels, $g_{n+1} = g_{n-m+1} \circ \cdots \circ g_{n-1} \circ g_n$ (Formula (6)). We will also make repeated reference to the following result (see [L3]).

Lyubich's Theorem. Let $\kappa(n)$ count the levels of the principal nest up to level $n$ which are non-central. Then the moduli of the principal annuli grow linearly: $\mu_n \geq B\,\kappa(n)$, where the constant $B$ depends only on the initial modulus.

5.1. Complex Fibonacci maps. Section 4 begins with the definition of the real parameter $c_{fib}$. We will see that properties F1 and F2 are shared by many complex maps.

Definition 5.1. A complex polynomial map, or equivalently its corresponding parameter, is said to be complex Fibonacci if the first return to each level of the nest happens exactly when the iterates are the Fibonacci numbers.

Note. The first return to a level can be viewed as a close return in a combinatorial sense; that is, a return to a small central piece. Since Lyubich's Theorem guarantees that central pieces decrease in size, the definition of complex Fibonacci parameter is equivalent to its metric analogue, property (F1) at the beginning of Section 4.

Definition 5.2. Denote by Fıbo the set of $(z^2-1)$-recurrent parameters.

Our first result is a classification of the Fibonacci behavior in the complex case.

Theorem 5.1. A parameter $c$ is complex Fibonacci if and only if $c \in$ Fıbo.

Proof. All $(z^2-1)$-recurrent maps have the same weak combinatorial type. As was pointed out in the note above, the first returns of high levels are just predetermined compositions of lower level ones. Thus, the number of iterates until the first return to a piece $V^n_0$ is independent of the parameter $c \in$ Fıbo. Since the real parameter $c_{fib} \in$ Fıbo is complex Fibonacci, the first direction of the assertion follows. To show the converse, it is only necessary to observe that the first return times in a Fibonacci nest are strictly increasing (indeed, with first returns at iterates 3 and 5, the subsequent return times are $8, 13, 21, \ldots$), so there are no central returns.
Therefore the first return map $g_{n+1}$ must be the composition of at least two first return maps of the two preceding levels. If the nest does not have $(z^2-1)$-recurrent type, there must be more than one lateral piece at some level $n$. Then the composition of maps generating $g_{n+1}$ will actually contain more than two maps, and the sequence $\{\ell_n\}$ of first return times grows faster than the sequence generated by the recursion $\ell_{n+1} = \ell_n + \ell_{n-1}$. This contradicts the assumption that the map is complex Fibonacci.

Although Yoccoz's Theorem on rigidity of non-renormalizable maps allows us to characterize Fıbo as a Cantor set, we must wait until the next Section to show that the relevant parapieces shrink exponentially fast, thus allowing us to complete the description of the set Fıbo as a Cantor set of Hausdorff dimension 0 on which we can impose a natural dyadic decomposition.

5.2. Shape. We want to study the shape of nest pieces in the following sense.

Definition 5.3. A sequence of compact sets $\{C_j \subset \mathbb{C}\}$ is said to converge in shape to a compact $K$ if there exist rescalings $\widetilde{C}_j = a_j \cdot C_j$ (with $a_j \in \mathbb{C}$) such that $\widetilde{C}_j \to K$ in the Hausdorff metric.

The main Theorem of this Section is a vast extension of the result on the shape of central pieces of $f_{c_{fib}}$ found in [L1]. In order to give the statement, some notation is needed. Let $Q = Q(z)$ be the center of a prime hyperbolic component and $c_0$ a Q-recurrent parameter. Recall that $f_{c_0}$ is described by a dyadic choice of labels ('L' and 'R') on every level. These frame orientations determine the sequence of paranest pieces $\{\Delta_n\}$ around $c_0$. If $c \in \Delta_n$ is any nearby parameter, the combinatorics of $f_c$ are identical to those of $f_{c_0}$, including the orientations of the homeomorphic frames, at all levels $j \leq n$. In particular, for any $c \in \Delta_n$ we can find a (unique) point $s_j$ in $F_j$ corresponding to the fixed point $\alpha$ of $Q$. In what follows, we omit from the notation the fact that the objects described depend on $c$. Let $\alpha_j = \alpha / s_j$ and define the complex rescalings $\widetilde{V}^j := \alpha_j \cdot V^j_0$ of the central nest pieces of $f_c$, up to level $n$. Then, the first return maps $g_j$ induce maps $G_j : \widetilde{V}^j \to \widetilde{V}^{j-1}$ on the rescaled pieces, whose action on the rescaled frame $\widetilde{F}_{j-2} := (\alpha_j \cdot F_{j-2}) \subset \widetilde{V}^j$ is isotopic to the action of $Q$ on its own puzzle.

Theorem 5.2. Given $\varepsilon > 0$, there is an $N$ such that for every parameter $c \in \Delta_n$ and level $n \geq N$, the maps $G_N, G_{N+1}, \ldots, G_n$ are all $\varepsilon$-close to $Q$ in the $C^1$ topology inside the ball of radius $\frac{1}{\varepsilon}$.

Corollary 5.3. The sequence of central nest pieces $\{V^n_0\}$ of $f_{c_0}$ converges in shape to the filled Julia set $K_Q$.

Proof of Corollary 5.3. The point $\alpha \in K_Q$ is fixed under $G_n$ and is surrounded by $\widetilde{V}^n$. This rescaled nest piece also surrounds the critical point 0, which attracts every point in $K_Q \setminus J_Q$. Now, $\widetilde{V}^n$ is the pull-back of $\widetilde{V}^{n-1}$ under $G_n$. By Theorem 5.2, $G_n$ is a small perturbation of $Q$; since the rescaled pieces in the sequence $\{\widetilde{V}^{n+1}, \widetilde{V}^{n+2}, \ldots\}$ have bounded diameter, they become exponentially close to the regions in the sequence $Q^{\circ -1}(\widetilde{V}^n), Q^{\circ -2}(\widetilde{V}^n), \ldots$, which converge to $K_Q$. This yields the result.

In particular, the central pieces of any quadratic complex Fibonacci map look like $K_{-1}$, although each one may be tilted at a bizarre angle (recall that the $\widetilde{V}^n$ are rescaled by a complex number). Other examples can be seen in Figure 7, showing puzzle pieces that approximate the behavior of different periodic orbits of period 3.
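For readers who wish to reproduce pictures resembling Figure 7, the limit shape is easy to visualize numerically. A minimal escape-time sketch in Python (grid size, escape radius and iteration count are arbitrary choices, and this is only an approximation of the filled Julia set, not part of the construction above):

```python
import numpy as np
import matplotlib.pyplot as plt

# Escape-time approximation of the filled Julia set K_c for Q(z) = z^2 + c.
# c = -1 gives K_{-1}, the limit shape of the rescaled central pieces of any
# complex Fibonacci map by Corollary 5.3.
def filled_julia(c, size=800, bound=2.0, iters=200):
    x = np.linspace(-bound, bound, size)
    z = x[None, :] + 1j * x[:, None]      # grid of initial points
    alive = np.ones(z.shape, dtype=bool)  # points that have not escaped yet
    for _ in range(iters):
        z[alive] = z[alive] ** 2 + c
        alive &= np.abs(z) < 2.0          # escape radius 2 suffices for |c| <= 2
    return alive                          # True ~ point approximates K_c

plt.imshow(filled_julia(-1.0), extent=[-2, 2, -2, 2], cmap="gray_r")
plt.title("Filled Julia set K_{-1}")
plt.show()
```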
Notice that, since the frames are defined by the same sequence of pull-backs as the central nest pieces, the result of Corollary 5.3 holds also for frames; i.e. the union of cells in $F_n$ converges in shape to $K_Q$.

The proof of Theorem 5.2 depends on the convergence of Thurston's map on an appropriate Teichmüller space (see the Appendix for definitions). Let $O \equiv O(Q)$ and consider the surface $S$ obtained by puncturing the plane at the critical orbit of $Q$; that is, $S = \mathbb{C} \setminus O$. Since deformations are considered only up to an isotopy that leaves $O$ invariant, the structure of a puzzle-like construction does not change. Thus, when $h$ is a deformation in the class of $\operatorname{id}$, the deformation $h\bigl(P(Q)\bigr)$ of the puzzle of $Q$ can be isotoped back to the puzzle $P(Q)$ itself, without changing its configuration and without moving $O$.

Thurston's map is best described via the alternate description of $T_S$ in terms of Beltrami differentials. First, normalize every deformation $h$ by an affine change of coordinates $\varphi$ so that $\varphi \circ h$ leaves $0, c \in O$ fixed. The Beltrami coefficient $\mu = \frac{\bar{\partial}h}{\partial h}\,\frac{d\bar{z}}{dz}$ determines a conformal structure associated to $h$.

Definition 5.4. The map $\tau_Q : T_S \to T_S$ induced on equivalence classes of conformal structures by the pull-back $\mu \mapsto Q^*\mu$ is called the Thurston map associated to $Q$.

The action of $\tau_Q$ on a deformation class $h$ is easy to describe. The class $\tau_Q[h]$ is represented by a deformation $\hat{h}$ such that the map $Q_h = h \circ Q \circ \hat{h}^{-1}$ is analytic. Because of conjugacy, $Q_h$ replicates the critical orbit behavior of $Q$ in a neighborhood of $h(O)$. In particular, one can specify a puzzle-like structure around $h(O)$ which pulls back according to the same combinatorics as $Q$. Since $O$ is finite, and $Q^{\circ m}$ is not renormalizable, such a puzzle structure of high enough depth will isolate all the elements of the critical orbit in individual cells. We conclude that the isotopy class of $h$ relative to punctures consists of those $Q_h$-pull-backs of $h(O)$ that keep the puzzle structure intact (however deformed).

Proof of Theorem 5.2. Let $X$ be any finite collection of simply connected compact subsets of $\mathbb{C}$. By a multicurve $\Gamma$ around $X$ we mean a system of disjoint isotopy classes of simple closed curves in $\mathbb{C} \setminus X$ such that each curve $\gamma_i \in \Gamma$ splits $\mathbb{C}$ in two regions, each enclosing at least two elements of $X$ (i.e. $\gamma_i$ is non-peripheral). If $f : \mathbb{C} \setminus X \to \mathbb{C} \setminus X$ fixes every element of $X$, denote by $\Gamma^{-1}_f$ the multicurve consisting of the classes of $f$-preimages of elements $\gamma_i \in \Gamma$ that are not peripheral. The multicurve $\Gamma$ is said to be $f$-stable if $\Gamma^{-1}_f \subset \Gamma$. Given a map $f : \mathbb{C} \to \mathbb{C}$ fixing the critical orbit $O$ of $Q$ and an arbitrary $f$-stable multicurve $\Gamma$ around $O$, we can construct the linear space $\mathbb{R}^\Gamma$ generated by the curves of $\Gamma$, and an induced linear map $\hat{f}_\Gamma : \mathbb{R}^\Gamma \to \mathbb{R}^\Gamma$ given as follows. If $\gamma_i \in \Gamma$, let $\gamma_{i,j,k}$ denote the components of $f^{-1}(\gamma_i)$ that are in the class of $\gamma_j \in \Gamma^{-1}_f$. Then
$$\hat{f}_\Gamma(\gamma_i) \;=\; \sum_{j,k} \frac{1}{d_{i,j,k}}\,\gamma_j,$$
where $d_{i,j,k}$ denotes the degree of $f|_{\gamma_{i,j,k}} : \gamma_{i,j,k} \to \gamma_i$. An obstruction to the convergence of Thurston's map $\tau_f$ is represented by an $f$-stable multicurve around $O$ for which $\hat{f}_\Gamma$ has an eigenvalue $\lambda \geq 1$. In our case, $Q$ is a polynomial, so it represents the fixed point of its own Thurston map. In particular, there are no obstructions to the convergence of $\tau_Q$; see [DH3]. Now, since $Q$ belongs to a prime hyperbolic component of period $m$, the map $Q^{\circ m}$ is a renormalization conjugate to $z \mapsto z^2$. By hyperbolicity, the central puzzle pieces of $K_Q$ get arbitrarily close to the immediate basin of 0.
In particular, there is a finite depth so that 0 is the only point of $O$ inside the central piece. By further iteration, the same will be true of any point in $O$. Let us choose a level $k$ high enough so that the puzzle $P_{k-1}$ isolates all the points in the critical orbit of $Q$. Again, this is possible since $Q^{\circ m}$ is not renormalizable. Then, any $Q$-stable multicurve $\Gamma$ can be represented with curves that are constructed from segments of the arcs defining $P_{k-1}$. In this way, $\Gamma$ is described in terms of the structure of $P_{k'}$, for any level $k' \geq k-1$. Moreover, $\Gamma^{-1}_Q$ is a multicurve around $O$ that can be described in terms of the combinatorial structure of $P_{k'+1}$.

Now consider $f_c$ with $c \in \Delta_k$. Any $G_k$-stable multicurve $\Gamma'$ around the pieces $V^{k+1}_j$ can be described with segments of curves in the boundary of the frame $F_{k-1}$. Since $F_{k-1}$ is isomorphic to $P_{k-1}$, there is a correspondence between $G_k$-stable multicurves around $\bigcup_j V^{k+1}_j$ and $Q$-stable multicurves around $O$. This means that the only possible obstructions for $\tau_{G_k}$ must form inside one of those pieces; that is, a multicurve realizing such an obstruction would intersect at least one of the pieces $V^{k+1}_j$. Note that such a multicurve cannot be represented by curves that are close to the boundary of $F_{k-1}$.

By [L3], the size of $V^{k+2}_0$ with respect to $V^{k+1}_0$ decreases exponentially as $k \to \infty$. Then, Koebe's Theorem implies that $G_k$ is exponentially close to being quadratic; that is, it can be decomposed as $G_k = D_k \circ Q_{h_k}$, where the maps $D_k$ become linear and the deformations $h_k$ are given by iteration of the Thurston map $\tau_Q$. Moreover, both $Q_{h_k}$ and $G_k$ fix $\alpha$ and send 0 close to itself, so we can conclude that $D_k \to \operatorname{id}$. It follows that $G_k$ rapidly approaches $Q_{h_k}$. Select any $Q$-stable multicurve $\Gamma'$ around $O$. If there is a level $k$ such that $\Gamma'$ does not intersect any of the pieces $V^{k+1}_j$, then $\Gamma'$ can be pushed to the boundary of $F_{k-1}$ to represent a $G_k$-stable multicurve around the pieces $V^{k+1}_j$. Since $\Gamma'$ is not an obstruction for $Q$, we deduce that, outside the $V^{k+1}_j$, the map $G_k$ is isotopic to $Q$. Since the only possible Thurston obstructions are restricted to extremely small regions, the distortion of $h_k$ goes to 0 and the maps $G_k$ converge to $Q$ exponentially fast in a neighborhood of $K_Q \setminus O$. The Koebe space between $V^{k+1}_0$ and $V^k_0$ increases without bound, so we can claim convergence of the maps $G_k$ in arbitrarily big neighborhoods of $K_Q$.

Theorem 5.2 has broad implications, since it provides excellent control of the shapes of nest pieces. In the next Subsection, we use our knowledge of the shape of the central pieces to compute the rate of growth of the principal moduli.

5.3. Growth of annuli. Here we study the moduli of the principal annuli in Q-recurrent maps. For this family, we can state a more precise version of Lyubich's Theorem on the linear growth of moduli. The key ingredient in our proof is Theorem 5.2, giving control over the shape of pieces, together with the extended Grötzsch inequality as stated in the Appendix. As a preparation for Theorem 5.5, we compute first the capacities of $K_Q$ with respect to 0 and $\infty$.

Lemma 5.4. Let $Q = Q(z)$ be the center of a hyperbolic component with critical period $m$. Then $\operatorname{cap}_\infty(K_Q) = 0$ and
$$\operatorname{cap}_0(K_Q) \;=\; -\sum_{j=1}^{m-1} \ln\bigl|2\,Q^{\circ j}(0)\bigr|.$$

Proof. $K_Q$ is connected, so the Böttcher coordinate $\varphi : \mathbb{C} \setminus K_Q \to \mathbb{C} \setminus \overline{\mathbb{D}}$ is the Riemann mapping with derivative 1 at $\infty$, so the first equality is obvious. The capacity of $K_Q$ at 0 is simply $\operatorname{cap}_0(U)$, where $U$ is the immediate basin of attraction of 0.
Consider the iterated polynomial $Q^{\circ m} : U \to U$. It is a 2-to-1 map of a simply connected domain with fixed critical point. Therefore, there is a Riemann map $\psi : \mathbb{D} \to U$ with $\psi(0) = 0$ such that
$$Q^{\circ m}\bigl(\psi(z)\bigr) \;=\; \psi(z^2), \tag{7}$$
and it is clear that $\operatorname{cap}_0(K_Q) = \operatorname{cap}_0(U) = \ln|\psi'(0)|$. Equation (7) shows that $\psi'(0)$ is the inverse of the quadratic coefficient in the series expansion of $Q^{\circ m}(z)$. Since the constant term of $Q^{\circ j}(z)$ is just $Q^{\circ j}(0)$, it is easy to find recursively that
$$\frac{1}{\psi'(0)} \;=\; \prod_{j=1}^{m-1} 2\,Q^{\circ j}(0),$$
and thus $\operatorname{cap}_0(K_Q) = -\sum_{j=1}^{m-1} \ln\bigl|2\,Q^{\circ j}(0)\bigr|$.

Recall that capacity and modulus are invariants that vary continuously with respect to the Carathéodory topology. Thus, given a sequence of topological disks around 0 converging in the Hausdorff topology to a set with pinched points, the sequence of capacities will converge to the capacity of the component of the limit set that contains 0. Similarly, for a sequence of annuli with adequate convergence, the limit of moduli detects only the modulus of the limit component that contains the limit closed geodesic.

Theorem 5.5. For any parameter $c \in$ Fıbo the principal moduli grow linearly at the rate $\lim_{n\to\infty} \frac{\mu_n}{n} = \frac{\ln 2}{3}$. If $Q = Q(z)$ is the center of a prime hyperbolic component with critical period $m \geq 3$, the rate of growth is exponential: $\lim_{n\to\infty} \frac{\mu_{n+1}}{\mu_n} = \kappa_m$, a constant which depends only on the period $m$ of $Q$ and satisfies $\kappa_m \nearrow \frac{3}{2}$ as $m$ increases.

Proof. Fix a level $N$ large enough so that the shape of the rescaled nest pieces is already close to the shape of $K_Q$. In particular, $\operatorname{cap}_0 \widetilde{V}^n \sim \operatorname{cap}_0 (K_Q)_0$ for all $n \geq N$, where $(K_Q)_0$ is the Fatou component of $K_Q$ containing 0. We also require that the lateral pieces be small enough to sit in the center of their (almost pinched) regions, far away from the boundary. This is possible since Lyubich's Theorem on linear growth forces shrinking, and Theorem 5.2 locates the nest pieces in positions that resemble the critical orbit of $Q$. Theorem 5.2 also gives the recursion formula for the first returns: on consecutive levels, the first return of a central piece decomposes as a composition of the returns of the previous $m$ levels. From this, we obtain a relation between consecutive principal annuli. In order to estimate the modulus of $V^{n+m}_0 \setminus V^{n+m+1}_0$, let us split the return map $g_{n+m}|_{(V^{n+m}_0 \setminus V^{n+m+1}_0)}$ into the above mentioned composition of first returns. First, $g_{n+m-1}$ is 2-to-1 on the annulus $V^{n+m}_0 \setminus V^{n+m+1}_0$; in fact, these two pieces are separated by a nested sequence of preimages of the central pieces $V^{n+1}_0, \ldots, V^{n+m-1}_0$. Due to the pinching of pieces near repelling points, most of the modulus of the image annulus under $g_{n+m-1}$ is concentrated in a region of $V^{n+m-1}_0$ that resembles the immediate basin of the critical value of $Q$. On this region, $g_{n+m-2}$ is injective. The remaining returns $g_{n+m-3}, \ldots, g_n$ behave in a similar manner, essentially preserving the modulus on regions around non-central pieces. The right hand side can be estimated by applying the Extended Grötzsch Inequality to the resulting decomposition. The recursive formula
$$x_{n+m} \;=\; \tfrac{1}{2}\bigl(x_{n+m-1} + x_{n+m-2} + \cdots + x_n\bigr) + \varepsilon_Q$$
has an asymptotic behavior that is ruled by the largest real root of its characteristic polynomial
$$p(z) \;=\; z^m - \tfrac{1}{2}\bigl(z^{m-1} + \cdots + z + 1\bigr). \tag{8}$$
When $m = 2$, the largest root of $z^2 - \tfrac{1}{2}(z+1)$ is 1. Consequently, the growth of the moduli is dominated by a linear term $\mu_n \sim An + B$. The only map with critical orbit of period $m = 2$ is $Q(z) = z^2 - 1$, for which $\varepsilon_Q = \ln 2$. The recursive relation $\mu_n \asymp \frac{\mu_{n-1}}{2} + \frac{\mu_{n-2}}{2} + \ln 2$ gives the linear rate stated in the Theorem. The analysis is different when $m \geq 3$. Writing $p(z) = z^m + \tfrac{1}{2}\,\frac{1-z^m}{z-1}$ (8'), if $z \geq \tfrac{3}{2}$ then the second term of (8') satisfies $\tfrac{1}{2}\,\frac{1-z^m}{z-1} \geq 1 - z^m$, so the characteristic polynomial $p$ satisfies $p(z) \geq 1$. On the other hand, it follows from (8) that $p(1) < 0$ (since $m \geq 3$).
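To make the root analysis concrete, one can clear denominators in (8); this reformulation, and the numerical value for $m = 3$ below, are supplied here as a worked example. Multiplying $p$ by $2(z-1)$ gives
$$2(z-1)\,p(z) \;=\; 2z^{m+1} - 3z^m + 1, \qquad\text{so for } z \neq 1,\quad p(z) = 0 \iff z^m\,(3 - 2z) = 1.$$
For $m = 2$ the factorization $2z^3 - 3z^2 + 1 = (z-1)^2(2z+1)$ recovers the largest root 1, hence linear growth. For $m = 3$, solving $2z^4 - 3z^3 + 1 = 0$ numerically gives $\kappa_3 \approx 1.234$. Finally, since $\kappa_m > 1$ forces $\kappa_m^m \to \infty$ as $m \to \infty$, the constraint $\kappa_m^m(3 - 2\kappa_m) = 1$ forces $3 - 2\kappa_m \to 0^+$, i.e. $\kappa_m \nearrow \frac{3}{2}$.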
Thus the largest root $\kappa$ of $p(z)$ is in the interval $(1, \frac{3}{2})$, and the exponential growth of $\mu_n$ follows.

Note. One should contrast the above result with [AM]. There, the authors show that for almost every non-hyperbolic real parameter, the principal moduli grow at least as fast as a tower of exponentials.

The "slower" growth displayed by Q-recurrent polynomials has immediate geometric consequences.

Definition 5.5. Say that a compact set $K$ is hairy at a point $c \in K$ if there is a sequence $\{\varepsilon_1, \varepsilon_2, \ldots\}$ converging to 0, such that $\frac{1}{\varepsilon_j} \cdot (K - c) \cap \mathbb{D}$ becomes dense in $\mathbb{D}$. If $K$ is hairy at $c$ for any sequence of scaling factors $\{\varepsilon_j\}$, we say that it satisfies hairiness at arbitrary scales.

By an observation of Rivera-Letelier ([R-L]), the construction of [W] can be extended to prove hairiness of $M$ at any critically recurrent non-renormalizable parameter. The idea is as follows. Since $K_c$ is connected, it contains a path from 0 to $\beta$. This crosses every principal annulus from one boundary component to the other. Choosing a high enough level, the annulus $A_n$ can be rescaled to constant diameter, containing a hair that connects the outer boundary with a small neighborhood of 0. The pull-backs by consecutive first return maps duplicate the number of hairs inside deeper annuli, and this collection of hairs is equidistributed around 0 (control of geometry). The hairiness of $K_c$ is then translated to the parameter plane to obtain the result.

Rivera-Letelier has announced a proof that the real quadratic Fibonacci polynomial displays hairiness at arbitrary scales. The proof relies in an essential way on the linear growth of moduli, so it holds true for any parameter in Fıbo. Other Q-recurrent polynomials miss this sharper property on account of the exponential growth of their principal moduli. It should be observed that this same property creates a somewhat embarrassing difficulty: since the moduli grow so fast, computer generated pictures fail to exhibit a convincingly hairy picture. In order to do so, it would be necessary to reach deep levels of the nest that may be out of the range of resolution of the software used.

5.4. Meta-Chebyshev. Starting with the chain LŘŘLĽ, construct an infinite sequence by the following iterative procedure. At each step, concatenate a second copy of the current chain on which the second to last marked symbol is substituted by its opposite. The result is
Θ : LŘŘLĽLRLLĽLRRLRLRLLĽLRRLLLRLLRLRRLRLRLLĽ · · ·
In order to verify that this is an admissible kneading sequence, we have to describe the sequence $\varepsilon_1 \varepsilon_2 \ldots$ of accumulated orientation reversals in Θ; that is, $\varepsilon_j$ is + or − depending on whether the number of L's up to position $j$ is even or odd. Then we only have to prove that for any $m$, the least $i$ such that $\varepsilon_{m+i} = \varepsilon_m \cdot \varepsilon_i$ satisfies $\varepsilon_i = -$. The sequence of $\varepsilon_j$ begins
−−− +− + + − +− + + + − − + + − +− + + + − + − − + − − + + + − − + + − +− · · ·
and the rule to construct it is as follows. Start with the chain −−− +−. At each step, make a second unchecked copy of the current chain and invert every symbol that appears to the left of the second to last check; then concatenate this copy to the right and put a check on the last symbol. This sequence starts with −−− +−, and every mark will be on a − symbol. It follows that there cannot be more than three + in a row, so the admissibility condition is satisfied. Thus, the kneading sequence Θ can be realized by a real polynomial; in fact, by
$$P_{MCheb}(z) := z^2 - 1.87450961730020085\ldots$$
This map was constructed with the requirement that the graphs $\Gamma(F_n)$ are isomorphic to $\Gamma\bigl(P_n(f_{-2})\bigr)$, where $f_{-2} : z \mapsto z^2 - 2$ is the Chebyshev polynomial. The motivation for this example is to investigate which properties of Q-recurrent polynomials will hold for parameters that imitate the behavior of (non-periodic) postcritically finite maps. Actually, the construction of $P_{MCheb}$ is very similar to that of Q-recurrent polynomials, the main difference being as follows. Since the critical orbit of $f_{-2}$ does not return to the center, the first return to level $n+1$ must be delayed until after the composition of $n$ first return maps $g_1 \circ \cdots \circ g_n$, when the critical orbit falls in $Y^{(0)}_0$. The parameter was chosen so that the first return to level $n+1$ occurs exactly at this moment; that is, $g_{n+1}$ is precisely $g_1 \circ g_2 \circ \cdots \circ g_n$ for all $n$. These are the iterates marked with a check. Moreover, the choice of frame orientations that results in a real parameter imposes the required label sequence $(2;\ \text{'}Z_1\text{'}, \text{'}LZ_0\text{'}, \text{'}LRZ_0\text{'}, \text{'}LRRZ_0\text{'}, \text{'}LRRRZ_0\text{'}, \ldots)$. By analogy with the critical orbit of $f_{-2}$, every nest level of $P_{MCheb}$ has two lateral pieces, and the itinerary of $V^n_i$ includes an infinite number of visits to $V^n_2$ after the first return to $V^{n+1}_0$. Since this combinatorial type is admissible, the results of [P] guarantee an uncountable set of complex parameters with the same combinatorics. By the rigidity result of Yoccoz, there cannot be other real polynomials in this class.

The methods used to work with Q-recurrent polynomials are not enough to study the nest of $P_{MCheb}$ in a metric sense. In particular, we have relied on the fact that $K_Q$ has a non-empty interior; this is not the case for $K_{-2}$. Nevertheless, the analogy is good enough that it is natural to pose the following.

Conjecture. There are suitable rescalings of the nest pieces of $P_{MCheb}$ such that the functions induced from the first return maps converge to $f_{-2}$, and such that properly rescaled pieces converge to the interval $[-2, 2]$ in the Hausdorff topology.

6. Parameter space

One of the most amazing attributes of complex quadratic dynamics is the replication of dynamical features in the parameter plane. For instance, the structure of a limb $L_{p/q}$ reflects the initial steps of the critical orbit for any parameter contained in it. In [T], Tan Lei showed that for a strictly preperiodic parameter $c$, the Julia set of $f_c$ and the Mandelbrot set exhibit local asymptotic similarity around $c$. As has been mentioned, a result of similar nature appears in [W], where Wenstrom shows that the paranest pieces around the real Fibonacci parameter $c_{fib}$ are asymptotically similar to the central pieces in the principal nest of $f_{c_{fib}}$. Thus, $\Delta_n(c_{fib}) \to K_{-1}$ in shape, and the author exploits this geometric result to obtain hairiness of $M$ around $c_{fib}$.

This Section discusses a generalization of the above results to the family of all Q-recurrent parameters. Note that the maps $Q$ are dense in $\partial M$ and that for each one there is an uncountable set of Q-recurrent parameters.

Theorem 6.1. Let $Q = Q(z)$ be the center of a prime hyperbolic component with critical period $m$, and let $c_Q$ be a Q-recurrent parameter. Then the paranest around $c_Q$ is infinite, and the parapieces $\Delta_j(c_Q)$ converge in shape to the filled Julia set $K_Q$.

This will require translating the corresponding result obtained in the dynamical plane to the space of parameters.
To do this, we need to introduce certain auxiliary parapieces, describe in detail the boundary of $\Delta_j(c_Q)$, and define a map $M_n : \Delta_n(c_Q) \to \mathbb{C}$ that "rescales" $\Delta_n$ to a compact set close to $K_Q$. From this result it becomes possible to compute the rate of growth of the paramoduli. Since the paramoduli increase at least linearly, the set of Q-recurrent parameters is a Cantor set of Hausdorff dimension 0. For the rest of this Section, unless explicitly mentioned, fix a map $Q = Q(z)$ in the center of a prime hyperbolic component such that the critical orbit has period $m$; also, $c_Q$ will stand for a fixed Q-recurrent parameter.

6.1. Auxiliary parapieces. Consider the first return map $g_{n-1}$. We can study the effect of $g_{n-1}$ on $V^n_1$; there, the condition of Q-recurrency gives a further return. Applying $g_{n-2}$, and continuing this procedure for a total of $m-2$ steps, the combined effect on $V^n_1$ is exactly the map $g_{n,1} : V^n_1 \to V^{n-1}_0$ of Definition 2.12 (Formula (9)). Since $g_{n,1}(V^n_1) = V^{n-1}_0$, the following is well defined.

Definition 6.1. Denote by $U_n \Subset V^n_1$ and $F^*_{n+2} \subset U_n$ the $(g_{n,1})$-pull-backs of $V^n_0$ and $F_{n+2} \subset V^n_0$, respectively. Compare Figure 8.

Note that $F^*_{n+2}$ is known once the nest structure up to level $n$ is given. However, assuming that the nest of our parameter displays the Q-recurrency type up to level $n+1$, it is possible to say more. Since $g_{n+1}(0) = g_{n-m+1} \circ \cdots \circ g_{n-1} \circ g_n(0)$, we must have
$$g_n(0) \in F^*_{n+2}. \tag{10}$$

Figure 8. We can pull $V^n_0$ back all the way to the piece $U_n$ inside $V^n_1$. Also, $U_n$ has a frame $F^*_{n+2}$ which is the corresponding pull-back of $F_{n+2} \subset V^n_0$. Neither $U_n$ nor $F^*_{n+2}$ are drawn. See also Figure 6.

Let us pass to the parameter plane. Our initial goal is to obtain precise control of the combinatorics inside relevant consecutive parapieces. In the first place, $\Delta_n$ is the set of parameters that have the same nest combinatorics as $c_Q$, up to the first return $g_n(0)$ to $V^{n-1}_0$.

Definition 6.2. We introduce two new auxiliary parapieces.
• $\Delta^*_{n+2}$ is the set of parameters such that $g_n(0)$ falls inside the frame $F_{n+1} \subset V^{n-1}_0$.
• $\Xi_{n+1}$ is the set of parameters such that $g_n(0)$ falls in $V^n_1 \Subset F_{n+1}$.
Each region $\Xi_n$ is well defined as a parapiece, since it represents the return to an explicit piece of the puzzle. On the other hand, $\Delta^*_{n+2}$ is actually the union of several parapieces; nevertheless, it is convenient to regard it as a parapiece to avoid longer descriptions. With this in mind, we are interested in the fact that parapieces of consecutive levels can be described in terms of a single first return map. From Formulas (6), (9) and (10), we obtain the parapiece inclusions displayed in Table (11).

6.2. Shape and paramoduli. In order to prove Theorem 6.1, let us introduce the map $M_n : \Xi_{n-1} \to \mathbb{C}$, where $\Xi_{n-1}$ belongs to the paraframe of the fixed parameter $c_Q$. Recall that $\alpha_{n-2} = \alpha_Q / s_{n-2}$ is the rescaling factor that defines the rescaled central pieces.

Proof of Theorem 6.1. From Table (11), when $c \in \Delta^*_{n+1}$ the first return $g_{n-1}(0)[c]$ is in $F_n$. Recall that for $n$ large, $\widetilde{F}_n[c]$ is exponentially close to $K_Q$. Fix an $\varepsilon > 0$ and find $n$ big enough so that both rescalings $\alpha_{n-2}[c] \cdot F_n = \widetilde{F}_n$ and $\alpha_{n-2} \cdot F_n$ are at most $\frac{\varepsilon}{2}$-close to each other and to $K_Q$. This means that $M_n(\Delta^*_{n+1})$ is a compact set $\varepsilon$-close to $K_Q$. By definition, the parapiece $\Xi_{n-1}$ is the set of parameters $c$ for which $g_{n-2}(0)[c]$ falls on the lateral piece $V^{n-2}_1$. Since this map is a first return, Proposition 2.3 implies that the correspondence $c \mapsto g_{n-2}(0)[c]$ is univalent in $\Xi_{n-1}$.
Moreover, $V^{n-2}_1[c]$ is at a definite distance away from the central piece $V^{n-2}_0[c]$ for all $c$, so the image of $\Xi_{n-1}$ under $c \mapsto g_{n-2}(0)[c]$ is uniformly far from 0, and similarly for all further iterations up to the first return $g_{n-1}(0)[c]$. Again, we can use Proposition 2.3 to deduce that $M_n$ is univalent in its entire domain. Since $n$ is big, the modulus of the annulus $\operatorname{int} V^{n-3}_0 \setminus F_n[c]$ is large for every $c \in \Xi_{n-1}$. In particular, since $F_n[c]$ has bounded diameter, this implies that the distance between the point $g_{n-1}(0)[c]$ and the curve $\partial F_n[c]$ is exponentially big for $c \in \partial\Xi_{n-1}$. Let $d_n$ be the minimum of these distances over all $c$. Then, $M_n(\partial\Xi_{n-1})$ and $M_n(\partial\Delta^*_{n+1})$ are at least a distance $(d_n - \varepsilon) \sim d_n \nearrow \infty$ apart. We can conclude that the modulus of $M_n\bigl(\operatorname{int}\Xi_{n-1} \setminus \Delta^*_{n+1}\bigr)$ is arbitrarily large, and so will be the modulus of $\operatorname{int}\Xi_{n-1} \setminus \Delta^*_{n+1}$. We have shown that the map $M_n$ is univalent in its domain and the modulus of $\operatorname{int}\Xi_{n-1} \setminus \Delta^*_{n+1}$ is big. Then, by the Koebe distortion Theorem, $M_n$ is asymptotically linear in a neighborhood of $\Delta^*_{n+1}$. Since $M_n(\Delta^*_{n+1})$ is $\varepsilon$-close to $K_Q$, we can conclude the proof.

As an immediate consequence of this control over the shape of parapieces, we can compute the rate of growth of the principal paramoduli $\mu_n$.

Corollary 6.2. The annuli of consecutive parapieces in the nest of a $(z^2-1)$-recurrent map grow linearly at the rate $\lim_{n\to\infty} \frac{\mu_n}{n} = \frac{2\ln 2}{3}$. For any other Q-recurrent map (where $Q$ has critical orbit of period $m \geq 3$), the moduli grow exponentially at the rate $\lim_{n\to\infty} \frac{\mu_{n+1}}{\mu_n} = \kappa_m$, where $\kappa_m$ is the same constant as in Theorem 5.5, converging to $\frac{3}{2}$ as the period $m$ of $Q$ increases.

Proof. First note that, although $U_n$ is defined as a pull-back of $V^n_0$, relation (10) shows that this piece is just $g_n(V^{n+1}_0)$. Now, when $c \in \Delta_n$, the first return $g_n(0)$ falls in $V^{n-1}_0$. For $c \in \Delta_{n+1}$, $g_n(0)$ is in $U_n$. From the previous result, $c \mapsto g_n(0)[c]$ is an almost linear map taking the annulus $\Delta_n \setminus \Delta_{n+1}$ close to $V^{n-1}_0 \setminus U_n$. Corollary 6.2 now follows from Theorem 5.5.

6.3. Auto-similarity in the Mandelbrot set. The discovery that parapieces around $c_{fib}$ are similar to the Julia set $K_{-1}$ revealed one more level of complexity in the structure of $M$, since it relates the dynamics of two different parameters. In this Subsection we use our results to take one further step. Having at our disposal an infinite collection of superattracting parameters, we reveal an interesting relation between two arbitrary parameters on $\partial M$ whose combinatorics can be completely dissimilar.

The Q-recurrency phenomenon is not restricted to the Cantor sets described so far. As part of the proof of the next Theorem, we will show that parapieces whose shape approximates $K_Q$ are dense on $\partial M$. This requires relaxing the definition of Q-recurrency, which assumes that the correct combinatorics start from level 0. Instead, we allow critical orbits that behave arbitrarily for several levels before settling into the desired Q-recurrent pattern. This critical behavior is referred to as generalized Q-recurrency. The assertion of Theorem 6.3 follows; it can be interpreted as saying that the geometry of most Julia sets is replicated near arbitrary locations of the boundary of $M$.

Theorem 6.3. Let $c_1, c_2 \in \partial M$ be two parameters such that $f_{c_2}$ has no indifferent periodic orbits that are rational or linearizable. Then there exists a sequence of parapieces $\{\Upsilon_1, \Upsilon_2, \ldots\}$ (most likely not nested) converging to $c_1$ as compact sets, but such that $\Upsilon_n \to K_{c_2}$ in shape.

Proof.
It is not difficult to obtain the result of Theorem 5.2 in more generality. In fact, inside any ball $B_\varepsilon(c)$ with $c \in \partial M$, we can find a system of parameters for which the first return maps converge (after scaling) to a given superattracting map $Q$. To see this, simply consider a tuned copy of $M$ contained in $B_\varepsilon$. All parameters in this copy $M'$ are renormalizable by the same combinatorics. In particular, there will be parameters whose renormalization is hybrid equivalent to a Q-recurrent map. For these parameters, a high level of the frame will contain a substructure whose graph is isomorphic to $\Gamma_0(Q)$, and we can start the same construction as in the proof of Theorem 5.2 to produce frame-like structures whose graphs are isomorphic to $\Gamma_n(Q)$. Since the combinatorics is prescribed by a polynomial, there can be no obstructions, just as in the original case. Then, the rescaled first return maps will converge to $Q$ as before. Moreover, we can translate the shape property to the parameter plane. Note that this argument is equivalent to prescribing the itineraries of nest pieces arbitrarily on the initial levels and then proving that they can be admissibly extended on subsequent levels to match the pattern given in Formula (4).

Now consider the filled Julia set of $f_{c_2}$. We know from [D1] that there is a sequence $\{Q_1, Q_2, \ldots\}$ of superattracting polynomials in prime hyperbolic components of $M$ such that $K_{c_2}$ can be arbitrarily approximated by filled Julia sets: $K_{Q_n} \to K_{c_2}$. To fix ideas, let us choose subindices so that the Hausdorff distance is $\operatorname{dist}_H(K_{Q_n}, K_{c_2}) < \frac{1}{2n}$. For any $n$, consider the ball $B_{1/n}(c_1)$ and locate a generalized $Q_n$-recurrent parameter $s_n$. By going to a deep enough level, we can find some parapiece $\Upsilon_n \subset B_{1/n}$ around $s_n$ whose shape is $\frac{1}{2n}$-close to $K_{Q_n}$; that is, so that there is a rescaling $\widetilde{\Upsilon}_n$ of $\Upsilon_n$ for which $\operatorname{dist}_H(\widetilde{\Upsilon}_n, K_{Q_n}) < \frac{1}{2n}$. Since $\Upsilon_n \subset B_{1/n}$, the sequence $\{\Upsilon_n\}$ consists of parapieces that get arbitrarily small and converge to $c_1$, while at the same time $\operatorname{dist}_H(\widetilde{\Upsilon}_n, K_{c_2}) < \frac{1}{n}$, so $\Upsilon_n \to K_{c_2}$ in shape.

C. Grötzsch inequality. The following result (and its quantitative version) is essential to estimate the modulus of an annulus that is split into subannuli.

Theorem C.1 (Extended Grötzsch Inequality). Let $K \subset \mathbb{C}$ be a simply connected compact set, and denote by $\operatorname{int}_0 K$ the component of its interior that contains 0.
(a) For topological disks $U$ and $V$ with $0 \in U \subset \operatorname{int}_0 K$ and $K \subset V$,
$$\operatorname{mod}(V \setminus U) \;\geq\; \operatorname{mod}(V \setminus K) + \operatorname{mod}(\operatorname{int}_0 K \setminus U).$$
(b) Let $\{U_n\}$ and $\{V_n\}$ be two sequences of nested topological disks satisfying
• $0 \in U_n \subset \operatorname{int}_0 K$ and $\operatorname{diam} U_n \searrow 0$;
• $K \subset V_n$ and $\operatorname{dist}(K, \partial V_n) \nearrow \infty$.
Then the deficit in the Grötzsch inequality tends to $\operatorname{cap}_\infty(K) - \operatorname{cap}_0(K)$.

An important observation is the fact that equality in (a) is achieved if and only if $\partial K \subset V \setminus U$ maps to a centered circle under the Riemann map.

Definition 6.5. Given an analytic univalent map $\varphi$ between regions $U$ and $V$, the distortion of $\varphi$ is defined as
$$\operatorname{Dist}(\varphi) \;=\; \sup_{z_1, z_2 \in U} \frac{|\varphi'(z_1)|}{|\varphi'(z_2)|}.$$
Koebe's Theorem provides great control of the distortion when there is enough space between $U$ and $V$.

Theorem D.1. Let $U$ and $V$ be two topological disks with $U \Subset V$. Then there is a constant $C$ such that for any univalent map $\varphi$ defined on $V$, $\operatorname{Dist}(\varphi|_U) < C$. Moreover, $C = 1 + O\bigl(e^{-\operatorname{mod}(V \setminus U)}\bigr)$ as the modulus goes to $\infty$.

E. Teichmüller space. The Teichmüller space of a Riemann surface carries a great deal of structural information. Here we focus on the case where the surface $S$ is the complex plane punctured at a finite set $O$. Then, the Teichmüller space $T_S$ can be described as a quotient of the space of quasiconformal deformations of $S$ (i.e.
the family of maps $\{h : S \to \mathbb{C} \mid h \text{ is a qc homeomorphism}\}$), where two deformations $h_1$ and $h_2$ are identified if and only if there is a conformal change of coordinates $\varphi : \mathbb{C} \to \mathbb{C}$ such that $\varphi \circ h_1$ is isotopic to $h_2$ relative to the puncture set $h_2(O)$.

Note. The coordinate changes $\varphi$ are affine maps, so the deformation of $O$ within a class is determined up to translation and complex scaling. Therefore, we can normalize a deformation $h$ by requiring that $h$ fix two distinguished points in $O$. These could be, for instance, the critical point and critical value of $Q$ in the case that $O$ is the postcritical set of a hyperbolic map $Q$.

It is fundamental to consider an alternate description of $T_S$ in terms of Beltrami differentials. Fix two almost complex structures on $S$ determined by their Beltrami coefficients $\mu\,\frac{d\bar{z}}{dz}$ and $\nu\,\frac{d\bar{z}}{dz}$. Assume that they are related by $\nu = h^*\mu$, where $h : S \to S$ is a quasiconformal self-homeomorphism of $S$ which is homotopic to $\operatorname{id}$ relative to $O$. Then, the straightening maps $h^\mu$ and $h^\nu$ are two quasiconformal deformations of $S$ in the same equivalence class in $T_S$. Conversely, we can associate to a deformation $h$ the almost complex structure $h^*\sigma = \frac{\bar{\partial}h}{\partial h}\,\frac{d\bar{z}}{dz}$, where $\sigma$ is the standard structure. It is easy to verify that this correspondence lifts to the equivalence classes, where it induces a bijection.
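As a supplement, the pull-back appearing in Definition 5.4 can be written explicitly (a standard formula for pulling back a Beltrami coefficient under a holomorphic map $Q$):
$$(Q^*\mu)(z) \;=\; \mu\bigl(Q(z)\bigr)\,\frac{\overline{Q'(z)}}{Q'(z)},$$
so that $\|Q^*\mu\|_\infty \leq \|\mu\|_\infty$; in particular, $\tau_Q$ sends $T_S$ to itself without increasing the dilatation of the associated deformations.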
Using Frequency Ratio Method for Spatial Landslide Prediction

Numerous landslides have occurred in the study area, damaging agriculture and pastureland. Since the study area did not have any landslide inventory or landslide prediction maps, a landslide inventory was produced based on field surveys (GPS) and satellite images (GeoEye and IKONOS). The frequency ratio technique is a statistical approach for modeling environmental conditions; it also serves to relate the predisposing factors to the dependent variable. The frequency ratio technique was used here to generate a landslide susceptibility map. Landslide and non-landslide pixels were tallied for eight landslide-related factors. The landslide susceptibility map was divided into five classes, from insensitive to very highly sensitive, based on the natural breaks method. A Receiver Operating Characteristic (ROC) graph was used to evaluate the frequency ratio method. In particular, the model predicts future landslide areas completely (sensitivity = 1), although it identifies insensitive areas with 17% error (specificity = 0.83).

INTRODUCTION

Landslides are the world's third largest natural disaster and cause much damage (Zillman, 2000). Human casualties and settlement damage, as well as the infrastructure problems caused by landslides, are increasing worldwide. On the world scale, landslides cause billions of dollars in losses and thousands of deaths and injuries each year. Most parts of Iran experience mass movement, including landslides, rock falls, earth flows and creep, because of the relief and mountainous terrain. The study area in this research is in the west of Iran, an area of 250 km² in Kermanshah province. This area is quite susceptible to landslides due to its climatic conditions, geology, geomorphologic characteristics and human activities. Many villages and farms are located on unstable ground, so if a landslide susceptibility map is prepared, it can help relocate some buildings out of hazardous areas. Landslide hazard evaluation is based on the analysis of the ground conditions in those regions where previous landslides occurred (Carrara et al., 1999). The purpose of the present study was to produce a landslide inventory and then a landslide susceptibility map of the selected area by the frequency ratio approach. Statistical methods are widely applicable for prediction and classification of environmental problems in various regions (Paliwal and Kumar, 2009). Landslide zonation mapping methods can be divided into the following categories: quantitative or statistical (Guzzetti et al., 1999; Rautela and Lakhera, 2000; Lineback et al., 2001; Cevik and Topal, 2003; Gorsevski et al., 2003; Lee, 2004; Tangestani, 2004; Sakellariou and Ferentinou, 2005; Ayalew and Yamagishi, 2005), deterministic (Gökceoglu and Aksoy, 1996), and qualitative or knowledge-based (Ives and Messerli, 1981; Rupke and Veilleux, 2011; Regmi et al., 2010).

For the landslide-hazard analysis, the main steps were data collection and construction of a spatial database from which the relevant factors were extracted, followed by assessment of the landslide hazard using the relation between the landslides and landslide-related factors, and validation of the results. A key assumption of this approach is that the past and present are the key to the future. In other words, the potential (occurrence possibility) of landslides can be taken as comparable to the actual frequency of landslides (Pradhan and Lee, 2010).
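The sensitivity and specificity figures quoted in the abstract come from comparing a predicted susceptibility raster against the landslide inventory. A minimal sketch of that bookkeeping in Python (our illustration with placeholder arrays, not the study's actual code):

```python
import numpy as np

# Confusion-matrix bookkeeping for a binary susceptibility map.
# `predicted` and `observed` are boolean rasters (True = landslide/susceptible);
# the small arrays below are placeholders for the real study-area grids.
predicted = np.array([[True, True, False], [False, True, False]])
observed  = np.array([[True, False, False], [False, True, False]])

tp = np.sum(predicted & observed)      # landslides correctly predicted
fn = np.sum(~predicted & observed)     # landslides missed
tn = np.sum(~predicted & ~observed)    # stable ground correctly predicted
fp = np.sum(predicted & ~observed)     # stable ground flagged as susceptible

sensitivity = tp / (tp + fn)           # 1.0 would mean no missed landslides
specificity = tn / (tn + fp)           # 0.83 would mean 17% false alarms
print(sensitivity, specificity)
```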
The aim of the present article is to apply the frequency ratio method to identify land susceptible to landsliding in the study area, to evaluate the accuracy of the model with a ROC graph, and to present the advantages and disadvantages of the frequency ratio method.

Study area: The study area lies in the southeast of Kermanshah province in western Iran. It is a mountainous area between 34°05'6"N and 34°13'06"N latitude and 47°22'03"E and 47°35'26"E longitude, with a total area of 250 km² (Fig. 1). It is characterized by rugged hills and mountainous terrain covered by scattered trees and fragmented forest. The study area is frequently subjected to landslides following land use change, especially alongside agricultural land where the land use has changed (Fig. 1 to 3).

MATERIALS AND METHODS

Landslide inventory: Landslide inventory maps show the locations and characteristics of landslides that have occurred in the past, but generally do not indicate the mechanism(s) that triggered them. A landslide inventory map is necessary for validation in any landslide assessment or zoning study: after implementing our method, we need to evaluate the model, and only previously occurred landslides can help to assess the result. Inventory maps therefore provide useful information about landslide-prone areas. In addition, recognizing the type and recency of landsliding can also help define the scope and design of site-specific geotechnical investigations and guide slope remediation strategies. In previous studies, inventory maps have been prepared by four methods: geomorphological, event, seasonal and multi-temporal inventories (Guzzetti et al., 2012). For the study area, we located some of the landslides (about 35% of the total existing landslides in the area, i.e., 29 landslides) with GPS (global positioning system). The study area varies in relative topography and lithology (Fig. 4); therefore the distribution of landslide occurrence is very uneven. Aerial photography can aid landslide identification, especially at a suitable scale (i.e., 1:20000 or larger). Stereoscopic interpretation of aerial photography and satellite images delineated landslide boundaries that could not be seen in the field, especially where the topography is hummocky or the vegetation is dense (we identified 85 landslides in the entire area). Most of the landslides in the area are larger than 3000 m², so we could identify them easily, because stereoscopic aerial photographs at 1:20000 scale accelerate landslide detection in the study area. These include old and new landslides; most of the old landslides needed to be checked on the ground for an accurate landslide inventory (Fig. 1).

Factors related to landslides: Based on the research background, 8 parameters were collected: slope, slope aspect, lithology, land use, erosion, distance to fault, distance to river and distance to road. Geological paper maps at 1:100,000 scale covering the study area were digitized and the geologic formations were identified.

The two largest datasets were the topographical parameters, which were collected from 1:25000-scale paper topographic maps. A Digital Elevation Model (DEM) was generated from a Triangulated Irregular Network (TIN) model derived from digitized topographic contours with a contour interval of 25 m (Fig. 5). The elevation, slope angle, aspect and slope shape parameters were obtained from the DEM.
Another dataset was land use, which was interpreted from ETM+ images acquired on 21 April 2009 and calibrated using field observations. Because of significant cloud coverage, the results of the classification were edited and simplified by manual digitization. The class boundaries on the images were modified by supervised classification with ERDAS (Earth Resource Data Analysis System) software, and the accuracy of the land use interpretation was checked in the field. Seven main land use classes were considered and classified. Based on validation from field observations, the land use map has the accuracy of the Landsat image spatial resolution (~30 m). After geo-referencing the image, a combination of bands 1, 4 and 7 was used to make false-color composite images, and the thematic information layer was created by a maximum likelihood classification method (Dymond et al., 2006). Finally, landslide zoning and susceptibility mapping were performed with the frequency ratio method. Figure 6 shows a flowchart of the stages of the study.

RESULTS AND DISCUSSION

Instability is a phenomenon that is sometimes observable and sometimes imperceptible. In the latter case, only evidence of it can be searched for, which requires the analysis of data and information; that is, to analyze this phenomenon, it is necessary to fit the data and information.

In principle, the only way to demonstrate the real accuracy of landslide evaluation maps is when new landslides occur after the generation of the landslide susceptibility map. The spatial relationship between the factors and landslide susceptibility was established using the frequency ratio method. The weights and values calculated for the classes of the various factors are shown in Table 1. Based on the ratios, it can be suggested that slope, lithology and land use play the main role in the occurrence of landslides in the area.

The relation between the landslide distribution and the factors was analyzed using the Frequency Ratio (FR) model. In the FR model, the percentage of landslide occurrence in a factor class is divided by the percentage of the total area occupied by that factor class (Fig. 7). An FR value greater than 1 indicates that the probability of landslide occurrence in that class is higher than the average landslide occurrence in the area. Slopes greater than 60°, due to the lack of soil, have a small number of landslides. Slope aspect is measured in degrees (between 0 and 360) from north and defines the azimuth of the flow. It was found that landslides in the study area occur mostly on northwest- and north-facing hillsides (Table 1), which are more abundant because they are not exposed to the sun.

The fault and lineament map was extracted directly from the geology map and was controlled using the Geoeye image and band 7 of the Landsat satellite image. There are no large seismic faults in the study area, but it can be divided into four regions based on the areal density of lineaments, from high to low. The map shows that landslide points are more frequent at locations of high lineament density. Also, using the FR method, buffer zones at various distances from the lineaments were analyzed against landslide occurrence. Up to a distance of 2,000 m, the FR values indicated a strong correlation with landslide occurrence; nearly 70% of landslide occurrences were within this zone, while beyond 2,000 m the ratio shows less correlation, which is again an expected result.
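To make the frequency ratio computation concrete, the sketch below shows one way to derive FR values from a landslide inventory raster and a categorical factor raster. The array names and class codes are illustrative, not taken from the original study.

```python
import numpy as np

def frequency_ratio(factor_classes, landslide_mask):
    """Compute the frequency ratio (FR) for each class of a factor raster.

    FR = (% of landslide pixels falling in the class) /
         (% of all pixels falling in the class).
    A value > 1 means the class is more landslide-prone than average.
    """
    fr = {}
    total_pixels = factor_classes.size
    total_slides = landslide_mask.sum()
    for cls in np.unique(factor_classes):
        in_class = factor_classes == cls
        pct_slides = landslide_mask[in_class].sum() / total_slides * 100
        pct_area = in_class.sum() / total_pixels * 100
        fr[int(cls)] = pct_slides / pct_area
    return fr

# Toy example: a 100x100 slope-class raster (classes 1-4) and a landslide mask.
rng = np.random.default_rng(0)
slope_class = rng.integers(1, 5, size=(100, 100))
landslides = rng.random((100, 100)) < 0.02 * slope_class  # steeper -> more slides
print(frequency_ratio(slope_class, landslides))
```

In frequency ratio studies, a per-pixel susceptibility index is then typically obtained by summing the FR values of the classes a pixel falls into across all factors, before classification with natural breaks.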
The area under the ROC curve is a global statistical measure of accuracy for each model. It is independent of any single prediction threshold, and its value varies between 0.5 and 1. The greater the distance of the curve from the reference line, the better the model (Beguería, 2006); equivalently, the closer the ROC curve is to the upper left corner of the plot, the more accurate the model. An ROC area of 1 indicates a perfect model, while an ROC area equal to 0.5 indicates a random fit. "Sensitivity" is the probability that a positive case is properly classified, and it is plotted on the y-axis of a ROC curve (Erener et al., 2010). Based on its distance from the reference line, the model shows good results. In the detection of positive areas (sensitivity), the FR model performs well; the area under the curve for the FR model was 0.93 (Fig. 8).

CONCLUSION

In this study, the results of the susceptibility map have been validated with the Receiver Operating Characteristics (ROC) curve. The validation shows that more than 80% of the predicted landslide occurrence is consistent with past landslide occurrence. The distribution of landslide density among the different susceptibility classes is acceptable: landslide occurrence in the non-susceptible class is the least, while the density increases in the high- and very-high-susceptibility classes.

Fig. 1: Map of the study area and distribution of shallow landslides (inventory) in the area, shown on Ikonos (above) and Geoeye (below) images (courtesy of the Geoeye company)
Table 1: Frequency ratio calculation of landslide influencing factors
Fig. 7: Landslide susceptibility map generated from the frequency ratio method
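As a complement to the validation described above, the following sketch shows how sensitivity, specificity and the area under the ROC curve could be computed for a susceptibility map against an observed inventory. The data and variable names are illustrative, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Illustrative data: per-pixel susceptibility index (e.g., summed FR values)
# and the observed landslide mask used for validation.
rng = np.random.default_rng(1)
susceptibility = rng.random(10_000)
observed = rng.random(10_000) < susceptibility * 0.1  # toy ground truth

auc = roc_auc_score(observed, susceptibility)
fpr, tpr, thresholds = roc_curve(observed, susceptibility)

# Sensitivity = TPR; specificity = 1 - FPR, reported at one chosen threshold.
i = np.argmax(tpr - fpr)  # Youden's J as an example threshold choice
print(f"AUC = {auc:.2f}, sensitivity = {tpr[i]:.2f}, specificity = {1 - fpr[i]:.2f}")
```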
2018-12-27T00:44:25.676Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "999e51d91a7b6e442182844dd9244a9571119e8f", "oa_license": "CCBY", "oa_url": "https://www.maxwellsci.com/announce/RJASET/7-3174-3180.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "999e51d91a7b6e442182844dd9244a9571119e8f", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology" ] }
256636617
pes2o/s2orc
v3-fos-license
A balance between aerodynamic and olfactory performance during flight in Drosophila The ability to track odor plumes to their source (food, mate, etc.) is key to the survival of many insects. During this odor-guided navigation, flapping wings could actively draw odorants to the antennae to enhance olfactory sensitivity, but it is unclear if improving olfactory function comes at a cost to aerodynamic performance. Here, we computationally quantify the odor plume features around a fruit fly in forward flight and confirm that the antenna is well positioned to receive a significant increase of odor mass flux (peak 1.8 times), induced by wing flapping, vertically from below the body but not horizontally. This anisotropic odor spatial sampling may have important implications for behavior and the algorithm used during plume tracking. Further analysis also suggests that, because both aerodynamic and olfactory functions are indispensable during odor-guided navigation, the wing shape and size may be a balance between the two functions. In addition to their aerodynamic properties, insect wings also move odor plumes closer to sensory organs. Li et al. show that Drosophila wings may trade optimal aerodynamic performance for improved olfactory function during flight. Insects have remarkable flying and odor-tracking capabilities [1][2][3] that have captivated the interest of naturalists and biologists for centuries. For example, a male moth can track its female counterpart from miles away via pheromone detection 4,5. Insects possess a sophisticated olfactory system that is extremely sensitive to a great number of volatile chemicals, whose total detection capacity has never been fully cataloged 6,7. But then, how are olfactory detection and tracking affected by wing flapping during flight, which inevitably perturbs the incoming odor plume? One widely held hypothesis is that flapping wings may actively draw odor plumes to the primary olfactory sensory organs, the antennae (an action analogous to "sniffing" in mammals), and that wing beating may be a critical part of active olfactory sampling for insects 8,9. For example, a silkworm with its wings removed is unable to track odor plumes, even though it tracks plumes while walking 10. Experimental measurements using hotwire anemometers showed that the induced airflow generated by the wings varied with wingbeat frequency, which may alter olfactory stimuli to the sensory organs 11. Yet, there is a lack of quantitative detail and confirmation on how flapping wings actively draw odor plumes to the antennae. On the other hand, through millions of years of evolution, insects have developed superior wing designs and complex flying mechanisms to enhance their aerodynamic performance [12][13][14]. Flapping the wing back and forth generates a tornado-shaped leading-edge vortex on its top surface that can provide almost twice the lift of a static wing 15,16. Unsteady wing flapping motion can augment lift through delayed stall when the wing sweeps through the air at a high angle of attack during the translational phase, and rapid wing reversal can capture the wing's own wake and thus further enhance lift. Besides these three most common unsteady aerodynamic mechanisms 13 (leading-edge vortex, delayed stall, and wake capture), many insects also apply other unique aerodynamic mechanisms to further increase lift. For example, the slender wings of mosquitoes can take advantage of the trailing-edge vortex 14.
For insects like butterflies, the wings may interact with each other to enhance force generation through clap-and-fling and clap-and-peel aerodynamics 17,18. For insects with a wider body shape, such as cicadas, wing-body interaction mechanisms may play a critical role in enhancing lift generation 19. Despite our improved understanding of the aerodynamics of insect flight, it is still unclear whether the need to maintain high aerodynamic efficiency conflicts with the need to draw more odors to the antennae. An alternative hypothesis that may have been overlooked by the scientific community is that the antennae may be located on the head precisely to avoid the disruption of the odor plume structure, which may contain localization information 20,21. Imagine a speed boat skimming across a calm lake: its bow is always hitting the calm water, ahead of the wake and the turbulence generated by the propeller. If the insect indeed takes advantage of the air perturbation generated by wing flapping, why are its antennae not located on its body or on its tail? Olfactory sensilla are located predominantly on the antennae and maxillary palps, both near the tip of the head, potentially avoiding the disturbance created by wing flapping. Ultimately, since both aerodynamic and olfactory functions are indispensable during odor-guided navigation, there has to be a mechanism to balance the insect's aerodynamic and olfactory needs. Here, we utilized an in-house, high-fidelity computational fluid dynamics solver to simulate a fruit fly in forward flight. We quantified the odor mass flux around the insect body and visualized the odor plume structures through a Lagrangian tracking method. The aerodynamic performance and vortex structures were also evaluated. The present effort explores the role of flapping wings in enhancing the olfactory stimulus and offers new insights into the key regions of flapping wings that may differentially impact aerodynamic forces and antennal odor mass fluxes during odor-tracking flight.

Results

Modeling a fruit fly in forward flight. We designed a "numerical wind tunnel" and simulated a morphologically accurate fruit fly model (Fig. 1a, b) in forward flight. Our simulated fruit fly is prescribed with realistic flapping kinematics (Supplementary Fig. 1), according to the literature 22,23, at a frequency of 213 Hz and a forward speed of 0.94 m/s, with a corresponding Reynolds number of 173, which describes the ratio of inertial to viscous forces in a fluid, and a reduced frequency (k) of 0.65, which is the ratio of wing-tip velocity to forward velocity (see Methods and Supplementary Fig. 2). A normalized uniform pseudo-odor was released from the upstream inlet (see Methods). This pseudo-odor can represent most natural odors in the environment, whose diffusivities in air are generally quite low, ranging from 10⁻¹ to 10⁻² cm²/s. Since convective odor transport due to air movement dominates the system (Peclet number 10²-10³), odor diffusion was ignored. Utilizing an in-house direct numerical simulation solver 24, we simulated the unsteady aerodynamics of the forward-flying fruit fly (Fig. 1b) and quantified the associated odor plume structures (Fig. 1c-e).

Wing flapping enhances odor mass flux to the antenna. Our simulation confirmed that the flapping locomotion indeed enhances the odor mass flux over the antennae (by ~1.8 times at its peak value; Fig. 1d, position i).
Surprisingly, the odor flux along the fruit fly body and tail locations has a lower peak intensity than at the antenna and is more chaotic due to the wake generated by the flapping wings (Fig. 1d and Supplementary Fig. 3). This finding confirms that the conventional wisdom 25 is correct: during forward flight, the antennae are well positioned to receive a significantly increased odor mass flux while avoiding significant air disturbance compared to other locations along the body. The mechanism for this enhanced odor mass flux to the antennae, as we observed, consists of two stages: trapping and flicking upward. In Fig. 1e and Supplementary Movie 1, the odor plume structure is visualized by odor particle tracing; the colors of the particles indicate different release locations. Without flapping motion, only a narrow jet of particles (the yellow particles) that is directly in the path of the antennae will pass over the antennae. With wing flapping, during the downstroke (Fig. 1e; t/T = 6.75 and t/T = 7.00), the wings push and trap odorous air below the body, preventing it from escaping downstream. Once the wings start the transition to the upstroke (Fig. 1e; t/T = 7.25), the wide trailing edges close to the wing root rotate and flick the trapped odorous air (green and cyan particles) upward toward the antennae (Supplementary Movie 1, 00:38-01:15). The peak odor mass flux occurs not during the upstroke or downstroke but, rather, during this wing transitional phase. This phase-locked odor mass flux within the wing-flapping cycle may be utilized by the olfactory system to enhance odor detection through potential neural connections to the motor centers 2,8.

Effects of higher flapping frequencies. To further explore the effects of flapping wings, different flapping frequencies (k = 0.33-1.30) were also simulated (Fig. 2). Figure 2a shows the top view of the wake topology, visualized using the Q-criterion and color-coded by the normalized pressure. In general, the wake is dominated by a chain of vortex loops behind each wing that periodically shed off at the wing reversal points of the wing-beat cycle. Wing flapping induces a strong air vortex over the head (Fig. 2b) that intensifies with higher reduced frequency; this is the main driver of the increased odor mass flux to the antenna region, and potentially to the maxillary palp as well. Correspondingly, the antenna odor mass flux increases significantly with higher flapping frequency (Supplementary Movie 1, 01:16-01:53), mostly because particles farther below the body (blue and purple) are also perturbed and pushed up over the antenna region (Fig. 2c, k = 1.30). In another sense, the increased odor mass flux with wing flapping is the result of a broader spatial sampling range. However, this spatial sampling is mostly limited vertically to below the body: in the horizontal plane (Fig. 2d; Supplementary Movie 1, 01:54-02:31), only a narrow stream of odor particles that is in the direct path of the body center can pass through the antennae, regardless of how fast the wings flap. The anisotropic spatial sampling ranges suggest that insects may be better able to sample and detect odor plumes coming from below their body, owing to their wing flapping. This may have implications for the plume-tracking behavior of most insects, which often consists of two distinct phases: surging upwind toward an odor source and zigzagging cross-wind (casting), which is triggered by loss of the odor plume 21.
Behaviorally, the zigzagging occurs more often horizontally than vertically 26, potentially because insects are able to sample a wider spatial range in the vertical direction by wing flapping; horizontal casting is thus more essential to search for and locate lost plumes. Speculating further, this might also be a reason for the horizontally oriented antennae of moths (and many other insects), which could potentially compensate for the lateral sampling range. These hypotheses, albeit speculative, are amenable to future experimental investigation and may lead to further insights into insect odor-tracking behavior and algorithms.

Balance between aerodynamics and olfaction. Since both aerodynamic and olfactory functions are indispensable during odor-guided navigation, we set out to understand how insect wings achieve a balance between these seemingly conflicting roles. As shown in Fig. 2e, most of the lift force is produced by the wings during the downstroke and peaks near the mid-downstroke, corroborating a previous study 22, but in a very different phase from the odor mass flux peaks (Fig. 2f). The cycle-averaged lift coefficient and odor mass flux over the antennae obtained from the simulations are summarized in Table 1, and they do not follow the same trend. The lift coefficient (C_L) is the total lift force non-dimensionalized by the wing-tip velocity squared and the wing area, which generally reflects the efficiency of the wing shape and design 27; higher lift coefficients can translate into carrying more payload per unit wing area 28. The data show that an increased reduced frequency (from k = 0.65 to k = 1.30) enhances the peak odor mass flux over the antennae (59%) but slightly decreases the lift coefficient (−11%). More importantly, the cycle-averaged lift coefficient distribution contour plot on the wing surface (Fig. 3a; Supplementary Fig. 2) shows that the trailing-edge portions that are important to odor transport, as observed previously (upward flicking), contribute poorly to lift generation. Intrigued by this observation, we virtually cut off the trailing-edge portion of the fruit fly wing (Fig. 3b) and reran the flight simulation while keeping all other settings the same. Lift production (Fig. 3a-c), vortex formation (Fig. 3d-f) and odor transport (Fig. 3g-i) were compared side by side over a wide range of reduced frequencies (Fig. 3j-l). The modified wing improves the average lift coefficient by 9.6% at k = 0.65 and by 18.0% at k = 1.30, as well as improving the overall aerodynamic efficiency, evaluated using the ratio of total force generated (combining both lift and forward thrust) to total power consumed (Fig. 3k: by 4.3% at k = 0.65 and by 6.3% at k = 1.30). The reason: the trailing-edge portion of the wing accounts for 20% of the wing area yet accounts for only <5% of the lift force generation (Supplementary Fig. 4); thus, removing it significantly reduces the power needed to flap the wing against air resistance while improving the lift coefficient as well as the overall aerodynamic power economy. However, this wing modification results in a significant reduction of the peak odor mass flux over the antennae, by 10.7% at k = 0.65 and by 17.9% at k = 1.30 (Fig. 3l).
The modification does not significantly affect the leading-edge vortex formation and circulation of the wing but significantly reduces the strength of the vortex around the antennae (Fig. 3d-f), which explains the differential impact on aerodynamic performance versus antenna odor flux. In addition, the vertical spatial sampling range was also decreased compared to the original wing (Fig. 3g, h; Supplementary Movie 1, 02:51-04:07); for example, after the wing shape modification, the blue and purple particles cannot be pushed over the antenna region at k = 1.30 (Supplementary Movie 1, 03:29-04:07). This wing manipulation confirms that a wider trailing edge leads to a stronger odor-trapping and flicking effect during flight, and suggests that the original wing shape may not be optimal for aerodynamic performance but may result in better olfactory performance.

Discussion

Some 400 million years ago, insects evolved wings and the ability to fly 29. Flight allows them to escape from ground predators, to explore farther for food sources and mates, and to fill new ecological niches that ground animals cannot reach 30. With this critical advantage, insects quickly became the most diverse and abundant animal group 7. Through natural selection, insects have developed very complicated wing designs and flying mechanisms to enhance aerodynamic performance, still beyond our complete understanding, including delayed stall 13, wake capture 13, clap and fling 17, trailing-edge vortices 14, wing-wing interactions 31, wing-body interactions 19, etc. There is a common belief that insect wings have evolved to be highly aerodynamically efficient 32,33 and that even slight changes in wing geometry or flapping kinematics could lead to a loss in aerodynamic performance [34][35][36]. Yet, a different challenge arises when insects take to the air: their wing flapping now inevitably perturbs incoming chemosensory cues. How do insects address the conflict between aerodynamic performance and olfactory function? Our study, through the use of computational fluid dynamics simulations, quantitatively confirms and clarifies that flapping wings may enhance olfactory stimuli to the perfectly positioned primary olfactory organs (the antennae) and offers two new insights: (1) because both aerodynamic and olfactory functions are indispensable during odor-guided navigation, some aerodynamic performance may be sacrificed to improve olfactory performance, and (2) the shape and size of the wing may be a balance between the two functions. The wide trailing edge close to the wing root might not be optimal in terms of aerodynamics, but it can induce strong airflow over the insect's antennae. Furthermore, we found that higher flapping frequencies and strong wing transition phases induce higher odor mass flux, while lower flapping frequencies and downstroke phases produce better lift coefficients; again, a balance between the two functions. The seemingly effortless flying and odor-tracking abilities of many insects, remarkable for their tiny size, have captivated the interest of naturalists and biologists for centuries. Insect wings are a remarkable evolutionary product known to serve diverse roles in addition to flying, including pheromone dispersal 37, sound production 38,39, and ventilation of hives 40,41. Drosophila may use their wings in a courtship display to engage potential mates 42.
Beetles have evolved a hardened forewing (or elytron) as armor protection 43, as well as for improving aerodynamic performance by interacting with the flapping hindwings during flight 44. The results of our study critically expand the basic understanding of insect wing functions and insect biology by further revealing that optimal aerodynamics may be traded for more efficient olfactory performance during flight, and may inspire novel biological and neuroethological investigations. Directly assessing the impact of wing and antenna geometries, kinematics, and spatial orientations on olfactory sensitivity for different species of insects may help elucidate characteristics that improve odor-guided navigation. The interaction between wing flapping, the anisotropic spatial sampling ranges, and the complicated incoming odor plume structures that insects might experience in the field may have further implications for understanding the behavior and algorithms of plume tracking in insects. The findings can also contribute to the design of future, more efficient micro-aerial vehicles with onboard chemical detectors.

Methods

Model fruit fly. A morphologically accurate model of the fruit fly D. melanogaster was constructed (Fig. 1a; Supplementary Fig. 1a). The wing shape was digitized from a D. melanogaster wing 45 (Supplementary Fig. 1b) that has a wing area of 2.59 mm², a wingspan (R) of 2.87 mm, and an average chord (c̄) of 0.89 mm. Left-right symmetry was assumed. The fruit fly wings, small and relatively stiff, were assumed to be rigid during the flapping motion, based on previous literature 46,47. Based on the previous literature on forward-flying insects 22,23,48, the wing kinematics were prescribed with a sinusoidal wing position angle ϕ(t) = 0.5Φ·cos(2πft), with an amplitude of Φ = 140°, a wing deviation angle θ = 0° with respect to the stroke plane, and a constant wing feathering angle α of 60° during the upstroke and −30° during the downstroke. At the ventral and dorsal stroke reversals, α changed sinusoidally over a duration of 0.22 of the cycle period. This represents an idealized flapping motion used by insects during forward flight. The wing Euler angle profiles are illustrated in Supplementary Fig. 1c, and the wing chord kinematics are visualized in Fig. 1d (see also Table 1: aerodynamic performance and odor mass flux over the antennae at various reduced frequencies). The inclination angle of the stroke plane (β) against the horizontal body axis is 20°, and the entire body is inclined by χ = 45° with respect to the horizontal plane. The clipping of the wing trailing edge discussed in the main text is based on the surface contour of the cycle-averaged lift coefficient of the original wing at k = 0.65 (Supplementary Fig. 5). The modified wing area and mean chord length are 2.09 mm² and 0.73 mm, respectively.

Numerical method. The numerical simulations were performed using a second-order Cartesian grid-based immersed boundary method. The details of this solver have been described previously 24; brief descriptions are provided here. The non-dimensional equations governing the flow in the numerical solver were the time-dependent viscous incompressible Navier-Stokes equations, written in indicial form as

∂u_i/∂x_i = 0 (1)

∂u_i/∂t + ∂(u_i u_j)/∂x_j = −∂p/∂x_i + (1/Re)·∂²u_i/(∂x_j∂x_j) (2)

where u_i (i = 1, 2, 3) are the velocity components in the x-, y-, and z-directions, respectively; p is the pressure; and Re is the Reynolds number.
Equations 1 and 2 were discretized using a second-order central difference scheme on a nonuniform Cartesian mesh, where the velocity and pressure are collocated at the cell centers. The unsteady equations were solved using a fractional step method, which provides second-order accuracy in time. An Adams-Bashforth scheme and an implicit Crank-Nicolson scheme were used to discretize the convective and diffusion terms, respectively. Boundary conditions on immersed bodies were imposed through a "ghost-cell" procedure, and the flow simulations were conducted on stationary, non-body-conformal Cartesian grids. This arrangement eliminates the complicated remeshing algorithms usually needed for conventional Lagrangian body-conformal methods.

Simulation setup. Simulations were conducted on a nonuniform 289×137×249 (about 10 million-point) Cartesian grid. The overall computational domain was a 15R×15R×15R cubic box, where R = 2.87 mm is the wingspan length. To resolve the near-wake vortex structures, a cuboidal region around the fruit fly with dimensions of 2R×1R×2.5R had a high-resolution uniform grid (Δ ≅ 0.0125R), as shown in Supplementary Fig. 6a. Stretched grids were applied in all three directions from the fine region to the outer boundaries. At the left-hand boundary, a constant inflow velocity boundary condition was applied. The right-hand boundary is the outflow boundary, where a zero stream-wise gradient boundary condition was applied for the velocity, allowing the vortices to convect out of this boundary without significant reflections. The zero-stress boundary condition was applied at all lateral boundaries, and a homogeneous Neumann boundary condition was applied for the pressure at all boundaries. A high-density triangular surface mesh specifies the surfaces of the fruit fly's body and wings (see Supplementary Fig. 6b), on which no-slip boundary conditions were applied. To guarantee that the entire flow field reached a periodic state 49,50, all simulations were run for eight flapping cycles; this running period also ensured that the wake structures generated by the flapping wings fully interacted with the outflow boundary condition. Grid refinement was performed to ensure that the simulation results were grid independent. Supplementary Fig. 7 presents a comparison of the lift and forward thrust coefficients at three different grid densities. The plots show that the differences between the medium grid (presented in this article) and the fine grid are <2.1% for the lift coefficient and 0.9% for the thrust coefficient, at their peaks, demonstrating that the results of the current study are grid independent. In addition, the thrust coefficient in Supplementary Fig. 7b has positive and negative values, indicating that the fruit fly produced thrust during the upstroke and drag during the downstroke. The cycle-averaged thrust coefficient is close to zero (~0.018); thus, force balance is approximately achieved in the horizontal direction at k = 0.65, which is close to self-propelled forward flight. At other flapping frequencies, the fruit fly should be considered tethered. The Reynolds number in forward flight is defined as Re = U∞R/ν, where U∞ represents the forward flight speed (0.94 m/s) and ν is the kinematic viscosity of air at room temperature (27 °C), 1.56 × 10⁻⁵ m² s⁻¹. Based on this definition, the Reynolds number in this study is 173. The reduced frequency is defined as k = fR/U∞, where f is the flapping frequency.
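As a quick numerical check of these two definitions, the short sketch below reproduces the stated Reynolds number and reduced frequency from the flight parameters given in the text; the variable names are ours.

```python
# Flight parameters quoted in the text for the simulated fruit fly.
U_inf = 0.94       # forward flight speed, m/s
R = 2.87e-3        # wingspan, m
nu = 1.56e-5       # kinematic viscosity of air at 27 C, m^2/s
f = 213.0          # wingbeat frequency, Hz

Re = U_inf * R / nu   # ratio of inertial to viscous forces
k = f * R / U_inf     # ratio of wing-tip to forward velocity scale

print(f"Re = {Re:.0f}, k = {k:.2f}")  # -> Re = 173, k = 0.65
```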
In unsteady aerodynamics, the reduced frequency is a dimensionless number used to define the degree of unsteadiness of the flow field. To change the reduced frequency, we can either adjust the forward flight speed (U∞) or adjust the wing flapping frequency (f), with a similar effect on the aerodynamics (see Supplementary Fig. 2). Thus, in the current study, we used a single incoming flow velocity and varied the wingbeat frequencies (summarized in Supplementary Table 1). The results may be extrapolated to other forward speeds by matching Re and the reduced frequency.

Evaluation of the aerodynamic force and power. The instantaneous aerodynamic forces acting on the wing surface can be calculated from the pressure and stresses along the surface based on the solutions of the Navier-Stokes equations. The lift and thrust forces (F_L, along the vertical direction; F_T, along the horizontal direction) are presented as non-dimensional lift and thrust coefficients, computed as

C_L = F_L/(0.5ρU_tip²S), C_T = F_T/(0.5ρU_tip²S)

where C_L and C_T are the lift and thrust coefficients and S is the area of the wing surface. U_tip is the mean wing-tip velocity, defined as the cycle-averaged magnitude √(u_tip² + v_tip² + w_tip²), where u_tip, v_tip and w_tip are the wing-tip velocity components in the x-, y-, and z-directions, respectively. Similarly, the non-dimensional total force coefficient is given by C_F = F_total/(0.5ρU_total²S), where F_total represents the total aerodynamic force generated by the wing, a combination of both the lift and thrust forces. The aerodynamic power consumption (P_aero) is the power needed to flap the wing against air resistance. The non-dimensional aerodynamic power coefficient is defined as C_PW = P_aero/(0.5ρU_tip³S), as in previous studies on fruit fly 51, cicada 19 and dragonfly 52 flight. The overall aerodynamic efficiency is evaluated using the ratio of total force generated to total power consumed, defined as C_F/C_PW. Supplementary Fig. 4 compares the lift coefficient and lift force generated by the original wing and the modified wing. The trailing-edge region accounts for ~20% of the total wing area yet contributes <5% of the lift generation over all frequencies. Thus, its removal improves the lift coefficient as well as the overall aerodynamic efficiency (force-to-power ratio) (Fig. 3k), due to less power being consumed to flap the wing against air resistance.

Validation of numerical method. To validate the numerical method used in the present study, a separate numerical simulation of the fruit fly was conducted to replicate the experiments of Sane and Dickinson 53. The wing sweeps in the horizontal plane and rotates at the end of each stroke. The stroke amplitude was 180° and the angle of attack at midstroke was 50°. The Reynolds number was 136. A nonuniform Cartesian grid of size 256×144×192 was used in a computational domain of 30c̄ × 30c̄ × 30c̄ to obtain domain-independent results. The comparisons with the experimental measurements 53 and previous numerical simulations 22,54 are shown in Supplementary Fig. 8. The magnitude and variation of the computed lift and drag forces agree reasonably well with the previous results.

Quantification of odor mass flux around antennae. The governing equation of odorant convection and diffusion in the air phase is

∂C′/∂t + u_i·∂C′/∂x_i = (1/Pe)·∂²C′/(∂x_i∂x_i) (3)

where i = 1, 2, 3 indicates the components in the x-, y-, and z-directions; C′ is the normalized odorant concentration defined by C′ = C/C_in, in which C_in is the inlet or ambient air odorant concentration (C′ at the inlet boundary equals 1).
The normalized uniform inlet concentration allows us to focus on the effect of wing flapping; in the future, the more complicated odor plume structures that the insect might experience in the field can be introduced. The Peclet number for mass transfer is defined by Pe = Re·Sc, where Sc is the Schmidt number, the ratio between kinematic viscosity and mass diffusivity (Sc = ν/D). Typical natural odors in the environment have quite low diffusivity (D) in air at normal temperature and pressure, ranging from 10⁻¹ to 10⁻² cm²/s; thus Sc has a range of 10⁰ to 10¹. Based on this definition, the Peclet number in the current study is 10² to 10³, so convective transport due to air movement dominates the system for most natural odors, and odor diffusion may be ignored. By ignoring odorant diffusion, the first term on the right-hand side of Eq. 3 is treated as zero. The odor mass flux over the antennae is then calculated as C′·ρ_odor·U*, where ρ_odor is the density of odor and U* represents the air velocity at 0.03R above the antenna surface (since the air velocity is always zero at the surface). This equation assumes 100% odor absorption when the odorant-laden air passes over the olfactory organ, which is a reasonable simplification for an initial study based on the Sc number: Sc, the ratio of momentum diffusivity to mass diffusivity, is higher than 1 for most common odors. The diffusion and binding process of a specific odor through the boundary layer to the olfactory structure and to the olfactory receptors will certainly be worthy of further investigation in the future. The instantaneous profiles of U* are obtained by averaging three virtual probes around the antenna. In addition, the density ratio between odor particles and air is assumed to be 1 (ρ_odor = ρ_air = 1.225 kg m⁻³). Similar probes are placed at 10 different locations 0.03R above the body surface around the fruit fly body (see Supplementary Fig. 3).

Lagrangian tracking of odor structures. To visualize the odor plume structures, a Lagrangian tracking approach is applied, assuming that odor transport is dominated by the convective flow field, as described above. Computational neutral-buoyant particle tracers have been widely used to mimic smoke 55 and bubbles 56 when diffusion is low, with good experimental agreement. The time step was set to 0.001 s.

Code availability. The in-house CFD solver algorithm 24 has been published elsewhere. The executable file of the code is available from the authors upon reasonable request, for non-commercial purposes only.
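To illustrate the Peclet-number reasoning above, here is a minimal sketch using the viscosity and diffusivity ranges quoted in the text; the variable names are ours.

```python
# Peclet number estimate justifying the neglect of odor diffusion.
Re = 173               # flow Reynolds number from the simulation
nu = 0.156             # kinematic viscosity of air, cm^2/s (1.56e-5 m^2/s)

for D in (1e-1, 1e-2): # typical odorant diffusivities in air, cm^2/s
    Sc = nu / D        # Schmidt number: momentum vs. mass diffusivity
    Pe = Re * Sc       # Peclet number: convection vs. diffusion of odorant
    print(f"D = {D} cm^2/s -> Sc = {Sc:.1f}, Pe = {Pe:.0f}")
# Pe falls in the 10^2-10^3 range, so convective transport dominates.
```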
2023-02-08T14:48:20.514Z
2018-08-10T00:00:00.000
{ "year": 2018, "sha1": "46ecb9dbe5e08ffa60ab931d5553cc8e0463c2f3", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-018-05708-1.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "46ecb9dbe5e08ffa60ab931d5553cc8e0463c2f3", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [] }
16398126
pes2o/s2orc
v3-fos-license
Recent Developments in UV Optics for Ultra-short, Ultra-intense Coherent Light Sources With the advent of Free Electron Lasers and, in general, UV ultra-short, ultra-intense sources, the optics needed to transport such radiation have evolved significantly with respect to standard UV optics. Problems like surface damage, wavefront preservation, beam splitting, beam shaping and beam elongation (temporal stretching) pose new challenges for the design of beam transport systems. These problems lead to a new way to specify optics, a new way to use diffraction gratings, a search for new optical coatings, tighter and tighter polishing requirements for mirrors, and an increased use of adaptive optics. All these topics will be described in this review article, to show how optics could really be the limiting factor for the future development of these new light sources.

Introduction and Wavefront Preservation

In terms of optics, what really matters when you have an ultra-intense, ultra-short, fully-coherent source? Well, the easy answer is: survive the power/fluence and preserve the unique source characteristics. Easy to say, but not to achieve.

Let us start with some basic concepts that arise infrequently with UV optics for Synchrotron Radiation sources but appear more often in the laser world. The most challenging task for an optic dealing with coherent sources is the preservation of the wavefront. There are two main factors that can alter the wavefront and reduce the intensity of the beam: limited mirror acceptance and mirror shape errors. The wish to maximize flux collection is obvious, but having to deal with a fully-coherent beam is more challenging. In fact, if one considers the acceptance from a purely geometrical point of view, one very seriously underestimates the effect on the wavefront: the mirrors act as slits, and there is a diffraction effect that can introduce periodic structure in an unfocused beam, as well as side diffraction lobes in a focused beam. However, the most detrimental effect, as well as the most complicated to handle, is the effect of figure or shape errors on both focused and unfocused beams. To quantify this effect, one often uses the Strehl Ratio [1]. The Strehl Ratio (SR) represents the ratio between the obtained or simulated peak intensity and what would be available from a perfect optical system. It therefore has numerical values between 0 and 1.
Usually, an optical system is considered "good" if the SR is ≥ 0.8 (Maréchal criterion [2]). This is actually a useful criterion only if the optical system always delivers light at a focus; away from the focus, we need a tougher criterion. To understand what is needed, let us see how the SR is defined:

SR = exp[−(2πφ)²] (1)

where φ is the rms phase error, in units of the wavelength, introduced on the wavefront by the non-ideal optics. The phase error φ is wavelength (λ) dependent and, for the purpose of this discussion, is due to some imperfection of the mirror surface, e.g., shape errors. In the case of a reflecting, normal-incidence optic with defects of rms amplitude δ, φ = 2δ/λ. In the case of a grazing-incidence mirror, with an rms shape deviation from the desired profile δh and grazing angle-of-incidence θ, the phase error introduced on the wavefront is:

φ = 2δh·sinθ/λ (2)

Now, an SR of 0.8 results from an rms phase variation (and therefore reflection amplitude variation) of ~7% (from Equation (1)). If we consider a plane wave reflected by the mirror, and measure the intensity of the unfocused reflected beam, we will find an rms difference in intensity (proportional to the square of the amplitude) of ~13%. This means that if one is using the beam away from the focus, for instance for imaging or coherent diffraction experiments, the sample will be illuminated in a non-uniform way, with variations even up to 40%-50% (e.g., the peak-to-valley intensity variation can easily be 5 or more times larger than the rms variation). This is, of course, unacceptable for experiments requiring a "uniform" illuminating beam. For most cases, when a uniform beam is necessary, a required SR of 0.97 is a good assumption (see Figure 1 for a further explanation of this statement).

It is often difficult to define what is really needed, and sometimes the state of the art or the cost of the optics can be the limiting factor. What one really needs are two things: a very low residual rms shape error, and the absence of high-frequency errors on the mirrors. If the first statement is obvious, the second needs some explanation. High-frequency shape errors can have large amplitude but, being of short period, produce a limited increase in the rms value. The overall SR is determined by the rms shape errors, but the profile of the intensity variation out of focus is mainly determined by the P-V shape errors.
As an example, let us consider a system of four mirrors with a 2° grazing-incidence angle, and wavelengths of 10 and 5 nm. An SR of 0.8 requires shape errors of 5 and 2.5 nm rms, respectively; an SR of 0.97 would require shape errors as low as 2 and 1 nm rms. These numbers can be derived from Equation (2). In a system of N consecutive mirrors, with the same angle-of-incidence and with similar, uncorrelated rms shape errors, the phase errors add in quadrature, and the allowed shape error per mirror can be calculated from:

δh = (λ/(4π·sinθ))·√(−ln(SR)/N) (3)

Shape errors of the order of 1 nm rms are very challenging, in particular for long mirrors. To slightly simplify the manufacturing process, the optics can be specified based on the real footprint of the beam on the mirror itself. For instance, let us consider the case of the FERMI@Elettra FEL [3], the free electron laser source located in Trieste, Italy. This source delivers photons in the wavelength range from 100 to 5 nm (and even lower wavelengths). The beam is diffraction limited, and therefore the divergence, and consequently the footprint of the beam on the mirror surface, changes by a factor of 20 over the wavelength range. Since the tightest specification is required for the shortest wavelengths, the best way to specify a mirror, and to have a vendor able to make it, is to request the shape errors as a function of the aperture of the mirror (Figure 2).

For practical purposes, one is interested in having the central part of the mirror polished to a certain level of shape error, with progressively looser requirements toward the edges being acceptable; this is the way to specify the mirrors. To clarify: since most of the incident beam intensity is contained within 2×FWHM, the aperture of the mirror associated with a particular wavelength can be taken as 2×FWHM. Of course, such a mirror profile must be preserved once installed, and that makes the situation even more complicated. Nevertheless, the peculiar characteristics of the source help us further. Since the deformation of the mirror bulk is mainly concentrated where the mirror restraints are located, and since the longer wavelengths, having the largest footprint, require looser tolerances, the holder for these mirrors can be designed with the restraints as far as possible from the central part of the mirror. Such a solution is adopted by LCLS for their 1 m long mirrors [4] and is shown in Figure 3.

Since such specifications are rarely met by polishing alone, one alternative approach is to use adaptive optics. The idea is to have a number of actuators shaping the mirror surface to compensate most of the errors due to polishing and to thermal and mechanical deformations. Several approaches were used in the past, most of them based on the use of piezo actuators. A non-exhaustive list of such projects is given in the reference section [5][6][7][8].
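A minimal sketch of this error-budget calculation, assuming the exponential (Maréchal-type) form of Equations (1)-(3) as reconstructed above; the values reproduce the four-mirror example just discussed.

```python
import math

def shape_error_per_mirror(sr, wavelength_nm, grazing_deg, n_mirrors):
    """Allowed rms shape error (nm) per mirror for a target Strehl ratio,
    assuming uncorrelated errors adding in quadrature over N mirrors and
    SR = exp(-(2*pi*phi)^2) with phi = 2*dh*sin(theta)/lambda."""
    theta = math.radians(grazing_deg)
    return wavelength_nm / (4 * math.pi * math.sin(theta)) * math.sqrt(
        -math.log(sr) / n_mirrors)

for sr in (0.8, 0.97):
    for wl in (10, 5):
        dh = shape_error_per_mirror(sr, wl, 2.0, 4)
        print(f"SR = {sr}, lambda = {wl} nm -> {dh:.1f} nm rms per mirror")
# -> ~5 and 2.5 nm rms for SR = 0.8; ~2 and 1 nm rms for SR = 0.97
```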
Damage of Optical Coatings

The other main problem related to an ultra-intense, ultra-short-pulse source is potential damage to the optical surface. In practice, when a mirror is designed for such service, one cannot freely choose the coating and angle-of-incidence according to the usual paradigm of maximizing reflectivity and acceptance, and perhaps cutting off photon energies above a certain value. Actually, the best coating candidate is one having a large penetration depth, which is usually associated with a lower reflectivity. In fact, the damage threshold for the heavy-metal coatings often used in Synchrotron Radiation user facilities is lower than that for light materials like Be or C, or compounds like B4C and SiC, which have the further advantage that the absorbed incident energy is distributed among more atoms than in a single-component coating.

If we consider a mirror working below the critical angle, too optimistically called the "total external reflection mode", the non-reflected part of the beam penetrates into the mirror to a 1/e depth d [9] that is determined by δ and β, the unit decrements of the real and imaginary parts of the refractive index n = 1−δ−iβ. Now, if one considers an incoming, normal-incidence peak power density Pd and a reflectivity R, for a material with an atomic density (number of atoms per unit volume) ρatm, the absorbed dose per atom Datm is:

Datm = Pd·(1−R)/(ρatm·d) (6)

As a rule of thumb, which is usually very optimistic, an atom cannot absorb more than 1 eV. Studies on several materials set the limit as low as 0.7 eV/atom in grazing incidence for Pt [10] and 0.3 eV/atom for Si. The best way to avoid damage is therefore to minimize the absorbed power as much as possible. Of course, a better reflectivity helps considerably but, from Equation (6), the penetration depth and the atomic density also play an important role. The absolute best material is, in fact, one having a large penetration depth and many atoms among which to distribute the absorbed power. A heavy metal has a large atomic density, but the penetration depth is usually very small; a very light material can have a large penetration depth but few atoms among which to distribute the power. Even if the second case is usually preferable to the first, the best compromise is the use of compound coatings. Materials like MgF2, B4C, SiC or other boride or silicate compounds are an excellent solution: the reflectivity in the UV, once the absorption edges are avoided, is quite good at grazing incidence, and the penetration depth is reasonably high. However, even if most of these compounds can be used in the SXR above the carbon edge, trying to work over large UV regions, like from 50 to 5 nm wavelength, is almost impossible, since all of them, excluding carbon, have absorption edges in this region. Therefore, many studies on the damage of carbon, and of silicon as a backup solution, were performed at the beginning of the XUV FEL era [11][12][13]. High-density carbon, in particular, has the double advantage of a slightly higher reflectivity and atomic density. Diamond-like carbon would be the best solution but, at the moment, there are no providers able to coat mirrors with such a high-density material.
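A rough sketch of this rule-of-thumb dose estimate, assuming the Datm = Pd(1−R)/(ρatm·d) form of Equation (6) as reconstructed above and treating Pd as a per-pulse fluence; the numerical inputs are illustrative placeholders, not values from the article.

```python
# Rule-of-thumb absorbed dose per atom, D_atm = Pd * (1 - R) / (rho_atm * d).
EV_PER_J = 6.242e18

def dose_per_atom_eV(fluence_J_cm2, reflectivity, rho_atm_cm3, depth_cm):
    """Absorbed dose per atom (eV) for a given incident fluence."""
    absorbed = fluence_J_cm2 * (1.0 - reflectivity)        # J/cm^2 absorbed
    return absorbed / (rho_atm_cm3 * depth_cm) * EV_PER_J  # spread over depth d

# Illustrative numbers for a carbon-like coating (placeholders):
dose = dose_per_atom_eV(fluence_J_cm2=1e-3,   # 1 mJ/cm^2 pulse fluence
                        reflectivity=0.9,      # grazing-incidence reflectivity
                        rho_atm_cm3=1.1e23,    # atoms per cm^3
                        depth_cm=10e-7)        # ~10 nm penetration depth
print(f"~{dose:.3f} eV/atom; keep a 10x or larger margin below the ~1 eV limit")
```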
A multi-coating mirror, e.g., with different single-layer reflective stripes in the sagittal direction, is also a valid option if the expected power density delivered by the source comes close to the damage limit (calculated or estimated) for one coating over part of the desired photon wavelength range. Nevertheless, since several tabletop lasers and FELs are now operating, the best thing to do is to test the optics by simulating the desired operating conditions. This is routinely done, for instance, at LCLS, where each new coating must go through a series of tests, the most important of which is the damage threshold measurement. However, since beamtime on an FEL is so precious, and tests are not always possible, the second-best option is to keep a very large safety margin, for example 10 or more, between the expected power absorbed per atom and the calculated (or estimated) damage threshold.

Beam Stretching

The last problem, or main difference with respect to standard UV optics, is the temporal stretching of the beam in the presence of diffractive elements like gratings or multilayers. The simple fact that different photons travel along different paths produces a temporal spread that, if not compensated, can drastically elongate the pulse.

Diffraction gratings are the most common dispersive elements in the UV or XUV range. An FEL source is usually quite monochromatic, and a tabletop laser even more so. Nevertheless, there are situations in which a grating is still needed. We will not discuss the case of gratings used in diagnostics (see for instance [14]), e.g., to measure the spectral profile of a source, since in this case one is not interested in preserving the beam temporal profile. Nevertheless, some of the issues described here can be applicable to the diagnostic case too.
The grating is, as is well known, a diffractive element able to separate different photon wavelengths. An extensive description of the use of gratings in monochromators can be found in [15]. For the purposes of this discussion, we report only two formulas; the one describing the dispersion is the grating equation:

nλ = d·(sinα + sinβ) (7)

where α and β are the incidence and diffraction angles (with respect to the grating normal), d is the grating groove spacing (often denoted the d-spacing, equal to 1/D where D is the grating groove density), λ is the photon wavelength (see Figure 4 for details) and n is the diffraction order. From this equation, one can derive the expected resolving power (R = λ/∆λ) of a grating. In a monochromator, in fact, one needs to disperse the radiation and focus it. At the focal position, all the energies are present but spatially separated. To select only the desired one, it is necessary to use a slit having an aperture, ideally, as large as the focal dimension of the beam from the monochromator in the dispersive direction. Calling this aperture s, the exit slit gives a corresponding contribution to the resolving power. This is not yet the full description of how the radiation is dispersed; one must also consider the source dimension, the system aberrations and the grating figure errors. However, there is another contribution to the resolution R, actually the most fundamental one, that is not always considered in Synchrotron Radiation monochromators. This is very important for ultra-short sources, in particular in the presence of very narrow divergence, and is:

R = nN (9)

where N is the number of illuminated grooves. From basic principles, considering the grating as a series of slits, this is the maximum resolution one can have; any other term can only reduce it (this is in principle valid for any contribution). Now, from Figure 4 and Equation (7), one can think of the grating as a system where two rays arriving at the grating surface one d-spacing apart are diffracted in the direction at which the difference in path between the two rays is equal to a multiple n of the wavelength λ. This obvious statement has an important implication for the preservation of the temporal structure of the beam.

In fact, a path difference of nλ, equivalent to several nanometers in the UV region, also produces a difference in time between the two rays, equal to nλ/c, with c being the speed of light (Figure 5). This looks like a very small number, and indeed it is, but if the total number of illuminated grooves is N, the difference in time between the two outermost rays is:

∆t = nNλ/c (10)

In the case of a 10 nm wavelength, a groove density as low as 200 lines/mm, and a footprint of the order of 50 mm FWHM, which is actually quite small, the difference in time is already of the order of 300 fs. This is acceptable in some cases, but not in most.
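The magnitude of this effect is easy to check numerically; the sketch below reproduces the ~300 fs example quoted above (the variable names are ours).

```python
# Temporal stretch introduced by a diffraction grating: dt = n * N * lambda / c.
c = 2.998e8            # speed of light, m/s

wavelength = 10e-9     # 10 nm
groove_density = 200e3 # 200 lines/mm, expressed in lines/m
footprint = 50e-3      # 50 mm FWHM beam footprint on the grating
n = 1                  # diffraction order

N = groove_density * footprint  # number of illuminated grooves
dt = n * N * wavelength / c     # total path-length spread, in time
print(f"N = {N:.0f} grooves, stretch = {dt*1e15:.0f} fs")  # ~330 fs
```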
However, of course, it depends mostly on the required resolution. In fact, the beam cannot be shorter in time than the transform limit of its energy bandwidth. Therefore, the ideal case for a monochromator is to be fully transform limited, and this can be achieved in a simple way: all the contributions to the final resolution have to be negligible with respect to the contribution due to the number of illuminated grooves. In this case, the beam exiting the monochromator will be fully transform limited. If this is the case, no further action is needed; if not, two more options are available. The first one has very low efficiency and is based on the use of a double monochromator, in which the second stage compensates the stretch introduced by the first [16]. This solution preserves the time duration but is, of course, inefficient: the total transmission of a double monochromator can easily be below 1%. A different approach is the use of a grating in conical diffraction. In this configuration, the grating lines are parallel to the beam rather than perpendicular. The efficiency can be as high as 50% and, more importantly, the number of illuminated grooves is lower. Of course, the ultimate resolution is also lower than in the standard configuration but, if properly designed, such a system can produce a transform-limited monochromatic beam. The disadvantage of the conical configuration is the mechanical complexity of the system. For further details, the article by Frassetto et al. in this same issue provides interesting reading about monochromators in the conical diffraction mounting.

The last issue concerning gratings is damage. We have described the damage mechanism for mirrors in the second section of this article. As mentioned, to avoid damage to the optical surface it is important to have good reflectivity, a distribution of the power over the mirror surface (e.g., a shallow angle of incidence) and a resistant coating. The first two requirements are not necessarily satisfied by a grating. In fact, the usual groove profiles for gratings in the UV and SXR are blazed and laminar (Figure 4). In the case of a laminar grating, the beam hits the walls of the grooves at almost normal incidence (Figure 4, left); in this case there is a very high power density deposited on these walls, and it can easily exceed the damage limit. In the case of a blazed grating (Figure 4, right), the power density can be handled by minimizing the blaze angle δ and rounding the tips of the grooves. The reduction of the blaze angle is important to distribute the power; nevertheless, there are very few manufacturers in the world able to make this angle below 1-2°. Smoothing the sharp tips of the blazed grating grooves is probably an easier process but, again, requires careful optimization of the ruling process. Overall, this is something that must be considered when a grating is ordered. Some unpublished tests performed at different facilities showed that using laminar gratings above a certain fluence is dangerous. To estimate the power absorbed by a blazed grating on its facets, a good approach is to simulate a mirror with an angle of incidence as large as the angle of incidence on the grating facet (e.g., the angle of incidence on the grating plus the blaze angle). This approximates the absorbed power quite well; in fact, the reflectivity of a mirror at this angle is not far from the total efficiency of a grating considering all the diffraction orders. Nevertheless, as mentioned, at the groove tips the energy is confined in a smaller
volume. Therefore, an extra safety margin must be kept when dealing with gratings. An example of the use of diffraction gratings at an FEL facility, and of handling the associated damage problems, is reported in [10].

Figure 1. Effect of mirror shape errors on the spot in-focus and away from the focus. The three upper pictures are for a system of four mirrors with a combined Strehl ratio of 0.8. The lower three figures are for a Strehl Ratio (SR) of 0.97.

Figure 2. A possible way to specify shape errors on a mirror when dealing with preservation of the wavefront. Since the source is diffraction limited, and the SR is dependent on wavelength, it is better not to over-specify the mirror, but to require only the figure errors needed for a given length.

Figure 3. A schematic illustration (left) of how a mirror holder should be made for a horizontally deflecting mirror. All of the restraints must be as far as possible from the central part of the mirror, to avoid inducing deformation where the most demanding shape errors are required. The right panel shows the calculated induced deformation for such a holder on a 1 m long silicon mirror.

Figure 4. Profile of diffraction gratings: (Left) Laminar (or lamellar) grating; α and β are the angles of incidence and diffraction, respectively, and d is the grating period. (Right) Blazed grating; δ is the blaze angle. The light is diffracted in a direction where the difference in path between two rays arriving on the grating with a separation distance d is equal to a multiple of the wavelength.

Figure 5. Pictorial description of the temporal elongation induced by a grating. The incoming beam has temporal duration δt. The difference in path between the rays must be a multiple of the wavelength; therefore, different parts of the photon bunch travel different path lengths. As a result, the length of the beam, after diffraction from a single groove, is Δt.
2016-08-24T23:09:51.855Z
2015-01-08T00:00:00.000
{ "year": 2015, "sha1": "9c31efe7a868c10332068152fe0b74fdd41768c1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2304-6732/2/1/40/pdf?version=1420727821", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "9c31efe7a868c10332068152fe0b74fdd41768c1", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
249674755
pes2o/s2orc
v3-fos-license
Optimal Gathering over Weber Meeting Nodes in Infinite Grid

The gathering over meeting nodes problem requires the robots to gather at one of the pre-defined meeting nodes. This paper investigates the problem with respect to the objective function that minimizes the total number of moves made by all the robots. In other words, the sum of the distances traveled by all the robots is minimized while accomplishing the gathering task. The robots are deployed on the nodes of an anonymous two-dimensional infinite grid which has a subset of nodes marked as meeting nodes. The robots do not agree on a global coordinate system and operate under an asynchronous scheduler. A deterministic distributed algorithm has been proposed to solve the problem for all those solvable configurations, and the initial configurations for which the problem is unsolvable have been characterized. The proposed gathering algorithm is optimal with respect to the total number of moves performed by all the robots in order to finalize the gathering.

Introduction

One recent trend in robotics is using a group of small, inexpensive and mass-produced robots to perform complex tasks. The main focus of theoretical research in swarm robotics is to identify minimal sets of capabilities necessary to solve a particular problem. The gathering problem asks the mobile entities, which are initially situated at distinct locations, to gather at a common location and remain there within a finite amount of time. In this paper, the robots are deployed on the nodes of an anonymous grid graph. Robots are assumed to be anonymous (no unique identifiers), autonomous (without central control), homogeneous (executing the same deterministic algorithm) and oblivious (no memory of past observations). They do not have any explicit means of communication, i.e., they cannot send any messages to other robots. They do not have any agreement on a global coordinate system or chirality. Each robot has its own local coordinate system with the robot's current position as the origin. They are equipped with sensor capabilities in order to observe the positions of the other robots. No local memory is available on the nodes of the grid graph. The robots have unlimited visibility, i.e., they can perceive the entire graph.

Robots operate in Look-Compute-Move (LCM) cycles. In the Look phase, a robot takes a snapshot of the entire configuration in its own local coordinate system. In the Compute phase, it decides either to stay idle or to move to one of its neighboring nodes. In the Move phase, it makes an instantaneous move to its computed destination. Based on the timing and activation of the robots, three types of schedulers are common in the literature. In the fully synchronous (FSYNC) setting, all the robots are activated simultaneously. The activation phase of all the robots can be divided into global rounds. In the semi-synchronous (SSYNC) setting, a subset of robots is activated simultaneously, i.e., not all the robots are necessarily activated in each round. FSYNC can be viewed as a special case of SSYNC. In the asynchronous (ASYNC) setting, there is no common notion of time. Moreover, the duration of the Look, Compute and Move phases is finite but unpredictable and is decided by the adversary for each robot. In this paper, we have considered the scheduler to be asynchronous. The scheduler is also assumed to be fair, i.e., each robot performs its LCM cycle within finite time and infinitely often.
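The LCM cycle can be made concrete with a small sketch. The snapshot format and the compute rule below are illustrative assumptions for exposition, not the paper's algorithm:

```python
# Minimal sketch of one Look-Compute-Move (LCM) cycle. The snapshot
# maps grid nodes to robot counts (global strong multiplicity), and
# the compute rule is a hypothetical placeholder.
from typing import Callable, Dict, Tuple

Node = Tuple[int, int]  # grid node in a robot's local coordinate frame

def lcm_step(position: Node,
             snapshot: Dict[Node, int],
             compute: Callable[[Node, Dict[Node, int]], Node]) -> Node:
    """Look (snapshot taken by caller), Compute (pick a destination
    among the four grid neighbours or stay), Move (instantaneous hop)."""
    destination = compute(position, snapshot)  # Compute phase
    x, y = position
    neighbours = {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1), position}
    assert destination in neighbours, "robots move at most one grid edge"
    return destination                          # Move phase

# Hypothetical compute rule: step towards the local origin (0, 0).
def towards_origin(pos: Node, snapshot: Dict[Node, int]) -> Node:
    x, y = pos
    if x != 0:
        return (x - (1 if x > 0 else -1), y)
    if y != 0:
        return (x, y - (1 if y > 0 else -1))
    return pos

print(lcm_step((3, -2), {}, towards_origin))  # (2, -2)
```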
In the initial configuration, the robots are placed at distinct nodes of the grid. The input graph also consists of some meeting nodes, which are located at distinct nodes of the grid graph. The meeting nodes are visible to the robots during the Look phase, and they occupy distinct nodes of the grid. A robot can move to one of its adjacent nodes along the grid lines. The movement of the robots is assumed to be instantaneous, i.e., they can be seen only on the nodes of the input grid graph. They are equipped with global strong multiplicity detection capability, i.e., in the Look phase, they can count the exact number of robots occupying each node. In the global-weak version, a robot can only detect whether a node is occupied by a robot multiplicity, not the exact count. Unlike the global versions, the local versions refer to the ability of a robot to perceive information about multiplicities only for the node in which it resides.

In this paper, the optimal gathering over Weber meeting nodes problem has been studied in an infinite grid model. This is a variant of the gathering over meeting nodes in infinite grid problem studied by Bhagat et al. [2,3]. This paper proposes a deterministic distributed algorithm for the problem with n ≥ 7 asynchronous robots. The objective constraint is to minimize the total number of moves required by the robots in order to accomplish the gathering. In this paper, we have considered Weber meeting nodes and observed that if the gathering node is a Weber meeting node, the algorithm is optimal with respect to the total number of moves made by the robots. Moreover, the Weber meeting node is not unique in general, even if the robots are non-collinear.

Motivation

Gathering over meeting nodes in infinite grids was studied by Bhagat et al. [2,3], where the robots are assumed to be deployed on the nodes of an infinite grid. The main aim of this paper is to study the problem under the optimization constraint that the sum of the distances traveled by the robots is minimized while accomplishing the gathering task. In order to complete the gathering task, the robots select a unique meeting node and move towards it in such a way that the sum of the lengths of the shortest paths from each robot to the selected meeting node is minimized. Since the robots are oblivious, the main challenge of designing a deterministic distributed algorithm lies in keeping the selected meeting node invariant while the robots move towards it. It is worth noting here that, unlike in the continuous domain, where the robots are represented as points in R^2, the robots are not allowed to perform infinitesimal movements with infinite precision when they are deployed on the nodes of a graph. This motivates us to consider the specified problem in a grid-based terrain where the robots are only allowed to move along the edges of the input grid graph.

Earlier works

In the continuous domain, the robots are represented as points in the Euclidean plane [1,9,18,19,25]. In the graph model, by contrast, the robots are placed on the nodes of an anonymous graph in general. In an anonymous graph, neither the nodes nor the edges of the graph are labeled, and no local memory is available on the nodes of the graph. The gathering problem in the discrete domain has been extensively studied in various topologies like rings [24,12,21,22,23,13,14,17], finite and infinite grids [11,16], bipartite graphs [8], complete bipartite graphs [8], trees [11,17] and hypercubes [5]. Klasing et al.
[24] studied the gathering problem in an anonymous ring using global weak multiplicity detection capability. They proved that gathering is impossible without the assumption of multiplicity detection capability. They proposed a deterministic distributed algorithm in the asynchronous model for gathering an odd number of robots. The algorithm also solves the gathering problem for an even number of robots when the initial configuration is asymmetric. D'Angelo et al. [12] studied the gathering problem in an anonymous ring, where the robots have global weak multiplicity detection capability. They proposed a deterministic distributed algorithm that solves the gathering task for any initial configuration which is non-periodic and does not contain any edge-edge line of symmetry. Izumi et al. [21] studied the gathering problem in an anonymous ring and proposed a deterministic algorithm using local weak multiplicity detection capability. D'Angelo et al. [13] studied the gathering problem on anonymous rings with 6 robots in the initial configuration; a unified strategy for all the gatherable configurations was provided in that paper. D'Angelo et al. [14] studied the exploration, graph searching and gathering problems in an anonymous ring where the initial configuration is aperiodic and asymmetric. D'Angelo et al. [11] studied the gathering problem on trees and finite grids. They showed that a configuration remains ungatherable if the configuration is periodic and at least one side of the finite grid has even length. A configuration also remains ungatherable if it admits reflection symmetry with the reflection axis passing through the edges of the finite grid. The problem was solved for all the other remaining configurations without assuming any multiplicity detection capability. Di Stefano et al. [17] studied the optimal gathering of robots in arbitrary graphs. That paper also introduced the concept of Weber points [10,26] in graphs. A Weber point of a graph is a node of the graph that minimizes the sum of the distances from it to each robot. They proposed deterministic algorithms for the gathering task on tree and ring topologies that always achieve optimal gathering unless the initial configuration is ungatherable. In [16], the optimal gathering problem in an infinite grid model was studied by Di Stefano et al. They proposed a deterministic distributed algorithm that minimizes the total distance traveled by all the robots, and proved that their assumed model represents the minimal setting to ensure optimal gathering. Cicerone et al. [8] studied the gathering problem in arbitrary graphs and proposed a necessary and sufficient condition for the feasibility of the gathering task in arbitrary graphs. They also considered dense and symmetric graphs, like complete and complete bipartite graphs. A deterministic algorithm was proposed that fully characterizes the solvability of the gathering task in the synchronous setting. Bose et al. [5] investigated the optimal gathering problem in hypercubes, where the optimality criterion is to minimize the total distance traveled by each robot. Fujinaga et al. [20] introduced the concept of fixed points or landmarks in the Euclidean plane. In the landmarks covering problem, the robots must reach a configuration where, at each landmark point, there is precisely one robot. A distributed algorithm was proposed that assumes common orientation among the robots and minimizes the total number of moves made by all the robots. Cicerone et al.
[7] studied the embedded pattern formation (EPF) problem without assuming common chirality among the robots. The problem asks for a distributed algorithm that requires the robots to occupy all the fixed points within a finite amount of time. A variant of the gathering problem was studied by Cicerone et al. [6], where the gathering is accomplished at one of the meeting points; these are a finite set of points visible to all the robots during the Look phase. They also studied the same problem with respect to two optimality criteria: one minimizing the total number of moves made by all the robots, and the other minimizing the maximum distance traveled by a single robot. Bhagat et al. [2,3] studied the gathering over meeting nodes problem in an infinite grid. It was shown that even if the robots are endowed with multiplicity detection capability, some configurations remain ungatherable. For a given positive integer k, the k-circle formation problem [4,15] asks a set of robots to form disjoint circles, each having k robots occupying distinct locations on it. The circles are centered at the set of fixed points.

Our contributions

This paper proposes a deterministic distributed algorithm for the optimal gathering over Weber meeting nodes problem, where the initial configurations comprise at least seven robots. The robots are deployed on the nodes of an infinite grid. The optimization criterion considered in this paper is the minimization of the total number of moves made by the robots to finalize the gathering. In this paper, a meeting node that minimizes the sum of the distances from all the robots is defined as a Weber meeting node. Di Stefano et al. [17] proved that to ensure gathering by minimizing the total number of moves, the robots must gather at one of the Weber points. In our restricted gathering model, the robots must gather at one of the Weber meeting nodes to ensure gathering with a minimum number of moves. In this paper, we have shown that there exist some configurations where gathering over Weber meeting nodes cannot be ensured, even if the robots are endowed with multiplicity detection capability. This includes the following collection of configurations:
(1) Configurations admitting a single line of symmetry without any robots or Weber meeting nodes on the reflection axis.
(2) Configurations admitting rotational symmetry without a robot or a meeting node on the center of rotation.
In this paper, the assumption on multiplicity detection refers to the global strong multiplicity detection capability. We have shown that, without such an assumption, there are configurations where gathering cannot be accomplished optimally as soon as a multiplicity is created. However, there are initial configurations where gathering can be ensured over a meeting node, but not over the set of Weber meeting nodes. This includes the configurations admitting a single line of symmetry without any robots or Weber meeting nodes on the reflection axis, but with at least one meeting node on the axis. In that case, the feasibility of gathering over meeting nodes has been studied.

Outline

The following section describes the robot model and the notations used in the paper. Section 3 provides the formal definition of the problem and the impossibility results for the solvability of the gathering task. Section 4 proposes a deterministic distributed algorithm to solve the optimal gathering over Weber meeting nodes problem. Section 5 describes the correctness of the proposed algorithm.
Section 6 discusses the optimal gathering for the configurations where gathering over a meeting node can be ensured but cannot be ensured over a Weber meeting node. Finally, in Section 7, we conclude the paper with some discussion of future research.

Optimal Gathering over Weber Meeting Nodes

The optimal gathering over meeting nodes problem has been considered in an infinite grid graph. The objective is to minimize the total distance traveled by all the robots. In order to ensure optimal gathering over a Weber meeting node, the robots must finalize the gathering over a meeting node that minimizes the total distance traveled by all the robots, i.e., each robot must gather at one of the Weber meeting nodes.

Terminology

In this subsection, some terminologies and definitions are introduced. Let λ_t : V → N be a function denoting the number of robots on each node v ∈ V at any time t ≥ 0. An automorphism of a configuration (C(t), f_t, λ_t) is an automorphism φ of the input grid graph such that f_t(v) = f_t(φ(v)) and λ_t(v) = λ_t(φ(v)) for each v ∈ V. The grid graph is embedded in the Cartesian plane. As a result, a grid can admit only three types of automorphisms: translation, reflection and rotation, and compositions of them. Since the number of robots and meeting nodes is finite, a translational automorphism is not possible. An axis of reflection defines a reflection automorphism, while the center of rotation and the angle of rotation determine a rotational automorphism.

Figure 1. r_4, r_5 and r_6 move towards m_2 and create a multiplicity μ_1; m_2 remains the unique Weber meeting node, but the robots will not be able to compute it correctly if they do not have global strong multiplicity detection capability.

If the configuration admits reflectional symmetry, the axis of reflection can be horizontal, vertical or diagonal. The axis of symmetry can pass through the nodes or the edges of the graph. In the case of rotational symmetry, the angle of rotation can be 90° or 180°. The center of rotation can be a node, the center of an edge, or the center of a unit square.

• Weber meeting node: Since in the initial configuration the robots are deployed at distinct nodes of the grid graph, λ_t(v) ≤ 1, ∀v ∈ V. In the final configuration, all the robots are on a single meeting node m ∈ M: for a configuration to be final, there must exist an m ∈ M such that λ_t(m) = n and λ_t(v) = 0 for each v ∈ V \ {m}. The consistency of a node m ∈ M at any time t is defined as

c_t(m) = Σ_{v ∈ V} d(v, m) λ_t(v).

A node m ∈ M is defined as a Weber meeting node if it minimizes the value c_t(m). In other words, a Weber meeting node m is a meeting node which minimizes the sum of the distances from all the robots to itself. The Weber meeting node may not be unique in general. Let W(t) denote the set of all the Weber meeting nodes at time t. A deterministic distributed algorithm that gathers all the robots at a Weber meeting node via shortest paths will be optimal with respect to the total number of moves made by the robots.

The robots are equipped with global strong multiplicity detection capability, i.e., they can detect the exact number of robots occupying any node. Without this assumption, the Weber meeting nodes cannot be detected correctly by the robots as soon as a multiplicity is created. As a result, the total number of moves made by the robots to accomplish the gathering might not be optimized.
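The definition of c_t and W(t) can be made concrete with a few lines of Python (a minimal sketch; d is the Manhattan distance on the grid, and the positions are hypothetical):

```python
# Computing the Weber meeting nodes W(t): the meeting nodes minimizing
# c_t(m) = sum over nodes v of d(v, m) * lambda_t(v), with d the
# Manhattan distance. Robot and meeting-node positions are hypothetical.
def consistency(m, robots):
    return sum(abs(m[0] - x) + abs(m[1] - y) for x, y in robots)

def weber_meeting_nodes(meeting_nodes, robots):
    best = min(consistency(m, robots) for m in meeting_nodes)
    return [m for m in meeting_nodes if consistency(m, robots) == best]

robots = [(0, 0), (4, 1), (2, 5), (6, 3)]
meeting = [(2, 2), (3, 3), (5, 5)]
print(weber_meeting_nodes(meeting, robots))  # [(2, 2), (3, 3)]
```

Note that both (2, 2) and (3, 3) attain the minimum consistency in this toy instance, illustrating that the Weber meeting node is not unique in general.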
For example, consider the configuration in Figure 1(a). If the robots compute the Weber meeting node in this configuration, a unique Weber meeting node would be computed. Due to the robots' movement, if the configuration in Figure 1(b) is reached, then without the global strong multiplicity detection capability, the robots will not be able to compute the unique Weber meeting node correctly. This example shows that, without assuming global strong multiplicity detection capability, the Weber meeting nodes cannot be tracked correctly and optimal gathering cannot be ensured.

• Strings of the MER: Let MER = ABCD denote the minimum enclosing rectangle of all the robots and meeting nodes, of dimension p × q. The string s_AD associated with the corner A is the string of distances obtained by traversing the MER starting from A along the direction AD (Figure 2). The string s_AB is defined similarly. Consider the two possible strings for each corner: thus, there are a total of eight strings of distances that are obtained by traversing the MER. If the MER is a non-square rectangle with p > q, the string associated with the corner A is taken in the direction of the smaller side (s_AD in this case). If the meeting nodes are asymmetric, there exists a unique string which is lexicographically minimum among all the possible strings (s_DC in Figure 2). Otherwise, if any two possible strings are equal, then the meeting nodes are symmetric. The corner associated with the minimum lexicographic string is defined as a leading corner, and the string associated with the leading corner is defined as the string direction for the respective corner. If the MER is a square, consider the two strings associated with a corner: the string which is lexicographically smaller between the two is selected as the string direction for the respective corner. The two strings for a respective corner are equal if the meeting nodes are symmetric with respect to the diagonal passing through that corner of the MER. If all the robots and meeting nodes lie on a single line, then the MER is a p × 1 rectangle with A = D and B = C, and the length of AD and BC is 1. Note that, in this case, s_AD and s_DA refer to the same string. The meeting nodes are symmetric when the strings s_AD and s_DA are equal.

• Potential Weber meeting nodes: In general, the Weber meeting nodes in an infinite grid are not unique. If it is possible to gather at one of the Weber meeting nodes, then all the robots must agree on a common Weber meeting node for the gathering. Depending on the symmetricity of the set M, the number of leading corners is 1, 2 or 4, respectively. Consider the Weber meeting nodes that represent the last Weber meeting nodes in the string directions associated with the leading corners. Note that the number of Potential Weber meeting nodes can be at most eight. Let W_p(t) denote the set of such Weber meeting nodes at time t ≥ 0. The set W_p(t) is defined as the set of Potential Weber meeting nodes.

• Key corner: Consider all the leading corners of the MER and the strings α_i associated with each leading corner i. Assume that there exist at least two leading corners. Without loss of generality, assume that A and D are the leading corners, and the strings parallel to AD and DA are the string directions associated with the leading corners. The string α_AD is defined as follows: starting from the corner A, scan the grid along the string direction of A, i.e., along AD, and associate the pair (f_t(v), λ_t(v)) to each node v (Figure 2). The string α_DA is defined similarly. Consider the strings α_AD and α_DA. If C(t) is asymmetric, there always exists a unique string which is lexicographically smaller between α_AD and α_DA. If α_AD is lexicographically smaller than α_DA, then the corner A is defined as the key corner. If C(t) is symmetric, there may exist more than one key corner. Similarly, the strings β_i for each non-leading corner i are defined.
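The lexicographic machinery above can be sketched as follows. Here each string simply records meeting-node indicators scanned row- or column-major from a corner (the paper's actual strings record distances and (f_t, λ_t) pairs, but the comparison logic is the same), and the grid and corner labels are hypothetical:

```python
# Illustrative sketch: the eight corner strings of the MER and the
# leading corner, using 0/1 meeting-node indicators over the MER.
def corner_strings(grid):
    """grid: 2D list over the MER, 1 = meeting node, 0 = empty.
    Returns the eight scan strings keyed by (corner, scan direction)."""
    flip_rows = grid[::-1]                      # start from a bottom corner
    flip_cols = [row[::-1] for row in grid]     # start from a right corner
    both = [row[::-1] for row in flip_rows]
    variants = {
        "A→": grid, "A↓": [list(c) for c in zip(*grid)],
        "B→": flip_cols, "B↓": [list(c) for c in zip(*flip_cols)],
        "C→": both, "C↓": [list(c) for c in zip(*both)],
        "D→": flip_rows, "D↓": [list(c) for c in zip(*flip_rows)],
    }
    return {k: tuple(x for row in v for x in row) for k, v in variants.items()}

grid = [[0, 1, 0],
        [0, 0, 1],
        [1, 0, 0]]
strings = corner_strings(grid)
leading = min(strings, key=strings.get)   # lexicographically smallest scan
print(leading, strings[leading])
```

When two of these strings tie for the minimum, the meeting nodes are symmetric, which is exactly the situation the leading-corner and key-corner definitions above are designed to disambiguate.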
In [17], it was proved that a Weber point remains invariant under the movement of a robot towards itself. In our restricted gathering model, where the gathering can be finalized only on meeting nodes, we have the following lemma.

Lemma 1. Let m be a Weber meeting node in a given configuration C(t). Suppose C(t′) denotes the configuration after a single robot or a robot multiplicity moves towards the Weber meeting node m along a shortest path. Then the following results hold: (1) m ∈ W(t′); (2) W(t′) ⊆ W(t).

Proof. (1) Suppose a single robot r moves from some vertex a to an adjacent vertex b at time t along a shortest path towards m. Then λ_{t′}(a) = λ_t(a) − 1 and λ_{t′}(b) = λ_t(b) + 1, and since d(b, m) = d(a, m) − 1, we have c_{t′}(m) = c_t(m) − 1, while c_{t′}(m′) ≥ c_t(m′) − 1 for every other meeting node m′. Similarly, if a robot multiplicity moves from some vertex a to b at time t via any shortest path towards m, then after the movement of the robot multiplicity, λ_t(a) and λ_t(b) become λ_t(a) − j and λ_t(b) + j, respectively, where j ≥ 2 denotes the number of robots that move from node a to b. This implies that c_{t′}(m) = c_t(m) − j and hence min_{m′ ∈ M} c_{t′}(m′) = min_{m′ ∈ M} c_t(m′) − j. Therefore, the Weber meeting nodes in the new configuration C(t′) are the Weber meeting nodes of C(t) which are on some shortest path from r to m. Hence m ∈ W(t′).

(2) Assume that m′ ∈ W(t′); this implies that m′ minimizes the value c_{t′}(m′). The first part of the proof implies that c_{t′}(m′) ≥ c_t(m′) − j, where j ≥ 1 denotes the number of robots that moved from node a to b, while the minimum itself decreases by exactly j. In other words, no node can become a Weber meeting node if it was not one before the move. Therefore, m′ must belong to W(t), and hence W(t′) ⊆ W(t).

This lemma proves that a Weber meeting node remains invariant under the movement of robots towards itself via a shortest path. In Figure 3, the configuration admits rotational symmetry; the Weber meeting node is not unique, and there are three Weber meeting nodes m_3, m_4 and m_5 in the configuration.

Observation 1. Let C(0) be any initial configuration that admits rotational symmetry. Assume that the center of the rotational symmetry c contains a meeting node m. Then m is a Weber meeting node.

Problem Definition and Impossibility Results

In this section, we formally define the problem. A partitioning of the initial configurations is also provided in this section.

Problem Definition

Let C(t) = (R(t), M) be a given configuration. The goal of the optimal gathering over Weber meeting nodes problem is to finalize the gathering at one of the Weber meeting nodes of C(0). We have proposed a deterministic distributed algorithm that ensures gathering over a Weber meeting node when the initial configuration consists of at least seven robots. If |W(t)| = 1, then all the robots finalize the gathering at the unique Weber meeting node. Otherwise, all the robots must agree on a common Weber meeting node and finalize the gathering.

Partitioning of the Initial Configurations

All the initial configurations can be partitioned into disjoint classes I_1 through I_4; for example, I_2 contains any configuration for which M is asymmetric and |W(t)| ≥ 2. The symmetric classes are further subdivided according to whether robots or Weber meeting nodes lie on c (Figure 7(b)) or on any line of symmetry. We assume that if the meeting nodes are symmetric with respect to a single line of symmetry, then l is that line of symmetry. Similarly, if the meeting nodes are symmetric with respect to rotational symmetry, then c is the center of rotational symmetry. Since the partitioning of the initial configurations depends only on the positions of the meeting nodes, which are fixed nodes, all the robots can determine the class of configuration to which the configuration belongs without any conflict. Let I denote the set of all initial configurations.
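Before turning to the impossibility results, Lemma 1 can be checked numerically on a toy instance (a sketch with hypothetical positions, reusing the consistency helper from earlier):

```python
# Sanity check of Lemma 1: move one robot a single step along a
# shortest path towards a Weber meeting node m and verify that
# m stays a Weber meeting node and that W(t') ⊆ W(t).
def consistency(m, robots):
    return sum(abs(m[0] - x) + abs(m[1] - y) for x, y in robots)

def webers(meeting, robots):
    best = min(consistency(m, robots) for m in meeting)
    return {m for m in meeting if consistency(m, robots) == best}

meeting = [(2, 2), (3, 3), (5, 5)]
robots = [(0, 0), (4, 1), (2, 5), (6, 3)]
W_before = webers(meeting, robots)
m = min(W_before)            # pick one Weber meeting node, here (2, 2)
r = robots[0]                # move the first robot one step towards m
if r[0] != m[0]:
    r = (r[0] + (1 if m[0] > r[0] else -1), r[1])
else:
    r = (r[0], r[1] + (1 if m[1] > r[1] else -1))
W_after = webers(meeting, [r] + robots[1:])
assert m in W_after and W_after <= W_before
print(sorted(W_before), "->", sorted(W_after))
```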
Lemma 2. If the initial configuration C(0) ∈ I_3^b3 ∪ I_4^b3, then the gathering over Weber meeting nodes problem cannot be solved.

The proof of Lemma 2 can be observed as a corollary of Theorem 1, proved in Bhagat et al. [2,3]. In [2,3], it was also proved that I_3^b4 is ungatherable. Let U denote the set of all configurations for which gathering over a Weber meeting node cannot be ensured. According to Lemma 2, this includes all the configurations (1) admitting a single line of symmetry l with l ∩ (R ∪ W(t)) = ∅, and (2) admitting rotational symmetry with no robot or meeting node on the center of rotation c. Note that, according to Observation 1, if there is a meeting node on c, then it must be a Weber meeting node.

Overview of the Algorithm

In this subsection, a deterministic distributed algorithm is proposed to solve the optimal gathering problem by gathering the robots at one of the Weber meeting nodes. The proposed algorithm works for all the configurations C(t) ∈ I \ (U ∪ I_3^b4) consisting of at least seven robots. The main strategy of the algorithm is to select a Weber meeting node among all the possible Potential Weber meeting nodes and allow the robots to move towards the selected Weber meeting node. The proposed algorithm mainly consists of the following phases: Guard Selection, Target Weber meeting node Selection, Leading Robot Selection, Symmetry Breaking, Creating Multiplicity on Target Weber meeting node and Finalization of Gathering. In the Target Weber meeting node Selection phase, the Potential Weber meeting node for optimal gathering is selected; the Weber meeting node selected for gathering is defined as the target Weber meeting node. A set of robots, denoted as guards, is selected in the Guard Selection phase. Guards are selected in order to ensure that the initial MER remains invariant. In the Leading Robot Selection phase, a robot is selected as the leading robot and moved into the half-plane or quadrant containing the target Weber meeting node. In the Symmetry Breaking phase, a unique robot is selected and allowed to move towards an adjacent node; this movement transforms a symmetric configuration into an asymmetric one. All the non-guard robots move towards the target Weber meeting node, thus creating a multiplicity on it, in the Creating Multiplicity on Target Weber meeting node phase. Finally, all the guards move towards the uniquely identifiable target Weber meeting node (the robots have global strong multiplicity detection capability) in the Finalization of Gathering phase and finalize the gathering.

Half-planes and Quadrants

The initial configuration C(0) is said to be balanced if it is asymmetric and one of the following holds: (1) C(0) ∈ I_3^a and the half-planes delimited by l contain an equal number of robots; (2) C(0) ∈ I_4^a, there exist at least two quadrants that contain the maximum number of Potential Weber meeting nodes, and more than one of those quadrants contains either the maximum or the minimum number of robots among all the specified quadrants. If the initial configuration is not balanced, then it is an unbalanced configuration. An initial configuration C(0) may satisfy the following conditions:
• C_1: there exists a unique half-plane or quadrant that contains the maximum number of Potential Weber meeting nodes.
• C_2: there exist multiple half-planes or quadrants that contain the maximum number of Potential Weber meeting nodes. Any configuration C(0) satisfying condition C_2 is said to satisfy C_21 if C(0) is balanced; otherwise, it satisfies C_22 if the initial configuration is unbalanced.
• C_3: there does not exist any Potential Weber meeting node on the half-planes or on the quadrants.
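As an illustration, the balanced/unbalanced distinction and the conditions C_1, C_21/C_22 and C_3 for an I_4-type configuration can be sketched by simple counting (hypothetical axes and positions; points exactly on the axes are assigned arbitrarily here for brevity):

```python
# Sketch: classifying an I_4-type configuration by quadrant counts,
# following the balanced/unbalanced and C1/C2/C3 conditions above.
# Quadrants are taken w.r.t. hypothetical axes x = 0 and y = 0.
def quadrant(p):
    x, y = p
    return ("+" if x > 0 else "-") + ("+" if y > 0 else "-")

def classify(potential_webers, robots):
    quads = ["++", "+-", "-+", "--"]
    w = {q: sum(quadrant(p) == q for p in potential_webers) for q in quads}
    r = {q: sum(quadrant(p) == q for p in robots) for q in quads}
    if max(w.values()) == 0:
        return "C3"
    top = [q for q in quads if w[q] == max(w.values())]
    if len(top) == 1:
        return "C1"
    counts = [r[q] for q in top]
    # balanced: the max (or min) robot count is attained more than once
    balanced = counts.count(max(counts)) > 1 or counts.count(min(counts)) > 1
    return "C21 (balanced)" if balanced else "C22 (unbalanced)"

print(classify([(2, 3), (-1, -4)], [(1, 1), (-2, -2), (3, -1)]))
# -> "C21 (balanced)": two quadrants tie in Potential Weber meeting
#    nodes and in robot counts.
```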
Demarcation of the Half-planes for fixing the target

Assume that the meeting nodes are symmetric with respect to a single line of symmetry l. Note that |W_p(t)| ≤ 2. Further, assume that |W_p(t)| = 2 and C(0) does not satisfy C_3. This implies that there exists at least one Potential Weber meeting node located on the half-planes. The half-plane H^+ is defined according to Table 1; note that Table 1 distinguishes the case when l is a diagonal line of symmetry.

Table 1. Demarcation of the half-planes for fixing the target.
Initial configuration C(0) → H^+:
• satisfies C_1 → the unique half-plane containing the Potential Weber meeting nodes.
• satisfies C_21 ∧ l is a horizontal or vertical line of symmetry → the unique half-plane not containing the key corner.
• satisfies C_21 ∧ l is a diagonal line of symmetry ∧ there exists a unique leading corner → the half-plane which lies in the direction of AD, if α_AD is lexicographically larger than α_AB.
• satisfies C_21 ∧ l is a diagonal line of symmetry ∧ there exist two leading corners → the half-plane containing the corners A and D, if α_AD is lexicographically larger than α_CD.
• satisfies C_22 → the unique half-plane with the maximum number of robots (Figures 8(a) and 8(b)).

Demarcation of Quadrants for fixing the target

First, consider the case when the meeting nodes are symmetric with respect to rotational symmetry without multiple lines of symmetry, and |W_p(t)| ≥ 2. The quadrant H^{++} is defined according to Table 2; the other quadrants are defined analogously with respect to the lines l and l′.

Table 2. Demarcation of the quadrants for fixing the target.
Initial configuration C(0) → H^{++}:
• satisfies C_1 → the unique quadrant containing the maximum number of Potential Weber meeting nodes.
• satisfies C_21 ∧ the angle of rotation is 180° ∧ there exists at least one quadrant that contains the Potential Weber meeting nodes as well as the leading corners → the unique quadrant containing the leading corner with which the largest lexicographic string α_i is associated, and that contains the maximum number of robots.
• satisfies C_21 ∧ the angle of rotation is 180° ∧ the quadrants that contain the Potential Weber meeting nodes do not contain the leading corners → the unique quadrant containing the non-leading corner with which the largest lexicographic string β_i is associated, and that contains the maximum number of robots.
• satisfies C_21 ∧ the angle of rotation is 90° → the quadrant containing the corner with which the largest lexicographic string α_i is associated, and that contains the maximum number of robots.
• satisfies C_22 → the unique quadrant with the maximum number of robots.
• satisfies C_3 ∧ unbalanced → the unique quadrant containing the minimum number of robots.
• satisfies C_3 ∧ balanced → the unique quadrant that contains the smallest lexicographic string α_i associated with the leading corner and containing the minimum number of robots.

Phases of the Algorithm

The proposed algorithm mainly consists of the following phases.

Guard Selection

In this phase, a set of robots is selected as guards in order to keep the initial MER invariant. If there does not exist any meeting node on a side of the boundary of the MER, then there must exist at least one robot on that particular side of the boundary. Guards are selected in such a way that they remain uniquely identifiable. If a side of the boundary of the MER contains at least one meeting node, then a guard robot is not required for that particular side of the boundary. Therefore, consider the case when the boundary of the MER does not contain any meeting nodes, and consider the robots which are on the boundary of the MER. First, assume that C(t) is asymmetric. Let G denote the set of guards.
Let G_C denote the set of guard corners, defined as follows.
• The unique leading corner, if the meeting nodes are asymmetric.
• The leading corner contained in H^+, if the meeting nodes are symmetric with respect to a horizontal or vertical single line of symmetry l; the unique key corner contained in H^+, if the meeting nodes are symmetric with respect to a diagonal line of symmetry.
• The leading corner contained in H^{++}, if the meeting nodes are symmetric with respect to rotational symmetry.

The robot positions on the sides adjacent to the unique guard corner that are closest to the guard corner are considered as guards. Similarly, the robots that are farthest from the guard corner, measured along the string direction, and lying on the sides non-adjacent to the guard corner are also considered as guards. Note that, in each case, exactly four guard robot positions are selected in this phase (Figure 10(a)). If C(t) is symmetric with respect to a unique line of symmetry l and l is a horizontal or vertical line of symmetry, there are exactly two leading corners. Consider the robot positions on the sides adjacent to the leading corners which are closest to the leading corners: these two robots and their symmetric images are selected as guards. The robots which are farthest from the leading corners and lying on the sides non-adjacent to the leading corners are also selected as guards. Hence, exactly six guard robots are selected when C(t) is symmetric with respect to l (Figure 10(b)). Otherwise, if l is a diagonal line of symmetry and there exists a unique leading corner, then the robot positions on the sides adjacent to the leading corner that are closest to the leading corner are selected as guards. The robot positions on the sides non-adjacent to the leading corner and farthest from the leading corner are also selected as guards; note that they are symmetric images of each other. If there are two leading corners, the robots which are closest to the leading corners and lying on the sides adjacent to the leading corners are selected as guards. Note that if C(t) is symmetric with respect to rotational symmetry, then, since the center of rotational symmetry is also the center of the fixed meeting nodes and the gathering is finalized at the center, the Guard Selection phase is not executed in this case.
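A sketch of the asymmetric case of guard selection follows (hypothetical coordinates; "farthest along the string direction" is simplified here to farthest in Manhattan distance):

```python
# Sketch of guard selection on the MER boundary for an asymmetric
# configuration: on each side adjacent to the guard corner, the robot
# closest to it; on each non-adjacent side, the robot farthest from it.
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def select_guards(boundary_robots, guard_corner, mer):
    """boundary_robots: robots on the MER boundary;
    mer = (xmin, ymin, xmax, ymax); guard_corner is one MER corner."""
    xmin, ymin, xmax, ymax = mer
    gx, gy = guard_corner
    adjacent = [r for r in boundary_robots if r[0] == gx or r[1] == gy]
    non_adjacent = [r for r in boundary_robots if r not in adjacent]
    guards = []
    for side in (lambda r: r[0] == gx, lambda r: r[1] == gy):
        side_robots = [r for r in adjacent if side(r)]
        if side_robots:  # closest robot to the guard corner
            guards.append(min(side_robots, key=lambda r: manhattan(r, guard_corner)))
    for side in (lambda r: r[0] == (xmax if gx == xmin else xmin),
                 lambda r: r[1] == (ymax if gy == ymin else ymin)):
        side_robots = [r for r in non_adjacent if side(r)]
        if side_robots:  # farthest robot from the guard corner
            guards.append(max(side_robots, key=lambda r: manhattan(r, guard_corner)))
    return guards

mer = (0, 0, 6, 4)
robots = [(0, 1), (0, 3), (2, 0), (5, 0), (6, 2), (3, 4)]
print(select_guards(robots, (0, 0), mer))
# -> [(0, 1), (2, 0), (6, 2), (3, 4)]: exactly four guard positions.
```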
Target Weber meeting node Selection

In this phase, the Weber meeting node for gathering is selected. The target meeting node must remain invariant during the execution of the algorithm. Depending on the class of configuration to which C(t) belongs, the target Weber meeting node is selected according to Table 3. The pseudo-code corresponding to this phase is given in Algorithm 1.

Table 3. Target Weber meeting node selection.
Configuration C(t) → target Weber meeting node:
• admits a unique Weber meeting node → the unique Weber meeting node.
• admits a unique Potential Weber meeting node → the unique Potential Weber meeting node.
• I_3 ∧ there exists a Weber meeting node on l → the northernmost Weber meeting node on l.
• I_3^a ∧ there does not exist any Weber meeting node on l ∧ |W_p(t)| = 2 ∧ l is a horizontal or vertical line of symmetry → the Potential Weber meeting node in H^+; ties are broken by considering the Potential Weber meeting node which appears last in the string direction associated with the leading corner in H^+.
• I_3^a ∧ there does not exist any Weber meeting node on l ∧ |W_p(t)| = 2 ∧ l is a diagonal line of symmetry ∧ there exists a unique leading corner → the Potential Weber meeting node in H^+ which appears last in the string direction associated with the unique leading corner.
• I_4 → the Potential Weber meeting node in H^{++} which is farthest from the leading corner contained in H^{++} in the string direction.

Consider the case when C(t) ∈ I_4^a and there exists a Weber meeting node on the quadrants, and further assume that |W_p(t)| ≥ 2. If there exist two string directions corresponding to the unique leading corner in H^{++}, the target Weber meeting node is selected as the Potential Weber meeting node in H^{++} which appears first in the string α_i. We have the following observation.

Observation 2. If the meeting nodes are symmetric with respect to a unique line of symmetry l, and there exists at least one meeting node on l, then the meeting nodes on l are orderable.

The northernmost meeting node on l is defined as the meeting node on l which is farthest from the leading corner(s). The northernmost robot on l is defined similarly.

Leading Robot Selection

If the initial configuration is balanced and asymmetric, a robot r is selected as the leading robot in the Leading Robot Selection phase (Figure 11). The leading robot moves towards the half-plane or quadrant containing the target Weber meeting node m. Once r reaches the half-plane or quadrant containing m, the configuration transforms into an unbalanced configuration, and the asymmetry of the configuration remains invariant. Since the initial configuration is balanced, assume that C(t) ∈ I_3 ∪ I_4; further assume that the initial configuration does not satisfy condition C_3. Depending on the class of configuration to which C(t) belongs, the leading robot is selected according to Table 4.

Table 4. Leading robot selection.
• I_4 ∧ there exists a robot on l or l′ → the robot closest to the target Weber meeting node and lying on l or l′; ties are broken by considering the robot on l or l′ which is closest to the leading corner contained in H^{++} in the string direction.
• I_4 ∧ there does not exist any robot on l and l′ ∧ there exists a non-guard robot in a quadrant adjacent to H^{++} → the robot lying on a quadrant adjacent to H^{++} and closest to the target Weber meeting node; ties are broken by considering the robot which is closest to the leading corner contained in H^{++} in the string direction.
• I_4 ∧ there does not exist any robot on l and l′ ∧ there does not exist any non-guard robot in the quadrants adjacent to H^{++} → the robot lying on the quadrant non-adjacent to H^{++} and closest to the target Weber meeting node; ties are broken by considering the robot which is closest to the leading corner contained in H^{++} in the string direction.

In case the configuration is in I_4^a and there exists a robot on l (resp. l′), the leading robot first moves along the line l (resp. l′), and when it becomes collinear with m, it starts moving along l′ (resp. l).

Symmetry Breaking

In this phase, all the symmetric configurations that can be transformed into asymmetric configurations are considered. A unique robot is identified that allows the
transformation. We have the following cases.
(1) C(t) ∈ I_3^b2. In this class of configurations, at least one robot exists on l. Let r be the northernmost robot on l. r moves towards an adjacent node that does not belong to l, and the configuration becomes asymmetric.
(2) C(t) ∈ I_4^b2. In this class of configurations, there exists a robot (say r) on c. The robot r moves towards an adjacent node. If the configuration admits rotational symmetry with multiple lines of symmetry and there is a robot r at the center, r moves towards an adjacent node. This movement creates a unique line of symmetry l′. However, the new position of r might host a multiplicity; if that happens to be the northernmost robot position on l′, moving robots from there might still result in a configuration with a line of symmetry. Even so, the unique line of symmetry l′ would still contain at least one robot position without a multiplicity, and the number of robot positions on l′ will be strictly less than the number of robot positions on the line of symmetry in the original configuration. Thus, the repeated movement of the robots on l′ is guaranteed to transform the configuration into an asymmetric one.

Creating Multiplicity on Target Weber meeting node

The target Weber meeting node m is selected in the Target Weber meeting node Selection phase. Since there is a unique target Weber meeting node m, all the non-guard robots move towards m in the Creating Multiplicity on Target Weber meeting node phase. Note that, since the guards do not move during this phase, the MER remains invariant; as a result, m remains invariant. Eventually, a robot multiplicity is created on m while the non-guards move towards it. Depending on the class of configuration to which C(t) belongs, the following cases are to be considered.
(1) C(t) ∈ I_1: all the robots move towards the unique Weber meeting node m.
(2) C(t) ∈ I_2: all the non-guards move towards the unique target meeting node m.
(3) C(t) ∈ I_3: if C(t) ∈ I_3^a and there exists a Weber meeting node on l, each non-guard moves towards the target meeting node m. Next, consider the case when C(t) ∈ I_3^a and there does not exist any Weber meeting node on l. The leading robot selected in the Leading Robot Selection phase transforms the balanced configuration into an unbalanced one. All the non-guard robots from H^- move towards m; this movement is required in order to ensure that H^+ remains invariant. Once such robots reach H^+, all the non-guard robots in H^+ move towards m, thus creating a multiplicity on m. Finally, if C(t) ∈ I_3^b2, each non-guard which is closest to m moves towards m, either synchronously or possibly with a pending move due to the asynchronous behavior of the scheduler; ties are broken by considering the closest robots which are farthest from the leading corners in their respective string directions.
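Each non-guard advances one grid edge at a time along some shortest path towards m. A minimal sketch of such a step (the preference for x-moves over y-moves is an arbitrary illustrative choice; any shortest path works):

```python
# One grid step of a non-guard robot along a shortest (Manhattan)
# path towards the target Weber meeting node m.
def step_towards(robot, m):
    (x, y), (mx, my) = robot, m
    if x != mx:
        return (x + (1 if mx > x else -1), y)
    if y != my:
        return (x, y + (1 if my > y else -1))
    return robot  # already on m

pos, m = (5, 1), (2, 3)
moves = 0
while pos != m:
    pos = step_towards(pos, m)
    moves += 1
print(pos, moves)  # (2, 3) reached in 5 moves = the Manhattan distance
```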
Finalization of Gathering

In this phase, all the guards move towards m. During their movement, they do not create any multiplicity on a Weber meeting node other than m. In order to ensure this, all the guards first move along the boundary of the MER, and when a guard becomes collinear with m, it starts moving towards m. A guard robot moves by minimizing the Manhattan distance between m and itself. This implies that, during their movement, no other multiplicity would be created on any other Weber meeting node, and the gathering would be finalized on m.

Optimal Gathering()

Our main algorithm, Optimal Gathering(), considers the following cases. If C(t) ∈ I_1, then each robot finalizes the gathering on the unique Weber meeting node. Consider the case when the meeting nodes are asymmetric: there exists a unique Potential Weber meeting node. The guards are selected in the Guard Selection phase. Each non-guard moves towards the unique Potential Weber meeting node, creating a multiplicity on it. Finally, the guards move towards the multiplicity and finalize the gathering on it. Next, consider the case when the configuration is balanced and asymmetric. A leading robot is selected in the Leading Robot Selection phase, which transforms the configuration into an unbalanced one. The guards are selected in the Guard Selection phase. In the Creating Multiplicity on Target Weber meeting node phase, each non-guard moves towards the target Weber meeting node selected in the Target Weber meeting node Selection phase. Finally, the guards move towards the multiplicity and finalize the gathering. If C(t) is symmetric and there exists a Weber meeting node on l ∪ {c}, the gathering is finalized on the target Weber meeting node m selected in the Target Weber meeting node Selection phase. Otherwise, if the configuration is symmetric and there exists a robot on either l or c, then in the Symmetry Breaking phase the configuration is transformed into an asymmetric configuration. Note that, in case C(t) ∈ I_3^b, there may exist exactly six robots that are selected as guards. In case n = 7, there must exist at least one robot position on l. The northernmost robot on l moves towards an adjacent node away from l if there does not exist any Weber meeting node on l. Hence, the configuration becomes asymmetric, and the algorithm proceeds as in the asymmetric case for n = 7. Otherwise, if there exists at least one Weber meeting node on l, the northernmost Weber meeting node m on l is selected as the target Weber meeting node. The closest robot on l (the northernmost one in case of a tie) moves towards m. While the robot moves towards m, m remains invariant. After the robot reaches m, m is uniquely identifiable, and the gathering is finalized in the Finalization of Gathering phase.

Correctness

In this section, we describe the correctness of our proposed algorithm. Lemmas 3 and 4 prove that the leading robot remains invariant during its movement towards its destination.

Lemma 3. If C(t) ∈ I_3^a, then in the Leading Robot Selection phase, the leading robot remains the unique robot while it moves towards the half-plane H^+.

Proof. Let C(t) be any balanced configuration that belongs to I_3^a. Since the configuration is balanced and asymmetric, the number of robots in the two half-planes delimited by l is equal, and there exists a unique key corner. If there exists at least one robot position on l, then the northernmost robot on l is the leading robot. The northernmost robot moves towards an adjacent node away from l, and the configuration becomes unbalanced. Consider the case when there does not exist any robot position on l.
Without loss of generality, assume that l is a vertical line of symmetry. Let r be the leading robot in H^- selected in the Leading Robot Selection phase. Without loss of generality, let A be the unique key corner and α_AD = a_1, a_2, ..., a_pq the unique lexicographically smallest string associated with the corner A. Similarly, let B be the other leading corner and α_BC = b_1, b_2, ..., b_pq the string associated with B. Let u_i and v_i denote the nodes which the positions a_i and b_i represent in α_AD and α_BC, respectively. Since the meeting nodes are symmetric, f_t(u_i) = f_t(v_i) for each i = 1, 2, ..., pq. As α_AD = a_1, a_2, ..., a_pq is the unique lexicographically smallest string among the α_i's, there must exist a position k such that λ_t(u_k) = 0 < λ_t(v_k) = 1. Without loss of generality, let i be the position of the leading robot in α_AD, and let k be the first position where λ_t(u_k) and λ_t(v_k) differ; note that λ_t(u_k) = 0 and λ_t(v_k) = 1. We have to prove that after the movement of the leading robot, α_AD <_l α_BC, where <_l denotes the relation that α_AD is lexicographically smaller than α_BC. Assume that at time t′, the leading robot moves towards an adjacent node. Depending on the possible values of i and k in α_AD, the following cases are considered.

Case 1. The position i is less than k in α_AD. While the leading robot moves towards l, λ_{t′}(u_i) becomes 0, but λ_{t′}(v_i) equals 1. Hence, after the movement of the leading robot towards an adjacent node, α_AD <_l α_BC.

Case 2. The position i is equal to k in α_AD. Since each robot is deployed at a distinct node of the grid in the initial configuration, this case is not possible.

Case 3. The position i is greater than k in α_AD. While the leading robot moves towards l, the position k remains invariant. Hence, after the movement of the leading robot towards an adjacent node, α_AD <_l α_BC.

Note that after a single movement of the leading robot towards l, it remains the unique robot that is eligible to move towards H^+. Since α_AD remains the unique lexicographically smallest string at t′, H^+ remains invariant. Clearly, after a finite number of movements towards l, H^+ remains invariant and, ultimately, the configuration becomes unbalanced. The proof is similar when the meeting nodes admit a horizontal or a diagonal line of symmetry.

Lemma 4. If C(t) ∈ I_4^a, then in the Leading Robot Selection phase, the leading robot remains the unique robot while it moves towards the target Weber meeting node.

Proof. Let C(t) be any balanced configuration that belongs to I_4^a. Since the configuration is balanced and asymmetric, there exist at least two quadrants that contain the maximum number of Potential Weber meeting nodes, with the maximum number of robots on such quadrants. We have to prove that while the leading robot moves towards the target Weber meeting node, the quadrant H^{++} remains invariant. First, consider the case when the leading robot r is on either l or l′. Note that in this case, r may be one node or more than one node away from H^{++}. There is nothing to prove when r is one node away from H^{++}: in this case, a single move of r transforms the configuration into an unbalanced configuration. Therefore, consider the case when r is more than one node away from H^{++}. Without loss of generality, let r be on l. Let MER = ABCD be such that the corner C is diagonally opposite to A, and the corners A and B are separated by the line l. Similarly, A and D are the corners separated by the line l′.
H^{++} is the quadrant containing A. Let α_AD = a_1, a_2, ..., a_pq and α_BC = b_1, b_2, ..., b_pq be the strings associated with the corners A and B. While r moves along l, we have to prove that α_AD remains lexicographically larger than α_BC. It is noteworthy that α_AD remains lexicographically larger than α_CB and α_DA while r moves. Note that we have considered the case when the string directions are along the width of the rectangle. Let i be the position of the leading robot in α_AD and α_BC. Let u_i and v_i denote the nodes which the positions a_i and b_i represent in α_AD and α_BC, respectively. Since the meeting nodes are symmetric, f_t(u_i) = f_t(v_i) for each i = 1, 2, ..., pq. After a movement of the leading robot along the line l, note that H^{++} remains invariant. After a finite number of movements, the robot r becomes one node away from H^{++}, and the proof proceeds as before.

Next, consider the case when the leading robot is on a quadrant adjacent to H^{++}. Without loss of generality, assume that the leading robot is on H^{+-}. While the leading robot moves, it can be observed that α_AD remains lexicographically larger than α_CB and α_DA. We have to prove that α_AD remains lexicographically larger than α_BC while the leading robot moves. Let i and j be the positions of the leading robot in α_AD and α_BC, respectively. Note that i < j, as the leading robot is selected on H^{+-}. Let k be the first position for which b_k < a_k. We have the following cases.

Case 1. i < j < k. Note that u_{i−1} cannot be a robot position; otherwise, r would not be selected as the leading robot. After a move of r, u_{i−1} is a robot position, but v_{i−1} cannot be a robot position.

Case 2. i < j = k. Since each robot is deployed at a distinct node of the grid in the initial configuration, this case is not possible.

Case 3. i < k < j. After a move of r, u_{i−1} is a robot position, but v_{i−1} cannot be a robot position, as k is the first position where a_k > b_k. The proof is similar to the previous case.

Case 5. k < i < j. After a move of r, it may be the case that k = i − 1 in α_AD. In that case, λ_{t′}(u_{i−1}) = 2, but λ_{t′}(v_{i−1}) = 0. Otherwise, the proof is similar, since a_k > b_k. The proof is similar when the string directions are along the lengths of the MER.

Next, consider the case when the leading robot r is selected on a quadrant non-adjacent to H^{++}. Without loss of generality, we assume that r first starts moving towards l′. While the leading robot moves, it can be observed that α_AD remains lexicographically larger than α_CB and α_DA. We have to prove that α_AD remains lexicographically larger than α_BC while r moves. Let i and j be the positions of the leading robot in α_AD and α_BC, respectively. Note that i > j, as the leading robot is selected on H^{--}. We have the following cases.

Case 1. i > j > k. Note that u_{i−1} cannot be a robot position; otherwise, r would not be selected as the leading robot. After a move of r, a_{i−1} ≥ b_{i−1}, depending on whether v_{i−1} is a robot position or not.

Case 2. i > j = k. Since each robot is deployed at a distinct node of the grid in the initial configuration, this case is not possible.

Case 4. k > i > j. Note that u_{i−1} cannot be a robot position before the move; otherwise, r would not be selected as the leading robot. After a move of r, u_{i−1} is a robot position, but v_{i−1} cannot be a robot position, as k is the first position where a_k and b_k differ.

Case 5. j < k < i. After a move of r, it may be the case that k = i − 1 in α_AD.
In that case, λ_{t′}(u_{i−1}) = 2, but λ_{t′}(v_{i−1}) = 0. Otherwise, the proof is similar, since a_k > b_k. From all the above cases, the leading robot remains invariant while it moves towards its destination.

The next three lemmas prove that the target Weber meeting node remains invariant in the Creating Multiplicity on Target Weber meeting node phase.

Lemma 5. If C(t) ∈ I_2 ∪ I_3^b1 ∪ I_4^b1, then the target Weber meeting node remains invariant in the Creating Multiplicity on Target Weber meeting node phase.

Proof. In the Creating Multiplicity on Target Weber meeting node phase, all the non-guard robots move towards the target Weber meeting node. According to Lemma 1, the Weber meeting node remains invariant under the movement of robots towards itself, and the MER remains invariant unless the guards move. The following cases are to be considered.

Case 1. Assume that there exist multiple quadrants containing the maximum number of robot positions. Consider such quadrants and the corners contained in those quadrants: H^{++} is the quadrant containing the largest lexicographic string among those α_i's that are associated with the leading corners contained in such quadrants. A leading robot is selected in the Leading Robot Selection phase and is allowed to move towards the target Weber meeting node in H^{++}. While the leading robot moves towards the target Weber meeting node in H^{++}, H^{++} remains invariant according to Lemma 4. As a result, the configuration becomes unbalanced, and the rest of the proof follows as in the unbalanced case.

Case 2. C(0) ∈ I_4^b2. Assume that at time t′ > 0, the robot on c moves towards one of the adjacent nodes, which transforms the configuration into one that is either asymmetric or admits a single line of symmetry. Proceeding similarly, as in the case of C(0) ∈ I_3^a ∪ I_4^a, at any arbitrary instant of time t ≥ t′, C(t) remains asymmetric, and hence C(t) ∉ I_4^b3.

Theorem 10. If the initial configuration belongs to the set I \ U, then algorithm Optimal Gathering() ensures gathering over Weber meeting nodes.

Proof. Assume that C(0) ∈ I \ U. If C(t) is not a final configuration for some t ≥ 0, each active robot executes algorithm Optimal Gathering(). According to Lemmas 8 and 9, any initial configuration C(0) ∈ I \ U would never reach a configuration C(t) ∈ U at any point of time t > 0 during the execution of the algorithm Optimal Gathering(). The following cases are to be considered.

Case 1. There exists a unique Weber meeting node. All the robots move towards the unique Weber meeting node and finalize the gathering.

Case 2. There exists more than one Weber meeting node. The target Weber meeting node is selected in the Target Weber meeting node Selection phase. According to Lemmas 5, 6 and 7, the target Weber meeting node remains invariant during the execution of the algorithm Optimal Gathering(). If C(0) is a balanced configuration, then a leading robot is selected in the Leading Robot Selection phase; Lemmas 3 and 4 ensure that the leading robot remains invariant during its movement. Without loss of generality, assume that m is the target Weber meeting node. Assume that at some point of time t, there exists at least one robot r that has completed its LCM cycle. If r is a non-guard robot, then it must have moved at least one unit distance towards m by some time t′ > t. Since each non-guard robot moves towards m via a shortest path in the Creating Multiplicity on Target Weber meeting node phase, this implies that eventually, at some time t″ > t′, there exists a robot multiplicity on m.
Finally, in the Finalization of Gathering phase, since the robots have global strong-multiplicity detection capability, all the guard robots move towards m and finalize the gathering without creating any other multiplicity on a meeting node. Since each robot finalizes the gathering by moving towards m via a shortest path, gathering over Weber meeting nodes is ensured.

Optimal Gathering for C(t) ∈ U

We have proposed a deterministic distributed algorithm that ensures gathering over a Weber meeting node for any initial configuration C(0) ∈ I \ U. Let U′ ⊂ U denote the set of all the initial configurations which admit a unique line of symmetry l and in which no Weber meeting nodes or robot positions exist on l; however, there exists at least one meeting node on l. The set U′ includes the initial configurations for which gathering is feasible on a meeting node. Note that if C(t) ∈ U \ U′, then it is ungatherable. To ensure gathering deterministically, the target point must lie on l. At this point, one optimal feasible solution for a configuration C(0) ∈ U′ would be to finalize the gathering at a meeting node m ∈ l at which the total number of moves is minimized. Ties may be broken by considering the northernmost such meeting node. Another very important assumption that is not highlighted much in the literature is that, initially, all the robots are static. The correctness of our proposed algorithm fails to hold when the optimal target point is selected dynamically. As a consequence, termination may not be guaranteed with an optimal number of moves. For example, we consider one possible execution for an initial configuration C(0) = ({r 1 , r 2 }, {m 1 , m 2 , m 3 , m 4 }) in Figure 12(a). At t = 0, m 3 and m 4 are the Weber meeting nodes. Between m 1 and m 2 , the total number of moves will be minimized if the robots gather at m 1 . While r 1 and r 2 start moving towards m 1 , there may be a pending move due to the asynchronous behavior of the scheduler. Consider the case when r 2 has completed its LCM cycle while r 1 's move is pending. At t = t 1 > 0, m 3 becomes the unique Weber meeting node (Figure 12(b)). At t 2 > t 1 , assume that r 1 has reached m 1 and r 2 has moved by one hop distance towards m 3 . At t 2 , m 1 becomes the unique Weber meeting node (Figure 12(c)). Next, the gathering will eventually be finalized at m 1 . Initially, the minimum number of moves required to finalize the gathering is 8 (Figure 12(a)); the number of moves required to finalize the gathering in this execution is 10. Hence, it is not guaranteed that the minimum number of moves required to finalize the gathering in the initial configuration is achievable.

Conclusion

In this paper, the optimal gathering over Weber meeting nodes problem has been investigated over an infinite grid. The objective function is to minimize the total distance traveled by all the robots. We have characterized all the configurations for which gathering over a Weber meeting node cannot be ensured. For the remaining configurations, a deterministic distributed algorithm has been proposed that solves gathering over Weber meeting nodes for at least seven robots. One future direction of work would be to consider the min-max gathering over meeting nodes problem, where the objective function is to minimize the maximum distance traveled by a robot. Since there remain some initial symmetric configurations for which gathering over Weber meeting nodes cannot be ensured, it would be interesting to consider randomized algorithms for those configurations.
Another direction of future interest would be to consider multiplicities in the initial configuration.
Easy as Pi: A Network Coding Raspberry Pi Testbed

In the near future, upcoming communications and storage networks are expected to tolerate major difficulties produced by the huge amounts of data being generated from the Internet of Things (IoT). For these types of networks, strategies and mechanisms based on network coding have appeared as an alternative to overcome these difficulties in a holistic manner.

Introduction

Upcoming 5G technology is targeting the controlling and steering of the Internet of Things (IoT) in real time on a global scale. This will break new ground for new markets such as driverless vehicles, manufacturing, humanoid robots, and smart grids. The number of wireless devices is expected to increase by five times, up to 50 billion devices [1]. It is generally believed that those devices will not be connected in the same manner as current devices are connected today. Centralized systems will collapse in terms of capacity, while distributed systems appear as an alternative. Therefore, we believe mesh technologies will play a major role in the communication architecture of future systems. Mesh technology has been known for sensor and ad hoc networks or mobile cloud scenarios, but the technical requirements on 5G mesh-based communication systems are dramatically increasing. Future mesh networks need to support high data rates, low latency, security, network availability and heterogeneous devices to ensure high Quality of Experience (QoE) for the final user. In state-of-the-art systems, those requirements are traded off against each other, but in the 5G context, we cannot do this anymore.

Introduced by Ahlswede et al. [2], network coding constitutes a paradigm shift in the way that researchers and industry understand and operate networks, by changing the role of intermediate relays in the process of transmission of information. Relays are no longer limited to storing and forwarding data, but also take part in the coding process, through a process called recoding, where the relay generates new linear combinations of incoming coded packets without previously decoding the data. Network coding allows increases in the throughput, reliability, security and delay performance of networks. In previous works, we have shown that Random Linear Network Coding (RLNC) [3,4] is able to satisfy the aforementioned technical requirements. We have actually shown how to increase the throughput [5], reduce the delay [6] or support heterogeneity for coding-enabled communication nodes [7].

In our prior works, the C++11 Kodo library [8] was used as the common building block containing the basic RLNC functionalities. Most of the work focused on small mesh networks with a handful of communication nodes, though the expected scenarios are fairly beyond this order of magnitude. Despite this successful deployment in real systems, many of these protocols and contributions have been implemented in separate testbeds, and the experiences are hard to reproduce. Deploying a large-scale and configurable testbed for networking and storage can be challenging, not only due to the inherent costs of the hardware, but also due to maintenance challenges and the ability to replicate results consistently. The latter requires not only that the devices run the same Operating System (OS), but also that they have exactly the same configurations and software packages. There is a need to evaluate large-scale network deployments of low-cost devices in a quick, easy-to-deploy, reproducible and maintainable fashion.
The emergence of powerful and inexpensive single-board computers opens new possibilities in this area. By running a standard OS, they allow implementations that are compatible with higher-end devices. In addition, they utilize stable software supported by their communities. For example, the Iridis-pi platform [9] provides a detailed description of a Raspberry Pi (Raspi) [10] testbed ideal for educational applications. Here, the authors present computational speed benchmarks, inter-node communication throughput and memory card writing speeds for data storage to assess the testbed performance. This work offers only a basic description of how to set up the required software and also mentions that its maintenance could be time-consuming. Moreover, this work does not consider possible network coding applications. Different studies of IoT applications consider using the Raspi for data processing: in [11], the Raspi is the processing unit that coordinates and controls the activity of an isle of lamps on a public road and reports it to a monitoring center. A use case regarding remote environment surveillance using the Raspi and Arduino [12] technologies is presented in [13]. Here, both devices report the air pressure, humidity and temperature of the locations of cultural paintings, plus high-resolution images of the paintings themselves. This data is sent to a monitoring center to ensure the preservation of the paintings. Furthermore, the authors in [14] consider FingerScanner, a technology that utilizes the Raspi to act as the data server in a finger-scanning application that collects the fingerprints. Even though all these applications consider the use of the Raspi as a core block, they provide few to no descriptions of their procedures to configure the Raspi. These applications become cumbersome to maintain, as their considered systems could potentially scale when aiming to serve more users. Moreover, the way data is currently sent in the networks considered in these IoT applications will not be feasible in future 5G systems, as mentioned previously.

Given this set of specific needs, in this work we present the design, key step-by-step instructions and mechanisms to set up, configure and maintain an inexpensive testbed using potentially several Raspi devices for networking (wireless or wired) and storage applications, including RLNC functionalities in the testbed through Kodo. The architecture itself is not bound to the networking area and can be used for other applications that require replicable results with the Raspi. Our work for the testbed procedure is organized as follows: Section 2 introduces the testbed system. In Section 3, we provide details about the testbed setup, scripts, configuration files and connectivity. In Section 4, we elaborate on the need and setup for an overlay filesystem for our testbed in order to have both persistent and non-persistent data on it as an optional step. Section 5 describes a set of automation and monitoring tools that can be included in the testbed to simplify the execution of routine and repetitive tasks. Section 6 elaborates on the compilation of the Kodo library for the Raspi. Conclusions and future work are reviewed in Section 7. Finally, a set of alternative commands, for cases in which the ones presented in this work cannot be executed, is discussed in the Appendices.
Testbed Overview and Design Criteria

A sketch of the testbed is depicted in Figure 1. The testbed consists of up to 100 Raspis of different models. More specifically, in our design we consider: Raspberry Pi 1 model B rev. 2, Raspberry Pi 2 model B V1.1 and Raspberry Pi 3 model B V1.2. Each Raspi is equipped with an 8 GB Secure Digital (SD) memory card, a wired and a wireless network interface and a power supply. All the Raspis are connected to a common Local Area Network (LAN) that provides internal and external connectivity. Without loss of generality, in our case they are connected to a university network using their wired Ethernet interface, which is named eth0 according to the legacy naming convention for Ethernet interfaces in Linux [15]. We consider the university network since our testbed is used by students and academic staff to perform measurements and experimentation in controlled and reproducible scenarios as part of academic research. The testbed description and the procedures for setting it up are not restricted to this academic scenario. All Raspis are configured to run a Secure Shell (SSH) daemon for easy remote access within the university network. We requested that the university Information Technology (IT) department configure the university Dynamic Host Configuration Protocol (DHCP) server to assign each Raspi a static Internet Protocol (IP) address. This eliminates the need for monitors and keyboards with the Raspis for non-graphical applications. Finally, our design aims to configure all Raspis identically from a customized bootable image in their respective memory cards, while still allowing the end users to store files locally in each of the Raspis. We will refer to the testbed administrator as the person(s) in charge of setting up and configuring the testbed with administrator privileges from the OS point of view. The setting and configuration procedures are performed by the testbed administrator on a PC running a Linux distribution, as shown in Figure 1. Although in principle the administrator's Linux distribution is not a restriction, we present our procedure on a Debian-based Linux distribution. Our basic design considers creating a customized image and later storing it on a memory card for each Raspi. Once configured, we store the resulting image file on a Hyper Text Transfer Protocol (HTTP) server, both as a backup and in case the testbed administrator needs to make new changes to this file. In our case, we store all files at Zenodo [16], but the testbed administrator should copy our files to his/her own HTTP server to get read/write permissions. We also put all the required configuration files and scripts for the Raspis setup on the HTTP server, so there is a single place where the system setup is stored and can be modified. This simplifies system maintenance, as it may not always be desirable to make persistent changes on the Raspis, for example, when different users are interested in running experiments on a rebooted testbed. We later present how to utilize stacked filesystems to enable both persistent and temporary storage for this capability. Its purpose is to remove non-desired data after a reboot while keeping the original customized image structure. This step of the procedure is optional if the testbed administrator decides to keep only persistent changes regardless of the testbed use. Finally, we include a set of automation, monitoring and cross-compilation tools on top of our system in order to simplify the execution of repetitive and long tasks, be able to follow
the progress of long-running task processes and compile relevant C++ source code for the testbed administrator.

OS Image Setup

In this section, we review the steps to create a common OS image for all the Raspis. The image setup is composed of three major steps: select and download the OS image file, alter the image structure and configure the OS files. We proceed to detail all these steps, providing brief discussions of our setup choices when required. To perform these steps, we indicate with command-line blocks the required sequential commands to be typed by the testbed administrator on his/her PC to obtain the desired setting. In all the command blocks in the paper, we indicate if a command needs to be run with root permissions (#) or common user permissions ($). These signs will prefix the commands.

OS Selection and Download

To get started, we first need to install an OS that works properly on all the Raspi models. We will download and set up the image on the testbed administrator PC using a Debian-based distribution. An alternative to this method is to create a tailored Linux distribution for the Raspi platform using the Yocto Project [17]. However, this process would require assembly and compilation of all the software for the Raspi platform from scratch, which goes beyond the scope of our work. We use the popular Debian-based Raspbian Linux [18], given that it is the recommended and default OS for the Raspi. Raspbian is made available in two bundles: Raspbian and Raspbian Lite. The difference between the two is that Raspbian contains a pre-installed desktop environment for user interaction, while Raspbian Lite by default only permits interaction through a command shell. Given that the Raspis in our testbed are not connected to monitors, we decided to work with Raspbian Lite. If required, a desktop environment can be installed later using the package manager.

The latest Raspbian Lite bundle can be downloaded from the Raspbian official webpage [18]. At the time of this writing, the latest available bundle was 2016-05-27-raspbian-jessie-lite.zip. To ensure that the content of the bundle does not change, this procedure is based on that particular version of Raspbian Lite, which we have made available at [16]. All other files used in this paper are also available there. The testbed administrator has to move these files to his/her own HTTP server. To get started, the testbed administrator must open a Linux shell (terminal) on his/her PC and declare the environment variables shown in the command block below. We show the whole procedure by performing the role of the testbed administrator.
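A minimal sketch of this declaration block, assuming the Zenodo record cited above as the file host and a working directory under the administrator's home directory (both locations are assumptions that should be replaced by the administrator's own choices), could be:

$ export URL="https://zenodo.org/record/154328/files"
$ export IMAGE="2016-05-27-raspbian-jessie-lite"
$ export WORKDIR="${HOME}/raspbian-testbed"

Once the files have been copied to the administrator's own HTTP server, ${URL} should point there instead.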
In this code block, the ${URL} and ${IMAGE} variables specify where the Linux bundle is located, and ${WORKDIR} specifies a working directory where the Raspbian Lite bundle will be downloaded and customized. If the testbed administrator allocates his/her files to another location, then the ${URL} environment variable must be changed accordingly. Notice that even though we use the $ and # signs in the shell, in general these signs will be particular to the testbed administrator's OS shell. Next, we create the working directory and change to it with the cd command. To download the image, we utilize the wget command before unpacking the zip file as follows:

$ mkdir -p ${WORKDIR}
$ cd ${WORKDIR}
$ wget ${URL%/}/${IMAGE}.zip
$ unzip ${IMAGE}.zip

Image Customization

After Raspbian Lite has been unpacked, there should be an .img file in the working directory ${WORKDIR}. fdisk can be used to display the content of the image. We pass the arguments -u sectors, to display the sizes in sectors, and -l, to display the partitions within the image. The fdisk output provides relevant information about the image: the image is in total 2,709,504 sectors (1.3 GiB) in size and contains two partitions. The first partition starts at sector 8192 and the other partition starts at sector 137,216. The first partition type is FAT32 with a size of 63 MB, and the second partition is of type Linux with a size of 1.2 GB. This indicates that the first partition is a boot partition and the second one is a traditional Linux filesystem; in this case, the root filesystem, i.e., /.

Image Resizing

Given that we want to customize the root filesystem in the Raspis, we need to expand the image file, since 1.2 GB might not be enough to store the existing root filesystem plus additional files and software packages. Thus, we need to increase the partition size. The following procedure illustrates how the image and its root filesystem can be expanded by one GB. First, to expand the image by one GB, we execute:

$ dd if=/dev/zero bs=1M count=1024 >> ${IMAGE}.img && sync

Later, we use fdisk with the same arguments as before to see that the image is now one GB larger: the total available image size is now reported as 2.3 GiB, confirming that the change has taken effect. To expand the root filesystem, we replace the Linux partition with a new partition one GB larger. The starting point of this new partition should be the same as the old one. We make use of fdisk to alter the partition table (see the sketch at the end of this subsection): we (i) delete partition number 2; (ii) create a new primary partition; (iii) set the new partition starting point, which is 137,216 in our case; and finally (iv) write the new partition table to the image file.

Loopback Device Setup

After successfully resizing the image file, we use a loopback device to make the Raspbian image available as a block device in the filesystem, as sketched below. For this command to work, the testbed administrator's distribution must have the util-linux package with version 2.21 or higher. Otherwise, the -P argument of losetup will appear as invalid. If the version of losetup cannot be updated for some reason, an alternative option for this part is presented in Appendix A.1 of the Appendices.
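A sketch of the repartitioning and loopback steps just described follows; the interactive fdisk keystrokes are indicative and may differ slightly between fdisk versions, and the ${DEV} variable name follows the usage later in this procedure:

# fdisk -u=sectors ${IMAGE}.img
    d           delete a partition
    2           select partition number 2
    n           create a new partition
    p           make it a primary partition
    2           give it partition number 2
    137216      first sector, the same starting point as before
    <Enter>     last sector, accepting the default (end of the image)
    w           write the new partition table and exit
# export DEV=$(losetup --find --show -P ${IMAGE}.img)

With -P, losetup scans the attached image for partitions, so they become available as ${DEV}p1 and ${DEV}p2 (e.g., /dev/loop0p1 and /dev/loop0p2).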
If the previous command was successful, the lsblk command can be used to list the available block devices in the filesystem. In its output, the image block device appears as /dev/loop0, with two partitions associated with it, e.g., loop0p1 and loop0p2. Finally, we check the filesystem of the block device with e2fsck and resize it with the resize2fs command (see the sketch after this subsection). The tools report that the filesystem on /dev/loop0 is now 583,680 blocks long.

Block Device Mounting

For browsing and altering the files in the image, we mount the block device partitions into a particular path of our ${WORKDIR} in order to customize them. We mount the block device partition that contains the root filesystem and later the boot partition. This is done by creating an empty directory that is used as a mountpoint. We name it root and create it in the working directory before mounting the root filesystem onto the mountpoint. The root filesystem mounted in ${ROOTDIR} already has a boot directory that can be used as the mount point for the boot partition in the block device /dev/loop0p1. This is convenient because the final edited partition from ${ROOTDIR}/boot will be mounted on this same directory when a Raspi starts up with a memory card containing the raw final image. Both mount commands are included in the sketch below. In this way, it is possible to change all files within the Raspbian image as desired by editing the files in ${ROOTDIR}. We take advantage of this to edit configuration files, append new files and even update and install packages.
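A sketch of the filesystem check, resize and mounting steps described above, assuming ${DEV} was set by the losetup command from the previous subsection:

# e2fsck -f ${DEV}p2
# resize2fs ${DEV}p2
$ export ROOTDIR="${WORKDIR}/root"
$ mkdir -p ${ROOTDIR}
# mount ${DEV}p2 ${ROOTDIR}
# mount ${DEV}p1 ${ROOTDIR}/boot

The first mount makes the root filesystem available under ${ROOTDIR}; the second places the boot partition on the boot directory inside it, mirroring the layout the Raspi will see at startup.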
Image OS Files and Configuration Scripts Setup

In general, the Raspis should be set up as similarly as possible. However, some particularities exist to differentiate the devices. In addition, scripts containing further configurations for the Raspis should be distributed as part of the common image. Therefore, we present here the steps to set up the basic properties of the Raspis and to distribute configuration scripts to each of them through the image. For this, we first indicate how to obtain our configuration scripts and put them in the image. Later, we describe the tasks performed by these configuration scripts. Finally, we indicate how and in which order the scripts are executed to configure all the devices. Any testbed administrator might modify or include other tasks according to his/her needs, as we will show.

Image Default Configuration Scripts Download

In our case, we have our default configuration scripts stored in a file rasp_config.zip located at the same URL of the HTTP server where the image was retrieved from, i.e., the one in the environment variable ${URL}. We first download this compressed file with wget and extract it locally into our Raspbian Lite image:

$ wget ${URL%/}/rasp_config.zip
$ unzip rasp_config.zip -d ${ROOTDIR}/home/pi/
Archive: rasp_config.zip

The unzipped files are one configuration file and three configuration scripts in the newly created ${ROOTDIR}/home/pi/rasp_config/ folder in the image. We describe the features that we require all the Raspis to have and how they are achieved with these configuration scripts.

Device Hostnames

The hostname helps the user to physically distinguish the devices from each other. In our case, we require the devices in our testbed to have different hostnames. We define the hostnames based on the Medium Access Control (MAC) addresses of the Raspis' wired Ethernet interface. Prior to this stage, the MAC address of a network card can be found using the command ifconfig or ip addr on a given Raspi. We store the MAC addresses and hostnames of the Raspis in the configuration file ${ROOTDIR}/home/pi/rasp_config/nodes.csv; a sample of this file is sketched at the end of this subsection. The testbed administrator has to insert the MAC addresses and hostnames of his/her Raspis, obtained previously, in the format shown in the configuration file. For each given Raspi, there is a MAC address and the corresponding hostname. This file is employed by the ${ROOTDIR}/home/pi/rasp_config/set_hostname Bourne Again SHell (Bash) script to assign the hostname of each Raspi. The script (in lines): (1) tells the system to interpret the script using Bash; (3)-(4) gets the path to the script itself and the list of hostnames; (5) gets the MAC address of the node itself; (6) gets the current hostname; (7) gets the new hostname from the hostname list; and (10)-(14) assigns the new hostname to the Raspi where the script is executed. A sketch of such a script is also given at the end of this subsection.

Updating Default Configuration Files and Scripts

Besides the single script with its configuration file introduced up to this point of our procedure, it is possible that the testbed administrator may need to add other scripts to configure his/her Raspis. We want to ensure that all the Raspi configuration scripts of any testbed administrator are obtained in a simple way. We automate this task by including the ${ROOTDIR}/home/pi/rasp_config/update_rasp_config script in our procedure. The purpose of this script is to make all the Raspis fetch all the configuration scripts from the location where the image is stored during a testbed start-up.

For the above scripts to work on the Raspis, it is required that the Raspis' MAC addresses are found in nodes.csv. In addition, it should be noted that, for testbed administrators other than ourselves, the URL for file fetching and the configuration scripts themselves can be modified to fit their requirements. If required, rasp_config.zip will need to be edited to include all the required configuration files and scripts. In addition, it might be necessary to edit the URL in the script update_rasp_config to store and fetch from a different location. Nevertheless, both the URL and configuration files presented here can be used as a starting boilerplate if desired.
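Hypothetical versions of nodes.csv, set_hostname and update_rasp_config, consistent with the descriptions above, are sketched below. All MAC addresses, hostnames and the fetch URL are illustrative placeholders (b8:27:eb is the Raspberry Pi Foundation's address prefix), and the exact line numbering of the original scripts may differ:

b8:27:eb:01:23:45,rasp01
b8:27:eb:67:89:ab,rasp02

#!/usr/bin/env bash
# set_hostname: assign this Raspi the hostname listed for its MAC address
DIR="$(cd "$(dirname "$0")" && pwd)"
NODES="${DIR}/nodes.csv"
MAC="$(cat /sys/class/net/eth0/address)"
OLD="$(hostname)"
NEW="$(grep -i "^${MAC}," "${NODES}" | cut -d',' -f2)"
if [ -n "${NEW}" ] && [ "${NEW}" != "${OLD}" ]; then
    echo "${NEW}" > /etc/hostname
    sed -i "s/${OLD}/${NEW}/g" /etc/hosts
    hostname "${NEW}"
fi

#!/usr/bin/env bash
# update_rasp_config: fetch the latest configuration scripts at start-up
# The URL below is a placeholder for the administrator's own server
URL="http://example.org/rasp_files"
cd /tmp &&
    wget -q "${URL}/rasp_config.zip" &&
    unzip -o rasp_config.zip -d /home/pi/ &&
    rm -f rasp_config.zip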
Configuration Scripts Execution Order

To actually make the Raspis change hostnames and apply any other considered configurations, we have to make each Raspi call the above scripts when it starts up. After finishing the setup process, all the unzipped files presented in Section 3.6.1 will be locally available at each Raspi within the root filesystem. We first need to run the update script before running any other configuration scripts. To do this after boot-up, we include a call to the update script in ${ROOTDIR}/etc/rc.local, before the exit 0 line in the file:

# sed -i '/^exit 0/i bash /home/pi/rasp_config/update_rasp_config' ${ROOTDIR}/etc/rc.local

If more configuration scripts are required, adding each of them to the rc.local file makes maintenance by the testbed administrator difficult, since each one would need to be present both in the image and in the downloaded rasp_config.zip. To avoid this problem, we include the ${ROOTDIR}/home/pi/rasp_config/main script, which calls all other configuration scripts (besides update_rasp_config) in sequential order. This script's content is:

#!/usr/bin/env bash
bash /home/pi/rasp_config/set_hostname
# Any other required configuration scripts...

In this way, the automation process is simplified, since we do not need to modify ${ROOTDIR}/etc/rc.local again after the image has been written to the memory cards. Now, we insert a call to the main script in ${ROOTDIR}/etc/rc.local as follows:

# sed -i '/^exit 0/i bash /home/pi/rasp_config/main' ${ROOTDIR}/etc/rc.local

Finally, the end of ${ROOTDIR}/etc/rc.local should look like the following:

...
bash /home/pi/rasp_config/update_rasp_config
bash /home/pi/rasp_config/main
exit 0

Notice that set_hostname is now called by the main script instead. The update script is still called directly. This ensures that all configuration scripts are updated before being executed. Changes to the update script itself will first take effect at the next system startup.

Image Package Updating by Changing the Apparent Root Directory

Besides adding and configuring files within the image, the testbed administrator may want to install and update the software packages within the image before it is written to all the memory cards that go into the Raspis. From any Linux x86 machine, such as the testbed administrator PC, this can be done using the chroot command together with the Quick Emulator (QEMU) [19] hypervisor for Advanced RISC Machine (ARM) processors.
chroot is a method in Linux that modifies the apparent root filesystem location from / to any other path. Consequently, in our case, we can use the Raspbian Lite image root filesystem within the testbed administrator's Linux distribution. QEMU then allows the execution of commands for the Raspi image (ARM instructions) on the testbed administrator's PC architecture. Due to the ARM processor that the Raspis employ, installation of the QEMU-related software is required first, along with verification that QEMU is ARM-enabled. To do so, run the following commands:

# apt-get install binfmt-support qemu qemu-user-static
# update-binfmts --display qemu-arm
qemu-arm (enabled):

In the previous output, the testbed administrator must make sure that the second command prints qemu-arm (enabled) as indicated. If that is not the case, then it should be possible to enable it by running:

# update-binfmts --enable qemu-arm

Provided that qemu-arm is enabled, we should now be able to chroot into our Raspbian Lite image. There are a few commands to be performed before actually changing root into the root partition of the image. First, to get Internet access from within the Raspbian Lite image, it is necessary to copy the testbed administrator's Linux distribution resolv.conf file into the image root filesystem:

$ cd ${ROOTDIR}
# cp /etc/resolv.conf ${ROOTDIR}/etc/resolv.conf

Now, because of the ARM architecture, the /usr/bin/qemu-arm-static command needs to be copied into the image before continuing:

# cp /usr/bin/qemu-arm-static ${ROOTDIR}/usr/bin

Before changing the root, it is necessary to populate the directories proc, sys and dev for the image to get control as the testbed administrator's apparent root filesystem. This is done with the following commands:

# mount -t proc proc proc/
# mount --bind /sys sys/
# mount --bind /dev dev/
# mount --bind /dev/pts dev/pts

Finally, run the following command to change root:

# chroot ${ROOTDIR} /usr/bin/qemu-arm-static /bin/bash

If successfully executed, our terminal should have changed its prompt, indicating that we are the root user with the Raspbian Lite root filesystem as the apparent root. In case the chroot command is not successful, we provide an alternative command in Appendix A.2 of the Appendices. To be aware of the mode we are working in, we prefix the prompt with (chroot) to indicate that it is a chroot environment.

The Raspbian Lite image should now be usable almost as if it had been booted on a Raspi. A major difference is that the testbed administrator PC is likely significantly faster than a Raspi. Hence, updating, upgrading and installing new software packages should be faster than on a Raspi, although updating and upgrading the packages for the Raspi might still take some time. To update the system package list, run the following command:

(chroot) # apt-get update

We further install some packages that we consider useful:

(chroot) # apt-get install vim git screen

vim is the improved vi editor for Linux, git is used for managing Git repositories and screen [20] for better handling of long-running processes. When the image is written to a memory card, all the changes that have been made to it so far will exist in all Raspis.
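Before moving on, an optional sanity check (not part of the original procedure) can confirm that the chroot really is the ARM image, and cleaning the package cache keeps the image small:

(chroot) # dpkg --print-architecture
armhf
(chroot) # apt-get clean

dpkg reporting armhf confirms that the package operations are being applied to the Raspbian image rather than to the host system.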
Overlay File System

In principle, our procedure modifies the image file only once, on the testbed administrator PC, when its setup is made. In addition, keeping this image in the Raspis provides the same initial system for all the devices. If we do not make any further modifications during the image setup, any files created after the initial boot of a Raspi will remain on the memory card. This is cumbersome to maintain, since the size of the memory card is relatively small (8 GB) and there might be various users utilizing the testbed. In addition, different testbed users could be interested in running their experiments on a freshly rebooted system with the original customized image. We emphasize that this step is not necessary if the testbed administrator wants to consider only persistent storage for the devices. A use case for this scenario could be a single user for the testbed, or a testbed administrator who only wants to set up a few Raspis.

If both persistent and non-persistent storage are required for the Raspis, we present here the steps to set up an overlay filesystem. This type of filesystem enables an upper filesystem to be overlaid on a lower filesystem. Whenever a file is requested, the upper filesystem forwards the request to the lower filesystem in case it does not have the file itself. If the upper filesystem has the requested file, it simply returns the file. This idea can be used in our setup to mount the root filesystem (i.e., Raspbian Lite) in the Raspis during startup as a read-only filesystem: the image configuration files will remain after a reboot, but the local data in these directories will be erased. To enable the possibility of persistent changes, we overlay the upper filesystem, which is mounted in the Raspi Random Access Memory (RAM), i.e., /tmp, as rewritable on top of the lower root filesystem. Reading a file may return a file from the lower filesystem, but if a file is stored, it will be saved in the upper filesystem. Accessing this file again will return the stored file from the upper layer. After a reboot, the files stored in the upper filesystem are discarded, so only the original image content in the lower filesystem remains.

Filesystem Installation

Assuming that we are still in the chroot environment of the Raspbian Lite root filesystem for installing packages, we can set up the overlay filesystem at this point of the procedure. There already exist implementations overlaying the root filesystem; we use an implementation available at the Git repository in [21]. Since we installed git in a previous step, we clone the repository. The command block below stores it in /tmp, which is really mounted in RAM. All the files stored here will disappear when the system is rebooted.
(chroot) # OVERLAYROOTDIR="/tmp/overlayroot"
(chroot) # git clone https://github.com/chesty/overlayroot.git ${OVERLAYROOTDIR}

Before enabling the overlay filesystem, it is necessary to generate an initial RAM filesystem, or initramfs. This is an initial filesystem that is loaded into RAM during the startup process of a Linux machine to prepare the real filesystem. For this purpose, we need the BusyBox package:

(chroot) # apt-get install busybox

To create and activate the overlay filesystem, we first need to add the required system scripts. This is done as follows:

(chroot) # cp ${OVERLAYROOTDIR}/hooks-overlay /etc/initramfs-tools/hooks/
(chroot) # cp ${OVERLAYROOTDIR}/init-bottom-overlay /etc/initramfs-tools/scripts/init-bottom/
(chroot) # echo "overlay" > /etc/initramfs-tools/modules

To generate the initial RAM filesystem, we have to utilize the mkinitramfs command. By default, this searches for the available kernel modules in the system. Since we are in chroot mode, we need to specify the correct kernel modules to search for. The available kernel modules are located in /lib/modules and can be listed with ls /lib/modules; mkinitramfs is then invoked once per kernel version, producing init.gz for the version-1 kernel and init-v7.gz for the v7 kernel. Although these commands might output some warnings, they should successfully generate working initial RAM filesystems. Later, an initial RAM filesystem needs to be called by the bootloader. In Raspbian, this is done by adding a command to the config.txt file in the boot partition. If the system will run on a Raspi version 1, then use init.gz by executing only the first code line below; otherwise (Raspi version 2 or 3), use init-v7.gz by executing only the second code line:

(chroot) # echo "initramfs init.gz" >> /boot/config.txt
(chroot) # echo "initramfs init-v7.gz" >> /boot/config.txt

After this point, it is no longer required to be in chroot mode. We exit the chroot environment with exit, unmount all partitions with umount --recursive on the mountpoint and detach the loopback device with losetup -d. For the --recursive option to work properly, it is necessary that the util-linux package version is greater than or equal to 2.22. Otherwise, an alternative is to either update the package or follow the procedure in Appendix A.3 of the Appendices.

Persistent and Non-Persistent Image Directories

Provided the stacked filesystem is configured, it is now possible to have directories where files are or are not removed upon rebooting the Raspis. The following procedure creates an extra partition in the image for the Raspi user home directory, which will be made storage persistent. We first expand the image according to the desired home directory size, while avoiding making the image bigger than the target memory card size.
$ dd if=/dev/zero bs=1M count=1024 >> ${IMAGE}.img && sync

We create a partition for the home directory after the root partition. To do this, we again use fdisk to find the next available sector in the image. Running fdisk with the same arguments as before verifies that one GB is now available to be used in the partitions and shows that the new partition should start at sector 4,806,656. We create the new partition with fdisk in the same way as before, and then create a loopback device again and format the new partition with mkfs.ext4. If the -P option is not available for losetup, we provide an alternative command line in Appendix A.1. If the filesystem formatting was successful, the filesystem is now available for use. We need to inform the Raspbian OS to mount the home partition that we have just created. This can be done by adding an entry in fstab as follows:

# mount ${DEV}p2 ${ROOTDIR}
# sed -i '$a /dev/mmcblk0p3 /home ext4 defaults,noatime 0 2' ${ROOTDIR}/etc/fstab

If the last command was executed correctly, the new line should appear at the end of the ${ROOTDIR}/etc/fstab file. Originally, the home folder is located in the root filesystem; we therefore have to move its content to the new home partition and store it properly. We do that as follows:

# mount ${DEV}p3 ${ROOTDIR}/mnt
# mv ${ROOTDIR}/home/* ${ROOTDIR}/mnt/

Now, we unmount all the partitions again and detach the loop devices, as before. If the --recursive option is not available, then follow the procedure in Appendix A.3 of the Appendices. If the steps up to this point have been executed successfully, the customized image is available in the ${IMAGE}.img file and is ready to be deployed to the Raspis. In the following section, we indicate how to proceed with writing the image to the various memory cards.

Writing the Customized Image to SD Memory Cards

For a basic system setup, the final step is to write the customized image to all the memory cards before they can be used in the Raspis. For our current system, we do this manually for each card. The testbed administrator needs to insert each memory card into his/her PC and follow the procedure in this section. A given card will be available as /dev/mmcblkX or /dev/sdYX, where X is a natural number and Y is a letter.

It is very important to write to the correct device, as everything on it will be overwritten. To avoid removing information from the wrong device, a testbed administrator can use the commands lsblk and/or df -h before and after inserting the memory card to deduce its correct device name. In our case, the device was /dev/mmcblk0. Once identified, the following command writes the image to the memory card:

# dd if=${IMAGE}.img of=/dev/mmcblk0 bs=4M && sync

The previous dd and sync commands, which copy the image to the memory card and flush the remainder in memory to the filesystem, will take tens of minutes depending on the memory card speed and the size of the image. After this is done, it is only necessary to eject the memory card and plug it into a Raspi so it can boot up.
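Since writing to the wrong device is destructive, a read-back verification after dd can be reassuring. The following optional sketch compares the card against the image byte-for-byte over the image's length (the device name is an example and must be adapted):

# dd if=${IMAGE}.img of=/dev/mmcblk0 bs=4M && sync
# cmp -n $(stat -c%s ${IMAGE}.img) ${IMAGE}.img /dev/mmcblk0 && echo "write verified"

cmp exits silently with status 0 when the compared bytes match, so the message is only printed for a successful write.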
Automation and Monitoring Tools

In daily testbed use, there are frequent tasks that require a series of commands on a given Raspi. These can be tedious, error-prone and time-consuming to carry out every time the task needs to be performed. Therefore, in this section, we introduce a set of tools that help to automate and monitor routine task execution on the Raspis, and we show relevant example commands with them. To be able to run all the following commands, it is necessary to have SSH connectivity with the Raspis; otherwise, the commands need to be run locally on a Raspi, making it necessary to use a keyboard and a monitor. The testbed administrator needs to put the memory cards in the Raspis and turn them on. The devices should now be bootable.

Fabric

Controlling multiple devices using SSH from a single PC often leads to many repetitive tasks. Among these, we can mention: (i) rebooting a set of devices; (ii) installing applications on multiple devices; and (iii) copying files to/from multiple devices. Fabric [22] provides a Python library that simplifies the management of many devices from a single PC. First, the testbed administrator creates a directory to hold the Fabric source code:

$ export CODEDIR="${HOME}/code"
$ mkdir -p ${CODEDIR}
$ cd ${CODEDIR}

Then, the ${CODEDIR}/fabfile.py file provides a script with some basic functionalities that can perform the items above (i-iii). In general, other administrators may require different functionalities, but this is out of the scope of this work. The following file header serves as a starting boilerplate:

${CODEDIR}/fabfile.py
from fabric.api import env, task, sudo
# Python Fabric script to run commands on multiple hosts through ssh
#
# Run script as 'fab <task>', where <task> is one of the script's functions
# marked as a task. The task marked as 'default' will be run if <task> is not
# specified

The fabfile contains three functions that perform our example tasks. These functions utilize variables and functions from the Fabric Application Programming Interface (API), such as env, task and sudo, among others. These API elements permit, respectively, defining environment variables, creating the administrator's tasks through decorators, and running a task in sudo mode. When a task is called from the terminal, Fabric searches the directory for the fabfile.py file and executes the desired task. The syntax for executing a task with arguments is of the form fab <task>:<arguments>.

Screen

For long-running processes, the screen utility [20] provides a terminal session that does not terminate even with connectivity interruptions. Users can attach to and detach from a screen session as desired. The following procedure presents how to use screen with SSH to: (i) log in to a generic Raspi; (ii) open a screen session; (iii) execute an example command; (iv) detach from the screen session; (v) terminate the SSH connection; (vi) log in to the Raspi again; and (vii) attach to the screen session to see the program still running. From the testbed administrator PC, we start by establishing an SSH connection to a Raspi and opening a screen session:

$ ssh pi@<RASP_IP>
pi@<RASP_IP>'s password:

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.

After the introduction message, we start a session by typing screen; to enter the screen session after its own introduction message, we have to press either the Space or Return key to clear the shell. After doing so, we should be in a screen session, although its appearance is the same as a regular terminal shell. Inside this example session, we execute a program that never ends:

$ top

The top command simply continuously shows the table of processes executing on the Raspi, as in any Linux distribution. While top is running, we first press Ctrl+a and then Ctrl+d to detach from the screen session. We now terminate the SSH connection and log in again to verify that the top command is still running. Without screen, the top program would have terminated, since its hosting shell was terminated. To log out and back in, we run:

$ exit
logout
Connection to <RASP_IP> closed.
$ ssh pi@<RASP_IP>

Now that we are logged in to the Raspi again, we first check the available detached sessions by running:

$ screen -list
There is a screen on:
        824.pts-0.raspXX        (Detached)

From the command output, we can see that the session is still running on our generic Raspi number XX and that no user is currently attached to the session. To attach to the session, we execute:

$ screen -r 824.pts-0.raspXX

After attaching again, we should see top still running. screen has more functionality that can be used in this or other contexts, but this is outside the scope of this work. To terminate the screen session, first terminate top by pressing q. Once top is terminated, we need to type exit two times, first to exit the screen session and then to terminate the SSH connection. The output should be as follows:

[screen is terminating]
pi@<RASP_IP>:~$ exit
logout
Connection to <RASP_IP> closed.
$
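Fabric tasks and screen cover most repetitive work, but the same pattern can also be expressed as a plain shell loop. The sketch below, which is illustrative rather than part of the original toolset, runs one command on every Raspi listed in nodes.csv, assuming the hostnames resolve on the administrator's network (IP addresses can be listed instead if they do not):

$ cut -d',' -f2 nodes.csv | while read host; do
      ssh pi@"${host}" 'uptime'
  done

Replacing uptime with any other command, or substituting an scp invocation for ssh, mirrors the reboot, install and copy tasks of the fabfile in plain Bash.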
Cross-Compilation: From the PC to the Raspberry Pi

An important case of a computationally expensive task is compiling software packages and large libraries. Given the computing capabilities of the Raspi, such tasks can be challenging, if not prohibitive, in terms of Central Processing Unit (CPU), memory or space usage and/or compilation time. In this section, we present a procedure for cross-compiling C++ source code on the testbed administrator PC for the ARM architecture of the Raspis. By doing this, we take advantage of the (typically) much higher computing power of the testbed administrator PC in order to save time and computational resources. Hence, we give an example of compiling a simple C++ program and copying the generated binary with SSH to run locally on a Raspi.

Furthermore, given that our testbed's purpose is network coding applications, we also present how to cross-compile Kodo [8], a C++11 network coding library for performing encoding, decoding and recoding operations. In this way, we aim to present a fully configurable and manageable testbed with the capability to evaluate network coding protocols with several Raspis and to locally store measurements from different evaluations. Therefore, we also show how kodo-cpp, a set of high-level C++ bindings for Kodo, can be cross-compiled for applications on the Raspi.

Toolchain Setup

To compile on a given architecture code that is aimed at a different one, the testbed administrator needs to install a toolchain on his/her PC. The toolchain is mandatory due to the difference between the processor architecture where the source is compiled and the one where it runs. Given that compiling a toolchain can be an arduous task, we get the toolchain recommended for the ARM architecture of the Raspis. This toolchain is available from [16], and it already contains the binaries for different compilers based on gcc 4.9. We extract the binaries, adjusting them to our coding style and compiling convention. For this, we use the ${TOOLCHAINDIR} directory as the working directory; the testbed administrator may choose another working directory if desired. First, we create the toolchain directory:

$ export TOOLCHAINDIR="${HOME}/toolchains"
$ mkdir -p ${TOOLCHAINDIR}
$ cd ${TOOLCHAINDIR}

Later, we download a Raspi toolchain with the binaries for a 64-bit Linux distribution, available in [16], and finally unzip the downloaded file:

$ wget https://zenodo.org/record/154328/files/raspberry-gxx493-arm.zip
$ unzip raspberry-gxx493-arm.zip

Instead of calling the ARM cross-compiler using its full path, we make the binaries accessible from the command shell system-wide. A way to do this is by adding the following commands to ${HOME}/.profile:

$ sed -i '$a export TOOLCHAINDIR=\"$HOME/toolchains\"' ${HOME}/.profile
$ sed -i '$a export TOOLCHAINBINARY=\"raspberry-gxx49-arm-g++\"' ${HOME}/.profile
$ sed -i '$a PATH=\"\$PATH:${TOOLCHAINDIR}/arm-rpi-4.9.3-linux-gnueabihf/bin\"' ${HOME}/.profile

This helps the OS to recognize the location of the compiler command when a new shell is opened. The .profile should now contain the lines we inserted. There might be other code in the file of other testbed administrators; we recommend leaving those parts unmodified.

To update the ${PATH} variable and the .profile, we use the source command so that the changes take effect in the administrator's system:

$ source ${HOME}/.profile

A working ARM cross-compiler on the testbed administrator PC should output the following:

$ ${TOOLCHAINBINARY} --version
raspberry-gxx49-arm-g++ (crosstool-NG crosstool-ng-1.22.0-88-g8460611) 4.9.3
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Cross-Compile Example

The following shows: (i) how to cross-compile the classic hello_world C++ example for the Raspi ARM architecture; and (ii) how to copy and execute the binary on a Raspi using Secure Copy (SCP) and SSH. First, we create the file hello_world.cpp. For simplicity, we create it in the directory where we stored the fabfile.py file, with the following content, using any text editor:
${CODEDIR}/hello_world.cpp
#include <iostream>

int main()
{
    std::cout << "Hello World!" << std::endl;
    return 0;
}

We save the previous file and compile it for the Raspi on the testbed administrator PC by doing:

$ ${TOOLCHAINBINARY} hello_world.cpp -o hello_world

This should produce a binary hello_world that is executable on the Raspi. We copy it to a Raspi using SCP (Fabric could be used instead if we were interested in deploying a compiled binary to many Raspis):

$ scp hello_world pi@<RASP_IP>:~/

After the executable has been copied to the Raspi, we log in through SSH:

$ ssh pi@<RASP_IP>

We can list the directory content after we have logged into the Raspi and verify that the compiled hello_world binary is there:

pi@<RASP_IP>:~$ ls
hello_world rasp_config

Finally, we simply execute hello_world to confirm that the cross-compiling worked properly:

pi@<RASP_IP>:~$ ./hello_world
Hello World!

Cross-Compile Kodo

As we originally mentioned, Kodo is a C++11 network coding library that permits implementation of network coding functionalities by allowing any network protocol designer to use and test the primitive encoding, decoding and recoding operations of RLNC. In this way, a designer only needs to focus on the design and testing of a network coding-based protocol. Kodo is available through programming bindings for a variety of popular programming languages. This procedure presents how to configure the Kodo C++ bindings, kodo-cpp, to cross-compile applications that can run on the Raspi. kodo-cpp provides a simple interface to the underlying C++11 code that exists in the libraries kodo-core, for the object structure, and kodo-rlnc, for the RLNC codec implementation. More details about Kodo are provided in the code documentation [25].

To use Kodo for research, it is necessary to obtain a free research license. To do this, a request form needs to be filled in [26] and processed by the Kodo developers. Once access to Kodo has been granted, the source code can be pulled from its Git repositories to be compiled. Assuming that the testbed administrator already has access, we clone the kodo-cpp repository locally in ${CODEDIR} and change directory into the repository:

$ cd ${CODEDIR}
$ git clone git@github.com:steinwurf/kodo-cpp.git
$ cd kodo-cpp

We first configure kodo-cpp to build executables for the ARM architecture using the Raspi toolchain and later build them by running:

$ python waf configure --cxx_mkspec=cxx_raspberry_gxx49_arm
...
'configure' finished successfully (X.XXXs)
$ python waf build
...
'build' finished successfully (XmXX.XXs)

If the configuration and build steps are successful, the binaries should have been created. To be able to use them, we need to create a shared library that we will use on the Raspi. To do this, we run the following command:

$ python waf install --install_shared_libs --install_path="./shared_test"
...
'install' finished successfully (X.XXXs)

Now, we copy the shared library, binary files and related headers to the Raspi home directory as follows:

$ scp -r shared_test/include shared_test/libkodoc.so pi@<RASP_IP>:~/

Alternatively, and for the testbed administrator's reference, Kodo can also generate static libraries. We log in to the Raspi and execute the unit tests and one of the binaries by running:

$ ssh pi@<RASP_IP>
$ ./kodocpp_tests
...
[ PASSED ] tests.
$ ./encode_decode_simple
Data decoded correctly

If the Kodo cross-compilation worked properly, both the unit tests and the binary run should produce the outputs shown.
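Because libkodoc.so was copied into the Raspi's home directory rather than into a standard library path, the dynamic linker may not find it when the binaries are launched; whether this matters depends on how the binaries were linked (an embedded rpath would make it unnecessary). An optional sketch for this situation is:

pi@<RASP_IP>:~$ LD_LIBRARY_PATH=${HOME} ./kodocpp_tests
pi@<RASP_IP>:~$ LD_LIBRARY_PATH=${HOME} ./encode_decode_simple

Exporting LD_LIBRARY_PATH in ~/.profile would make the setting persistent across logins.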
Conclusions

Observing the expectations of the IoT and the lack of a low-cost, easy-to-configure testbed in this area for reproducible research, we provide an in-depth description of the new Aalborg University Raspi testbed for network coding evaluation, and of how to guarantee replicability and scaling management of this system. The description shows how to set up interconnected Raspis with memory cards for local storage, a Raspbian Lite image, network connectivity and proper system administration privileges. Using the presented procedure permits setting up a Raspbian Lite image for the Raspis. A tailored Linux distribution could instead be created from scratch using the Yocto Project; however, assembling and compiling the software for the Raspi can be a tedious and time-consuming task, so this method may be adequate only for an expert user. We hope this work permits researchers to replicate setups and scenarios for evaluating their strategies in a rapid and manageable way. Future work in the use of Raspi devices will focus on expanding the setup and automation of tasks to run the testbed, configuring specified network topologies (e.g., with specific connectivity or packet loss ratios), reserving the use of these sub-networks for running tailored experiments and opening the use of the testbed beyond our team at Aalborg University. Future work in this area will also consider making the testbed fetch the image through the HTTP server, which is expected to simplify the maintenance of the memory cards.
Reducing post-tonsillectomy haemorrhage rates through a quality improvement project using a Swedish National quality register: a case study

Purpose: Tonsillectomy (TE) is one of the most frequently performed ENT surgical procedures. Post-tonsillectomy haemorrhage (PTH) is a potentially life-threatening complication of TE. The National Tonsil Surgery Register in Sweden (NTSRS) has revealed wide variations in PTH rates among Swedish ENT centres. In 2013, the steering committee of the NTSRS therefore initiated a quality improvement project (QIP) to decrease the PTH incidence. The aim of the present study was to describe and evaluate the multicentre QIP initiated to decrease PTH rates.

Methods: Six ENT centres, all with PTH rates above the Swedish average, participated in the 7-month quality improvement project. Each centre developed improvement plans describing the intended changes in clinical practice. The project's primary outcome variable was the PTH rate. Process indicators, such as surgical technique, were also documented. Data from the QIP centres were compared with a control group of 15 surgical centres in Sweden with similarly high PTH rates. Data from both groups for the 12 months prior to the start of the QIP were compared with data for the 12 months after the QIP.

Results: The QIP centres reduced the PTH rate from 12.7 to 7.1% from pre-QIP to follow-up; in the control group, the PTH rate remained unchanged. The QIP centres also exhibited positive changes in related key process indicators, i.e., increasing the use of cold techniques for dissection and haemostasis.

Conclusions: The rates of PTH can be reduced with a QIP. A national quality register can be used not only to identify areas for improvement but also to evaluate the impact of subsequent improvement efforts and thereby guide professional development and enhance patient outcomes.

Introduction

Tonsillectomy (TE) is one of the most frequently performed ENT surgical procedures, with over 700,000 operations performed in the United States each year [1]. In Sweden, with almost 10 million inhabitants, approximately 13,500 tonsil procedures are performed every year, half of which are TEs [2]. There are two main indications for tonsil surgery: (1) upper airway obstruction in children resulting in sleep-disordered breathing, and (2) infection-related problems (recurrent tonsillitis, chronic tonsillitis or peritonsillar abscess) [3]. Patients undergoing tonsil surgery due to upper airway obstruction are typically younger (incidence peaks at ages 3-5 years) and predominantly male; in contrast, patients undergoing tonsil surgery because of infection-related problems are typically older (incidence peaks at 16-18 years) and predominantly female [4].

Post-tonsillectomy haemorrhage (PTH) is the most feared complication of TE. A PTH is a potentially life-threatening event that often requires acute re-admission to hospital and sometimes a return to theatre. Reported rates of PTH vary in the literature; recent large studies indicate a range between 6 and 15% [5-8]. Fatal outcomes after PTH are rare, but should not be overlooked: a large Swedish cohort study documented a mortality rate after tonsil surgery (including both total and partial TE) of 1/40,000 [9]. In Austria, five children below the age of 6 years died after severe PTH in 2006-2007 [10]. Thus, the rate of PTH is one of the most important quality and safety indicators in tonsil surgery.
The National Tonsil Surgery Register in Sweden (NTSRS) was initiated in 1997 by The Swedish Association for Otorhinolaryngology, Head and Neck Surgery. The aim of the NTSRS is to monitor patient-related outcomes (e.g., symptom relief after surgery), complications, and clinical practice patterns to identify trends, initiate and perform research projects and stimulate local clinical improvement programmes. The NTSRS collects data on demographics, level of care (inpatient/outpatient), indication for surgery, dissection technique, haemostasis technique, incidence of postoperative haemorrhage and patient-reported outcome measures regarding postoperative pain, infections, haemorrhage, and symptom relief. The data management procedures of and results from the NTSRS have been described previously [3,[11][12][13]. The NTSRS is managed by a steering committee of experts in the field of tonsil surgery. In 2013, the NTSRS covered 81.2% of all patients who underwent tonsil surgery for benign indications [2]. Since the start of the register, all participating ENT centres have had complete access to their own data, enabling in-depth analyses that include comparisons of processes and outcomes to the Swedish average rates. A public annual report has been published since 2012 containing analyses and comparative data from every participating ENT centre.

The annual reports have revealed wide variations in PTH rates among ENT centres, with a range from 0 to 25% [14]. The NTSRS has also shown that the same centres have placed at the top or bottom of the PTH rate list over several consecutive years. These persistent differences in PTH rates among surgical centres in Sweden indicate a potential gap between local practice and best practice and show that centres with high rates of PTH have the potential to reduce these rates. The reduction of PTH rates in centres with high rates was identified by the NTSRS steering committee as a high-priority goal for a structured quality improvement project (QIP). Examples from other clinical fields have shown that structured QIPs using a national quality register can improve clinical results [15][16][17]. Therefore, in 2013, the NTSRS initiated a QIP to decrease the incidence of PTH. The project was planned by the NTSRS steering committee and managed by two of the authors (AHS, JS) with support from quality improvement experts from the Centre of Quality Registers Västra Götaland in Sweden. The project received financial support from the National Programme for Quality Registries. The aim of the present study was to describe and evaluate the multicentre QIP initiated to decrease PTH rates.

Settings and design of the QIP

In 2013, six surgical centres, all with PTH rates above the Swedish average rate, were invited and agreed to participate in the QIP. The participating centres were all public county hospitals with ENT residency programmes. At each centre, the head of the department appointed an ENT surgeon as a local project manager. All project managers were given 2 weeks free from their regular work to participate in the project. The project started in October 2013 and ended in April 2014. The QIP started with a 2-day workshop at which the local project managers were updated on best practices and evidence-based medicine regarding tonsil surgery. During the workshop, the participants mapped the tonsil surgery process at their respective centres.
Then, each participant created an individual action plan based on the discrepancy between best practices and local practices and containing remedial actions to reduce PTH. The workshop included the following:

• Lectures on quality improvement tools, such as the plan-do-study-act method and the Ishikawa (cause-and-effect) diagram [18].
• Presentations of clinical practice in different countries/centres and the related outcomes.
• An update on scientific evidence and best practice regarding tonsil surgery and PTH. The benefits of cold instruments for both dissection of the tonsils and haemostasis during surgery were promoted, based on the substantial but under-applied scientific evidence that cold techniques for both dissection and haemostasis reduce PTH rates [11,19,20].
• Training on how to analyse and use NTSRS datasets to characterize local clinical practice.
• Planning clinical improvement efforts for each centre based on the gap between current local clinical practice and best practice.

Back at their respective centres, the local project managers presented the improvement plans to the heads of their departments and their fellow ENT surgeons. Local improvement plans were agreed upon and implemented as an integrated part of the department's regular work. The improvement plans often included other staff members, such as theatre nurses and anaesthetic personnel. The project lasted 7 months. The timing and content of the implementation process differed among the participating surgical centres, and not all changes were implemented at the same time. During the project period, the NTSRS project leaders regularly supported the local project managers by phone and e-mail. At a follow-up meeting after 7 months, each centre reported the changes they had made in practice. The NTSRS was reviewed to assess whether these changes had led to subsequent changes in outcomes, such as decreased PTH rates. The official project ended with this meeting, but the efforts continued at the centres, and both local stakeholders and the NTSRS steering group could continue to monitor the results online via the NTSRS.

Study design and data sources

A case study design was used to describe and evaluate the QIP, since such designs lend themselves well to illuminating multifaceted changes over time in relation to different contexts [21]. To evaluate the impact of the 7-month QIP, the authors identified a control group consisting of 15 surgical centres in Sweden that had PTH rates similar to those of the 6 QIP centres (8-17%) 12 months prior to the start of the programme. Data for the 12 months prior to the start of the QIP (baseline) were compared with the 12 months after the QIP (follow-up) for both groups. The demographics of the study population and indicators for tonsil surgery were retrieved from the NTSRS. The NTSRS uses four questionnaires for collecting data (administered preoperatively, postoperatively, 30 days after surgery and 6 months after surgery), as detailed previously [3,11,13]. The outcome data for this study (PTH rates) were collected via a questionnaire completed by the patient 30 days after surgery. The response rate for the 30-day postoperative questionnaires was 53% in 2013. A more complete data set was desirable for PTH; therefore, data from the NTSRS was supplemented with data from the National Patient Register (NPR). The NPR is managed and administered by The National Board of Health and Welfare, a government agency under the Ministry of Health and Social Affairs.
Registration in the NPR is mandatory by law for public and private care providers (except primary care) in Sweden. The NPR contains individually based information, including surgery and postoperative complications such as PTH [22]. The two registries were merged on an individual level using personal identity numbers to detect any PTH within 30 days after surgery. The methodology of merging data for PTH has been used for several years in the annual reporting of outcomes from the NTSRS [2,14]. The merging of data was performed in collaboration with representatives from The National Board of Health and Welfare to ensure the integrity and validity of the data. All the participating centres had written improvement plans describing the changes they intended to make in clinical practice. The plans were reviewed and analysed for this study to characterize and describe the types of improvement activities.

Process indicators

The process indicators retrieved from the NTSRS included the techniques used for dissection and haemostasis. These techniques were classified into two groups, "cold" and "hot", based on whether the chosen surgical instruments added heat to the surgical field. Cold steel dissection was categorized as "cold dissection", whereas coblation, diathermy scissors, ultracision and bipolar diathermy were categorized as "hot dissection". "Cold haemostasis" was defined by the use of packs, ties and adrenaline infiltration, and "hot haemostasis" was defined by the use of bi- or monopolar diathermy. If any "hot" technique was used for dissection, the haemostasis technique was also considered "hot" [11].

Outcome variable

The outcome variable for the project was PTH, which was defined in this study as bleeding from the throat that occurred after discharge and within 30 days from surgery and resulted in re-admission to hospital.

Statistical analyses

The distributions of variables are given as numbers and percentages for categorical variables and as the mean, standard deviation (SD), median, minimum, and maximum for continuous variables. For comparisons between groups, we used Fisher's exact test (lowest 1-sided p value multiplied by 2) for dichotomous variables, the Mantel-Haenszel chi-square test for ordered categorical variables, the chi-square test for non-ordered categorical variables, and the Mann-Whitney U test for continuous variables. For comparisons between groups, generalized estimating equation models were used to analyse PTH rates. p values for comparisons between groups at each time point, and between time points within each group, based on these analyses are shown for the variable "readmission for haemorrhage". All significance tests were conducted at the 5% significance level. SAS Software Version 9 (SAS Institute, Cary, NC, USA) was used for all statistical analyses.

Ethical considerations

The study was approved by The Regional Ethical Review Board in Gothenburg, Sweden (Reg. No. 257-14). Data management was handled according to Swedish law and regulations.

Improvement activities

Six surgical centres ("Intervention group") participated in the QIP. The improvement plans were unique for each surgical centre, although many features were the same across centres. Five main change themes emerged (Table 1). All the centres reported that they intended to change their surgical practice by minimizing the use of hot techniques. This included decreased use of bipolar diathermy for haemostasis (all centres) and the use of lower power settings for the bipolar diathermy device (five of the six centres).
One centre that used coblation (a hot technique) prior to the QIP changed to cold dissection during the intervention period. Five centres reported that they would revise their strategy for pharmacological pain treatment. Four of the six centres aimed to improve their adherence to the national guidelines for pain treatment in paediatric patients. Five centres improved and updated their patient education by referring patients and caregivers to the website "tonsilloperation.se" for pre- and postoperative information. Published by the committee of experts that manages the NTSRS, this website contains practical information (in Swedish and other languages) about tonsil surgery for patients and caregivers. All the centres aimed to upgrade the surgical status of tonsillectomy (in Sweden, tonsil surgery is one of the first surgical procedures taught to residents and is often regarded as "a simple and common procedure"). Actions to elevate the status of tonsillectomy included improved education in tonsillectomy surgical technique for junior doctors and having discussions and experience exchanges about tonsil surgery in staff meetings.

Table 1: Themes of improvement activities and their use by the surgical centres (* implemented before the quality improvement project).

Surgical and patient demographic characteristics

In the 12 months before the QIP started ("baseline"; October 2012 to September 2013), the number of tonsillectomies performed at the surgical centres in the intervention group varied between 155 and 233; in total, 1220 surgeries were performed. In the control group, a range of 17-372 surgeries was performed at each centre, with 1318 surgeries performed during the same period. Demographics and baseline patient characteristics are shown in Table 2. There were no gender or age differences between the groups; female patients were more common in both the intervention and control groups. There was a small but statistically significant (p = 0.0025) difference in the indication for surgery, with slightly more patients treated for infection-related problems in the control group at baseline. Outpatient TE was more common in the control group than the intervention group both at baseline and at follow-up.

Process indicators

At baseline, the use of cold dissection techniques was more common in the intervention group (63.0% of all TEs) than in the control group (45.8%). There were no differences in haemostasis techniques. In the intervention group, there was a significant increase from baseline (63.0%) to the follow-up period (94.1%) in the use of cold dissection techniques. There was also an increase in the use of cold haemostasis techniques in the intervention group, from 6.2% at baseline to 15.6% at follow-up. The control group showed no significant changes in techniques for dissection or haemostasis from baseline to follow-up (Table 2; Fig. 1).

Outcome

There was no statistically significant difference between groups regarding the outcome variable, PTH rate, at baseline. In the intervention group, baseline to follow-up comparisons demonstrated a significant reduction of PTH rates, from 12.7 to 7.1%. The control group showed no change in PTH from baseline to follow-up. At follow-up, there was a statistically significant (p = 0.0025) difference between the intervention group (7.1%) and the control group (10.9%) regarding PTH (Table 2; Fig. 2).

Discussion

Tonsillectomy is a common surgical procedure with well-established positive effects on several medical conditions.
Tonsillectomy, like all surgical procedures, carries a risk of complications. The most important of these, postoperative haemorrhage (PTH), not only carries the risk of a fatal outcome, but is often a traumatic and negative experience for the patient and family. Furthermore, it places an avoidable burden on the health care system. There are numerous publications on PTH rates that show wide variation in these rates, which indicates that many instances of PTH could be avoided [5][6][7][8]. This should inspire many ENT surgeons to review their own practices. This article demonstrates that it is possible to decrease PTH rates through a QIP - the intervention centres reduced the PTH rate from 12.7 to 7.1%, i.e., an average of 5.6 fewer instances of PTH per 100 TEs. To the best of our knowledge, this QIP evaluation is the first of its kind. The NTSRS leadership concludes that QIPs can and should be used to decrease PTH rates, especially in surgical centres that persistently have higher rates of PTH than their peers. In Sweden, the NTSRS provides important support for initiating and evaluating such projects.

Drawing on the best available scientific evidence, cold techniques were promoted in this project. The QIP led to positive changes in the key process indicators: techniques for dissection and haemostasis. Participating local managers reported planning for such changes; subsequently, a significant increase in the use of cold dissection and haemostasis was documented in the register in the intervention group, but not in the control group.

Fig. 1: Techniques for dissection and haemostasis at baseline, intervention period and follow-up, displayed with 95% confidence intervals.

Having found no other plausible explanations for the clinically and statistically significant reduction in PTH rates (from 12.7 to 7.1%) among the QIP centres, we conclude that the QIP led to the reduction. There are likely multiple reasons for this observed decrease in PTH rates. The reasons may also differ among the participating centres. It is not possible to describe or study all the factors contributing to this result due to the study design and the limitations of data collected in the NTSRS. Our results strongly indicate that the main contributors followed from the increased use of cold techniques for surgical dissection and haemostasis. However, other factors may have contributed in a web of influences, including the increased awareness of PTH among the clinical staff, the "upgrade" of the status of tonsil surgery and the improved surgical training. Furthermore, the engagement of local project leaders helped, as did the support from department heads and the tailoring of improvement plans to each centre's local context, a key consideration in health care improvement [23,24]. The possibility of releasing the local project manager from clinical work for 2 weeks was appreciated, and this time was used for analysis, lectures and implementing the improvement plan. For example, for an ENT department to change its surgical technique practices, it was necessary to review and sometimes change instruments and to educate the nurses and surgeons involved in the tonsil surgery process. There is growing evidence that PTH and other complications, such as pain, are more common when hot techniques are used [25,26]. At least three population-based studies have demonstrated that cold dissection and cold haemostasis result in lower rates of PTH, but hot instruments continue to dominate in clinical practice [5,11,19].
A British audit presented clear recommendations for the cold technique, but a follow-up study showed a relapse in the use of hot instruments [27]. In Sweden, the PTH rate has been unchanged on a national level for the last 5 years, regardless of the fact that the results from the NTSRS, including recommendations regarding the use of cold techniques, are published and distributed to all Swedish ENT surgeons annually [14]. However, the present study shows that a QIP can change entrenched habits.

Fig. 2: PTH rates at baseline, intervention period and follow-up, displayed with 95% confidence intervals.

Methodological considerations

There are important limitations to this study's generalizability. First, the six surgical centres that participated in the QIP all volunteered to participate, whereas the control group consisted of surgical centres with similar PTH rates that had not been invited to participate in the QIP. Volunteering equals self-selection and a potentially greater commitment to change than non-volunteers have, which might have impacted the result. However, the baseline measurements showed no or only small significant differences in PTH rates and process indicators (dissection and haemostasis techniques), which indicates that even if the volunteering surgical centres were more eager to improve, their eagerness had not reduced their PTH rates to near the national average before the start of the QIP. We believe that the QIP is the main explanation for the changes in surgical techniques and decreases in PTH rates observed in the intervention group. Furthermore, the will to improve is an essential precondition for improvement [28]. Second, although the control group consisted of centres with similar PTH rates, the two groups differed slightly in other baseline characteristics: the control group included surgical centres that performed relatively fewer TEs each year. The NTSRS data indicate, however, that there is no association between a centre's volume of surgeries and PTH rates [14]. Furthermore, there was a small but statistically significant difference in the indication for TE surgery, with slightly more patients undergoing surgery for infection-related problems in the control group at baseline. However, there were no differences in the baseline-to-follow-up comparisons in either group, suggesting that the indication for TE had no impact on the decrease in PTH rates in the intervention group (Table 2). While outpatient surgery was slightly more common among the control group centres at baseline, in- versus outpatient TE is not a factor that affects PTH [29]. Third, this project was not initially planned as an intervention study but as a QIP. This may have led to a less-controlled study environment. The study was not designed to determine which of the intervention activities had the greatest impact. However, a less-controlled environment can increase the generalizability of the findings to guide similar QIPs elsewhere. Finally, our follow-up time was limited to 1 year. It would have been advantageous to have a longer follow-up to evaluate the sustainability of the decrease in PTH rates. This was not feasible because of the retrospective nature of the data, which were taken mainly from the NPR. We wanted to present only complete years to avoid possible seasonal differences in PTH rates. The sustainability of the decrease in PTH rates would be an interesting subject for future studies.
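As a concrete illustration of the headline group comparison at follow-up, here is a minimal sketch. It is not from the study: the counts are approximate reconstructions from the reported percentages and surgical volumes, and the study itself analysed PTH rates with generalized estimating equation models rather than a plain Fisher test.

```python
# Hypothetical re-creation of the follow-up PTH comparison (7.1% vs 10.9%);
# counts are approximate reconstructions from the reported rates and the
# baseline volumes (~1220 and ~1318 TEs), for illustration only.
from scipy.stats import fisher_exact

pth_qip, n_qip = 87, 1220      # ~7.1% of ~1220 tonsillectomies
pth_ctrl, n_ctrl = 144, 1318   # ~10.9% of ~1318 tonsillectomies

table = [[pth_qip, n_qip - pth_qip],
         [pth_ctrl, n_ctrl - pth_ctrl]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```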
Conclusions

The rates of postoperative haemorrhage, a major complication after tonsillectomy, can be reduced with a QIP. A national quality register can be used not only to identify areas for improvement, but also to evaluate the impact of an improvement project.
Quantum Walks, Quantum Gates and Quantum Computers

The physics of quantum walks on graphs is formulated in Hamiltonian language, both for simple quantum walks and for composite walks, where extra discrete degrees of freedom live at each node of the graph. It is shown how to map between quantum walk Hamiltonians and Hamiltonians for qubit systems and quantum circuits; this is done for both a single- and multi-excitation coding, and for more general mappings. Specific examples of spin chains, as well as static and dynamic systems of qubits, are mapped to quantum walks, and walks on hyperlattices and hypercubes are mapped to various gate systems. We also show how to map a quantum circuit performing the quantum Fourier transform, the key element of Shor's algorithm, to a quantum walk system doing the same. The results herein are an essential preliminary to a Hamiltonian formulation of quantum walks in which coupling to a dynamic quantum environment is included.

I. INTRODUCTION

In many quantum-mechanical systems at low energies, the Hilbert space truncates to the point where the system is moving between a set of discrete states (which may however be very large in number). In this case we can describe the system, with complete generality, as equivalent to a system in which a particle (which may itself possess internal degrees of freedom) 'hops' between a set of 'nodes', or 'sites', on some graph - the nodes of this graph can then be identified with states in the Hilbert space of the original system. The hopping amplitudes between nodes are just the transition amplitudes in the original Hamiltonian, so that the topology of the graph is entirely determined by these transition amplitudes. In general we may allow the Hamiltonian to be time-dependent, so that both the hopping amplitudes and the on-site energies are allowed to change. We can also allow the internal state of the hopping particle to couple to its coordinate on the graph. In path integral language, one can think of the trajectory of a quantum particle moving between 2 nodes A and B on this graph as a 'quantum walk', made up of a succession of discrete hops. The amplitude to go from A to B is then given by summing over all possible paths (or 'walks') between them, with the appropriate amplitudes.

Formulated in this way, the problem of a 'quantum walk' is very familiar to most physicists, and has in fact been under study since the very beginning of quantum mechanics. Notable examples come from solid-state physics (where particles hop around both crystalline lattices [1] and disordered systems of various topology [2]), from quantum magnetism [3] (where an assembly of spins makes transitions between different discrete spin states), from atomic physics and quantum optics (where one deals with discrete atomic states, and where in the last few years 'optical lattices' have come under study [4]), and from a large variety of problems on different sorts of graph in quantum statistical mechanics [5].

Quantum Walks and Quantum Information: A certain class of quantum walks has recently come under study in the context of quantum information processing [6]. These walks are intended to describe the time evolution of quantum algorithms, including the Grover search algorithm and Shor's algorithm. The general idea is that each graph node represents a state in the system Hilbert space, and the system then walks in 'information space'.
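To make the picture of amplitudes summed over discrete hops concrete, the following is a minimal numerical sketch, not taken from the literature discussed here; the 4-node cycle, the uniform hopping amplitude and the evolution time are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm

# A 'simple' continuous-time quantum walk on a 4-node cycle: uniform hopping
# amplitude delta between neighbouring nodes, on-site energies set to zero.
delta, n_nodes = 1.0, 4
H = np.zeros((n_nodes, n_nodes))
for j in range(n_nodes):
    H[j, (j + 1) % n_nodes] = H[(j + 1) % n_nodes, j] = -delta

psi0 = np.zeros(n_nodes, dtype=complex)
psi0[0] = 1.0                      # walker starts at node 0

U = expm(-1j * H * 2.0)            # evolve for time t = 2 (hbar = 1)
psi_t = U @ psi0
print(np.abs(psi_t) ** 2)          # probability at each node; sums to 1
```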
In some cases explicit mappings have been given between the Hamiltonian of a quantum computer built from spin-1/2 'qubits' and gates, and that for a quantum particle moving on some graph [6,7]. More generally, the mapping between a walk and an algorithm is most transparent for spatial search algorithms, where the graph reflects the local structure of the database. The quantum dynamics between two sites A and B on a given graph has been shown for certain graphs to be much faster (sometimes exponentially faster) than for a classical walk on the same graph [9,13,14]. It has also been argued that quantum walks may generate new kinds of quantum algorithm, which have proved very hard to find. Those algorithms based on quantum walks proposed so far fall into one of two classes [11]. The first is based on exponentially faster hitting times [7,8,9,14], where the hitting time is defined as the mean 'first passage' time taken to reach a given target node from some initial state. While several examples have been found, such as the 'glued trees' of Childs et al. [8], there is presently no application of these to solve some useful computational problem. The second class uses a quantum walk search [10,12,13], providing a quadratic speed-up. In the case of a spatial search, the quantum walk algorithms can perform more efficiently than the usual quantum searches based on Grover's algorithm. Amongst the graphs so far studied for quantum walks are 'decision trees' [7,8,9] and hypercubes [10,12]; quantum walks on some other graphs, and their connection to algorithms, were recently reviewed [6].

Several recent papers have also proposed experimental implementations of quantum walks for quantum information processing [16,17], in various systems such as ion traps, optical lattices and optical cavities. Some of these proposals involve walks in real space, whereas others are purely computational walks (eg., a walk in the Hilbert space of a quantum register [17]). To our knowledge, two quantum walk experiments have been carried out: a quantum walk on the line, using photons [18], and a walk on an N = 4 length cycle, using a 3-qubit NMR quantum computer [19]. However many experiments over the years, particularly in solid-state physics, have also been implicitly testing features of quantum walks.

The variety of walks that one may consider is quite enormous - one may vary the topology of the graphs, and, as we will see below, even quite simple walks may have a complicated Hamiltonian structure on these graphs. Even the solid-state and statistical physics literature has only considered a small part of the available graph structures. In the quantum information literature, the discussion of walks has so far been confined to a very restricted class of graphs and Hamiltonians on these graphs. Attention has focussed almost exclusively on regular hypercubic lattices, on trees (or trees connected by random links), and on 'coin-tossing' walks on lines. Often it is not obvious how one might implement these walks in some real experiment - clearly one is not going to be building, for example, a d-dimensional hyperlattice! Thus one pressing need, which is addressed in considerable detail in the present paper, is to give explicit mappings between the kinds of qubit or gate Hamiltonian that one is interested in in practice, and quantum walk Hamiltonians.

Quantum Walks and Quantum Environments: The range of possible quantum walk systems becomes even more impressive if one notes that any quantum walker will couple to its environment.
In general one needs to understand what form the couplings will take, and how they will influence the dynamics of the quantum walk. Typically these couplings can be formulated in terms of 'oscillator bath' [20,21] or 'spin bath' [22,23] models of the environment; in the case of quantum walks we will see that various couplings to these are allowed by the symmetries of the problem. It has been common in the quantum information literature, at least until very recently, to model decoherence sources and environmental effects using simple noise sources (usually Markovian). Results derived from such models are highly misleading - they miss all the non-local effects in space and time which result when a set of quantum systems are coupled to a real environment, and also give a physically unrealistic description of how decoherence occurs in many systems. Thus another pressing need is to set up a Hamiltonian description of quantum walkers coupled to the main kinds of environment which do exist in Nature, showing how these Hamiltonians transform when one maps between quantum walk systems and qubit or quantum gate systems. This then allows a bridge to real experiments. This is actually a rather substantial task which is undertaken in a separate paper [24].

Plan of paper: The main goal of the present paper is to set up a Hamiltonian description of quantum walk systems, and to give a detailed derivation of the mappings that can be made between quantum walk systems and more standard qubit and gate systems. The results are in some cases quite complex, and in order to make them both useful and easier to follow we give detailed results for several examples. Two things we do not do in this paper are (i) incorporate couplings to the environment into the discussion - this is the subject of another paper [24]; and (ii) work out the dynamics of walkers for any of the Hamiltonians we derive (see however refs [24,25]).

In section II we begin by setting up a formalism for the discussion of different kinds of quantum walk. In section III we then show how one may systematically map from different quantum walk Hamiltonians to various qubit systems and quantum circuits. This is done first with single- and multi-excitation encoding of walks into many-qubit systems, and then more generally; the mappings are illustrated with simple examples, notably walks on a hyperlattice. In section IV we do the reverse, mapping qubit systems back to quantum walks. This is done first for systems which can be mapped to spin chains, and then for more general qubit systems, both static and dynamic; to illustrate the mappings we discuss various chains and small qubit systems, and show how to map systems implementing the quantum Fourier transform to quantum walks. Finally, in the concluding section V we summarize our results.

II. QUANTUM WALK HAMILTONIANS

In this section we discuss the structure of the different kinds of quantum walk Hamiltonian we will meet. We deal in this paper with 'bare' quantum walks (ie., those without any coupling to a background environment). We emphasize that in this section (and the next) our primary object of study is the quantum walk, as opposed to, eg., qubit networks or quantum circuits. However in section IV we will be freely mapping between quantum walk systems and other kinds of network. We assume, as in the introduction, that the bare walk is defined by the topology of the graph on which the system walks, and by the 'on-site' and 'inter-site' terms appearing in the Hamiltonian.
We can then begin by distinguishing two kinds of bare quantum walk, which we call 'simple' and 'composite', as follows:

A. Simple Quantum Walk

The 'simple' quantum walker has no internal states, so that we can describe its dynamics by a Hamiltonian with N nodes, each labelled by an integer j ∈ [0, N − 1], of the form

$$\hat{H}(t) = -\sum_{\langle ij \rangle} \left[ \Delta_{ij}(t)\, \hat{c}_i^\dagger \hat{c}_j + \text{h.c.} \right] + \sum_j \epsilon_j(t)\, \hat{c}_j^\dagger \hat{c}_j \qquad (1)$$

Here each node j corresponds to the quantum state |j⟩ = ĉ†_j |0⟩, so that |j⟩ denotes the state where the 'particle' is located at node j. The two terms correspond to a 'hopping' term with amplitudes Δ_ij(t) between nodes, and on-site node energies ε_j(t), both of which can depend on time. There is no restriction on either the topology of the graph, or on the time-dependence of the {Δ_ij(t), ε_j(t)}. Thus, for example, one can design a pulse sequence for the parameters Δ_ij(t) and ε_j(t), as a method of dynamically controlling the quantum walk.

Two of the simplest topologies that have been discussed in the literature for quantum walks are d-dimensional hypercubes and hyperlattices. The hypercube simply restricts the simple quantum walk described above to a hypercubic graph - its interest resides in the fact that we can map a general Hamiltonian describing a set of d interacting qubits to a quantum walk on a d-dimensional hypercube. This mapping is discussed in section IV. Hyperlattices extend the hypercube to an infinite lattice in d dimensions; it is common to assume 'translational symmetry' in the lattice space, which means writing a very simple 'band' Hamiltonian

$$\hat{H} = -\Delta_o \sum_{\langle ij \rangle} \left( \hat{c}_i^\dagger \hat{c}_j + \text{h.c.} \right) = \sum_{\bf p} \epsilon({\bf p})\, \hat{c}_{\bf p}^\dagger \hat{c}_{\bf p} \qquad (2)$$

where Δ_o is a constant, and p is the 'quasi-momentum' (also called the 'crystal momentum' in the solid-state literature); the 'band energy' is then

$$\epsilon({\bf p}) = -2\Delta_o \sum_{\mu=1}^{d} \cos(p_\mu a_o) \qquad (3)$$

and the states of the walker can be defined either in the extended or reduced Brillouin zone of quasi-momentum space. In (3) we assume a lattice spacing a_o, the same along each lattice vector (henceforth we will put a_o = 1). All results can be scaled appropriately if these restrictions are lifted.

B. Composite Quantum Walk

The composite walker has 'internal' degrees of freedom, which can function in various ways. We assume these internal modes have a finite Hilbert space, and they can often be used to modify or control the dynamics of the walker. Thus we assume a Hamiltonian in which the simple walker couples at each node j to a mode with Hilbert space dimension l_j, and on each link {ij} between nodes to a mode with Hilbert space dimension m_ij, so that

$$\hat{H}(t) = -\sum_{\langle ij \rangle} \left[ F_{ij}(M_{ij}; t)\, \hat{c}_i^\dagger \hat{c}_j + \text{h.c.} \right] + \sum_j G_j(L_j; t)\, \hat{c}_j^\dagger \hat{c}_j + \hat{H}_o(\{M_{ij}, L_j\}) \qquad (4)$$

This composite Hamiltonian reduces to the simple walker when F_ij(M_ij; t) → Δ_ij(t) and when G_j(L_j; t) → ε_j(t). We do not at this point specify further what F_ij(M_ij; t) and G_j(L_j; t) are, nor the form of their dynamics (which is governed not only by the coupling to the walker but also by their own intrinsic Hamiltonian Ĥ_o({M_ij, L_j})), but we will study several examples below. The bulk of this paper will be concerned with the simple walker in (1), which is already rather rich in its behaviour. We emphasize that the internal variables are assumed to be part of the system of interest - that is, they are not assumed to be part of an 'environment' whose variables are uncontrolled and have to be averaged over in any calculation. In the context of quantum information theory these internal variables are assumed to be under the control of the operator. For example, Feynman's original model [26] of a quantum computer is a special case of a composite quantum walk, with Hamiltonian

$$\hat{H} = \sum_l \left[ \hat{c}_{l+1}^\dagger \hat{c}_l\, \hat{U}_l(\vec{\tau}) + \text{h.c.} \right] \qquad (5)$$

where τ⃗ corresponds to a set of register spins, on which the computation is performed.
The walker implements the clock of this autonomous computer. Another example of a composite quantum walk is given by a Hamiltonian in which decisions about where the walker hops to are made at various times t_l by the discrete variables {L_j}. Such models include examples where some sequence of pulses acting on the internal walker variables is used to influence its dynamics. A simple special case of such Hamiltonians assumes the walk is entirely on a 1-dimensional line, and that the discrete variable L_j is just a spin-1/2 variable; we then obtain the discrete-time coin-tossing Hamiltonian, in which a walker at site j hops to the left/right depending on whether the 'coin' (ie., spin-1/2) at this site is up/down, with decisions being made after regular intervals of discrete time t_o. Obviously one can cook up many more examples of composite walk systems.

We have sometimes found it convenient to rewrite both (1) and (4) as sums over the original graph G and an ancillary graph G* formed from the links between the nodes of the graph. Thus we can write, for example,

$$\hat{H} = \sum_{j \in G} \epsilon_j\, \hat{c}_j^\dagger \hat{c}_j - \sum_{\langle ij \rangle \in G^*} \left( \Delta_{ij}\, \hat{c}_i^\dagger \hat{c}_j + \text{h.c.} \right) \qquad (8)$$

This representation puts the 'non-diagonal' or 'kinetic' terms on the ancillary lattice on the same footing as the 'diagonal' or 'potential' terms existing on the original lattice. Such a manoeuvre can be very useful in studying the dynamics of the walker, but we will not need it in this paper.

In our study in this paper of mappings from quantum walks to systems of qubits and/or quantum gates (or vice-versa), we will concentrate on simple walk systems, for two reasons. First, as we will see, the results just for simple walks are rather lengthy. Second, a proper discussion of these mappings in a Hamiltonian framework requires a treatment of non-local effects in time, which also arise in the discussion of the coupling of the walker to the environment. Thus we reserve a detailed treatment of composite walks for another paper.

III. ENCODING QUANTUM WALKS IN MULTI-QUBIT STATES

We would now like to map quantum walk systems to a standard quantum computer made from qubits or quantum gates. This means that we wish to map from a quantum walk Hamiltonian like (1), acting on states |j⟩, to a qubit Hamiltonian acting on M qubits; and we require an encoding of the node state |j⟩ in terms of the 2^M computational basis states. We will use the following notation for the computational basis states:

$$|\bar{z}\rangle \equiv |z_1\rangle \otimes |z_2\rangle \otimes \cdots \otimes |z_M\rangle \qquad (9)$$

where z_k ∈ [↑, ↓] (we use spin operators here, instead of the more standard [0, 1], so as to avoid confusion with the node indices). We now describe two such encodings and the corresponding multi-qubit operators needed to implement the quantum walk described by the Hamiltonian (1), thereby deriving the equivalent qubit Hamiltonian.

A. Single-excitation encoding

Our first encoding implements the quantum walk in an M-dimensional subspace of the full 2^M dimensional Hilbert space for M qubits. In this sense, this encoding is inefficient in its use of Hilbert space dimension. However, the operations can prove to be more easily implementable, requiring only two-qubit terms in the Hamiltonian. The subspace we are interested in is spanned by the M-qubit states with only a single excitation - the states with only a single qubit k in the 'up' state |↑⟩_k, with all other qubits in the |↓⟩_j state (for all j ≠ k). Each node of the graph is then encoded via the location of the excitation (in this case, we label the nodes from 1 to N), i.e., |k⟩ ≡ |↓⟩_1 ⊗ |↓⟩_2 ⊗ . . . ⊗ |↑⟩_k ⊗ . . . ⊗ |↓⟩_N.
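A quick numerical check of this encoding may help. This is a sketch under our own conventions, with 'up' = [1, 0] and the first factor in each tensor product taken as qubit 0; none of it is from the paper. It verifies that a two-qubit 'flip-flop' term of the kind appearing in the qubit Hamiltonian below moves the single excitation, i.e. the walker, between the corresponding nodes.

```python
import numpy as np

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
tau_p = np.array([[0.0, 1.0], [0.0, 0.0]])   # tau^+ : |down> -> |up>
tau_m = tau_p.T                              # tau^- : |up> -> |down>
I2 = np.eye(2)

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# node k of a 3-node graph <-> single excitation on qubit k (k = 0, 1, 2)
def node(k):
    return kron_all([up if i == k else dn for i in range(3)]).ravel()

# flip-flop term between qubits 0 and 1: tau0^+ tau1^- + tau0^- tau1^+
hop_01 = kron_all([tau_p, tau_m, I2]) + kron_all([tau_m, tau_p, I2])

# acting on node 1, it moves the excitation to node 0 (and vice versa)
print(np.allclose(hop_01 @ node(1), node(0)))   # True
```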
In this encoding, the general quantum walk Hamiltonian (1) is

$$\hat{H}(t) = -\sum_{\langle jk \rangle} \Delta_{jk}(t) \left( \hat{\tau}_j^+ \hat{\tau}_k^- + \hat{\tau}_j^- \hat{\tau}_k^+ \right) + \sum_j \epsilon_j(t)\, \hat{\tau}_j^+ \hat{\tau}_j^- \qquad (10)$$

which consists solely of one- and two-qubit terms, the hopping terms acting between each connected pair of qubits, as defined by the graph. This encoding allows the implementation of any quantum walk using only two-qubit terms in the Hamiltonian, provided arbitrary pairs of qubits can interact. To simulate evolution according to Hamiltonian (10) it suffices to be able to explicitly perform controlled evolution according to each term in the Hamiltonian. Letting Ĥ = Σ_k Ĥ_k, for time-independent parameters, we use the Trotter formula

$$e^{-i\hat{H}t} = \lim_{N \to \infty} \left( \prod_k e^{-i\hat{H}_k t/N} \right)^N \qquad (11)$$

approaching equality as N → ∞. For time-varying parameters in the Hamiltonian, Ĥ(t), evolution is given by the unitary

$$\hat{U}(t) = \exp_+ \left[ -i \int_0^t dt'\, \hat{H}(t') \right]$$

where exp_+ is the time-ordered exponential. This can be expanded as the product

$$\hat{U}(t) \approx \prod_{l=1}^{m} \exp\left[ -i\, \hat{H}(l\delta)\, \delta \right]$$

for small time step δ = t/m. By choosing δ sufficiently small, we approximate each term in the Hamiltonian to be constant over this time interval. Since δ is small, we then apply the Trotter formula. So to simulate the quantum walk on a quantum computer using this single-excitation encoding, we must perform unitary operators of the form

$$\hat{U}_{jk}(\delta) = \exp\left[ i \delta\, \Delta_{jk} \left( \hat{\tau}_j^+ \hat{\tau}_k^- + \hat{\tau}_j^- \hat{\tau}_k^+ \right) \right]$$

between pairs of qubits representing connected nodes of the corresponding graph, along with the single qubit terms

$$\hat{U}_j(\delta) = \exp\left[ -i \delta\, \epsilon_j\, \hat{\tau}_j^+ \hat{\tau}_j^- \right]$$

In this way, this encoding represents a 'physical' walk, of a single spin-up excitation over a network of qubits, defined by the pairwise interactions. It is interesting to note the scaling of the resources required for such a simulation of a general graph. In terms of space, the number of qubits required for a given graph is the corresponding number of nodes. The number of gates representing time (assuming only one- and two-qubit operations) is at the very least of the order of the number of edges, assuming each qubit is in direct interaction with all others. Details of the scaling of gate resources will depend upon the structure of both the graph and the quantum computing architecture [15].

B. Binary expansion-based encoding

The most efficient way to encode each node is to use the binary expansion of the integer labelling the node. We start from the state at the 'origin' of the quantum walk, and label this state by the ket |0⟩, making this equivalent to the qubit 'vacuum state' where all spins are 'down'. Consider a 2-qubit system. Then we have the mappings |0⟩ = |↓↓⟩, |1⟩ = |↓↑⟩, |2⟩ = |↑↓⟩, and |3⟩ ≡ |↑↑⟩. The number of qubits required will depend upon the number of nodes of the graph - M qubits can encode up to N = 2^M nodes. The corresponding many-qubit Hamiltonian for the quantum walk depends upon how the nodes of the graph are labelled. We start with the simple example of a free quantum walk on the hypercube, before discussing the construction for general graphs, and quantum circuit constructions. This encoding represents a walk in information space - the information about the position of the walker is stored in a quantum register. A similar construction for the simulation of discrete-time quantum walks on a quantum computer was given by Fujiwara et al. [17]. Results in this section can be viewed as analogous to this work, extended to the construction of quantum circuits for simulating continuous-time quantum walks.

Mapping a Hypercube walk to a set of qubits

Consider first the simplest possible quantum walk, where we take ε_j = 0 (ie., a 'free walk'), and Δ_ij = Δ_o in (1). We also restrict the sum over ij to nearest neighbours, so that

$$\hat{H} = -\Delta_o \sum_{\langle ij \rangle} \left( \hat{c}_i^\dagger \hat{c}_j + \text{h.c.} \right)$$

An easily visualised and trivial example is a free quantum walk on the regular three-dimensional cube. This graph has 8 nodes, so requires 3 qubits to encode.
Figure 1 displays a specific labelling [6] and the corresponding qubit encoding. To determine the 3-qubit Hamiltonian corresponding to this free quantum walk, one considers a single element of the sum, rewriting it in terms of the projectors P_k = |k⟩⟨k| and single-qubit operators. Continuing this process over all elements, we obtain

$$\hat{H} = -\Delta_o \left( \hat{\tau}_1^x + \hat{\tau}_2^x + \hat{\tau}_3^x \right)$$

which is simply a sum of single qubit terms. It is simple to extend this free walk to M dimensions, where M qubits are required. Each qubit represents one of the M orthogonal directions the quantum walker may move in from each node, and the value of the qubit corresponding to that direction gives at which end of that direction the walker is located. The corresponding qubit Hamiltonian for the M-dimensional free quantum walk is thus

$$\hat{H} = -\Delta_o \sum_{k=1}^{M} \hat{\tau}_k^x$$

The quantum circuit to simulate this Hamiltonian is simply single qubit rotations on each qubit, the angle determined by the time of the walk. Scaling of resources for the simulation is trivial - the number of qubits required is M = log_2 N, while the number of gates is the number of qubits, all of which can be applied simultaneously. Interactions between qubits are inevitably associated with a 'potential' ε_j defined over the nodes, weighted edges, and/or next-nearest-neighbour couplings (in section IV below we derive the relation between the ε_j and Δ_ij on the hypercube and the parameters of a general qubit Hamiltonian).

General walks and circuit constructions

From the simple example of the hypercube, we can see how to construct the multi-qubit Hamiltonian corresponding to the general quantum walk Hamiltonian using this encoding. Each location/node is now labelled by a bit string z̄ = z_1 . . . z_M, with ↑ ≡ 1, ↓ ≡ 0. A given on-site term in the general quantum walk Hamiltonian (1) becomes

$$\epsilon_{\bar{z}}\, \hat{c}_{\bar{z}}^\dagger \hat{c}_{\bar{z}} \;\longrightarrow\; \epsilon_{\bar{z}} \prod_{k=1}^{M} \hat{P}_{z_k}^{(k)}$$

where P_{z_k} denotes a projection operator. For the hopping terms, we have

$$\hat{c}_{\bar{z}'}^\dagger \hat{c}_{\bar{z}} \;\longrightarrow\; \bigotimes_{k=1}^{M} |z'_k\rangle\langle z_k|$$

For each term in the tensor product, either the bit values are equal, and we have a projection operator, or the values are opposite, and we have a ladder operator (τ^+, τ^-), such that

$$|z'_k\rangle\langle z_k| = \delta_{z'_k z_k}\, \hat{P}_{z_k} + \delta_{z'_k, z_k+1}\, \hat{\tau}_k^+ + \delta_{z'_k, z_k-1}\, \hat{\tau}_k^-$$

where δ denotes the delta function. Expanding the tensor product in terms of Pauli x and y operators, via τ_k^± = (τ_k^x ± iτ_k^y)/2, the addition of the Hermitian conjugate terms ensures that only products with even numbers of τ_k^y survive.

To simulate the evolution of a general quantum walk on a quantum computer using this encoding, we make use of the Trotter formula (11), implying we must be able to implement unitaries corresponding to evolution according to each term in the total Hamiltonian. For the on-site/potential terms, this corresponds to unitaries of the form

$$\hat{U}_{\bar{z}}(\epsilon) = \exp\left( -i\, \epsilon_{\bar{z}} \prod_{k=1}^{M} \hat{P}_{z_k}^{(k)} \right)$$

A simple circuit to implement this unitary [27] uses a single ancilla qubit, initialized in the |↓⟩ state, and a multi-qubit gate which takes all qubits as input and flips the ancilla qubit if the walker qubits are in the state |z̄⟩. An example is shown below for the state with z̄ = ↑↑↓, where the solid/hollow circles indicate control on ↑/↓. The multiple-controlled-NOT gates can be constructed using 3-qubit Toffoli gates, additional ancillas (M − 1 gates/ancillas for M control qubits) and a controlled-NOT (see [27], page 184). For the hopping terms, we must simulate unitaries which implement evolution according to some product of τ^x's and τ^y's on some subset of walker qubits, if the other qubits are in some given state - a multi-qubit controlled operation.
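The bookkeeping above, with τ^± = (τ^x ± iτ^y)/2 and the Hermitian conjugate killing the odd-τ^y products, can be verified numerically. A minimal sketch follows, using our own conventions; the two nodes 00 and 11 (differing in both bits) are an arbitrary choice.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

# hopping term |11><00| + |00><11| on two qubits (both bits differ)
hop = np.zeros((4, 4), dtype=complex)
hop[3, 0] = hop[0, 3] = 1.0

# expansion in Pauli operators: the XY and YX products (odd number of
# tau^y factors) cancel against the Hermitian conjugate, leaving (XX - YY)/2
expansion = 0.5 * (np.kron(X, X) - np.kron(Y, Y))
print(np.allclose(hop, expansion))   # True
```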
Firstly, evolution under a Hamiltonian consisting of a product of τ^z operators can be simulated using controlled-NOT gates and a phase gate acting on a single ancilla [27]. Then, given unitaries U and V with U τ^z U† = τ^x and V τ^z V† = τ^y, we can use single qubit gates and the circuit above to simulate any product of τ^x's and τ^y's. Since the controlled-NOT is its own inverse, the controlled evolution is implemented by simply making the phase gate A_ε a controlled gate.

Hyperlattice walks mapped to qubits and gates

We start with a line with 2^N nodes, for which the general Hamiltonian is

$$\hat{H} = -\sum_i \Delta_i \left( \hat{c}_{i+1}^\dagger \hat{c}_i + \text{h.c.} \right)$$

The encoding of the node states is as follows: start with a single qubit, defining a two node walk, with the nodes labelled as |↓⟩ and |↑⟩. This quantum walk is simply defined by Ĥ = −Δ_1 τ_1^x. Now add an additional qubit, such that each node now has two labels; without changing the Hamiltonian, we have two two-node walks, which we now join together at opposite ends, such that the order of the nodes is now ↓↓, ↓↑, ↑↑, and ↑↓. We then continue in this fashion (as shown in figure 2) for N qubits, giving a 2^N node walk on the line. Note that the label of each node differs from its nearest neighbours in only one bit. Given a bit-string x̄ = x_N x_{N−1} . . . x_2 x_1 specifying a node, the position along the line (with ↓↓ . . . ↓ corresponding to the origin, ie., position 1) is given by the function

$$\text{pos}(\bar{x}) = 1 + \sum_{k=1}^{N} 2^{k-1} \left( x_N \oplus x_{N-1} \oplus \cdots \oplus x_k \right)$$

where ⊕ denotes addition modulo 2. This labelling results in an N-qubit Hamiltonian for the quantum walk on the line in which each hopping term consists of only one Pauli operator, with the rest projection operators. For the corresponding circuit simulation, this means that only multiply-controlled single-qubit gates are required. In the case of uniform hopping, Δ_i = Δ_0, the sum over the hopping terms simplifies considerably. The corresponding circuit to simulate U_k(ε) = exp(−iεĤ_k), such that the unitaries U_k are controlled rotations on the k-th qubit, is shown below (for 6 qubits), where we have used the notation X_θ ≡ R_x(θ) = exp(−iθτ^x/2), such that the X_π gate corresponds to the Pauli X, i.e. a bit flip. To write the circuit above in terms of one- and two-qubit gates we use the construction described above. Explicitly, we require the multiply-controlled gate, with the Toffoli gates realised using single qubit rotations and CNOT gates, built from the single qubit gates R_a(θ) = exp(−iθσ^a/2). Finally, we need to be able to apply a controlled-R_x(2ε). This can be simply modified to give the quantum walk on the circle, by modifying the last term in the Hamiltonian and in turn altering the corresponding gate. Having the hopping amplitudes between nodes equal greatly simplifies the quantum circuit simulation - the number of gates required scales approximately as O(n²) for each incremental time step. The construction of the qubit quantum circuit for simulating the quantum walk on the line can be easily generalised to simulate a quantum walk on an arbitrary D-dimensional hyperlattice, with 2^{ND} nodes.
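Before moving to the hyperlattice, the reflect-and-join labelling of the line just described can be checked in a few lines; it is the standard reflected binary (Gray) code, under which neighbouring positions always differ in exactly one bit. This is a sketch, and the function names are ours.

```python
def gray(j):
    # reflected binary code of the integer j
    return j ^ (j >> 1)

N = 4                                   # 2^N nodes on the line
labels = [gray(j) for j in range(2 ** N)]
bit_diffs = [bin(a ^ b).count("1") for a, b in zip(labels, labels[1:])]
print(all(d == 1 for d in bit_diffs))   # True: one Pauli flip per hop
```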
Each node on the hyperlattice is specified by D bit-strings of length N, each of which denotes the location of the node in a given direction - each node is represented by an N × D qubit state, |x̄_1; x̄_2; . . . ; x̄_D⟩, where x̄_k is an N-bit string. Using this encoding, the quantum walk on the hyperlattice simply corresponds to D individual quantum walks on the line, where D is the dimension of the lattice - there is no interaction between qubits specifying different directions. Thus, we use the above construction on D different sets of N qubits to define the quantum walk on the D-dimensional hyperlattice as

$$\hat{H} = \sum_{d=1}^{D} \hat{H}_{\text{line}}^{(d)}$$

where Ĥ_line^{(d)} is the quantum-walk-on-the-line Hamiltonian acting on the d-th set of N qubits. We have discussed the construction of qubit Hamiltonians for a given walk when the graph structure is completely known. Another scenario is where we are given access to a 'black-box' or oracle, which contains information about the graph structure, e.g. the adjacency matrix. In the standard set-up, we may query the oracle with two nodes to determine whether there is a connection between them. This is the situation in the Childs et al. algorithm [8], and was considered more generally by Kendon [15].

IV. FROM QUBIT HAMILTONIANS TO QUANTUM WALKS

The other direction to approach these mappings from is to start with a multi-qubit Hamiltonian, and determine a corresponding quantum walk.

A. Spin Chains

We begin with a simple one-dimensional spin chain. It is possible to 'collapse' such a quantum walk to a biased walk along a line [9]. This corresponds to the XY model with non-homogeneous coupling strengths. This is done by defining column subspaces, such that states in column space k are only connected to states in column spaces k − 1 and k + 1, in terms of the corresponding graph for the quantum walk. Site k on the line then corresponds to an equal superposition of states in the corresponding column subspace. The strength of the coupling between the nodes is then determined from the Hamiltonian. Figure 5 shows the linear chain corresponding to the XY Hamiltonian with six sites, in the three excitation subspace. The two end nodes correspond to the states |↑↑↑↓↓↓⟩ and |↓↓↓↑↑↑⟩.

B. Static Qubit Hamiltonians to Quantum Walks

Now let's look at more general spin systems. A system of considerable interest, both methodological and practical, is the general N-qubit Hamiltonian with time-independent couplings. As an example consider the following form:

$$\hat{H} = \sum_i \left( \epsilon_i\, \hat{\tau}_i^z + \Delta_i\, \hat{\tau}_i^x \right) + \sum_{ij} \left[ V_{ij}^{\parallel}\, \hat{\tau}_i^z \hat{\tau}_j^z + V_{ij}^{\perp} \left( \hat{\tau}_i^+ \hat{\tau}_j^- + \hat{\tau}_i^- \hat{\tau}_j^+ \right) + \chi_{ij}\, \hat{\tau}_i^z \hat{\tau}_j^x \right]$$

We have not included all possible interaction terms V_ij^{αβ} τ_i^α τ_j^β here, because the algebra then becomes rather messy, but instead just all the terms representing different kinds of interaction: the longitudinal and transverse diagonal couplings V_ij^∥ and V_ij^⊥, and a representative non-diagonal χ_ij. It is intuitively useful, before giving the general results, to first consider just three qubits. Using the binary expansion encoding, where the state |k⟩ represents the k-th node on some graph, we obtain a quantum walk over a cubic lattice, with the addition of diagonal connections on the faces, as well as on-site potentials, as shown in figure 6. If we generalise now to an N-qubit Hamiltonian of the form above, we have a quantum walk on a hypercube, with the addition of next-nearest neighbour connections, where the nodes are encoded as described earlier for the hypercube. We can re-express the Hamiltonian in the general quantum walk form

$$\hat{H} = -\sum_{ij} \Delta_{ij}\, \hat{c}_i^\dagger \hat{c}_j + \sum_j \epsilon_j\, \hat{c}_j^\dagger \hat{c}_j$$

where the coefficients are defined as follows. Consider the binary representation of each of the nodes, i.e. i ≡ i_1 i_2 . . . i_N, j = j_1 j_2 . . .
j_N, where i_a, j_b = 0, 1 corresponding to spin-up and spin-down in the qubit representation. Then, for 1 ≤ a, b ≤ N, the walk coefficients Δ_ij and ε_j follow directly from the qubit couplings. The only aspect of these expressions that is not immediately obvious is the signs.

C. Dynamic Qubit systems mapped to quantum walks

We now consider a universal gate set in which we allow time-dependence in all the couplings. Again we do not consider the most general case because the results are too messy, but instead take a special case in which the qubit Hamiltonian has the form

$$\hat{H}(t) = \sum_i \left[ \epsilon_i(t)\, \hat{\tau}_i^z - \Delta_i(t)\, \hat{\tau}_i^x \right] + \sum_{ij} V_{ij}^{\perp}(t)\, \hat{\tau}_i^x \hat{\tau}_j^x$$

where we have complete control over all parameters in the Hamiltonian, which are time-dependent. This is a rather idealised case, but will suffice for our demonstration. If every qubit is 'connected', such that there are sufficient coupling terms between qubits allowing entanglement between all, then the Hamiltonian is universal for quantum computation. The two single qubit terms allow any single-qubit unitary to be implemented; then all that is needed is a two-qubit entangling operation [28], as provided by the XX coupling. From this Hamiltonian, a quantum circuit will correspond to a pulse sequence, describing applications of different terms in the Hamiltonian. The fundamental gate set consists firstly of arbitrary x and z rotations (on the Bloch sphere) for each qubit, denoted

$$R_x(\theta) = \exp(-i\theta \hat{\tau}^x/2), \qquad R_z(\xi) = \exp(-i\xi \hat{\tau}^z/2)$$

which can be combined to describe any single qubit unitary Û, via the Euler decomposition

$$\hat{U} = e^{i\alpha}\, R_z(\beta) R_x(\gamma) R_z(\delta)$$

for some global phase α. As well we have the two-qubit unitaries described by

$$V_{ij}^{\perp}(\chi) = \exp\left( -i\chi\, \hat{\tau}_i^x \hat{\tau}_j^x \right)$$

between qubits i, j. We will construct circuits in terms of these fundamental gates, then convert the relevant pulse sequence into a quantum walk. The canonical universal gate set consists of single-qubit unitaries and the controlled-NOT (cnot) operation. Using a method from Ref. [29], we show below a circuit which is equivalent to cnot, made up of gates from our fundamental set. For compactness of notation, we set R_x(θ) ≡ X_θ and R_z(ξ) ≡ Z_ξ, and W = V^⊥(π/4). The circuit in terms of the fundamental gates easily becomes a pulse sequence by interpreting the angles as times of application for corresponding terms in the Hamiltonian. Applying R_x(γ) on the second qubit corresponds to switching on Δ_2 for a time T such that T = −γ/2Δ_2. When γ is positive, we simply replace this with the angle γ′ = γ − 2π, which gives an equivalent rotation. Similarly, for R_z(θ) on the third qubit, T = θ/2ε_3, and for V^⊥(χ) on the third and fourth qubits, we switch V^⊥_34 on for a time T = χ/V^⊥_34.

We can interpret each fundamental gate in terms of a quantum walk on a graph whose nodes are arranged on the hypercube, with the specific gate determining the edges (see figure 7). Imagine the 2^N nodes of a quantum walk arranged on a hypercube. An R_x^{(k)}(γ) pulse switches on connections along edges (figure 7(1)) in a direction given by the qubit acted upon. We then have a quantum walk on this restricted hypercube, for a time corresponding to the angle γ. Similarly, a V^⊥_jk(χ) pulse 'switches on' connections along the diagonals of the faces determined by the qubits acted upon, resulting in a different restricted quantum walk, for a time corresponding to χ (figure 7(2)). On the other hand, an R_z^{(j)}(θ) pulse does not connect any nodes, but rather applies a relative phase to half of the nodes. This relative phase is applied to the nodes on a 'face' of the hypercube, dependent upon the qubit acted upon (see figure 7(3)). A quantum computation will correspond to a series of these pulses, of varying time - the analogous quantum walk will be over a hypercube with time-dependent edges.
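This pulse-by-pulse correspondence is easy to verify numerically for a single R_x pulse. The following is a minimal sketch under our own conventions (qubit 0 is the most significant bit of the node label; the angle and register size are arbitrary): the pulse unitary coincides with evolution under a walk Hamiltonian whose only edges join node labels differing in that qubit's bit.

```python
import numpy as np
from scipy.linalg import expm

X, I2 = np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(2)

def x_on(k, m):
    # tau^x on qubit k of an m-qubit register (qubit 0 = most significant)
    out = np.array([[1.0]])
    for i in range(m):
        out = np.kron(out, X if i == k else I2)
    return out

m, k, gamma = 3, 1, 0.7
pulse = expm(-1j * (gamma / 2) * x_on(k, m))     # R_x(gamma) on qubit k

# walk picture: hop between node labels differing only in qubit k's bit
H_walk = np.zeros((2 ** m, 2 ** m), dtype=complex)
for node in range(2 ** m):
    H_walk[node, node ^ (1 << (m - 1 - k))] = gamma / 2
print(np.allclose(pulse, expm(-1j * H_walk)))    # True: identical unitaries
```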
As an example, we consider the quantum Fourier transform (QFT), the essential element of Shor's factoring algorithm. The QFT on an orthonormal basis |0⟩, |1⟩, . . . , |N − 1⟩ is defined by the linear operator

$$|j\rangle \;\longrightarrow\; \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} e^{2\pi i jk/N}\, |k\rangle$$

which on an arbitrary state acts as

$$\sum_{j=0}^{N-1} x_j\, |j\rangle \;\longrightarrow\; \sum_{k=0}^{N-1} y_k\, |k\rangle$$

where

$$y_k = \frac{1}{\sqrt{N}} \sum_{j=0}^{N-1} x_j\, e^{2\pi i jk/N}$$

is the (classical) discrete Fourier transform of the amplitudes x_j. This transformation is unitary, so it can be implemented on a quantum computer. Following the prescription from [27], to perform the QFT on a qubit quantum computer we let N = 2^n, and the basis |0⟩, . . . , |N − 1⟩ be the computational basis for n qubits. Each j is expressed in terms of its binary representation, j ≡ j_1 j_2 . . . j_n; explicitly, j = j_1 2^{n−1} + j_2 2^{n−2} + . . . + j_n 2^0. We use the notation 0.j_k j_{k+1} . . . j_l to represent the binary fraction j_k/2 + j_{k+1}/4 + . . . + j_l/2^{l−k+1}. This allows us to write the action of the QFT in a useful product representation [27]:

$$|j_1 . . . j_n\rangle \;\longrightarrow\; \frac{1}{2^{n/2}} \left( |0\rangle + e^{2\pi i\, 0.j_n} |1\rangle \right) \left( |0\rangle + e^{2\pi i\, 0.j_{n-1} j_n} |1\rangle \right) \cdots \left( |0\rangle + e^{2\pi i\, 0.j_1 . . . j_n} |1\rangle \right)$$

Based on this representation, an efficient circuit for the QFT, shown in figure 8, is constructed [27]. This circuit utilises the Hadamard gate H, swap gates, and controlled-R_k gates, where

$$R_k = \begin{pmatrix} 1 & 0 \\ 0 & e^{2\pi i/2^k} \end{pmatrix}$$

We can rewrite this circuit in terms of our fundamental gate set, to derive a corresponding pulse sequence. A controlled-R_k gate is given in figure 9, while the swap gate is shown in figure 10.

FIG. 9: The controlled-R_k gate in terms of the fundamental gate set. The pulse sequence can be read directly from the circuit.

By combining these circuits we construct the QFT circuit in terms of our fundamental gate set. This circuit can be interpreted as a pulse sequence, the duration of the pulses corresponding to the angles characterising the different gates.

FIG. 10: The swap gate as a pulse sequence using our fundamental gates.

For the above example we have assumed complete control over all parameters in the Hamiltonian, with the ability to switch all on or off. In physical systems, this is almost surely not the case. For example, interactions may be constant, with only the single qubit terms controllable. Quantum computation is still possible in this case, though pulse sequences will be more complicated. An interesting problem is how circuit complexity varies as further restrictions are placed on possible controls. The problem of constructing efficient circuits in general is a very open and active area of research [30]; when decoherence is included in the operation of the gates, this becomes even more interesting - circuits would be designed to minimise decoherence, as opposed to complexity. Naively, one would expect fewer gates to mean shorter running times and a lessening of the effects of decoherence. A detailed study may demonstrate that this is not the case.

V. CONCLUDING REMARKS

In this paper we have formulated quantum walks in a Hamiltonian framework, and explored the mappings that exist between various quantum walk systems and systems of gates and qubits. The Hamiltonian formulation possesses considerable advantages. We have seen that it allows a unified treatment of continuous time and discrete time walks, for both simple and composite quantum walk systems. It is also necessary if one wishes to make the link to experimental systems. This latter point becomes particularly clear when one tries to understand decoherence for quantum walkers, for which it is essential to set up a Hamiltonian or a Lagrangian description. In the paper we have concentrated on walks on hypercubes and hyperlattices.
Walks on hypercubes are naturally mapped to systems of gates or qubits, and we have explored mappings in either direction. Walks on hyperlattices, on the other hand, can be mapped to qubit or gate systems, but the mappings are not so obvious -- we have exhibited them, and thereby shown how one could construct an experimental d-dimensional hyperlattice from a gate system. In the case of both hypercubes and hyperlattices we have exhibited the general methods for finding these mappings and their inverses, in sufficient detail that it should now be clear how to make such mappings for quantum walks on more general graphs. The practical use of our methods and results does not become completely clear until we incorporate the environment into our Hamiltonian description. As indicated in the introduction, this can be done in a fairly comprehensive way, by using a general description of environments in terms of oscillator and/or spin baths. The rather lengthy results, once this is done, appear in a companion paper to this one [24]. It then becomes possible to solve rigorously for the dynamics of quantum walk systems, without using ad hoc models with external noise sources. The results can be quite surprising, as shown in ref. [25] for one particular example. Ultimately the main reason for the work in the present paper is that one can bring the work on quantum walks into contact with experiment, and design experimental systems able to realise different kinds of quantum walk. In parallel work we have done this for both a particular ion trap system and a particular architecture of spin qubits [31]. Only in this way will it be possible to fully realise the potential offered by quantum walk theory in the lab (and to test it experimentally!).
2019-04-14T03:22:30.132Z
2007-01-14T00:00:00.000
{ "year": 2007, "sha1": "fec30a903ca36bbdb5c4cc002fbe01399954b8ef", "oa_license": null, "oa_url": "http://arxiv.org/pdf/quant-ph/0701088", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "fec30a903ca36bbdb5c4cc002fbe01399954b8ef", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
46862770
pes2o/s2orc
v3-fos-license
Response to “Response to Hoy, ‘Gender imbalance and brain stimulation conferences: We have a problem and it is everyone's problem’”

Dear Editor,

In a Letter to the Editor [1], Associate Professor Hoy identified the gender imbalance at The International Brain Stimulation Conference, both with respect to the program from 2015 and the preliminary program for the upcoming 2017 meeting; in their recent response, Professors George and Sackeim [2] alluded to the fact that the issue of gender balance does not stand alone, and is interwoven with other concerns such as providing a balanced program with respect to presenters' career stage and geographical location, together with a diversity of clinical and basic research. Here we act on Prof George and Sackeim's call for a possible solution to provide a more balanced program at the conference, both in terms of gender and career stage, while critically maintaining high standards of scientific merit. The conception and support of The International Brain Stimulation Conference by the Editorial Board of the Brain Stimulation journal provides a unique opportunity for the selection of invited speakers for the upcoming conference.
Specifically, invitations to speak could be offered to authors of the most highly cited recent research in Brain Stimulation. While we acknowledge that this method does not consider high-quality brain stimulation research published in other world-class journals, it represents a first step in acknowledging, and rewarding, high-quality research publications in our field. We audited research articles published in Brain Stimulation from 2014 to 2016 (n = 321) (data obtained from Web of Science on 3/11/2016), identifying the gender of the first and senior (last) authors.2 Overall, 29% of first and senior authors were female; when we selected the most highly cited papers from each year (2014 papers with ≥15 citations, n = 20; 2015 papers with ≥10 citations, n = 18; 2016 papers with ≥4 citations, n = 5), 35% of first and senior authors were female. It is apparent, however, that the gender imbalance is greater for senior (22% and 26% female for all and highly cited papers, respectively) than for first authors (37% and 44% female for all and highly cited papers, respectively). These data are largely consistent with a 24% (range 17-30%) base rate of females within neuroscience departments [3]. Furthermore, the greater percentage of female first, compared to last, authors is consistent with the loss of female scientists in mid-to-senior career stage as highlighted by Hoy [1]. However, these data are inconsistent with the gender balance in oral presentations selected from abstracts at the First International Brain Stimulation Conference held in Singapore, 2015 (5% female) and the preliminary program (keynotes only) for the Second International Brain Stimulation Conference to be held in Spain, 2017 (0% female). Taken together, it is clear that female scientists are publishing highly cited original research in the premier journal for brain stimulation, but this contribution is not reflected in invitations or selections for oral presentations at our international conference. Here we present a practical and effective strategy to promote gender and career stage diversity at the International Brain Stimulation Conference, using an objective method to quantify the quality and impact of recent Brain Stimulation papers. First, we ranked papers (original research, review, meta-analysis) published in Brain Stimulation according to citation count; second, we selected the top five ranked papers for each year (2014-2016); and third, we obtained the field-weighted citation impact (FWCI, [4]) for the period 2011-2016 for the first and senior authors of these top-ranked papers (see Table 1). Our rationale was that highly cited papers reflect the impact of the study, and the FWCI reflects an individual's citation performance in recent years irrespective of their career stage. To calculate the FWCI, the number of citations for an individual's papers is presented as a ratio of the average number of citations for all comparable publications indexed in Scopus. Therefore, a FWCI of 1.5 indicates that the individual's publications have been cited 50% more times than expected. FWCI is a useful objective metric to benchmark researchers across different disciplines and career stages. Of the 30 authors presented in Table 1, 33% are female (7/15 first authors; 3/15 senior authors). We suggest that first-author data could be used to organise a specific symposium for early- and mid-career researchers, in which the first authors of highly cited papers are invited to present (not selected from abstracts).
The data presented here suggest that such a symposium could be gender balanced (47% female). While conceived as a short-term solution to achieve gender balance at the conference, this approach will likely generate a positive spiral and lead to longer term benefits in achieving gender balance in our discipline. Indeed, invited presentations facilitate career development through promotion of cutting-edge research, and greater collaborative outreach will empower scientific leadership and provide greater access to academic promotion. This will ultimately lead to greater female representation at senior levels. Those researchers who appear in our senior author list are clearly some of the leaders in our field, and warrant invitations for keynote addresses or as symposia organisers/presenters. Indeed, a number of these researchers gave invited talks at the 2015 meeting (two males as keynotes; one male and one female as session speakers) and two appear on the preliminary program as keynote speakers for the 2017 meeting (both male). We have presented, for consideration, an objective, empirical method for fostering broader recognition of the significant contributions of female researchers at our conference. The approach could easily be extended by auditing brain stimulation research in other high-quality journals and by expanding the metrics used to assess researchers' track records. More broadly, the data we present raise the question of why, if female scientists are publishing high-quality original research, they are underrepresented at conferences, on editorial boards, and in other senior positions. Indeed, there is growing evidence for widespread, systematic gender bias in the sciences [5]. It is imperative that the entire scientific community make an effort to address such issues, which can only enhance scientific advancement and discovery.

2 Gender was identified via online means: 3.17% of authors could not be identified and are therefore not included in the results above. The last author was assumed to be the senior author, which is the convention for most neuroscience disciplines.
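For readers who wish to repeat this kind of audit, the two computational steps described above can be expressed in a few lines of code. The sketch below (Python with pandas; the data frame, column names, and numbers are hypothetical placeholders, not the authors' data) ranks papers by citation count within each year and computes an FWCI as a ratio of observed to field-expected citations.

```python
# Hypothetical sketch of the audit: rank papers by citations, take the top
# five per year, and compute FWCI = observed / field-expected citations.
import pandas as pd

papers = pd.DataFrame({
    "year":          [2014, 2014, 2015, 2016, 2016],
    "title":         ["A", "B", "C", "D", "E"],
    "citations":     [40, 22, 18, 9, 6],
    "first_author":  ["FA1", "FA2", "FA3", "FA4", "FA5"],
    "senior_author": ["SA1", "SA2", "SA3", "SA4", "SA5"],
})

# top five most-cited papers within each year (illustrative data has fewer)
top5 = (papers.sort_values("citations", ascending=False)
              .groupby("year").head(5))

def fwci(author, citations_by_author, expected_by_author):
    """Ratio of an author's citations to the field-expected count."""
    return citations_by_author[author] / expected_by_author[author]

# e.g. an author cited 30 times where 20 is the field average -> FWCI 1.5
print(fwci("FA1", {"FA1": 30}, {"FA1": 20}))   # 1.5
```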
2018-04-03T05:38:40.459Z
2017-02-28T00:00:00.000
{ "year": 2017, "sha1": "99902d4ab1b6f58336ec553166f3b133467e2bfc", "oa_license": "CCBYNC", "oa_url": "https://researchrepository.murdoch.edu.au/id/eprint/35265/1/Response%20to%20Hoy.pdf", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "3d62285953531557adefbb8933fbe195543695b5", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
9594973
pes2o/s2orc
v3-fos-license
Parsing with Principles and Probabilities This paper is an attempt to bring together two approaches to language analysis. The possible use of probabilistic information in principle-based grammars and parsers is considered, including discussion of some theoretical and computational problems that arise. Finally a partial implementation of these ideas is presented, along with some preliminary results from testing on a small set of sentences. Introduction Both principle-based parsing and probabilistic methods for the analysis of natural language have become popular in the last decade. While the former borrows from advanced linguistic specifications of syntax, the latter has been more concerned with extracting distributional regularities from language to aid the implementation of NLP systems and the analysis of corpora. These symbolic and statistical approaches are beginning to draw together as it becomes clear that one cannot exist entirely without the other: the knowledge of language posited over the years by theoretical linguists has been useful in constraining and guiding statistical approaches, and the corpora now available to linguists have resurrected the desire to account for real language data in a more principled way than had previously been attempted. This paper falls directly between these approaches, using statistical information derived from corpus analysis to weight syntactic analyses produced by a 'principles and parameters' parser. The use of probabilistic information in principle-based grammars and parsers is considered, including discussion of some theoretical and computational problems that arise. Finally a partial implementation of these ideas is presented, along with some preliminary results from testing on a small set of sentences. Little work has been done on the complexity of algorithms used to parse with a principle-based grammar, since such grammars do not exist as accepted mathematically well-defined constructs. It has been estimated that in general, principle-based parsing can only be accomplished in exponential time, i.e. $O(2^n)$ [Berwick and Weinberg1984, Weinberg1988]. A feature of principle-based grammars is their potential to assign some meaningful representation to a string which is strictly ungrammatical. It is an inherent feature of phrase structure grammars that they classify the strings of words from a language into two (infinite) sets, one containing the grammatical strings and the other the ungrammatical strings. Although attempts have been made to modify PS grammars/parsers to cope with extragrammatical input, e.g. [Carbonell and Hayes1983, Douglas and Dale1992, Jensen et al.1983], this is a feature which has to be 'added on' and tends to affect the statement of the grammar. Due to the lack of an accepted formalism for the specification of principle-based grammars, Crocker and Lewin [Crocker and Lewin1992] define the declarative 'Proper Branch' formalism, which can be used with a number of different parsing methods. A proper branch is a set of three nodes -- a mother and two daughters -- which are constructed by the parser, using a simple mechanism such as a shift-reduce interpreter, and then 'licensed' by the principles of grammar. A complete phrase marker of the input string can then be constructed by following the manner in which the mother node from one proper branch is used as a daughter node in a dominating proper branch. Each proper branch is a binary branching structure, and so all grammatical constraints will need to be encoded locally.
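As a rough illustration of the proper-branch idea (a hypothetical Python sketch, not the Crocker and Lewin implementation), the following fragment shows a shift-reduce interpreter that proposes binary branches and keeps only those licensed by every grammar-principle predicate; a single toy X-bar-style predicate stands in for the full set of principles.

```python
# Hypothetical sketch: a shift-reduce interpreter proposes (mother, left,
# right) triples, and each grammar principle must license the triple.
from dataclasses import dataclass

@dataclass
class Node:
    category: str
    children: tuple = ()

def xbar_licenses(mother, left, right):
    # toy stand-in for the X-bar principle: only these local branches pass
    return (mother.category, left.category, right.category) in {
        ("NP", "Det", "N1"), ("VP", "V", "NP"), ("S", "NP", "VP")}

PRINCIPLES = [xbar_licenses]       # Theta, Case, ... would be added here

def shift_reduce(leaves):
    """Shift nodes onto a stack; reduce the top two whenever some mother
    category yields a branch licensed by every principle."""
    stack = []
    for node in leaves:
        stack.append(node)                                # shift
        while len(stack) >= 2:                            # try to reduce
            left, right = stack[-2], stack[-1]
            for cat in ("NP", "VP", "S"):
                mother = Node(cat, (left, right))
                if all(p(mother, left, right) for p in PRINCIPLES):
                    stack[-2:] = [mother]                 # reduce
                    break
            else:
                break
    return stack

leaves = [Node("Det"), Node("N1"), Node("V"), Node("Det"), Node("N1")]
print(shift_reduce(leaves))   # a single S node for "the dog saw the cat"
```

Here the predicates inspect only the triple being built, reflecting the requirement that all grammatical constraints be stated locally.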
Crocker [Crocker1992] develops "a 'representational' reformulation of the transformational model which decomposes syntactic analysis into several representation types -- including phrase structure, chains, and coindexation -- allowing one to maintain the strictly local characterisation of principles with respect to their relevant representation types" [Crocker and Lewin1992, p. 511]. By using the proper branch method of axiomatising the grammar, the structure building section of the parser is only constrained in that it must produce proper branches; it is therefore possible to experiment with different interpreters (i.e. structure proposing engines) while keeping the grammar constant. The Grammar and Parser A small principle-based parser was built, following the proper branch formalism developed in [Crocker and Lewin1992]. Although the grammar is very limited, the use of probabilities in ranking the parser's output can be seen as a first step towards implementing a principle-based parser using a more fully specified collection of grammar modules. The grammar is loosely based on three modules taken from Government-Binding Theory -- X-bar theory, Theta Theory and Case Theory. Although these embody the spirit of the constraints found in Chomsky [Chomsky1981], they are not intended to be entirely faithful to this specification of syntactic theory. There is also only a single level of representation (which is explicitly constructed for output purposes but not consulted by the parser). This representation is interpreted as S-structure. Explanations of the knowledge contained within each grammar principle are given in the following sections. X-bar Theory X-bar Theory uses a set of schemata to license local subtrees. We use a parametrised version of the X-bar schemata, similar to that of Muysken [Muysken1983], but employing features which relate to the state of the head word's theta grid to give five schemata (figure 2).

Figure 2: The X-bar schemata.

A node includes the following features (among others): 1. Category: the standard category names are employed. 2. Specifier (SPEC): this feature specifies whether the word at the head of the phrase being built requires a specifier. 3. Complement (COMP): the complement feature is redundant in that the information used to derive its value is already present in a word's theta grid, and will therefore be checked for well-formedness by the theta criterion. Since this information is not referenced until later, the COMP feature is used to limit the number of superfluous proper branches generated by the parser. 4. The head (i.e. lexical item) of a node is carried on each projection of that node along with its theta grid. The probabilities for occurrences of the X-bar schemata were obtained from sentences from the preliminary Penn Treebank corpus of the Wall Street Journal, chosen because of their length and the head of their verb phrase (i.e. the main verbs were all from the set for which theta role data was obtained); the examples were manually parsed by the authors.
The probabilities were calculated using the following equation, where $X_A \rightarrow Y_B\, Z_C$ is a specific schema, $\mathcal{X}$ is the set of X-bar schemata, and $A$, $B$ and $C$ are variables over category, SPEC and COMP feature bundles:
$$P(X_A \rightarrow Y_B\, Z_C) = \frac{\mathrm{count}(X_A \rightarrow Y_B\, Z_C)}{\sum_{x \in \mathcal{X}} \mathrm{count}(x)}$$
This is different to the manner in which probabilities are collected for stochastic context-free grammars, where the identity of the mother node is taken into account, as in the equation below:
$$P(X_A \rightarrow Y_B\, Z_C \mid X_A) = \frac{\mathrm{count}(X_A \rightarrow Y_B\, Z_C)}{\sum_{Y',\, Z'} \mathrm{count}(X_A \rightarrow Y'\, Z')}$$
This would result in misleading probabilities for the X-bar schemata, since the use of schemata (3), (4), and (5) would immediately bring down the probability of a parse compared to a parse of the same string which happened to use only (1) and (2).* (*The probabilities for (1) and (2) would be 1, as they have unique mothers.) The overall (X-bar) likelihood of a parse can then be computed by multiplying together all the probabilities obtained from each application of the schemata, in a manner analogous to that used to obtain the probability of a phrase marker generated by an SCFG. Using the schemata in this way suggests that the building of structure is category independent, i.e. it is just as likely that a verb will have a (filled) specifier position as it is for a noun. The work on stochastic context-free grammars suggests a different set of results, in that the specific categories involved in expansions are all important. While SCFGs will tend to deny that all categories expand in certain ways with the same probabilities, they make this claim while using a homogeneous grammar formalism. When a more modular theory is employed, the source of the supposedly category specific information is not as obvious. The use of lexical probabilities on specifier and complement co-occurrence with specific heads (i.e. lexical items) could exhibit properties that appear to be category specific, but are in fact caused by common properties which are shared by lexical items of the same category.2 Since it can be argued that the probabilistic information on lexical items will be needed independently, there is no need to use category specific information in assigning probabilities to syntactic configurations. (2: It is of course possible to store these cross-item similarities as lexical rules [Bresnan1978], but this alone does not entail that the properties are specific to a category, cf. the theta grids of verbs and their 'related' nouns.) Theta Theory Theta theory is concerned with the assignment of an argument structure to a sentence. A verb has a number of thematic (or 'theta') roles which must be assigned to its arguments, e.g. a transitive verb has one theta role to 'discharge', which must be assigned to an NP. If a binary branching formalism is employed, or indeed any formalism where the arguments of an item and the item itself are not necessarily all sisters, the problem of when to access the probability of a theta application is presented. The easiest method of obtaining and applying theta probabilities will be with reference to whole theta grids. Each theta grid for a word will be assigned a probability which is not dependent on any particular items in the grid, but rather on the occurrence of the theta grid as a whole. A preliminary version of the Penn Treebank bracketed corpus was analysed to extract information on the sisters of particular verbs. Although the Penn Treebank data is unreliable since it does not always distinguish complements from adjuncts, it was the only suitable parsed corpus to which the authors had access.
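A small Python sketch of the relative-frequency estimate just described (counts and schema names are invented for illustration): schema probabilities are normalised over all schema applications rather than per mother node, and a parse's X-bar likelihood is the product over the schemata used to build it.

```python
# Sketch of the schema-probability estimate: each application is normalised
# over ALL schema applications (not per mother category, as in an SCFG).
from collections import Counter

# counts of schema applications observed in the (manually parsed) sample
schema_counts = Counter({
    "schema1": 40,   # e.g. the specifier schema (1)
    "schema2": 50,   # e.g. the complement schema (2)
    "schema3": 10,   # one of the less frequent schemata (3)-(5)
})
total = sum(schema_counts.values())
p_schema = {s: c / total for s, c in schema_counts.items()}

def xbar_likelihood(parse):
    """parse: the list of schema applications used to build the phrase marker."""
    score = 1.0
    for application in parse:
        score *= p_schema[application]
    return score

print(xbar_likelihood(["schema1", "schema2"]))             # 0.4 * 0.5 = 0.2
print(xbar_likelihood(["schema1", "schema2", "schema3"]))  # penalised by (3)
```

Because the denominator is shared by all schemata, a parse is not penalised merely for having a mother category with many possible expansions, which is the distortion the text warns against.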
Although the distinction between complements and adjuncts is a theoretically interesting one, the process of determining which constructions fill which functional roles in the analysis of real text often creates a number of problems (see [Hindle and Rooth1993] for discussion of this issue regarding output of the Fidditch parser [Hindle1993]). The probabilities for each of the verbs' theta grids were calculated using the equation below, where $P(s_i, v)$ is the probability of the theta grid $s_i$ occurring with the verb $v$, $(v, s_i)$ is an occurrence of the items in $s_i$ being licensed by $v$, and $S$ ranges over all theta grids for $v$:
$$P(s_i, v) = \frac{\mathrm{count}(v, s_i)}{\sum_{s \in S} \mathrm{count}(v, s)}$$
Case Theory In its simplest form, Case theory invokes the Case filter to ensure that all noun phrases in a parse are assigned (abstract) Case. Case theory differs from both X-bar and Theta theory in that it is category specific: only NPs require, or indeed can be assigned, abstract Case. If we are to implement a probabilistic version of a modular grammar theory incorporating a Case component, a relevant question is: are there multiple ways of assigning Case to noun phrases in a sentence? i.e. can ambiguity arise due to the presence of two candidate Case assigners? Case theory suggests that the answer to this is negative, since Case assignment is linked to theta theory via visibility, and it is not possible for an NP to receive more than one theta role. As a result, the use of Case probabilities in a parser would be at best unimportant, since some form of ambiguity is needed in the module -- i.e. it must be possible to satisfy the Case filter in more than one way -- for probabilities associated with the module to be of any use. While having a provision for using probabilities deduced from Case information, the implemented parser does not in fact use Case in its parse ranking operations. Local Calculation The use of a heterogeneous grammar formalism and multiple probabilities invokes the problem of their combination. There are at least two ways in which each mother's probabilities can be calculated; firstly, only probability information of the same type can be used: the daughters' X-bar probabilities alone could be used in calculating the mother's X-bar probability. Alternatively, a combination of some or all of the daughters' probability features could be employed, thus making, e.g., the X-bar probability of the mother dependent upon all the stochastic information from the daughters, including theta and Case probabilities, etc. The need for a method of combining the daughter probabilities into a useful figure for the calculation of the mother probabilities is likely to involve trial and error, since theory thus far has had nothing to say on the subject. The former method, using only the relevant daughter probabilities, therefore seems to be the most fruitful path to follow at the outset, since it does not require a way of integrating probabilities from different modules while the parse is in progress, nor is it as computationally expensive. Global Calculation The manner in which the global probability is calculated will be partly dependent upon the information contained in the local probability calculations. If the probabilities for partial analyses have been calculated using only probabilities of the same types from the subanalyses -- e.g.
X-bar, Theta -- the probabilities at the top level will have been calculated using informationally distinct figures. This has the advantage of making 'pure' probabilities available, in that the X-bar probability will reflect the likelihood of the structure alone, and will be 'uncontaminated' by any other information. It should then be possible to experiment with different methods of combining these probabilities, other than the obvious 'multiplying them together' technique, which could result in one type of probability emerging as the most important. On the other hand, if the probabilities calculated during the parse take all the different types of probabilities into account at each calculation -- i.e. the X-bar, theta, etc. probabilities on daughters are all taken into account when calculating the mother's X-bar probability -- the probabilities at the top level will not be pure, and a lot of the information contained in them will be redundant, since they will share a large subset of the probabilities used in their separate calculations. It will not therefore be easy to gain theoretical insight using these statistics, and their most profitable method of combination is likely to be a more haphazard affair than when purer probabilities are used. The parser used in testing employed the first method and therefore produced separate module probabilities for each node. For lack of a better, theoretically motivated method for combining these figures, the product of the probabilities was taken as the global probability for each parse. Testing the Parser The parser was tested using sixteen sentences containing verbs for which data had been collected from the Penn Treebank corpus. The sentences were created by the authors to exhibit at least a degree of ambiguity when it came to attaching a post-verbal phrase as an adjunct or a complement. In order to force the choice of the 'best' parse on to the verb, the probabilities of theta grids for nouns, prepositions, etc. were kept constant. Of the 16 highest ranked parses, 7 were the expected parse, with the other 9 exhibiting some form of misattachment. The fact that each string received multiple parses (the mean number of analyses being 9.135, and the median, 6) suggests that the probabilistic information did favourably guide the selection of a single analysis. It is not really possible to say from these results how successful the whole approach of probabilistic principle-based parsing would be if it were fully implemented. The inconclusive nature of the results obtained was due to a number of limiting factors of the implementation, including the simplicity of the grammar and the lack of available data. Discussion Limitations of the Grammar The grammar employed is a partial characterisation of Chomsky's Government-Binding theory [Chomsky1981, Chomsky1986] and only takes account of very local constraints (i.e. X-bar, Theta and Case); a way of encoding all constraints in the proper branch formalism (e.g. [Crocker1992]) will be needed before a grammar of sufficient coverage to be useful in corpus analysis can be formulated. The problem with using results obtained from the implementation given here is that the grammar is sufficiently underspecified that it leaves too great a task for the probabilistic information. This approach could be viewed as putting the cart before the horse; the usefulness of stochastic information in parsers presumes that a certain level of accuracy can be achieved by the grammar alone.
While GB is an elegant theory of cognitive syntax, it has yet to be shown that such a modular characterisation can be successfully employed in corpus analysis. Statistical Data and their Source The use of the preliminary Penn Treebank corpus for the extraction of probabilities used in the implementation above was a choice forced by the lack of suitable materials. There are still very few parsed corpora available, and none that contain information which is specified to the level required by, e.g., a GB grammar. While this is not an absolute limitation, in that it is theoretically possible to extract this information manually or semi-automatically from a corpus, time constraints entailed the rejection of this approach. It would be ultimately desirable if the use of probabilities in principle-based parsing could be made to mirror the way that a syntactic theory such as Government-Binding handles constructions -- various modules of the grammar conspire to rule out illegal structures or derivations. It would be an elegant result if a construction such as the passive were to use probabilities for chains, Case assignment etc. to select a parse that reflected the lexical changes that had been undergone, e.g. the greater likelihood of an NP featuring in the verb's theta grid. It is this property of a number of modules working hand in hand that needs to be carried over into the probabilistic domain. The objections that linguists once held against statistical methods are disappearing slowly, partly due to results in corpus analysis that show the inadequacy of linguistic theory when applied to naturally occurring data. It is also the case that the rise of the connectionist phoenix has brought the idea of weighted (though not strictly probabilistic) functions of cognition back to the fore, freeing the hands of linguists who believe that while an explanatorily adequate theory of grammar is an elegant construct, its human implementation and its usage in computational linguistics may not be straightforward. This paper has hopefully shown that an integration of statistical methods and current linguistic theory is a goal worth pursuing.
1994-08-02T03:02:45.000Z
1994-08-02T00:00:00.000
{ "year": 1994, "sha1": "a7ebfe7afbbbf850a8ff361ac5efdeee4dca6db4", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "992c334f2000cd97a72fd44fd66b806bf111a606", "s2fieldsofstudy": [ "Computer Science", "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
261461848
pes2o/s2orc
v3-fos-license
Lysosome-related genes: A new prognostic marker for lung adenocarcinoma Currently, a reliable early prognostic marker has not been identified for lung adenocarcinoma (LUAD), the most common malignancy. Recent studies demonstrated that lysosomal rupture is involved in cancer migration, progression, and immune microenvironment formation. We performed a bioinformatics analysis of lysosomal rupture to investigate whether lysosome-related genes (LRGs) are key in LUAD. The analysis identified 23 LRGs. Cytoscape visualization identified 10 core genes (CCNA2, DLGAP5, BUB1B, KIF2C, PBK, CDC20, NCAPG, ASPM, KIF4A, ANLN). With the 23 LRGs, we established a new risk scoring rule to classify patients with LUAD into high- and low-risk groups, verified the accuracy of the risk score by receiver operating characteristic curves, and established a nomogram to evaluate clinical patients. Immunotherapy effectiveness between the high- and low-risk groups was evaluated based on the tumor mutational burden and analyses of immune cell infiltration and drug sensitivity. Pathway enrichment analysis revealed that lysosomes were closely associated with glucose metabolism, amino acid metabolism, and the immune response in patients with LUAD. Lysosomes are a likely new therapeutic target and provide new directions and ideas for treating and managing patients with LUAD. Introduction The most recent World Health Organization (WHO) report stated that lung cancer is the second most prevalent malignant tumor and has the highest mortality rate in the world, seriously affecting human health. [1,2] Excluding lung squamous carcinoma, which is closely related to smoking, the incidence rate is currently highest for lung adenocarcinoma (LUAD). The popularization of low-dose computed tomography and the continuous development of personalized early treatment for LUAD have somewhat reduced the lung cancer mortality rate. Therefore, identifying accurate and reliable early prognostic markers is crucial for the survival prognosis of patients with LUAD. An intracellular organelle, the lysosome contains numerous acidic hydrolases, which are an important means of triggering spontaneous apoptosis. [3,4] The contribution of lysosomes, an extremely important part of the cell death process, has been largely overlooked in cancer. The metabolism of cancer cells is extremely fast, and the biological properties of lysosomes exactly meet the needs of tumor growth. [3,5] Therefore, the expression profile of lysosome-related genes (LRGs) is likely to be a relevant condition that predicts early cancer. In recent years, the potential effect of the biological function of lysosomes in tumors has received increasing attention. Soleimani et al induced lysosomal activation by activating TFEB phosphorylation in triple-negative breast cancer, which caused cellular autophagy. [6] Diverse lysosome-mediated cancer therapeutic agents such as strigolactones, [7] tea polysaccharides, [8] and quercetin [9] were identified through the biological properties of lysosomes. Meanwhile, lysosome-induced oxidative burst, [10] cathepsin release, [11] and oxidative stress are involved in tumor cell immune escape and apoptosis. Similarly, as the cell's digestive organelle, the lysosome yields metabolic products that are likewise the basis for tumor proliferation and migration. However, the specific mechanisms of lysosomal involvement in LUAD development remain to be explored.
In this study, we developed a new risk scoring system consisting of 23 LRGs to accurately predict the prognosis of patients with LUAD, constructed and validated a nomogram, and explored the behaviour of the LRG score in the immune microenvironment and drug therapy of patients with LUAD. Ten core genes associated with lysosome-related LUAD were identified through the protein-protein interaction (PPI) network to determine representative targets for LUAD therapy. Data preparation and processing We obtained available follow-up and clinical data for 535 LUAD and 59 normal samples from The Cancer Genome Atlas (TCGA) database as a training cohort. The microarray dataset GSE37745 from the Gene Expression Omnibus (GEO) database was used as a separate cohort for subgroup analysis to verify the data validity. The data were aligned and integrated with Perl software. R software was used for bioinformatics analysis and image plotting. Data were normalized using the R package limma. Differential analysis between Ensembl annotation and LRGs We downloaded gene transfer format (GTF) files from Ensembl (http://asia.ensembl.org) to perform gene annotation and ID conversion, [12] which enabled the conversion of gene IDs to gene names to obtain a more accurate gene expression matrix. The differentially expressed LRGs in the intersection of the TCGA and GEO database samples were screened with a false discovery rate < 0.05 and |log2 fold change (FC)| > 0.585. Genes were screened using the R packages limma and pheatmap; volcano plots and heatmaps were drawn with R gplot and pheatmap. Calculation of tumor mutational burden and prognostic gene screening The tumor mutation data were downloaded from the TCGA database, integrated with Perl software, and analyzed and visualized with the R package maftools. Univariate Cox regression was used to screen prognostic genes and draw forest diagrams, with P < .05 as the screening criterion. The Wilcoxon rank test was used for the risk ratio (RR) and 95% confidence interval (CI), computed with the R packages survminer and survival. Construction and validation of the LRG prognostic risk scoring model The prognostic genes with P < .05 in the training group were identified using univariate Cox regression analysis. A LASSO regression model was constructed to identify the genes with the least error in cross-validation, [13] and the genes involved in the model construction were derived. The risk score formula was obtained as follows: LRG risk score = Σ Coefficient(gene) × Expression(gene), summed over the model genes. The training group was divided into high- and low-risk groups according to the risk score, with the median used as the cutoff value. Principal component analysis (PCA) was performed on the model and PCA plots were drawn using the R package ggplot2. The R packages survival and survminer were used for survival analysis and progression-free survival analysis and plotting. Independent prognostic analysis and correlation analysis of clinical characteristics Using the sample age, sex, pathological stage, T classification, and risk score as variables, univariate and multivariate Cox regression analyses were performed. Immunotyping analysis We classified patients with LUAD into 5 subtypes using the file Subtype_Immune_Model_Based.txt for pan-cancer immunophenotyping and observed whether the patients' risk scores differed between subtypes with the R package ggpubr.
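The risk-score rule above is straightforward to compute. The following Python sketch (gene names are taken from the 23-gene list reported later in this paper, but the coefficients and expression values are invented placeholders) illustrates the weighted sum and the median split into high- and low-risk groups.

```python
# Sketch of the risk-score rule: risk = sum_i coefficient_i * expression_i,
# followed by a median split into high- and low-risk groups.
import numpy as np

coefficients = {"SFTPB": -0.21, "VCAN": 0.15, "BTK": -0.09}  # illustrative only
expression = {                      # rows: patients; values: log expression
    "P1": {"SFTPB": 5.2, "VCAN": 2.1, "BTK": 3.3},
    "P2": {"SFTPB": 1.0, "VCAN": 6.4, "BTK": 0.8},
    "P3": {"SFTPB": 4.4, "VCAN": 3.0, "BTK": 2.2},
}

def risk_score(expr, coefs):
    return sum(coefs[g] * expr[g] for g in coefs)

scores = {p: risk_score(e, coefficients) for p, e in expression.items()}
cutoff = np.median(list(scores.values()))           # median as cutoff value
groups = {p: ("high" if s > cutoff else "low") for p, s in scores.items()}
print(scores, groups)
```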
Nomogram analysis A nomogram of the corresponding variables was plotted with the R packages survival and regplot to assess the clinical prognoses of the corresponding patients, and 1-, 3-, and 5-year calibration curves were plotted. Receiver operating characteristic (ROC) curves were plotted with the R package timeROC to compare the validity of the risk score in predicting the patients' clinical risk. Univariate and multivariate Cox risk regression prognostic analyses were performed on the nomogram. Immunocyte correlation analysis Tumor-infiltrating immune cells were identified from tumor sequencing data using the ESTIMATE (Estimation of STromal and Immune cells in Malignant Tumor tissues using Expression data) algorithm. Subsequently, the R packages reshape2 and ggpubr were used to analyze the differences in the infiltration of different immune cells between the high- and low-risk groups and to draw boxplots. Single-sample gene set enrichment analysis (ssGSEA) and gene set variation analysis (GSVA) were performed using the R packages GSEABase and GSVA, respectively, yielding a boxplot of immune-related functions and a heat map of differential immune-related functional pathways. The Tumor Immune Dysfunction and Exclusion (TIDE) score file for LUAD was downloaded from the TIDE database (http://tide.dfci.harvard.edu/) to assess anti-immune checkpoint therapy efficacy. Gene mutation and drug sensitivity analyses Box plots for the gene mutation and drug sensitivity analyses were created with the R packages ggpubr and pRRophetic. PPI network and network core genes The PPI network of differential genes was constructed and visualized at string-db.org, and the core network genes were extracted to assess the differential expression of the core genes in the training cohort. Statistical analysis Differences between multiple groups were analyzed with the Kruskal-Wallis test. Differences between 2 groups were compared with the Wilcoxon test. P < .05 was considered statistically significant. R 4.2.1 and Strawberry Perl software were used for all statistical analyses. Expression analysis of LRGs in patients with LUAD Figure 1 depicts the flow chart of the processes used in this research. We included 523 patients and 59 normal samples to investigate the effect of LRGs on patients with LUAD and identified 877 LRGs. Of these, 291 differential LRGs were extracted by comparing the differences between tumor and normal tissues, of which 152 genes were downregulated and 139 were upregulated (Fig. 2A and B). Construction of prognosis-related LRGs Seventy LRGs were associated with LUAD prognosis, as shown in the forest plot in Figure 3A. A total of 284 samples (46.1%) carried mutations, with the VCAN gene demonstrating the highest mutation rate of 13%. Most of the mutations were missense mutations. In descending order of mutation rate, the remaining mutated genes were MNDA, LRRK2, BCAN, and GPC5 (Fig. 3C). The co-mutation relationships between the genes are depicted in Figure 3B. Prognostic model construction and correlation analysis The risk score model was constructed with the TCGA database samples as the training group and the GEO database samples as the test group, and the samples were divided into high- and low-risk groups (Fig. 4A and B). Univariate and multivariate Cox regressions demonstrated that the risk score was significantly associated with prognosis (Fig. 4C and D).
Figure 4F demonstrates that the area under the curve (AUC) value of the risk score ROC curve was greater than that of the other clinical characteristics (age, sex, stage, T), indicating that the risk score assessed patient prognosis more accurately than the clinical characteristics. The AUC values were 0.759, 0.735, and 0.691 for patients at 1, 3, and 5 years postoperatively, respectively (Fig. 4E), again demonstrating the excellent prognostic value of our model. Subsequently, survival analysis was performed for the 2 groups; Kaplan-Meier analysis demonstrated that patients with LUAD with higher LRG scores had poorer survival (Fig. 5A). The PCA demonstrated that the high- and low-risk groups could be clearly distinguished (Fig. 5C and D). Furthermore, progression-free survival analysis confirmed that the low-risk group had a better prognosis than the high-risk group (Fig. 5B). Nomogram analysis By plotting the nomogram incorporating sex, age, pathological stage, T stage, and risk score, we were able to score the corresponding patients, assess their prognostic risk from the calculated total score (Fig. 6A), and predict the 1-, 3-, and 5-year survival rates of patients with LUAD. Figure 6B demonstrates the stability of the nomogram. By comparing the ROC curves of the nomogram and the LRG risk score, we determined that the nomogram had a greater AUC value than the LRG risk score (Fig. 6C). Finally, we analyzed whether the nomogram could be an independent prognostic factor for patients with LUAD with LRGs through univariate and multivariate Cox risk regressions, and the results were statistically significant. Correlation analysis of immune function First, we observed that, among the different pan-cancer immunophenotypes, C1 patients differed from all other phenotypes and had higher risk scores than patients of the other phenotypes (Fig. 7A). Second, by assessing peritumoral immune cell infiltration, we determined that the low-risk group had a significantly higher abundance of memory B lymphocytes, plasma cells, regulatory T cells, monocytes, resting dendritic cells, and resting mast cells than the high-risk group, while the tumor tissue from the high-risk samples contained significantly more infiltrated activated CD4 memory T cells, resting natural killer cells, M0 macrophages, and activated mast cells (Fig. 7B). The low-risk group had higher scores for immune checkpoint, human leukocyte antigen, T-cell co-stimulation, T-cell co-inhibition, and type II IFN response infiltration than the high-risk group (Fig. 7C). Finally, GSVA determined that glucose and nucleotide metabolism and the P53 signaling pathway were enhanced in the high-risk group, while amino acid and fatty acid metabolism were more pronounced in the low-risk group (Fig. 7D). Analysis of gene mutations and drug efficacy Among the 6 genes with a high proportion of mutations, the mutant phenotypes of TP53, TTN, and CSMD3 had significantly higher risk scores than the wild-type (Fig. 8A-F). Drug sensitivity analysis determined that cisplatin, erlotinib, gefitinib, gemcitabine, and paclitaxel had higher effectiveness in the low-risk group than in the high-risk group, and TIDE scores were negatively associated with the risk score (Fig. 9A-F).
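For readers reproducing this kind of analysis, here is a simplified Python sketch of an ROC check on a continuous risk score (synthetic data; scikit-learn is assumed). Note that the paper itself uses the R package timeROC for time-dependent ROC curves; this fixed-horizon version is only an analogue.

```python
# Simplified sketch of the ROC check: how well does a continuous risk score
# separate patients by outcome at a fixed time point?
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
risk = rng.normal(size=n)                        # continuous risk scores
# synthetic outcome: higher risk -> higher chance of an event within 3 years
event_3yr = rng.random(n) < 1 / (1 + np.exp(-risk))

auc = roc_auc_score(event_3yr, risk)             # discrimination at 3 years
print(f"3-yr AUC ~ {auc:.3f}")                   # paper reports 0.735 at 3 yr
```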
LRG functional enrichment analysis We performed Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses to understand the role of the prognostic markers in LUAD. GO analysis demonstrated that the prognostic indicators were enriched in human immune responses such as the antimicrobial humoral immune response, defense response to bacterium, and antimicrobial humoral response (Fig. 10A and B). KEGG analysis demonstrated that the most enriched pathways for the LRGs included neutrophil extracellular trap formation, among others (Fig. 10C and D). Network core genetic analysis We constructed a PPI network between the LRGs to investigate the interactions between them (Fig. 11A), in which the nodes represent genes or proteins and a link between 2 nodes indicates a connection between 2 proteins. Figure 11B depicts the 10 extracted core protein-interaction genes (CCNA2, DLGAP5, BUB1B, KIF2C, PBK, CDC20, NCAPG, ASPM, KIF4A, ANLN), where the red area indicates a core gene of the protein interactions. Samples with high rather than low expression of the core gene CCNA2 had worse prognoses (Fig. 11C). Plasma cells, resting CD4 memory T cells, monocytes, dendritic cells, and mast cells were more abundant in tumors with low CCNA2 expression, and there was greater infiltration of CD8 T cells, activated CD4 memory T cells, resting natural killer cells, and M0 macrophages in samples with high CCNA2 expression (Fig. 11D). Figures S1-S3, Supplemental Digital Content, http://links.lww.com/MD/J561, depict the survival curves and differential immune cell analyses of the other 9 core genes. Discussion One of the prevalent organelles in the cell, the lysosome contains diverse hydrolytic enzymes and is often used to separate substances that enter the cell from the outside and to digest local cytoplasm or organelles, such as proteins recognized by HSC70. [14] The lysosome can also rupture when the cell turns senescent, thereby digesting the entire cell and causing its death. Recent studies suggested that the lysosome may be essential in lung cancer development and apoptosis, [15] although the potential molecular mechanism of the lysosome as a participant in apoptosis, and of its related genes in LUAD, has not been reported. We believe that LRGs may play a greater role in LUAD development. Therefore, we constructed an LRG-associated clinical model of LUAD and identified 23 lysosome-associated genetic markers. To clarify the significance of LRGs in LUAD development, we investigated and validated the prognostic value of LRGs in LUAD by constructing a clinical risk score for LUAD-associated LRGs. We first developed an LRG-associated LUAD model by Cox risk regression, and the prognosis-related model was associated with 23 genes (SFTPB, TRAF3IP3, VCAN, SORT1, PYGB, GPC5, TM6SF1, AGRN, NEU1, AP1M2, GLB1L2, NPRL2, CDX2, PLEKHF1, TMEM106B, CCT2, CCT8, RAB3A, ARRB1, BCAN, MAP6, BLOC1S4, BTK), some of which may serve as LUAD biomarkers. For example, VCAN was associated with the proliferation and migration of a variety of cancers, [16][17][18] while BTK, a tyrosine kinase, was also strongly associated with hematologic tumors. [19] We conjectured that these 23 LRGs may be involved in the progression and metastasis of LUAD. Newman et al [20] found that TRAF3IP3 can upregulate the TGF-β signaling pathway, promote cellular autophagy, and activate CD40 to trigger the immune response by promoting NF-κB activation.
[21,22] Similarly, TM6SF1, NEU1, NPRL2, and TMEM106B, as key genes for lysosome formation, could enhance lysosomal function and thus further regulate tumor progression. [23,24] Epithelial-mesenchymal transition (EMT), a crucial stage in building the tumor microenvironment, plays an integral part in the progression of LUAD. Yuan et al found that Glypican-5, a tumor suppressor, could inhibit the process of EMT in lung cancer, so as to suppress tumor growth and metastasis. [25,26] In contrast, PYGB, NEU1, AGRN, ARRB1, MAP6, CDX2, and other genes have been shown to promote EMT and thus enhance LUAD proliferation and migration via the WNT pathway [27][28][29] or the PI3K/Akt pathway. [30,31] A major component of the extracellular matrix, VCAN has also been much studied in recent years; it is mainly involved in cell adhesion, proliferation, migration and angiogenesis, and acts as a key mediator of immunity and inflammation, promoting the synthesis and secretion of inflammatory factors (TNF-α, IL-6, NF-κB, etc.). [32] Consequently, these 23 lysosome-related genes affect lysosome construction, EMT, the surrounding immune microenvironment and inflammation, and are closely related to the development of LUAD. We also hope that these 23 LRGs can be used as reliable early prognostic markers for lung adenocarcinoma. In our model, the patients with LUAD were divided into high- and low-risk groups according to their risk scores, and we confirmed that the risk score level correlated with the patients' prognoses. The immune function analysis revealed that the low-risk group had much greater dendritic cell abundance than the high-risk group, which is consistent with the report by Iulianna et al. [15] GSVA revealed that cellular pathways such as the P53, carbohydrate metabolism, amino acid metabolism, and nucleotide metabolism pathways were significantly enhanced in the high-risk group, and the P53 signaling pathway can induce lysosomal rupture and therefore apoptosis. [33,34] Mitsuhiro Endoh demonstrated that FLCN inhibited lysosomal activity through TFE3 to prevent excessive glycoisomerization. [35] We also performed GSEA; GO and KEGG analyses revealed that the pathways for defense response to bacterium, humoral immune response, and antibacterial humoral response were enriched mainly in the high-risk group, suggesting that lysosomes are associated with in vivo immunity. The immune-related factors were also closely associated with LUAD development. We plotted a nomogram to assess the survival prognosis of clinical patients. Figure 6A demonstrates that the patients' prognoses were closely related to the risk score of our constructed model and the tumor pathological stage (P < .001), which can be used to accurately predict the 1-, 3-, and 5-year survival of patients with LUAD. In the ROC analysis, an AUC value of 0.739 was obtained for the nomogram, significantly higher than for the other factors. Additionally, we determined that the nomogram and the prognostic risk score were independent prognostic factors after the integration of common clinical features (Figs. 4, 6D and 6E). Notably, recent studies reported that immunotherapy achieved relatively positive results in patients with LUAD.
[36,37] Accordingly, we performed immune correlation studies on this model, including analyses of immune cell infiltration, immunophenotyping, and drug sensitivity. In particular, the immune microenvironment is an important aspect of cancer development, [38] and we identified significantly higher infiltration of B cells, T cells, and dendritic cells in the low-risk group, suggesting that a decrease in these cells may be associated with poor prognosis. As first-line therapeutic agents for non-small cell lung cancer, [39,40] cisplatin and paclitaxel significantly improve the survival prognosis of patients with LUAD. In this study, we determined that cisplatin and paclitaxel sensitivity, and hence the expected efficacy of these drugs, was significantly higher in the low-risk group than in the high-risk group. Boyle et al indicated that complex disease traits are driven by numerous small effects, and disease risk is likely to result from the propagation of a network built by the regulation of several genes. [41] The core genes in this network are likely to drive the surrounding genes and to play a crucial role in disease development. Therefore, we performed Cytoscape visualization analysis of the PPI network to identify 10 core genes (CCNA2, DLGAP5, BUB1B, KIF2C, PBK, CDC20, NCAPG, ASPM, KIF4A, ANLN) and determined that abnormal expression of these 10 genes was closely associated with LUAD. Our study has some limitations. First, the TCGA and GEO databases may not be representative of the clinical situation. Second, we were unable to add detailed experimental studies, such as validation analyses using cell lines, animal models, and a large number of clinical samples. In addition, our constructed model requires numerous clinical samples for immunohistochemical analysis to verify the validity of the prognostic markers. Future studies can explore the specific mechanisms of action of these genes in lung cancer and thereby select representative targets for LUAD treatment. Conclusion We performed a comprehensive bioinformatics analysis of LRGs in LUAD, established a model with 23 associated genes, and identified the early diagnostic and prognostic features of LRGs in LUAD. Subsequently, we analyzed the risk score, immune cell infiltration, gene mutations, drug sensitivity, pathway enrichment, and network core genes to validate the high practical value of our model and to provide new targets and perspectives for LUAD treatment.

Figure 3. LRG differential analysis and mutation analysis in LUAD. (A) Forest plot of differential genes (red represents upregulated genes, blue represents downregulated genes). (B and C) Mutation frequencies and co-mutation relationships of LRGs. LRGs = lysosome-related genes, LUAD = lung adenocarcinoma.

Figure 4. Construction of a lysosome-associated prognostic gene signature. (A and B) The coefficient and partial likelihood deviance of the prognostic signature. (C and D) Univariate and multivariate risk proportion regressions demonstrated that the risk score was significantly associated with prognosis. (E and F) ROC curves demonstrate the greater predictive power of the risk score. ROC = receiver operating characteristic.

Figure 5. (A and B) Validation of OS and PFS. (C and D) PCA verification of the training group and our model. PCA = principal component analysis, PFS = progression-free survival.
Figure 6. Nomogram analysis. (A) Nomogram. (B) Nomogram stability. (C) Nomogram ROC curve. (D and E) Univariate and multivariate risk proportion regressions demonstrate that the nomogram was significantly associated with prognosis. ROC = receiver operating characteristic.

Figure 9. Analysis of drugs and TIDE with risk score. TIDE = Tumor Immune Dysfunction and Exclusion.
2023-09-03T06:17:15.280Z
2023-09-01T00:00:00.000
{ "year": 2023, "sha1": "a42453f84e95540fc974f2bd8106bb8ea37ddb26", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1097/md.0000000000034844", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aaacb89e450d8016520fe6b0e0eac277fc0808a4", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
25740787
pes2o/s2orc
v3-fos-license
Additional Cardiovascular Risk Factors Associated with Excess Weight in Children and Adolescents. The Belo Horizonte Heart Study

RESULTS The prevalence rates for overweight and obesity were 8.4% and 3.1%, respectively. In relation to the students in the lower quartile (Q1) of the distribution of subscapular skinfold, the students in the upper quartile (Q4) presented a 3.7 times higher risk (odds ratio) of having elevated TC levels. Overweight and obese students had a 3.6 times higher risk of having elevated systolic blood pressure, and a 2.7 times higher risk of elevated diastolic blood pressure, when compared to normal-weight students. The less active students in Q1 of the distribution of MET presented a 3.8 times higher risk of having elevated TC levels compared to those who were more active (Q4).

CONCLUSION Students who were overweight, obese, or in the upper quartiles for other adiposity variables, as well as students with low levels of physical activity or a sedentary lifestyle, presented higher blood pressure levels and a lipid profile indicative of an increased risk of developing atherosclerosis.

KEY WORDS Obesity, blood pressure, motor activity, nutrition, child, adolescent.

The current epidemic of ischemic cardiovascular diseases (CVD) in the developing countries has brought an increased burden for public health in terms of Disability Adjusted Life Years (DALY).1 In Brazil, CVDs account for the highest burden of disease (9.6 DALY), followed by diabetes mellitus (5.1 DALY), both sharing excess weight as a common risk factor.2

Recent and profound lifestyle changes regarding dietary habits, characterized by a high intake of saturated fat and hypercaloric beverages together with low levels of physical activity, have resulted in a widespread epidemic of overweight and obesity and their consequent comorbidities, CVDs and non-insulin-dependent diabetes mellitus (NIDDM).1

Children are becoming increasingly vulnerable to excess weight, in a "Junior" version of the global adult obesity epidemic, and they even present with insulin resistance, type 2 diabetes mellitus3 and early-onset atherosclerosis,4 comprising the manifestations of the metabolic syndrome.

Recent studies have shown a decline in the prevalence of malnutrition and a predominance of excess weight in children and adolescents, with significant yearly increases in the latter. Wang et al5 verified a three-fold increase in the prevalence of excess weight in Brazil, whereas the prevalence of underweight showed a sharp decline to almost half of previous figures.
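As a hedged illustration of how odds ratios like those quoted above are derived, the following Python sketch computes an OR and its approximate 95% confidence interval from a 2x2 exposure/outcome table. The counts are invented for the example and are not the study's data.

import math

#                 elevated TC   normal TC
# Q4 subscapular      a=30         b=70
# Q1 subscapular      c=10         d=90
a, b, c, d = 30, 70, 10, 90

odds_ratio = (a * d) / (b * c)             # (30*90)/(70*10) ≈ 3.86

# Approximate 95% confidence interval via the log-OR standard error.
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")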
METHODS A school-based epidemiologic cross-sectional study was conducted in the city of Belo Horizonte, State of Minas Gerais, Brazil. Initially, twenty of the 521 public and private schools of Belo Horizonte were randomized. The second step was to choose two classrooms from each school to compose a sample of 1,450 students to be investigated. The sample size was calculated for the first two stages using the Kish method,6 based on the Student's t-test and pre-specifying the α and β errors as 0.05 and 0.20, respectively.

A Written Consent Form was given to the students. This study was approved by the Research Ethics Committee of UFMG and by the Research Ethics Committee of USP. All instruments were tested in a previous pilot study conducted at two schools (one public and one private).

The anthropometric measurements included weight, height, and central body fat distribution indicators: triceps, subscapular, and suprailiac skinfold thickness, and waist and hip measurements. The percentage of body fat was estimated using a Tanita bioelectric impedance scale. Height measurements were taken using a portable aluminum stadiometer, with a tolerance of approximately 0.1 cm, with the students standing without shoes. Body weights were taken with the Tanita scale, with a tolerance of approximately 0.1 kg, and systematically confirmed after every ten measurements with an OMS portable electronic scale. Skinfold thickness measurements were recorded to the closest millimeter using Lange skinfold calipers (Cambridge Scientific, Cambridge, MA).

Serum levels of total cholesterol (TC) and of the LDL-c and HDL-c lipoproteins were analyzed by Cobas Mira Plus (Roche Corp.) in accordance with the protocols of the National Cholesterol Education Panel. Since there were no triglyceride serum levels higher than 400 mg/dl, the LDL serum levels were calculated using the Friedewald formula (sketched below).

Systemic blood pressure (systolic and diastolic) readings were taken in accordance with the recommendations of the American Heart Association.7 The mean of the two blood pressure readings was used in the statistical analyses.

The parents or guardians of each child or adolescent under 14 years of age and the students were interviewed using a questionnaire to obtain information regarding demographic aspects, dietary habits, physical and sedentary activities, smoking, and family history of early-onset CVD.

The energy expenditure, evaluated in kilocalories - MET (metabolic cost, or unit of resting metabolic rate) - was obtained using a 24-hour record questionnaire of the student's physical activities, modified from the Spark questionnaire.8 For the statistical analyses, physical activities were classified into four intervals, or quartiles, according to energy expenditure levels. In this analysis, only the upper quartile (Q4) and the lower quartile (Q1) were considered, in order to obtain a more marked contrast between the most active and the least active students. A qualitative question was also included for the students to compare their daily level of physical activity with that of their counterparts.9

The questionnaire also evaluated sedentary activities, including the number of hours spent watching TV and videos, the number of hours spent playing computer games, videogames, and handheld computer games, and the number of hours spent listening to music (without dancing) for relaxation.9
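For reference, the Friedewald estimate mentioned above takes the following form when all concentrations are in mg/dl. This is a minimal sketch; the guard reflects the formula's known validity limit, which the authors note was never exceeded in this sample.

def friedewald_ldl(total_chol: float, hdl: float, triglycerides: float) -> float:
    # LDL-c = TC - HDL-c - TG/5 (all in mg/dl); not valid for TG >= 400 mg/dl.
    if triglycerides >= 400:
        raise ValueError("Friedewald formula is not valid for TG >= 400 mg/dl")
    return total_chol - hdl - triglycerides / 5.0

print(friedewald_ldl(180, 45, 100))  # -> 115.0 mg/dl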
A quantitative food frequency evaluation questionnaire was prepared based on a North American questionnaire developed for cardiovascular disease screening studies by Gladys Block et al and organized by Thompson and Byers10 in the Dietary Assessment Resource Manual. From the scale mentioned above, three levels of nutrition standard were established: adequate (a diet with low lipid levels and a high concentration of fruits and vegetables); inadequate (high fat content and low fruit and vegetable content); and very inadequate (very high fat content and very low fruit and vegetable content). The consumption of these foods was evaluated on a weekly and monthly basis using a diet record.

We considered students up to age 11 as children and students between 12 and 18 as adolescents. The social-economic variable was defined in accordance with the Brazilian Association of Market Research Institutes and grouped, for statistical analysis purposes, into an upper level (AB) and a lower level (CDE).

In order to compare our results with those of other studies previously conducted, and following the recommendations for epidemiologic inquiry evaluations of the prevalence of excess weight in children and adolescents and the inference of associations and risk of subsequent comorbidities, overweight was considered as a BMI between the 85th and 94th percentiles, and obesity as a BMI equal to or higher than the 95th percentile, according to age and gender.11,12 In the present study, students were considered to have "excess weight" when they were either overweight or obese, that is, anyone with a BMI over the 85th percentile. Those considered to have a "normal weight" had a BMI between the 5th and 85th percentiles. By defining the 85th percentile as the cut-off point for excess weight, overweight - a CVD risk factor12 - was included in the data, and there was a much higher number of overweight students than obese students in the sample.

Blood pressure was considered "high normal" (borderline) when the systolic blood pressure (SBP) or diastolic blood pressure (DBP) was between the 90th and 95th percentiles for the reference population; significant high blood pressure was defined as a systolic or diastolic pressure above the 95th percentile, and severe high blood pressure as a systolic or diastolic pressure above the 99th percentile for the reference population, or approximately 10 mmHg above the 95th percentile, in accordance with the National High Blood Pressure Education Program Working Group on Hypertension Control in Children and Adolescents.13 We considered "blood pressure higher than normal" or "high blood pressure" when the SBP or DBP levels were above the 90th percentile for the reference population.

Statistical Analysis - All significance tests were considered at a level of 0.05 for the type I error; that is, a level of 5% was used to reject the hypothesis that each parameter is equal to zero whenever the estimated parameter value exceeded the estimated standard error 1.96 times. Initially, the analyses were conducted in the Department of Statistics of the Institute of Exact Sciences, Universidade Federal de Minas Gerais, using the statistical analysis program SPSS for Windows (Release 8.0, Chicago, IL, USA). The inferential analysis was conducted in the Institute of Mathematics and Statistics, Universidade de São Paulo,14 using the Statistical Analysis Software - SAS (Release 8.02, SAS Institute Inc, Cary, NC, USA).
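The weight-status definitions above reduce to simple percentile cutoffs. The sketch below assumes the BMI-for-age percentile has already been obtained from age- and sex-specific reference tables, which are not reproduced here:

def weight_status(bmi_percentile: float) -> str:
    # Cutoffs as defined in the text: obese >= 95th, overweight 85th-94th,
    # normal weight 5th-85th percentile.
    if bmi_percentile >= 95:
        return "obese"
    if bmi_percentile >= 85:
        return "overweight"
    if bmi_percentile >= 5:
        return "normal weight"
    return "underweight"

def excess_weight(bmi_percentile: float) -> bool:
    # "Excess weight" in this study = overweight or obese (above the 85th percentile).
    return bmi_percentile >= 85

print(weight_status(96), excess_weight(96))  # -> obese True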
Risk factors were clustered according to the total of the four risk variables studied, that is, total cholesterol > 200 mg/dl, systolic blood pressure ≥ 90th percentile, diastolic blood pressure (DBP) ≥ 90th percentile, and BMI > 85th percentile.

Univariate analyses for the significance testing of the associations were conducted using the Student's t-test for continuous variables and the Chi-square test for discrete variables. To evaluate the strength of the associations, the odds ratio was calculated, which in a clinically focused study such as this one can be used as a risk proxy for both univariate and multivariate analyses. The adequacy assumptions of the linear model were evaluated graphically and using the Ryan-Joiner test (similar to Shapiro-Wilk) to verify the normality of the model error distribution.

The independent association of predictive variables with elevated blood pressure levels, overweight, and obesity was evaluated using backward stepwise logistic regression, keeping the variables at a maximum descriptive level of 0.01 in the null test of their effect, to determine which energy expenditure levels, sedentary activities, dietary habits, social-economic class, excess weight (BMI), and variables indicative of central body fat distribution would be the most predictive of adverse levels of total cholesterol and lipoprotein fractions and of systolic and diastolic blood pressure (a brief sketch of this procedure appears below). All regression analyses were controlled for various confounding factors, including skin color and age.

RESULTS From the initial 1,450 participants, five were excluded for inconsistent data. From the remaining 1,445 who responded adequately to the questions and had their anthropometric data collected, 1,382 (95.3%) agreed to have a blood sample drawn. The majority of the participants attended public schools (76%); 53% were female and 47% male; 56% were from the lower social classes (below middle class - CDE); 45% were white, 44.4% were dark-skinned, and 14.5% were black. For statistical analysis purposes, the white and dark-skinned participants were grouped together, since there were no significant differences between dark skin and the other variables. As such, the skin color variable was defined as white and black.

Table 1 shows the mean values, standard deviation (SD), minimum, maximum, median, and lower and upper quartiles of the variables studied. Tables 2 and 3 show the distribution (mean and standard deviation) of serum lipids, blood pressure, physical and sedentary activities, and adiposity variables according to the demographic characteristics of the participants.

In relation to the ranges classified as "desirable", "borderline", and "elevated", one third (32.9%) of these students presented total cholesterol levels higher than the values considered desirable (> 170 mg/dl), and one quarter (25.1%) also presented LDL-c levels higher than the values considered desirable (> 110 mg/dl). In relation to HDL-c, approximately one fifth (17%) of the students presented values considered undesirable (Table 4).

According to the levels found and the criteria of the NCEP/NHLBI/NIH (Table 5), once again one third (32.9%) of these students, those presenting elevated levels of total cholesterol (> 170 mg/dl), were classified in the range of moderate to serious risk of developing atherosclerotic disease in adulthood, as was close to one third (32.4%) in relation to those with elevated levels of LDL-c (> 105 mg/dl).
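Returning to the backward stepwise logistic regression described in the Methods above: a minimal sketch, assuming simulated data and the 0.01 retention threshold the authors report, looks like this. The predictor names are hypothetical stand-ins, not the study's variables.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(500, 4)),
                 columns=["met_q1", "sedentary_hrs", "bmi_p85", "class_ab"])
# Simulate an outcome driven by two of the four predictors.
logit_p = 1 / (1 + np.exp(-(1.2 * X["bmi_p85"] - 0.8 * X["met_q1"])))
y = rng.binomial(1, logit_p)

cols = list(X.columns)
while cols:
    model = sm.Logit(y, sm.add_constant(X[cols])).fit(disp=0)
    pvals = model.pvalues.drop("const")
    if pvals.max() < 0.01:
        break                          # all remaining predictors are significant
    cols.remove(pvals.idxmax())        # drop the weakest predictor and refit

print("retained:", cols)
print(np.exp(model.params.drop("const")))   # odds ratios of the retained variables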
The results revealed that 28.1% of the participants spent more than 5.5 hours on sedentary activities, 22.6% presented low levels of physical activity expressed as energy expenditure (located in the lower MET quartile), and 68.5% were described by themselves or their parents/guardians as less active than their counterparts, who were considered much more active (31.5%). Generally speaking, the results show an average of 4 hours per day spent on sedentary activities: 2.8 hours watching TV and 0.3 hours playing video or computer games. The mean energy expenditure (MET/day) was 627.8.

The majority of the students presented dietary habits characterized by the consumption of high-fat foods. In relation to the group of fruits, vegetables, and fibers, 64.8% presented an intake classified as "very inadequate", 35.0% an inadequate intake, and 0.0% reported an adequate intake of these macronutrients. A little more than one quarter (26.7%) ate potato chips or popcorn almost every day (> five days per week), 14.5% three to four days per week, and 22.8% one to two days per week. Roughly one quarter (25.9%) ate snack foods almost every day (> five days per week), 18.0% three to four days per week, and almost one third (31.5%) one to two days per week. More than one third (37.5%) ate French fries one to two days per week, 17.9% three to four days per week, and 14.3% almost every day (> five days per week). A little more than one half (58.3%) ate candies and chewing gum almost every day (> five days per week), 14.7% three to four days per week, and 13.7% one to two days per week. Roughly one third (32.9%) drank regular soft drinks almost every day (> five days per week), another third (32.8%) one to two days per week, and one fifth (21.0%) three to four days per week, with the majority (86.6%) drinking them at least once a week. Almost one half (48.0%) ate fruits and vegetables every day.

Prevalence rates were 8.4% for overweight, 3.1% for obesity, and 11.5% for excess weight (BMI > 85th percentile).

Participants with a regular intake of very high-fat foods and very low levels of fruits, vegetables, and fibers did not present any significant differences, when compared to those with intakes of high-fat foods and low levels of fruits, vegetables, and fibers, in relation to low levels of HDL-c, elevated levels of total cholesterol and LDL-c, elevated systolic or diastolic blood pressure, or the central body fat distribution variables, except for the subgroups of subscapular and suprailiac skinfold thickness measurements in children (Table 6).

The chance of a student with a physical activity level located in the 1st interval, or lower quartile (Q1), of the calorie expenditure distribution, also classified as "less active" (calorie expenditure < 100 MET/day), having high cholesterol (> 200 mg/dl) is 3.80 times higher than the chance of another student classified as "more active", located in the upper quartile, Q4 (calorie expenditure > 964 MET/day) (Table 6). A student located in the 4th interval (upper quartile, Q4) of the subscapular skinfold thickness distribution classified in quartiles has 3.68 times more chance of having high cholesterol than another student located in the 1st interval (lower quartile, Q1) of this skinfold (Table 6). In relation to LDL-c, the chance of a student with central body fat distribution, that is, located in the 4th interval (upper quartile, Q4) of the "sum of skinfold thickness measurements", having elevated levels of LDL-c (> 130 mg/dl) is 3.29 times higher than the chance of another student located in the 1st interval for this variable (Table 6). In regard to HDL-c, the chance of a student with a BMI value lower than the 85th percentile having "desirable levels of HDL-c" is 2.20 times higher than the chance of another student with excess weight (> 85th percentile) (Table 6). Further, the chance of a student located in the 1st interval of the "waist-hip ratio" classified in quartiles having "desirable levels of HDL-c" is 2.45 times higher than the chance of another student located in the 4th interval of this quotient when both are in the last interval of the sum of skinfold measurements (Table 6).
Students with excess weight (BMI > 85th percentile) presented 3.60 times more chance of having high systolic blood pressure and 2.70 times more chance of having high diastolic blood pressure (> 90th percentile) (Table 4). Also, students enrolled in public schools presented 3.95 times more chance of having a systolic blood pressure above the 90th percentile than those enrolled in private schools (Table 6).

Students with "excess weight" presented 1.99 times more chance of being located in the higher social-economic classes (AB) than those with a BMI < 85th percentile, 1.86 times more chance of being male, and 1.78 times more chance of being "less active than the others" when compared with females and with those considered "more active than the others". However, the age group (child) only had a significant association with "excess weight" when the skin color variable was included in the regression model (Table 6).

There were more female students from the higher social-economic classes (AB) with a body fat percentage in the upper quartile (Q4) of its distribution than male students. Compared to a student who is "more active than the others", a less active student was estimated to have 1.18 times more chance of being in the upper quartile of the distribution of the percentage of body fat (Table 6).

With the exception of dark-skinned children, more female students were located in the upper quartile of the distribution of subscapular skinfold thickness than male students. Children with dietary habits classified as "very inadequate consumption of fruits, vegetables and fibers" presented 1.18 times more chance of presenting subscapular skinfold thickness values in the upper quartile of their distribution compared to those with an "inadequate intake". Students considered "less active than the others" presented 1.23 times more chance of being located in the upper quartile of the distribution of this skinfold than those who were more active (Table 6).

Once again, more female students were found to have suprailiac skinfold thickness measurements in the upper quartile of their distribution than male students. Children with a diet classified as "very inadequate intake of fruits, vegetables and fibers" presented 1.18 times more chance of presenting suprailiac skinfold thickness values in the upper quartile of their distribution compared to those with an "inadequate intake". Participants who were "less active than the others" presented 1.25 times more chance of being located in the upper quartile of the distribution of this skinfold than those who were more active (Table 6).

Participants from social-economic group AB had 1.22 times more chance of presenting values of the sum of the three skinfold measurements studied in the upper quartile of their distribution than those from the lower social-economic classes (groups CDE). Consistent with the other variables indicative of central body fat distribution, participants who were "less active than the others" presented 1.23 times more chance of being located in the upper quartile of the distribution of this variable than those who were more active. The female participants also presented higher values for this variable than the males (Table 6).
There were more male students with waist-hip ratios (WHR) located in the upper quartile of their distribution than females. Students located in the upper quartile of the distribution of WHR did not present any significant differences compared to those located in the lower quartile in relation to the adiposity and physical and sedentary activity variables (Table 6). No significant differences were observed in the multivariate analysis between the groups for the other variables.

DISCUSSION Consistent with other studies, we found higher serum lipid levels in the female participants,15,16 with an increase roughly between the ages of nine and eleven. However, unlike other studies,15 we found that the white students had higher levels of TC, LDL-c, and HDL-c than the black students. Higher levels of serum lipids, overweight, obesity, and central body fat distribution were found among the students from the upper social-economic classes and those enrolled in private schools, as frequently occurs in countries in epidemiological transition.

When compared with the results of the Lipid Research Study,16 there were both positive and negative discrepancies in the mean values of serum lipids, which in most cases were not statistically significant. Despite the presence of a lipid profile that is harmful to health, similar to and at times even more alarming than that of the North American population in terms of the figures found for the risk of developing atherosclerotic disease, these figures are still lower than those reported in the cities of New York and Bogalusa, with figures of slight risk between 28% and 33%, moderate risk between 17% and 22%, and serious risk between 20% and 30%.17

For the comparison of results, we chose the studies conducted in the cities of Rio Acima, MG,18 and Bento Gonçalves, RS,19 because their methodology was similar to ours. Using an identical design and methodology, we replicated the Belo Horizonte Heart Study in the city of Florianópolis, SC,20,21 thus facilitating the comparison of the results found in Florianópolis with those of the main study from Belo Horizonte.

In regard to the prevalence of elevated levels of cholesterol and lipoprotein fractions, the students from Belo Horizonte presented lower rates of elevated levels of total cholesterol and LDL-c than those found in the cities of Rio Acima,18 Bento Gonçalves19 and Florianópolis,20,21 and higher rates of undesirable levels of HDL-c than those of Bento Gonçalves19 and Florianópolis.20,21
After adjustment for other variables, significant odds ratios were found for desirable HDL-c levels in individuals with "normal weight", similar to the results of the Bogalusa Heart Study.22 Also, as described in this North American cohort,22 an inverse relation between desirable HDL-c levels and the WHR was demonstrated in this Brazilian study (Table 6). In agreement with other population studies,23 we did not find significant differences in the diastolic blood pressure readings for males and females, but adolescents presented significantly higher values in relation to children, and blacks in relation to whites. Significant differences were found between the values of elevated systolic blood pressure for black male adolescents when compared to white female children, and for public school students; however, no difference was found across social-economic classes (Table 3). In the present sample, 12% of the students presented higher-than-normal blood pressure readings (systolic and/or diastolic > 90th percentile). This prevalence of high blood pressure was lower than that found in another Brazilian sample by Perone et al (15%)24 and the same as that found in the city of Florianópolis (12%).20,21

As was expected, and similar to the results found by Nielsen et al25 (OR = 3.99), our students with "excess weight" (BMI > 85th percentile) presented more chance of having elevated (> 90th percentile) systolic or diastolic blood pressure than those with "normal weight" (BMI < 85th percentile) (Table 6). The public school students presented more chance of having high systolic blood pressure than those enrolled in private schools, whereas no significant differences were observed between the extreme quartiles of the distribution of the adiposity variables and high blood pressure (Table 6). This odds ratio was higher than those found by Styne (2.40)26 and close to that verified for systolic blood pressure levels in developing countries (4.00).27

Our prevalence rates for overweight (8.4%), obesity (3.1%), and excess weight were lower than the rates for Brazil as a whole, several countries in Latin America, and the United States.28,29 Data collected from another source one year before our study and representative of Brazil also showed the same prevalence for overweight, but higher rates for obesity and excess weight.30 In Florianópolis, higher prevalence rates for overweight, obesity, and excess weight were found in a sample of 1,050 students between the ages of six and eighteen.20,21 Compared to another study conducted five years earlier with students between the ages of six and eighteen in Belo Horizonte,31 with the same design, methodology, and researchers as this study, our results indicate a clear 13% increasing trend during this period in the rates of overweight and obesity in this population group.

As was expected, participants with excess weight and those with central body fat presented higher systolic and diastolic blood pressure readings than those with "normal weight" and no central body fat (Table 6). The lack of a significant association between WHR and the majority of the variables studied could be explained by a larger increase in shoulder circumference compared to abdominal circumference during the growth cycle.
In the present study, black male children enrolled in public schools and from the lower social-economic classes (CDE) presented higher values of energy expenditure (MET) when compared to white female adolescents enrolled in private schools and from the higher social-economic classes (Tables 2 and 3). Adolescents were more sedentary than children, and those from the lower social-economic classes spent more time watching television than those from the higher classes (Tables 2 and 3). More participants in this study spent long periods of time on sedentary activities (28% > 5.5 hr/day) than recorded for the world population (17% > fifteen years of age),32 fewer than in the Brazilian population as a whole (50%-79% > twelve years of age), but a figure similar to that of the adolescent subgroup of the Brazilian population (76% > 5.5 hr/day).33,34 Compared to North Americans (3.5 hr/day)35 and another Brazilian sample (5 hr/day),36 our students, including the adolescents, spent less time watching television (2.8 hr/day). The results from the study arm conducted in the state of Santa Catarina20,21 showed a higher percentage of students with little physical activity (very low levels of calorie expenditure, MET/day: 40%) compared to those in Belo Horizonte (lower-quartile calorie expenditure, MET/day: 22.8%).

Both the comparative scale of physical activity and the energy expenditure levels indicated that low levels of these variables were more frequently found in those with excess weight than in those with "normal weight". Similar to the findings of other authors,25 we found significant odds ratios relating excess weight (BMI > 85th percentile) and increased skinfold thickness measurements to low levels of physical activity (Table 4). In agreement with Kemper et al37 (OR = 0.81), we found that physical activity has a protective effect against excess weight (OR = 0.61). Unlike other studies,38 our students with excess weight did not spend more hours watching television than those with normal weight levels.

Our findings were similar to those in Florianópolis regarding the consumption of foods affecting cardiovascular health. Thus, while 88.4% of the Belo Horizonte students consumed foods high in saturated fat and not one participant reported an adequate intake of fruits, vegetables, and fibers, these values in Florianópolis were 79% and 0.3%.20,21 In Dennison et al's study,39 just as in the present study, participants with excess weight did not present higher energy intake than their controls.

Consistent with other studies,39 we found that participants with excess weight ate fruits, vegetables, and fibers more frequently than those with normal weight levels or underweight. It is possible that the participants with excess weight and their parents exaggerated when reporting fruit and vegetable intake and under-reported the intake of high-fat foods, owing to the concepts of "healthy and unhealthy foods" promoted by various media and health professionals.

As reported in other studies, we identified among the students a cluster of risk factors for developing metabolic syndrome.40 We verified that practically one in every five participants (19.3%) presented a cluster of four cardiovascular risk factors in the same individual: elevated total cholesterol levels (> 200 mg/dl), BMI > 85th percentile, systolic blood pressure > 90th percentile, and diastolic blood pressure > 90th percentile.
The lack of a significant association between dietary habits and other variables could be partially due to attenuation caused by the low accuracy of the data regarding dietary habits and physical activity.

CONCLUSIONS The majority of the students presented dietary habits considered harmful to health, characterized by the consumption of alarming quantities of junk food and foods with high levels of saturated fat, together with low fruit, vegetable, and fiber intakes. Many students presented very low levels of physical activity and long periods of time spent on sedentary activities, mainly watching television, and more than half of the participants in the sample were individuals considered "much less active than the others". We found a disturbing rate of "excess weight" and central body fat distribution, which, together with patterns of low levels of physical activity and a sedentary lifestyle, were associated with high blood pressure, elevated levels of total cholesterol and LDL-c, and low HDL-c levels.

Table 2 - Mean values of the distribution of lipids, blood pressure, BMI, central body fat, and physical and sedentary activities according to gender, age group, and skin color. TC: total cholesterol (mg/dl); LDL-c: low-density lipoprotein (mg/dl); HDL-c: high-density lipoprotein (mg/dl); SBP: systolic blood pressure (mmHg); DBP: diastolic blood pressure (mmHg); BMI: body mass index; Subscapula: thickness of the subscapular skinfold (mm); Suprailiac: thickness of the suprailiac skinfold (mm); % of fat: percentage of body fat; WHR: waist-hip ratio; MET: unit of resting metabolic rate; Sedentary: hours/day spent on sedentary activities; TV/Video: hours/day spent watching TV and/or videos.

Table 3 - Mean values of the distribution of lipids, blood pressure, BMI, central body fat, and physical and sedentary activities according to social-economic level and type of school. Abbreviations as in Table 2.

Table 6 footnotes: Q1 = lower quartile (or interval) of the distribution of the independent variables; Q4 = upper quartile (or interval) of the distribution of the independent variables; 1 serum lipid levels, blood pressure, excess weight (overweight and obesity), and central body fat distribution (dependent variables), in bold and highlighted; the other variables are the independent ones; 2 resting metabolic rate (1 MET = 1 kcal/kg/h); 3 triceps + subscapular + suprailiac skinfolds;
4 desirable levels of HDL-c: up to 10 years of age = > 40 mg/dl; 10 to 18 years = > 35 mg/dl; 5 percentile; 6 higher social-economic class (AB) versus lower social-economic class (CDE); 7 less (physically) active than the other students versus more active than the other students; 8 very inadequate consumption of fruits, vegetables, and fibers versus inadequate consumption (according to the Block score).
2018-04-03T06:13:44.229Z
2006-06-01T00:00:00.000
{ "year": 2006, "sha1": "d860cdd80668117d903a64cd82a7c5e564595d63", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/abc/a/nMpjYjdNX3m8rJWcZZFPhNP/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "eb91a40f0307c94ab60f0223ff91930e5e090472", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
211565687
pes2o/s2orc
v3-fos-license
Development of Oral Care Chip, a novel device for quantitative detection of the oral microbiota associated with periodontal disease

Periodontal disease, the most prevalent infectious disease in the world, is caused by biofilms formed in periodontal pockets. No specific bacterial species that can cause periodontitis alone has been found in any study to date; rather, several periodontopathic bacteria are associated with the progression of periodontal disease. Consequently, it is hypothesized that dysbiosis of the subgingival microbiota may be a cause of periodontal disease. This study aimed to investigate the relationship between the subgingival microbiota and the clinical status of periodontal pockets in a quantitative and clinically applicable way with the newly developed Oral Care Chip. The Oral Care Chip is a DNA microarray tool with improved quantitative performance that can be used in combination with competitive PCR to quantitatively detect 17 species of subgingival bacteria. Cluster analysis based on the similarity of each bacterial quantity was performed on 204 subgingival plaque samples collected from periodontitis patients and healthy volunteers. A significant difference in the numbers of total bacteria, Treponema denticola, Campylobacter rectus, Fusobacterium nucleatum, and Streptococcus intermedius in any combination of the three clusters indicated that these bacteria gradually increased in number from the stage before the pocket depth deepened. Conversely, Porphyromonas gingivalis, Tannerella forsythia, Prevotella intermedia, and Streptococcus constellatus, which had significant differences only in limited clusters, were thought to increase in number as the pocket depth deepened, after periodontal pocket formation. Furthermore, in the clusters in which healthy or mild periodontal disease sites were classified, there was no statistically significant difference in pocket depth, but the numbers of bacteria gradually increased from the stage before the pocket depth increased. This means that quantitative changes in these bacteria can be a predictor of the progression of periodontal tissue destruction, and this novel microbiological test using the Oral Care Chip could be effective at detecting dysbiosis.

Introduction Periodontal disease is an infectious disease caused by oral bacteria that inhabit biofilms formed in the subgingival pocket. It is known that the bacterial species forming subgingival plaques are grouped into several microbial complexes [1]. It has been hypothesized that the complex composed of Porphyromonas gingivalis, Tannerella forsythia, and Treponema denticola is responsible for the initiation and progression of periodontal disease, since these bacterial species are frequently isolated from severe periodontal lesions [2]. However, a meta-analysis has shown that P. gingivalis is not always found in the subgingival microbiotas of deep periodontal pockets [3]. In addition, no specific bacterial species that can cause periodontitis alone has been found in any animal model study to date. A hypothesis has therefore been proposed that periodontal disease is caused not by several specific bacterial species but by the interactions between the host and the dysbiotic subgingival microbiota [4]. Although detecting these specific bacteria is a key tool for diagnosing periodontal disease and assessing treatment effectiveness, existing methods for acquiring microbiota data are not quantitative or clinically applicable.
This study aimed to investigate the relationship between the subgingival microbiota and the clinical findings of periodontal disease in a quantitative and clinically applicable way. The analysis was performed using a large-scale-sample cluster analysis covering healthy and periodontal disease sites, based on the similarity of the microbiota proportions, and by comparing the subgingival microbiota proportions before and after periodontal treatment. Several detection methods using anaerobic culture, immunofluorescent antibodies, DNA probes, and the polymerase chain reaction (PCR) have been developed [5,6]. There have also been many reports using competitive PCR to detect bacteria quantitatively [7,8]. However, it was previously difficult to accurately determine the number of each species of bacterium in the subgingival microbiota because of several technical difficulties associated with methods that detect multiple targets [9]. Therefore, by applying recently developed methods whose quantitative performance is improved by combining microarrays and competitive PCR [10], more bacterial species can be explored in this study. The Oral Care Chip is a new device developed to provide simultaneous and quantitative analysis of 17 subgingival bacteria to acquire microbiota data.

Oral Care Chip We first developed a novel DNA microarray, the Oral Care Chip, containing DNA probes to measure the total number of bacteria and detect 17 species of specific bacteria assumed to be responsible for the initiation and progression of periodontal disease [1,2]. The sequences of the DNA probes for determining the total number of bacteria were designed according to sequences in the conserved V3 region of 16S rRNA; the sequences of the specific probes for each bacterial species were selected from sequences in the species-specific V3 region of 16S rRNA based on the National Center for Biotechnology Information (NCBI) database (National Library of Medicine, Bethesda, MD, USA) (Table 1). The specificity and hybridization efficiency of each probe on the Oral Care Chip were confirmed individually. In this process, it became clear that the probes for Fusobacterium nucleatum subspecies animalis and F. nucleatum subsp. nucleatum (probes no. 06 and no. 07) hybridized with each other, because the DNA sequences of these subspecies have significant similarity. The synthesized probes were then mounted onto a fibrous DNA chip platform, Genopal™ (Mitsubishi Chemical, Tokyo, Japan), as previously described [11]. The probes were accordingly assigned to five spots on one microarray.

Competitive PCR and hybridization As the internal control for amplification, we synthesized an artificial oligonucleotide target mimic with a 463-bp sequence that hybridizes to the control probe and to the PCR primer sets at both ends. Subsequently, it was ligated into the pUC19 vector (S1 Fig; S1 Table). Competitive PCR and hybridization were then carried out in the following steps. The V3 forward primer (5′-Cy5-TACGGGAGGCAGCAG-3′) and the V4 reverse primer (5′-TACCIGGGTATCTAATCC-3′) were used for competitive PCR. PCR was conducted using 0.5 amol of control DNA, 20 pmol of each primer, 10 μl of 2× PCR solution Premix Ex Taq™ Hot-start version (Takara, Shiga, Japan), and template (as described below), in a total volume of 20 μl. The reaction was started with an initial denaturation of 1 min at 95°C, followed by 40 cycles of 10 s at 98°C, 30 s at 55°C, and 20 s at 72°C. The amplicon length was approximately 440 bp.
The PCR product was directly suspended in 180 μl of hybridization solution (48 μl of 1 M Tris-HCl pH 7.5, 48 μl of 1 M NaCl, 20 μl of 0.5% Tween-20, and 64 μl of Milli-Q water), hybridized with the probes on the Oral Care Chip at 50°C for 16 h, and washed with the Genopal™ instrument system (Mitsubishi Chemical). Hybridization signal intensity (SI) was determined using multi-beam excitation technology and the Genopal reader (Mitsubishi Chemical). The SI used for subsequent analyses was obtained by deducting the median SI of the background spots from the median SI of the five spots for each probe. The background spots were spots with no probe mounted in them. For each array, the median SI of the background spots + 3σ was treated as the detection limit. As PCR templates, MSA-1003™, containing mixed genomic material of 20 strains (American Type Culture Collection, Manassas, VA, USA), plasmid DNA, or subgingival plaque samples were used.

Quantitative detection of 17 species of oral bacteria As the first step of quantitative detection, we measured the total amount of 16S rRNA genes using the standard calibration curve plotted in reference to a previous method [10]. Next, we determined the number of each bacterial species using each species-specific probe SI corrected with the hybridization affinity ratio (Table 1, S3 Fig). Data from the Ribosomal RNA Database version 5.5 (the Schmidt Laboratory at the University of Michigan, Ann Arbor, MI, USA) were used to determine the number of copies of 16S rRNA per genome (Table 1); in the absence of appropriate information, the median value for the genus was used. To calculate the total number of bacteria in the samples, the number of 16S rRNA copies per genome was assumed to be 4.5, calculated as a weighted average from a study in which the predominant and prevalent bacterial species in the saliva of orally healthy subjects were determined by pyrosequencing [12]. The bacterial counts were then obtained by converting the measured molar amount of DNA into molecules using Avogadro's constant and dividing by the number of 16S rRNA copies per genome.

Verification of the validity of Oral Care Chip To verify the validity of the Oral Care Chip with respect to six representative periodontopathic bacterial species, real-time PCR was performed using the 7500 Fast Real-time PCR System and TaqMan™ Fast Universal PCR Master Mix, no AmpErase™ UNG (Applied Biosystems, Foster City, CA, USA). There were 121 measurement target samples from before and after SRP treatment; three samples with insufficient residual volume for verification were excluded. The compositions of the reagents used were as specified by the instruction manual, and 1 μl of template was analyzed. The experiments were performed under the following conditions: 20 s at 95°C, followed by 40 cycles of 3 s at 95°C and 30 s at 60°C for each bacterium, or followed by 40 cycles of 15 s at 95°C and 60 s at 60°C for the detection of the total number of bacteria. The probes and primers used were as described elsewhere [13][14][15][16], with the exception of those for T. forsythia, for which 1 base at the 5′-end of the reverse primer was deleted because that particular base varied among the different strains (S2 Table). As with the Oral Care Chip probes, it was confirmed that the real-time PCR probes completely matched the sequences of the standard strains.
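To make the count conversion in the quantitative detection step above concrete, here is a hedged sketch: a molar amount of 16S rRNA genes, read off the standard curve, is converted to a bacterial count via Avogadro's constant and the assumed 4.5 rRNA gene copies per genome. The per-species helper mirrors the SI-ratio step; the direction of the hybridization-coefficient correction is our assumption, not a documented detail.

AVOGADRO = 6.022e23          # molecules per mol
COPIES_PER_GENOME = 4.5      # weighted average used by the authors

def bacteria_count(rrna_amount_mol: float,
                   copies_per_genome: float = COPIES_PER_GENOME) -> float:
    # Convert moles of 16S rRNA genes to gene copies, then to genomes (cells).
    gene_copies = rrna_amount_mol * AVOGADRO
    return gene_copies / copies_per_genome

# e.g. 0.5 amol (5e-19 mol) of 16S rRNA genes:
print(f"{bacteria_count(5e-19):.3e} cells")   # ~6.7e4 cells

def species_count(total_count: float, si_ratio: float,
                  hybridization_coeff: float) -> float:
    # Per-species count = total count x probe-specific SI fraction, corrected
    # by the experimentally determined hybridization coefficient (assumed here
    # to act as a divisor).
    return total_count * si_ratio / hybridization_coeff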
Standard curves for each bacterium were generated using the following DNA samples: P. gingivalis ATCC® 33277D-5, T. forsythia ATCC® 43037D-5, T. denticola ATCC® 35405D-5, Prevotella intermedia ATCC® 25611D-5, Aggregatibacter actinomycetemcomitans ATCC® 700685D-5, and ATCC® MSA-1002 (American Type Culture Collection) for the total number of bacteria. The detection limit was set at a threshold of 35 cycles.

Clinical samples This study was conducted in accordance with the principles of the Declaration of Helsinki and was approved by the ethics committee of Osaka University Graduate School of Dentistry (approval number: H20-E9). Prior to the selection of subjects, we explained the purpose of this study and its possible disadvantages in detail, both verbally and in writing, and then obtained written informed consent. A total of 64 patients with periodontal disease (25 males and 39 females; mean age, 47.9 ± 14.8 years) who visited the Osaka University Dental Hospital at first presentation and 72 healthy volunteers (46 males and 26 females; mean age, 25.7 ± 6.1 years) participated in this study (Table 2). We initially examined their periodontal tissue and recorded probing depth (PD), bleeding on probing (BOP), gingival index (GI), and plaque index (PlI) as clinical parameters. Two samples were taken from each patient with periodontal disease: one severely diseased site with deep periodontal pockets (PD ≥ 6 mm) and one moderately diseased site (4 mm ≤ PD < 6 mm), either in a neighboring tooth or in the contralateral tooth, respectively; in the healthy volunteers, one healthy site with PD < 3 mm and GI < 1 was selected for plaque sampling. For two of the patients, two additional sets (four additional samples) were also collected. Among the patients with periodontal disease, we obtained samples from the 31 patients who agreed to sampling after scaling and root planing (SRP) (a total of 62 samples). Samples were obtained from periodontal pockets with #40 absorbent points (Dentsply Maillefer, Ballaigues, Switzerland). Next, 200 μl of distilled water was added to the samples, which were vortex-mixed for 20 s. The samples were then stored at −80°C. Prior to the analyses, the samples were pre-heated at 80°C for 10 min, and 1 μl of each 200-μl sample was used as the DNA template for PCR.

Statistical analysis All analyses were conducted using R version 3.1.5 (R Foundation for Statistical Computing, Vienna, Austria). To test the correlation between the Oral Care Chip and real-time PCR, the Pearson correlation coefficient was used. Similarities between microbiotas were analyzed with Ward's method for clustering, and the differences in PD and in the numbers of each bacterial species among the respective clusters were examined with the Steel-Dwass multiple comparison test. A Wilcoxon signed-rank test was used to assess changes in PD and in the number of each bacterium before and after periodontal disease treatment. Among the clinical information, BOP, GI, and PlI were treated as categorical variables; the Chi-square test was used to compare the three clusters, and the McNemar test was used for comparisons before and after treatment. The significance level was set to 0.05 for all tests.

Evaluation of the quantitative performance of the Oral Care Chip To produce the standard curves used to calculate the total counts of bacteria, input/output ratios were plotted (S2 Fig), and the SI obtained in the MSA-1003™ evaluation was analyzed. The data obtained by the Oral Care Chip and by real-time PCR were highly correlated, as indicated by the significant, low p-values (Fig 1). Some samples, however, yielded different results with the two methods, probably because the probes had different specificities for strains other than the representative strains, as revealed by BLAST search analysis (NCBI).
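Before the cluster-analysis results below, the Ward's-method clustering described in the statistical analysis section can be sketched as follows. The counts are simulated, and the log transform is our assumption rather than a documented preprocessing step.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
counts = rng.lognormal(mean=8, sigma=2, size=(204, 17))   # toy: 204 samples x 17 species

features = np.log10(counts + 1)
tree = linkage(features, method="ward")                    # Ward's minimum-variance linkage
clusters = fcluster(tree, t=3, criterion="maxclust")       # cut the dendrogram into 3 clusters

for k in (1, 2, 3):
    print(f"cluster {k}: {np.sum(clusters == k)} samples")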
Cluster analysis of subgingival microbiota Cluster analysis was then performed to classify the 132 samples obtained from patients at their first visit and the 72 samples from healthy volunteers, based only on the quantities of the 17 species of bacteria (Fig 2, Table 3). The clinical parameter values shown in Table 3 were calculated from the constituent samples of each cluster after classification and were not used for the cluster analysis. The subgingival microbiota obtained from patients at their first visit was classified into at least three clusters according to the similarity of the quantities of bacteria. There was a significant difference in PD between clusters 1 and 3 and between clusters 2 and 3, but not between clusters 1 and 2. The BOP-positive rate was significantly higher in cluster 3 than in clusters 1 and 2, and GI and PlI likewise tended to be significantly higher in cluster 3 than in clusters 1 and 2. Analysis of the characteristics of the microbiota showed a significant difference in the numbers of total bacteria, T. denticola, C. rectus, F. nucleatum, and Streptococcus intermedius in every combination of the three clusters. It also revealed that the proportions of T. denticola, Campylobacter rectus, and F. nucleatum relative to the total number of bacteria were particularly high in cluster 3. Meanwhile, the numbers of P. gingivalis, T. forsythia, P. intermedia, and S. constellatus differed significantly only between clusters 1 or 2 and cluster 3, and the proportions of these bacteria relative to the total number of bacteria were particularly high in cluster 3. Overall, cluster 1 showed a healthy profile, with a mean PD of 2.9 mm and low amounts of P. gingivalis, T. forsythia, and T. denticola. Cluster 2 showed an early-stage periodontal disease profile, with a mean PD of 3.4 mm; the number of bacteria in cluster 2 was higher than that in cluster 1 for all species tested in this study. Cluster 3 showed an advanced-stage periodontal disease profile, with a mean PD of 6.2 mm and the highest amounts of P. gingivalis, T. forsythia, and T. denticola.

Changes in microbiota after periodontal treatment Subgingival plaque samples taken before and after SRP (n = 62) were compared using the Oral Care Chip. The clinical findings at the after-SRP sites were characterized by a reduction in the clinical parameters that indicate the presence of inflammation (Table 4). The low p-values for the bacterial count data indicated a correlation between the clinical parameters and the microbiota. After periodontal disease treatment, the microbiota showed a pattern similar to that observed for healthy sites, and the abundance ratio of some strains, including streptococci, increased.

Discussion A previous study [17] showed that poor oral hygiene increases the amount of dental plaque (bacterial plaque) that attaches to the surface of teeth and changes the composition of the subgingival microbiota, leading to inflammation of the gingiva. Hence, bacteria were confirmed to be the major cause of periodontal disease. In particular, P. gingivalis, T. forsythia, and T.
denticola of the red complex, a group of bacteria frequently isolated from deep periodontal pockets in patients with periodontal disease, were considered responsible for the initiation and progression of periodontal disease [2]. Studies from the viewpoint of dysbiosis have increased along with advances in microbiota analysis technology, and it has recently been hypothesized that periodontal disease is caused by the interactions between the host and the dysbiotic subgingival microbiota [4]. To monitor these microbiota changes and confirm their association with periodontal disease, simultaneous and quantitative detection of the multiple bacteria that make up the subgingival microbiota is necessary. Existing methods for acquiring microbiota data are not quantitative or clinically applicable. Recently, a new method for the simultaneous detection of multiple bacteria has been developed [10]. In this study, we demonstrated that the Oral Care Chip is quantitative, as shown by the comparison with real-time PCR. This method can easily measure multiple bacterial species at the same time in a clinically applicable way.

(Table 3 notes: values in parentheses indicate the standard error. a P-value < 0.05 by the Steel-Dwass test between clusters 1 and 3 and between clusters 2 and 3; for bacterial counts, the p-value was adjusted with the Bonferroni correction. b P-value < 0.05 by the χ2 test for BOP, GI, and PlI in any combination of the three clusters.)

Using this method, we have shown for the first time, to the best of our knowledge, the composition of the subgingival microbiota monitored in large-scale samples and linked to the PD and BOP values obtained. This study demonstrated that the subgingival microbiota obtained from patients at their first visit can be classified into at least three clusters based on similarities in the number of each bacterium. These results indicate that the similarities in the number of each bacterium are associated with clinical findings and that bacterial testing to diagnose periodontal disease is effective. The observations that samples with severe periodontal disease were enriched in cluster 3 and that those with moderate periodontal disease were enriched in cluster 2 indicate that the numbers of total bacteria, T. denticola, C. rectus, and F. nucleatum gradually increased from the stage before PD deepens, because there was a significant difference in every combination of the three clusters. Conversely, P. gingivalis, T. forsythia, P. intermedia, and S. constellatus were expected to increase after PD became deep to some extent, because they were significantly higher in cluster 3 than in clusters 1 and 2. Hence, these changes in the microbiota can predict the progression of periodontal tissue destruction [18]. Moreover, a comparison of clusters 1 and 2 revealed that the microbiota patterns were different even in samples with no statistical difference in PD. Therefore, it is possible to determine the detailed condition of early periodontal disease by measuring the microbiota of a sample before PD grows deeper. From another point of view, 82 of the 204 samples obtained at the first visit had a PD of 6 mm or greater, and 81% of those 82 samples were found to have P. gingivalis (S1 File). This result is consistent with a meta-analysis summarizing studies from several countries, including Japan, which reported a P. gingivalis detection rate of 78% [4].
In this study, most sites treated for periodontal disease showed a remarkable improvement in clinical findings and a decrease in many of the target bacteria, including the total bacteria. The similarity of the microbiota profiles between the after-treatment samples and the clinically healthy samples obtained at the first visit indicates that the subgingival microbiota after periodontal treatment changed to a state close to that of healthy samples. Conversely, of the two sites where the total number of bacteria exceptionally increased after treatment, one was the only sample with a deeper PD and worsened BOP after treatment; the other showed no change in the clinical findings of PD and BOP before and after treatment. The Oral Care Chip enables precise analyses of changes in the microbiota and is also effective for studying the microbiota profiles found at sites with no improvement in clinical symptoms after treatment.

The main limitation of this study was that the number of samples was insufficient; therefore, the periodontal disease threshold used as a test result could not be defined as an absolute number of bacteria. Acquiring clinical and bacterial data from the same subjects over time would be beneficial. Future research will aim to clarify changes in the microbiota by following the rate of progression of the disease at a single site and the effects of age, sex, and the consumption of antibiotics.

Our findings indicate that an increase in the ratio of C. rectus and F. nucleatum in the subgingival microbiota, followed by the emergence of the red complex, can be a predictor of the progression of periodontal tissue destruction. These results demonstrate that this novel bacterial detection method using the newly developed Oral Care Chip is effective for identifying dysbiosis in the mouth. This novel bacterial detection platform might also be useful not only for the analysis of the oral microbiota but also for the analysis of intestinal, skin, and environmental microbiotas, by modifying the design of the DNA probes used.

Supporting information (figure caption): Genomic DNA in a sample and control DNA are amplified using common universal primers by competitive PCR. An amplicon having strands complementary to two probes is distributed between the two probes at a constant rate upon hybridization. (C) The ratio is unique for each probe and is defined as the hybridization coefficient; these coefficients were calculated experimentally in advance (S1 Fig). Analysis of signal intensity after hybridization was performed in two steps: in the first step, the total number of bacteria was calculated from the SI of the competitive PCR products; in the second step, the number of each species of bacteria was calculated by multiplying the probe-specific SI ratio by the total number of bacteria. To correct for the binding capacity of each specific probe, the SI of each probe was corrected using the hybridization coefficient as described above. When individual probes were evaluated, PCR products amplified from plasmid DNA templates were purified using the MinElute PCR purification kit (Qiagen, Hilden, Germany) and suspended in hybridization solution. Plasmid DNA with the appropriate 16S rRNA sequence (sequence accession numbers are given in Table 1) was inserted into pUC19 (FASMAC, Kanagawa, Japan). The amplified product was purified after PCR to exclude extra primers and to allow the number of moles to be calculated from the DNA concentration. To compare the utility of each probe, the molar concentrations of the template DNA were set based on conditions. (TIF)
2020-03-01T14:03:23.703Z
2020-02-28T00:00:00.000
{ "year": 2020, "sha1": "3b27f90c9f72ea0a4dd00eaa3b3f796017568635", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0229485&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ed0ef2cccf70eb468ab7ae52f3b87f887d38e718", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232047331
pes2o/s2orc
v3-fos-license
Elk migration influences the risk of disease spillover in the Greater Yellowstone Ecosystem Abstract Wildlife migrations provide important ecosystem services, but they are declining. Within the Greater Yellowstone Ecosystem (GYE), some elk Cervus canadensis herds are losing migratory tendencies, which may increase spatiotemporal overlap between elk and livestock (domestic bison Bison bison and cattle Bos taurus), potentially exacerbating pathogen transmission risk. We combined disease, movement, demographic and environmental data from eight elk herds in the GYE to examine the differential risk of brucellosis transmission (through aborted foetuses) from migrant and resident elk to livestock. For both migrants and residents, we found that transmission risk from elk to livestock occurred almost exclusively on private ranchlands as opposed to state or federal grazing allotments. Weather variability affected the estimated distribution of spillover risk from migrant elk to livestock, with a 7%–12% increase in migrant abortions on private ranchlands during years with heavier snowfall. In contrast, weather variability did not affect spillover risk from resident elk. Migrant elk were responsible for the majority (68%) of disease spillover risk to livestock because they occurred in greater numbers than resident elk. On a per‐capita basis, however, our analyses suggested that resident elk disproportionately contributed to spillover risk. In five of seven herds, we estimated that the per‐capita spillover risk was greater from residents than from migrants. Averaged across herds, an individual resident elk was 23% more likely than an individual migrant elk to abort on private ranchlands. Our results demonstrate links between migration behaviour, spillover risk and environmental variability, and highlight the utility of integrating models of pathogen transmission and host movement to generate new insights about the role of migration in disease spillover risk. Furthermore, they add to the accumulating body of evidence across taxa that suggests that migrants and residents should be considered separately during investigations of wildlife disease ecology. Finally, our findings have applied implications for elk and brucellosis in the GYE. They suggest that managers should prioritize actions that maintain spatial separation of elk and livestock on private ranchlands during years when snowpack persists into the risk period.

| INTRODUCTION

Traditionally, epidemiological models have considered the temporal dynamics of pathogen transmission while frequently overlooking the role of movement in host-pathogen interactions (Diekmann et al., 2012; Dougherty et al., 2018). Host movements are often an essential component of transmission dynamics, however, especially for diseases with highly mobile hosts and long transmission periods (Dougherty et al., 2018; Plowright et al., 2017; White et al., 2018; Zidon et al., 2017). Seasonal migrations are a form of movement whereby animals take advantage of cyclical fluctuations in resources, escape predation and insect harassment, find mates, and avoid seasonally uninhabitable landscapes (Alerstam et al., 2003; Avgar et al., 2014; Dingle & Drake, 2007). These movements likely influence pathogen transmission both within and across host species, but these influences are seldom quantified (Altizer et al., 2011; Plowright et al., 2017; Teitelbaum et al., 2018).
Across taxa, many populations display a characteristic form of within-population variation in migration behaviour, with migrants moving seasonally between distinct ranges, and residents remaining in the same area throughout the year (Chapman et al., 2011). This within-population variation in individual behaviour is known as partial migration and offers unique opportunities to evaluate the role of migration in pathogen transmission dynamics by examining differences in transmission potential between migrants and residents. Across systems, migrants often carry lower infection levels than residents. This may be because seasonal migration allows migrants to escape from infected habitats or because the energetic demands of migration disproportionately kill infected animals (Altizer et al., 2011; Bradley & Altizer, 2005; Mysterud et al., 2016). Here we examine the role of partial migration in the risk of pathogen transmission from elk Cervus canadensis to cattle Bos taurus and domestic bison Bison bison (hereafter livestock; Table 1), which we refer to as spillover risk. Partial migration behaviour is common in elk populations in the Rocky Mountain West (Cole et al., 2015; Eggeman et al., 2016; Hebblewhite & Merrill, 2011; Jones et al., 2014; Middleton et al., 2013). In some of these populations, the number of migrant elk is decreasing and the number of resident elk is increasing, perhaps in response to changes in land use, predation risk, habitat conditions or climate (Cole et al., 2015; Hebblewhite & Merrill, 2011; Middleton et al., 2013). In the Greater Yellowstone Ecosystem (GYE), migration behaviour in elk herds has the potential to affect disease spillover risk. Brucellosis is a zoonotic disease caused by the bacterium Brucella abortus, which induces, and is transmitted by, reproductive failures (abortions or nonviable births) in cattle, bison and elk (Cheville et al., 1998). Transmission occurs when individuals have direct contact with B. abortus bacteria in infected foetuses, placentas or birthing fluids (Cheville et al., 1998). Depending upon conditions, the bacteria can remain viable on tissue, soil or vegetation for several weeks, although scavengers typically remove aborted foetuses prior to loss of viability (Aune et al., 2012; Cook et al., 2004). In elk, almost all brucellosis-induced abortions occur between February and June, with a peak from March through May. Although brucellosis was nearly eradicated from the United States, it still persists in the elk and bison populations of the GYE (National Academies of Sciences Engineering and Medicine, 2017; Ragan, 2002). Elk are responsible for the rare, but increasing, number of livestock infections in Idaho, Montana and Wyoming (Brennan et al., 2017; Kamath et al., 2016; Rhyan et al., 2013). These spillover events are of considerable concern for livestock managers because of the costs of quarantine and trade restrictions (National Academies of Sciences Engineering and Medicine, 2017). In addition, brucellosis is expanding into new elk populations in the GYE (Cross et al., 2010; Kamath et al., 2016). Spillover risk from elk to livestock involves complex interactions among brucellosis seroprevalence, demography and density, distribution, the timing of abortions in elk, and the distribution and density of livestock (National Academies of Sciences Engineering and Medicine, 2017). The timing of spring elk migration in the GYE is influenced by snow conditions and plant phenology, and coincides with the period of greatest transmission risk for brucellosis (Jones et al., 2014; Rickbeil et al., 2019; White et al., 2010).
At the onset of the transmission risk period, migrant and resident elk occur together on lower elevation winter ranges that are often managed as private ranchlands (Rayl et al., 2019). As the transmission period progresses, however, migrant elk begin moving tens to hundreds of kilometres to summer range on publicly owned lands at higher elevations, thereby decoupling their distribution from resident elk (Barker et al., 2019). We examined elk-to-livestock spillover risk in space and time, focusing on the role that migration, weather variability, disease prevalence and demography play in influencing this risk. We hypothesized that weather variability would affect the spillover risk from migrant elk because of its influence on plant phenology and the timing of migration. Consequently, we predicted that migrant elk would generate more spillover risk during years with heavier snowfall, whereas we expected risk from resident elk to be unaffected by variability in annual weather conditions. Furthermore, we hypothesized that elk migration would lower the per-capita risk of pathogen spillover because we expected commingling risk with livestock to be reduced as elk migrated away from winter range. Our results offer insight into the role migration plays in the risk of disease spillover at the wildlife-livestock interface and have practical implications for the management of elk and brucellosis in the GYE.

| MATERIALS AND METHODS

We did not have data quantifying contact rates of livestock with infected elk foetuses, nor data on how frequently that contact results in infection. Therefore, we relied on an approach that coupled spatiotemporal estimates of elk and livestock distribution with disease and demographic data to quantify spillover risk. To evaluate the role of migration and weather variability in the risk of brucellosis transmission from elk to livestock, we followed the same general approach of Merkle et al. (2018) and Rayl et al. (2019). We (a) identified migrant and resident elk, (b) developed resource selection functions (RSFs) to predict their distributions through the risk period, and (c) combined those predictions with disease, demographic and livestock data to estimate abortion and spillover risk.

| Study area

We studied elk from eight Greater Yellowstone Ecosystem herds.

| Identifying migrants and residents

Although seasonal migration likely occurs along a continuum of movement tactics (Cagnacci et al., 2016; Dingle & Drake, 2007), our goal was to differentiate between the two dominant space use tactics that we hypothesized were most influential to spillover risk between elk and livestock. Therefore, we sought to discriminate between elk that remained inside or adjacent to winter range (residents) and elk that migrated away from winter range (migrants). To do so, we estimated the overlap of seasonal kernels to classify individual elk-years as migrant or resident (Barker et al., 2019; Fieberg & Kochanny, 2005). Unlike prior studies, which have typically calculated the overlap between individual seasonal home ranges to differentiate between migrants and residents, we calculated the overlap between each herd's winter range and individual elk-year post-migration home ranges. We delineated a winter range for each herd using that herd's GPS locations during the winter season (excluding migratory portions of individual datasets when elk initiated migration during winter or did not return from summer range until winter) to create 99% contours of bivariate normal kernels with the reference bandwidth (Worton, 1989). We used individual elk-year GPS locations from August to create 95% contours of bivariate normal kernels with the reference bandwidth as an estimate of post-migration home ranges.
For 13 individual elk-years without GPS location data in August, we used July GPS locations to estimate post-migration home ranges. We classified individual elk-years as migrant when their post-migration home range did not overlap winter range, and as resident when it did. Because we did not monitor individual elk for an equal number of years, we used the proportion of individuals classified as migrants during their first year of monitoring to estimate the proportion of migrants in each herd.

| Resource selection function development

We used RSFs to describe the spatiotemporal relationship between the relative probability of female elk occurrence and landscape attributes. We fit RSFs separately for migrants and residents from each herd in winter, spring and summer because we expected resource selection to vary seasonally, among herds, and between migrants and residents. Because our objective was to identify fine-scale spatiotemporal overlap of elk abortion events with areas of potential livestock presence, we used third-order RSFs (selection of patches within individual home ranges) to characterize habitat selection (Meyer & Thuiller, 2006). We estimated RSFs by comparing habitat characteristics of observed locations with an equal number of available locations. We randomly sampled available locations from within a 99% contour of a bivariate normal kernel generated with the reference bandwidth for each individual-year in each season (Worton, 1989). We randomly assigned available locations to a specific day drawn with replacement from the distribution of days on which that individual's used locations were recorded. Covariates included elevation, slope, solar radiation, snow cover, integrated Normalized Difference Vegetation Index (NDVI, 250-m resolution, MODIS data; Pettorelli et al., 2005) and the daily NDVI value of a pixel (250-m resolution; scaled between 0 and 1). We assigned daily values of snow cover to each pixel using the pixel value from the 8-day snow cover interval that encompassed that day. To derive daily NDVI values, we followed the methods of Bischof et al. (2012) and Merkle et al. (2016) to construct a smoothed and scaled NDVI time series for each pixel (see section 2 in Merkle et al. (2016) for details). Prior to building seasonal RSFs, we evaluated whether a linear or quadratic functional form for elevation, slope, solar radiation and daily NDVI was better supported. For each migrant and resident group from each herd in each season, we built univariate generalized linear mixed models (GLMMs) or generalized linear models (GLMs) for the functional forms of each variable, and determined the form with the most support among all tactic-specific groups (i.e. all migrants or residents) using Akaike Information Criterion for small sample sizes (AICc; Burnham & Anderson, 2002). We evaluated collinearity between pairs of covariates before building seasonal RSFs. When we detected collinearity (Pearson's correlation coefficient ≥0.7), we built GLMMs or GLMs for each covariate, and excluded the covariate with less AICc support. We also assessed our seasonal RSF models (without quadratic terms) for multicollinearity using the variance inflation factor (VIF), and detected no issues (VIFs for all variables ≤4.43; Dormann et al., 2013; Graham, 2003). We derived maximum-likelihood estimates for GLMMs using adaptive Gauss-Hermite approximation with five integration points (Bolker et al., 2009). To evaluate the predictive ability of our seasonal RSF models, we used 10 repetitions of fivefold cross-validation with 10 bins of equal size, calculating the average Spearman rank correlation (r_s) between the withheld data and the ranked bins (Boyce et al., 2002).
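The cross-validation step just described can be sketched compactly. The Python fragment below is a simplified illustration of the binned Spearman-rank evaluation of Boyce et al. (2002): the model-refitting step on each training fold is omitted, so the fixed score vector stands in for fold-specific RSF predictions, and the toy data are invented.

import numpy as np
from scipy.stats import spearmanr

def cv_spearman(scores, used, n_folds=5, n_reps=10, n_bins=10, seed=0):
    # `scores`: RSF-predicted values; `used`: 1 for used, 0 for available.
    rng = np.random.default_rng(seed)
    r_values = []
    idx = np.arange(len(scores))
    for _ in range(n_reps):
        rng.shuffle(idx)
        for fold in np.array_split(idx, n_folds):
            held_scores, held_used = scores[fold], used[fold]
            # Rank withheld locations into n_bins equal-size score bins.
            order = np.argsort(held_scores)
            bins = np.array_split(order, n_bins)
            # Frequency of used locations per bin versus bin rank.
            counts = [held_used[b].sum() for b in bins]
            r_values.append(spearmanr(np.arange(n_bins), counts).correlation)
    return float(np.mean(r_values))

# Toy data in which higher scores genuinely attract more use.
rng = np.random.default_rng(1)
scores = rng.random(5000)
used = (rng.random(5000) < scores).astype(int)
print(round(cv_spearman(scores, used), 2))   # close to 1 for a good model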
| Predicting abortion and spillover risk

We built our RSFs using NDVI and snow cover data corresponding to the time period when individual elk were monitored, which allowed us to quantify the relationship between elk occurrence, vegetation phenology and snowfall. We then identified representative low, average and heavy snowfall years that occurred during our study period (see Appendix S2). To evaluate the influence of weather variability on brucellosis transmission risk, we predicted each migrant and resident group's distribution using NDVI and snow cover datasets from these representative snowfall years (see below for details). The distribution of ungulates is a function of not only environmental factors but also cognitive factors associated with sociality, spatial fidelity, memory and learning (Jesmer et al., 2018; Merkle et al., 2017; Wolf et al., 2009). As a result, elk herds in the GYE show strong fidelity to seasonal ranges and migration routes, and individual herd ranges tend to be concentrated within larger areas of suitable habitat (Boyce et al., 2003; Kauffman et al., 2018; Rayl et al., 2019; White et al., 2010). Therefore, we developed a new technique using a sliding window approach to limit the spatiotemporal extent over which we mapped each migrant or resident group's RSF predicted values to areas likely to have been occupied by that group on that day. First, we resampled all covariate grids from their original resolution to 250 m by calculating the mean pixel value that fell within the extent of the output 250-m pixel. Then, for each of the three weather scenarios, we estimated the predicted relative probability of group use u(x, t) per 250-m pixel x, per time step t (in days), as

u(x, t) = w_xt K_xt / Σ_{i=1..n} w_it K_it,    (Equation 3)

where i refers to pixels 1 through n for time step t, w_xt is the daily predicted RSF value of the relative probability of use by elk for a 250-m pixel x and K_xt is the daily predicted value of elk availability (0 or 1) for pixel x. In Equation 3, K_xt limits the spatiotemporal extent of the predicted relative probability of use to areas likely to have been occupied by that group on that day. We employed a sliding window approach to estimate K_xt. For every time step t, we generated a 99% contour of a bivariate normal kernel with the reference bandwidth using elk-group locations from t − 15 days to t + 15 days. We assigned pixels within the contour a K_xt value of 1 and pixels outside the contour a K_xt value of 0. During the first 15 days of the risk period, we estimated the K_xt kernel using elk-group locations from the first 31 days of the risk period because some groups lacked location data prior to the start of the risk period (because of capture timing). The denominator in Equation 3 ensures that Σ_{i=1..n} u(x_i, t) equals 1, thereby allowing us to compare the daily predicted relative probability of use among groups. We also estimated the density experienced by migrant and resident groups during the risk period to explore how elk density changed from February through June, and the consequent implications for brucellosis transmission (see Appendix S3). For each weather scenario, we calculated the daily relative risk of abortion events R_xt (hereafter abortion risk; Table 1) per 250-m pixel x, per time step t (in days), as

R_xt = u(x, t) F_gh S_h y p_t,    (Equation 4)

where u(x, t) is the daily predicted relative probability of group use for pixel x from Equation 3, F_gh is the estimated number of female elk from group g and herd h, S_h is the average brucellosis seroprevalence estimated for herd h, y is a mean pregnancy rate of 90% (K. Proffitt, unpubl. data) and p_t is the predicted daily probability of aborting given an individual is seropositive and pregnant (see Appendix S3).
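Numerically, Equations 3 and 4 reduce to a masked daily normalization followed by elementwise scaling. A minimal numpy sketch, with random numbers standing in for the fitted RSF values, the sliding-window availability masks and the epidemiological parameters:

import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_days = 1000, 136                  # 136 days: 15 Feb - 30 Jun
w = rng.random((n_pixels, n_days))            # daily RSF values w_xt
K = rng.random((n_pixels, n_days)) < 0.3      # availability K_xt (0 or 1)

# Equation 3: u(x,t) = w_xt K_xt / sum_i w_it K_it, so each day sums to 1.
wk = w * K
u = wk / wk.sum(axis=0, keepdims=True)

# Equation 4: R_xt = u(x,t) F_gh S_h y p_t (hypothetical parameter values).
F_gh, S_h, y = 800.0, 0.25, 0.90              # females, seroprevalence, pregnancy
p_t = np.full(n_days, 2.0e-4)                 # daily abortion prob. if infected
R = u * F_gh * S_h * y * p_t                  # broadcasts over days

# Cumulative spillover risk: R_xt summed over ranchland pixels and all days,
# plus the corresponding per-capita risk.
ranchland = rng.random(n_pixels) < 0.2
spillover = R[ranchland].sum()
print(spillover, spillover / F_gh)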
Equation 4 calculates a relative estimate, which is proportional to the number of abortion events, and can be directly compared among groups because the denominator in Equation 3 ensured that Σ_{i=1..n} u(x_i, t) equals 1. We used samples from hunter-harvested and research-captured adult female elk to estimate herd seroprevalence (see Montana Fish Wildlife and Parks, 2015 for details on how serostatus was determined). We did not account for uncertainty in our estimates of R_xt because of computational limitations associated with deriving error estimates for u(x, t) on a cell-by-cell basis, and because accurate methods to do so for F_gh across the region were not available at the time of this analysis. While unaccounted-for uncertainty associated with F_gh, S_h, y and p_t may bias estimates of R_xt high or low, these biases are likely to affect each pixel and migrant group similarly. Therefore, general conclusions and results are likely to be invariant across the range of variability associated with each parameter. We combined R_xt estimates with landownership data to estimate the daily and cumulative abortion risk from each migrant and resident group occurring on private, BLM, USFWS, NPS, USFS, and state government lands across the three weather scenarios. We did not consider the distribution of livestock in these calculations. We then calculated these same metrics for areas with potential livestock grazing to quantify the potential for elk-to-livestock spillover risk on the landscape. We defined areas of potential livestock grazing as private ranchlands in Montana with ≥0.4 hectares of grazing area, along with state and federal grazing allotments (Table 1; see Appendix S4). We used turnout dates from BLM and USFS grazing records from 2014 (Wells et al., 2019) and state grazing records from 2017 to determine when livestock were present on federal and state allotments. We defined spillover risk as abortion risk on private ranchlands during the risk period or on allotments with turnout dates during the risk period (Table 1). Therefore, abortion risk on livestock allotments outside of turnout dates did not contribute to our estimate of spillover risk. Our comparisons of spillover risk between migratory tactics and among herds and weather scenarios relied on the assumption that livestock contact with infected foetuses and the risk of infection were positively correlated with abortion risk. We combined our estimates of the number of adult female elk in each group and spillover risk to calculate the average spillover risk per adult female elk (hereafter per-capita spillover risk; Table 1). Collared elk from three migrant groups spent a portion of the risk period on BLM, USFS and private lands in Idaho where we did not have data on public and private grazing. To account for this in our estimate of per-capita spillover risk, we reduced the estimated number of female elk from these groups by the daily predicted probability of group use that occurred in Idaho for each weather scenario. We conducted all analyses in program R version 3.3.1, using lme4 to fit GLMMs (R Development Core Team, 2016).

| RESULTS

From our sample of 223 elk (280 elk-years), we identified 152 migrants and 71 residents. We identified one fully resident herd, and seven herds with both migrant and resident individuals (Figure 2). We collected multiple years of data from 37 elk in five herds.
Seven of these 37 elk switched migratory tactics between years, including six of 18 elk from the Paradise Valley herd and one of six from the Blacktail herd. As migrants departed from winter range, the density of elk declined for both migrant and resident groups (Figure 3; Table S1). Seroprevalence did not differ between migrants and residents in our sample of collared elk, but sample sizes were relatively small for some herds (see Appendix S3: Figure S1). Estimated brucellosis seroprevalence ranged from a high of 53% (95% CI = 36%-70%) for the Mill Creek herd to a low of 2% (95% CI = 1%-7%) for the Greeley herd (see Appendix S3: Table S2). We estimated that the Madison Valley herd accounted for 41% of the abortion risk each year. Across weather scenarios, we calculated that 50%-56% of migrant and 77%-78% of resident abortion risk was on private lands (e.g. 100 × [migrant abortion risk on private lands/migrant abortion risk on all lands] = 50%-56%; Figure 4). During the average snowfall year, we estimated that approximately 50% of migrant abortion risk was on private lands, 25% on USFS lands, 13% on NPS lands, 8% on state government lands, 4% on BLM lands and 0% on USFWS lands. In that same year, we estimated that approximately 78% of resident abortion risk was on private lands, 8% on state government lands, 5% on both USFS and BLM lands, 4% on NPS lands, and 0% on USFWS lands. In contrast, we estimated that 98%-99% of spillover risk was on private ranchlands for both migrants and residents (e.g. 100 × [migrant spillover risk on private ranchlands/migrant spillover risk on grazing land] = 98%-99%). Spillover risk on private ranchlands represented 74%-75% of the total abortion risk from residents, and 49%-54% for migrants, depending on weather scenario. Migrant elk were responsible for 68% of spillover risk because they occurred in greater numbers than resident elk (Figure 5a, see Appendix S6: Table S1). We found support for our hypothesis that weather variability would affect spillover risk for migrant, but not resident elk. We estimated that the distribution of spillover risk for migrants was somewhat sensitive to changes in snow cover, with 7%-12% more risk occurring on private ranchlands during heavy snowfall years than during low or average snowfall years. Conversely, we estimated that the distribution of spillover risk on private ranchlands for residents changed <1% during low, average and heavy snowfall years. We also found support for our hypothesis that elk migration lowered per-capita spillover risk. Although migrants were responsible for 68% of spillover risk because of their greater numbers, for five of seven herds with both migrants and residents, per-capita spillover risk on private ranchlands was greater for residents than for migrants (Figure 5b). Averaged across weather scenarios and herds, we estimated that per-capita spillover risk on private ranchlands was 23% greater for residents than for migrants.

| DISCUSSION

We combined ecological, epidemiological and behavioural data from >200 elk in eight GYE herds with livestock distribution data to examine the influence of partial migration on abortion and spillover risk. By incorporating these datasets, we revealed that most spillover risk from elk to livestock was on private ranchlands in early spring. Furthermore, we found that migrant elk generated the majority of spillover risk because they were more abundant than resident elk.
On a per-capita basis, however, we estimated that migrant elk were 23% less risky to livestock than resident elk because they migrated off of private ranchlands during the risk period. Our synthetic approach is uncommon in disease ecology because of the challenges associated with merging data streams collected at varying spatial and temporal resolutions and scales (Dougherty et al., 2018; Plowright et al., 2017; Rayl et al., 2019). Other studies have integrated host movement or density data to estimate pathogen transmission risk or exposure rates within a host species (e.g. Borg et al., 2017; Proffitt et al., 2015; Russell et al., 2015). Indeed, there have been a number of previous studies that have examined the potential for brucellosis spillover risk from wildlife to livestock, but none to date that have examined the role of host migration in spillover risk (Kilpatrick et al., 2009; Merkle et al., 2018; Proffitt et al., 2011; Rayl et al., 2019). Our analysis accounted for many of the major components of elk-to-livestock brucellosis spillover risk, including host movement, distribution, density, prevalence and transmission timing, and suggested important links between migratory behaviour and the risk of disease spillover. The approach we developed is applicable to other disease systems, such as avian influenza and pneumonia in bighorn sheep (Ovis canadensis) and domestic sheep (O. aries), although the challenge is to have all the necessary datasets align in space, time and resolution (Rayl et al., 2019). As we had hypothesized, our findings provided evidence that weather variability affected spillover risk from elk to livestock in the GYE. Previous work in this system has documented delayed elk migration in spring following winters with heavier snowfall and later vegetation green-up (Rickbeil et al., 2019; White et al., 2010). Our results correspond with these findings and demonstrate the cascading effects this environmental variability can have on spillover risk. As predicted, we found that heavier snowfall did not influence spillover risk from residents, but that it had a modest influence on the distribution of spillover risk from migrants. During heavy snow years, we estimated that 7%-12% more migrant spillover risk was on private ranchlands, likely because snow cover delayed departure from winter range. Under future climate change scenarios, decreased snowpack and advanced snowmelt are expected in the Rocky Mountains (Gergel et al., 2017). These changes may induce earlier departure from winter range, thereby reducing spillover risk from migrant elk on private ranchlands (Rickbeil et al., 2019; White et al., 2010). In support of our second hypothesis, we found evidence that the spring migration of elk alleviated per-capita spillover risk. As migrants moved from lower-elevation winter range to higher-elevation summer range, they moved away from private ranchlands where spillover risk was highly concentrated. Importantly, however, and in contrast to differences in per-capita spillover risk, we found that migrant elk generated the majority of spillover risk in our system because of their greater abundance. It has been hypothesized that the number of migrants and residents in partially migratory populations is maintained by density-dependent demography or behavioural switching between movement tactics (Kaitala et al., 1993; Lundberg, 2013).
In partially migratory ungulate populations, migrants frequently outnumber residents, and this is what we observed among the elk in our study (Fryxell; Sawyer et al., 2016; Figure 2).

FIGURE 5 (a) Estimated cumulative spillover risk for migrant and resident female elk occurring on private ranchlands during the risk period (15 February-30 June) for eight Greater Yellowstone Ecosystem herds and (b) estimated cumulative per-capita spillover risk for migrant and resident female elk occurring on private ranchlands during the risk period. Values in both panels were averaged across weather scenarios.

Recent studies in the Rocky Mountains, however, have demonstrated that the benefits of elk migration relative to residency may be declining as a result of altered landscapes, climate regimes or predator guilds (Barker et al., 2019; Hebblewhite & Merrill, 2011; Middleton et al., 2013). For example, Barker et al. (2019) found that resident elk had access to higher quality forage than migrant elk because of the availability of irrigated agriculture in valley bottoms where residents resided year-round. If the fitness benefits of migration in our system decrease, this may affect the demographic balance of migrants and residents, and therefore also influence the risk of disease spillover. How these potential demographic changes may alter the risk of disease spillover in the future remains unknown, however. If resident elk were to increase in number while elk carrying capacity remained static, this would likely translate into increased risk because of the greater spillover potential associated with resident behaviour. On the other hand, if elk herds decline in size as a result of a decreasing number of migrants, the risk of disease spillover will likely decline as well. An ancillary benefit of migration may be a reduction in disease or parasite exposure for hosts (Altizer et al., 2000; Folstad et al., 1991; Piersma, 1997). In temperate environments, cervids frequently occur at higher densities during winter, which may enhance the risk of pathogen transmission (Conner et al., 2008). In our study area, the migration of elk off of winter range lowered the conspecific density experienced by both migrant and resident groups during the risk period (Figure 3). This decline in density may reduce elk-to-elk transmission risk of brucellosis, as well as other density-dependent diseases. By the end of the risk period, migrant groups typically occurred at lower densities than resident groups. It is important to note, however, that our density estimates did not account for commingling of migrant elk from different herds during migration and on summer range, and thus are likely underestimates. Similar observations of higher conspecific density of resident elk compared to migrant elk have been observed elsewhere in Montana (Barker et al., 2019). Whether or not these changes in density result in differences in pathogen transmission within migrant and resident elk groups requires further investigation. In our sample of collared elk, seroprevalence did not differ between migrants and residents, consistent with other analyses, but sample sizes were relatively small for some herds (see Appendix S3: Figure S1; Yang et al., 2019). Our work clearly illustrates the value of including spatiotemporal variability for both reservoir and host populations during examinations of spillover risk.
There was a striking contrast between our estimates of abortion risk, which did not consider the spatiotemporal distribution of livestock, and our estimates of spillover risk, which did. If we relied only on our estimates of abortion risk, we would have erroneously concluded that 44%-50% of transmission risk for migrant elk and 22%-23% for resident elk was on state and federal lands (Figure 4). In contrast, when we incorporated our spatiotemporal estimates of livestock distribution, we found almost no spillover risk (<2%) outside of private ranchlands for both migrant and resident elk. This suggests that the current timing of livestock stocking on state and federal allotments is effective at preventing commingling of elk and livestock during the risk period, at least for the elk herds in the Montana brucellosis designated surveillance area. Importantly, we were unable to account for several sources of spatial and temporal variability in our analyses. Most significantly, we did not have detailed data on the spatiotemporal distribution of livestock on private ranchlands. As a result, we most likely overestimated risk on this grazing type because we assumed that livestock were always present. Similarly, we did not have information about the space use of livestock within individual allotments, which likely changes annually. Additionally, we did not have data quantifying contact rates of livestock with infected elk foetuses, how frequently that contact results in infection or the environmental persistence of B. abortus. Aune et al. (2012) found that B. abortus can remain viable on foetal tissues and soil or vegetation for 21-81 days depending on month, temperature and exposure to sunlight. We expect that aborted foetuses will not remain on the landscape for that long in our study area because they will likely be removed by scavengers much more quickly (Cook et al., 2004). Further research is needed to estimate foetal scavenging rates for our study area. Finally, as in Merkle et al. (2018) and Rayl et al. (2019), we did not include a temporal transmission component within elk herds, and therefore, could not predict disease dynamics across consecutive years. Further research is needed to collect finer-resolution data on the distribution of livestock, and to incorporate temporal models of pathogen transmission into future predictions of risk. Our estimates of abortion and spillover risk should be viewed somewhat cautiously, as we were unable to include estimates of variance in our predictions. Instead, as in Merkle et al. (2018) and Rayl et al. (2019), we assumed that the number of female elk, seroprevalence, the proportion of migrants and residents, abortion timing, pregnancy rates, potential livestock distribution, and space-use predictions were all known without error. Incorporating uncertainty from seroprevalence, the proportion of migrants and residents, abortion timing and the number of female elk into our analyses would be relatively straightforward, but computationally demanding. Such an effort would rigorously propagate error through only a portion of Equation 4, as we do not currently have fine-resolution spatiotemporal data on livestock distribution. Additionally, it would be challenging to quantify uncertainty in the predicted probability of elk use across space and time given existing computing capacity. 
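As a concrete illustration of the kind of error propagation described in the preceding paragraph, the short Monte Carlo sketch below pushes uncertainty in seroprevalence, female abundance and pregnancy rate through a scalar simplification of Equation 4; the distributions and parameter values are invented for illustration and are not estimates from the study data.

import numpy as np

rng = np.random.default_rng(2)
n_draws = 10_000

u_ranch = 0.45                        # summed predicted use on ranchlands (fixed)
p_cum = 0.02                          # cumulative abortion prob. over risk period

S_h = rng.beta(12, 36, n_draws)       # seroprevalence ~25%, binomial-style spread
F_gh = rng.normal(800.0, 80.0, n_draws)   # female abundance with ~10% CV
y = rng.beta(90, 10, n_draws)         # pregnancy rate ~90%

risk = u_ranch * F_gh * S_h * y * p_cum
lo, med, hi = np.percentile(risk, [2.5, 50.0, 97.5])
print(f"spillover risk: {med:.2f} (95% interval {lo:.2f}-{hi:.2f})")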
Because this unaccounted-for uncertainty is likely to affect individuals within herds similarly, it would be unlikely to alter inferences about within-herd differences between migrant and resident spillover risk (i.e. Figure 5b). It could, though, affect conclusions of among-herd comparisons (i.e. Figure 5a). In the future, as computational capacity increases, it would be useful to quantify this error. Doing so would generate information that could be used to identify optimal data collection and surveillance strategies to minimize uncertainty in risk predictions. Although animal movements likely impact disease dynamics, it is uncommon and difficult to synthesize host movements with disease ecology (Altizer et al., 2011; Dougherty et al., 2018; White et al., 2018; but see Guber et al., 2016; Merkle et al., 2018; Rayl et al., 2019). These unified approaches are necessary, however, to properly understand the effects that complex movement behaviours, such as migration, may have on host-pathogen dynamics. In this work, we used an integrated modelling framework to enumerate spillover risk in space and time, and found significant links between migration behaviour, the potential for pathogen transmission and environmental variability. Further research is needed to determine how density-dependent demography, behavioural switching between movement tactics and environmental change may influence these links, and therefore the distribution of spillover risk.

DATA AVAILABILITY STATEMENT

Data available from the Dryad Digital Repository https://doi.org/10.5061/dryad.34tmpg4j2 (Rayl et al. 2020).
2021-02-26T06:16:26.795Z
2021-02-25T00:00:00.000
{ "year": 2021, "sha1": "e45236dc76905ccff6cbfb35b0c3ebacbea50ab8", "oa_license": "CCBYNC", "oa_url": "https://besjournals.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1365-2656.13452", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "ed91a52c26f6e30e0844b68af8235b367ab57a3d", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
119520499
pes2o/s2orc
v3-fos-license
Multiple Reflections and Diffuse Scattering in Bragg Scattering at Optical Lattices We study Bragg scattering at 1D atomic lattices. Cold atoms are confined by optical dipole forces at the antinodes of a standing wave generated inside a laser-driven cavity. The atoms arrange themselves into an array of lens-shaped layers located at the antinodes of the standing wave. Light incident on this array at a well-defined angle is partially Bragg-reflected. We measure reflectivities as high as 30%. In contrast to a previous experiment devoted to the thin grating limit [S. Slama, et al., Phys. Rev. Lett. 94, 193901 (2005)] we now investigate the thick grating limit characterized by multiple reflections of the light beam between the atomic layers. In principle multiple reflections give rise to a photonic stop band, which manifests itself in the Bragg diffraction spectra as asymmetries and minima due to destructive interference between different reflection paths. We show that close to resonance, however, disorder favors diffuse scattering, hinders coherent multiple scattering and impedes the characteristic suppression of spontaneous emission inside a photonic band gap.

I. INTRODUCTION

The idea of realizing photonic band gaps (PBG) in optical lattices, i.e. periodic arrays of cold atomic clouds confined inside standing light waves, has been published in 1995 by Deutsch et al. [1]. A first step towards an experimental observation of this phenomenon was made by showing that lattices of atomic gases can give rise to Bragg scattering in the very same way as X-rays are scattered in structure analyses of solid crystals [2] or molecules [3]. This demonstration has been given by G. Birkl et al. and M. Weidemüller et al. [4,5] using near-resonant optical lattices. In resonant lattices the optical trapping potential provides an efficient cooling mechanism for the atomic cloud. This cooling is important, because it balances heating due to inelastic scattering processes, which destroy the periodic order and decrease the Bragg scattering efficiency. Cooling is absent in conservative optical lattices tuned far from atomic resonances. On the other hand, conservative lattice potentials are interesting in view of their perspectives to mimic solid state physics. For example, Mott insulator phase transitions in degenerate atomic quantum gases have been observed [6], and fermionic gases confined in optical lattices are expected to exhibit novel quantum phases involving high-temperature superfluidity. Those phases may constitute useful toy models for superconductivity in high-T_c cuprates [7]. Bragg diffraction could represent a novel and powerful tool for sensitively probing the properties of such optical crystals provided the destructive influence of resonant probe light absorption is mastered. PBGs are today extensively studied in crystals and fibers. Dielectric materials offer the possibility of realizing complex periodic structures in three dimensions alternating high and low index of refraction domains. Those structures, called photonic crystals, can exhibit ranges of frequencies known as photonic band gaps for which the propagation of electromagnetic waves is classically forbidden in certain directions [8]. Tailoring of the density of states for the electromagnetic modes allows for controlling fundamental atom-radiation interactions in solid state environments and even to suppress vacuum fluctuations.
The hallmarks of a PBG are the inhibition of spontaneous emission, an effect that has been observed with optical cavities [9], and the possibility of Anderson localization of light by point defects added to the photonic band gap material. Although impressive progress has been made [10] in fabricating photonic crystals, they suffer from fundamental difficulties in guaranteeing the required fidelity over long ranges [11] due to fluctuations in position and size of the building blocks. This disorder perturbs those properties of photonic crystals based on global interference: It reduces the Bragg reflectivity, extinguishes the transmitted light, and ultimately destroys the photonic band gap. On the other hand, optical lattices exhibit an intrinsically perfect periodicity. Local disorder introduced by thermal fluctuations in the atomic density distribution at each lattice site reduces the value of the Debye-Waller factor [12], but does not affect the quality of the long-range order. To observe photonic band gaps with optical lattices, one must reach the thick grating regime. However, all Bragg scattering experiments on optical lattices have so far [4,5,13] been performed in the thin grating regime, where the lattice's optical density is so low that multiple light scattering events are rare. Bragg scattering at thin lattices is understood as resulting from constructive interference of the Rayleigh-scattered radiation pattern emitted by periodically arranged point-like sources. In this regime, the reflection coefficient of the lattice turns out to be nearly real, phase shifts are negligible, and spectral lineshapes are symmetric. In particular, since the scattering takes place as a local process, which means that the light is scattered by individual atoms, the atomic positions do not shape the absorption spectrum [14]. In contrast, the thick grating regime is characterized by multiple reflections of the incident light between the stacked atomic layers (Bragg planes). The interference between the light reflected from or transmitted through the layers gives rise to stopping bands for certain light frequencies or irradiation angles. In this regime, absorption can generally be neglected, but large phase shifts occur, and the lineshapes are asymmetric. Multiple beam interference globalizes the scattering process. In this work we study a one-dimensional optical lattice consisting of a standing light wave filled with trapped atoms. The atoms arrange themselves into a linear array of lens-shaped clouds aligned along their symmetry axis. The clouds have a finite radial extent and are centered at the locations of the antinodes. We show that for this configuration the thick grating regime is within experimental reach. In fact, which regime is realized in experiment depends on the available effective number of scattering layers. The number of populated antinodes sets an upper limit. However the finite radial extent of the layers also limits the number of multiple reflections when the angle of incidence for the Bragg light is large. Therefore, to get into the thick grating regime, we have modified the setup of a previous experiment [13] in order to reduce the Bragg angle and increase the radial extent of the Bragg layers. With this setup we encounter a new limitation: When the probe laser is tuned close to an atomic resonance, detrimental absorption due to disordered atoms reduces the effective number of layers involved in multiple scattering and dominates the spectra.
Off-resonance, in contrast, the weak Bragg scattering efficiency brings us into the thin grating regime. Hence, it is important to identify clear signatures of multiple scattering in optical lattices and to develop sensitive tools for their detection. In some respects, we may understand the 1D optical lattice as a dielectric mirror with layers made of a dilute atomic gas. On the other hand, as shown in Ref. [15], the 1D optical lattice shares the peculiarities of a linear array of point-like scatterers. Several authors [4,12,16] suggested optical lattices for the creation of 3D photonic band gaps. However, a consequence of the narrow width of the atomic resonance is that Bragg scattering at gaseous lattices is intrinsically one-dimensional. This holds also for 2D and 3D geometries of optical lattices, so that the results of our investigations apply to all kinds of lattice configurations. The low dimensionality of the scattering problem thus compromises the realization of true 3D photonic band gaps with optical lattices. We organized this paper as follows: In section II we present our experimental setup, show examples of typical spectra, and discuss how diffuse scattering interferes with multiple reflections. To qualitatively understand the spectra a transfer matrix model is developed in section III. Based on Ref. [1] the model is extended to comply with partially disordered lattices and inhomogeneous Stark shifts. The model provides a simple picture for the observed spectra allowing for a discrimination between diffuse and multiple scattering. It also describes the expected suppression of spontaneous emission and yields a quantitative prediction of the lattice's reflection and transmission as a function of experimental parameters. Section IV presents our observations and discusses them in terms of multiple reflections and diffuse scattering. Since both effects give rise to similar signatures in the reflection spectra, they are difficult to separate, in particular in the presence of experimental imperfections. As summarized in section V, despite the fact that some signatures strongly suggest the concurrence of multiple reflections, it seems actually beyond reach to see suppression of spontaneous emission due to a reduced density of optical states.

A. Setup

The optical layout of our experiment shown in Fig. 1 is identical to the one presented in Ref. [15]. It consists of an optical cavity and a setup for Bragg scattering. The light of a titanium-sapphire laser operating at λ_dip = 808–812 nm, which is red-detuned with respect to the rubidium D1 line, is coupled and phase-locked to the cavity. The standing wave which builds up inside the cavity has a periodicity of λ_dip/2 = π/k_dip. The beam diameter at the center of the cavity is w_dip = 220 µm. The intracavity light power is P_cav = 5 W. Between N = 10^5 and 10^7 85Rb atoms are loaded from a standard magneto-optical trap (MOT) into the standing wave. About 10000 antinodes are filled with atoms. Typically the temperature of the cloud is on the order of a few 100 µK. In earlier experiments [15] we found that the temperature of the cloud tends to adopt a fixed ratio with the depth of the dipole trap, k_B T ≈ 0.4 U_0 [17]. Therefore, the spatial distribution of the atoms does not vary much with the potential depth. From this we derive the rms size of an individual atomic cloud along the cavity axis, 2σ_z = (λ_dip/π)√(k_B T/2U_0) ≈ 115 nm, in the harmonic approximation of the trapping potential.
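The quoted axial size follows from equipartition in this harmonic approximation; a short worked derivation using the measured ratio k_B T ≈ 0.4 U_0:

\[
U(z) \simeq U_0 k_{\mathrm{dip}}^2 z^2, \qquad
U_0 k_{\mathrm{dip}}^2 \langle z^2 \rangle = \tfrac{1}{2} k_B T
\;\Rightarrow\;
\sigma_z = \frac{1}{k_{\mathrm{dip}}} \sqrt{\frac{k_B T}{2 U_0}},
\]
\[
2\sigma_z = \frac{\lambda_{\mathrm{dip}}}{\pi} \sqrt{\frac{k_B T}{2 U_0}}
\approx \frac{810\,\mathrm{nm}}{\pi} \times \sqrt{0.2} \approx 115\,\mathrm{nm},
\]

using 2/k_dip = λ_dip/π.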
The radial size is 2σ_r = w_dip √(k_B T/U_0) ≈ 140 µm and the mean density lies between n = 3 × 10^9 and 3 × 10^11 cm^-3. For the present setup we estimate a Debye-Waller factor of f_DW = exp(−2 k_dip² σ_z²) ≈ 0.2 [15]. The light used to probe the Bragg resonance is generated with a near-infrared laser diode operating at λ_brg = 780 nm. The laser light is passed through an acousto-optic modulator (AOM) and an optical fiber and collimated to a beam waist of w_brg = 800 µm before crossing the dipole trap standing wave under an angle of β_i ≃ arccos(λ_brg/λ_dip), which at λ_dip = 810 nm is about 15.6°. The incident laser power, P_i = 30 µW, is well below saturation. Some time after loading the atoms into the standing wave the probe beam is switched on and frequency-ramped across the rubidium D2 resonance. The light power reflected from the atoms, P_r, is detected under the angle β_s = −β_i with a photomultiplier (PMT1). The transmitted light power P_t is recorded with a photodiode (PD), and the isotropically scattered power P_a is detected by collecting the light emission into a solid angle of 0.05 sr orthogonal to the incident probe beam with a second photomultiplier (PMT2). To obtain Bragg reflection the angle of incidence of the probe laser has to be matched to the lattice constant. Experimental fine tuning of the Bragg condition is however easier by varying the wavelength of the lattice laser, Δλ_dip ≡ λ_dip − λ_brg/cos β_i, while the angle of incidence is kept fixed.

B. Bragg spectra

The experimentally accessible quantities are the reflected, transmitted and absorbed light powers. We take simultaneous spectra of these quantities by ramping the probe laser frequency across the Bragg resonance. In order to compare with calculations we are interested in the reflection, transmission and absorption coefficients R, T, and A. A direct comparison is complicated by the fact that the probe beam cross section is larger than the size of the atomic cloud, so that only a fraction η ≈ 16% of the incident power P_i really overlaps with the atomic cloud [18], and the measured powers must be corrected for this overlap. The energy conservation requirement, P_i = P_r + P_t + P_a, then implies R + T + A = 1. Fig. 2(a) shows reflection, transmission and absorption spectra of the Bragg resonance obtained by ramping the detuning of the probe laser Δ_brg from the D2 resonance. The resonance linewidth is Γ/2π = 6 MHz. Note the high Bragg reflection efficiency of more than 30%, which is more than two orders of magnitude higher than in any previous measurement on optical lattices. The scattering efficiency for standard Bragg diffraction depends quadratically on the atom number. This dependency has been observed in Ref. [13] in the thin grating regime. The measurements performed with the present apparatus exhibit a different behavior. As seen in Fig. 2(b), at large atom densities the scattering efficiency seems to saturate. Also the shape of the reflection signal depends critically on the atom number. At very low atom number we find a Lorentzian lineshape, whose width corresponds to the natural linewidth of the D2 line. When the atom number is increased, the linewidth broadens and the peak saturates. At large atom numbers a pronounced dip appears at the center of the strongest resonance of the reflection spectrum [4], whose contrast increases with increasing atomic density [cf. Fig. 2(c)]. Theoretical predictions based on transfer matrix calculations presented below confirm this behavior [cf. Fig. 2(d)].
The main goal of this paper is to explain these observations.

C. Specular versus diffuse scattering

The atoms are strongly localized in axial direction at the center of the antinodes. Consequently, the spectrum of the light reflected into the Bragg angle is narrowed by the Lamb-Dicke effect, and the axial thermal distribution of the atoms does not broaden the angular distribution of the reflected radiation, but increases the background of isotropically distributed diffuse scattering. This behavior is known from Bragg scattering of X-rays at solids. In contrast, the weak radial confinement of the atoms does broaden the angular distribution [15]. The finite radial atomic distribution has however a more important impact on the reflected light. The radial size of the atomic cloud determines the length of the probe beam trajectory across the lattice. Thus for thick lattices, when multiple scattering plays a role, the maximum number of reflections, i.e. the effective number of layers, is limited to N_s = 2σ_r/(λ_dip tan β_i). In an earlier experiment [13] we have studied Bragg scattering at an optical lattice near the 5S_1/2 - 6P_3/2 transition at 420 nm. The corresponding Bragg angle was 58°, yielding N_s ≈ 100. In the present setup we use a linear cavity with a larger mode waist and operate the probe light near the D2 line. This decreases the Bragg angle to 15.6° and increases the effective number of layers to N_s ≈ 600. At first glance, the saturation behavior observed in Fig. 2(b) could be interpreted as an indication for multiple scattering. The dips appearing in Fig. 2(c) could result from different multiple scattering trajectories of the probe laser along the lattice, which destructively interfere for certain values of the incident angle and of the probe light frequency. However, as already demonstrated in Ref. [4], the situation is more complex. In fact in a thermal lattice, for a small Debye-Waller factor, f_DW ≪ 1, disordered atoms have a dramatic influence on the scattering:

1. They reduce the number of ordered atoms contributing to Bragg scattering.

2. They absorb and attenuate the incident light, and hence decrease the penetration depth of the probe beam and thus the effective number of layers available to Bragg and in particular to multiple scattering.

A first approach in describing the physical situation may consist in dividing the atomic cloud into two parts: a perfectly ordered optical lattice with density n f_DW and a homogeneous cloud having the density n(1 − f_DW). The homogeneous cloud limits the penetration depth to z_pd = [σ n (1 − f_DW)]^-1, which corresponds to a reduced effective number of layers N_s,pd = 2 z_pd/λ. The effective number of layers thus critically depends on the density of disordered atoms and, via the optical cross section σ, on the detuning Δ_brg. On resonance, assuming a density n ≈ 3 × 10^11 cm^-3, we estimate N_s,pd ≈ 37 [19]. As a consequence, we expect a dramatic break-down of the Bragg reflection signal close to resonance, where the absorption by the unordered atoms is largest. The frequency range for absorption, being on the order of the natural linewidth Γ, is much smaller than the width of the reflection signal, which explains the appearance of a narrow dip in the reflection spectra in Fig. 2(c). In particular, signatures of multiple scattering will not show up in parameter regimes where the number of scattering layers is considerably reduced below N_s, i.e. where N_s,pd ≪ N_s.
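Both layer-number estimates above are simple arithmetic and can be checked directly. In the Python sketch below, the two-level resonant cross-section σ0 = 3λ²/2π is used as a stand-in for the actual multilevel 85Rb value, which is why the result lands near, rather than exactly on, the quoted N_s,pd ≈ 37.

import numpy as np

lam_dip, lam_brg = 810e-9, 780e-9              # lattice / probe wavelengths (m)
beta_i = np.arccos(lam_brg / lam_dip)          # Bragg angle, about 15.6 deg
two_sigma_r = 140e-6                           # radial cloud size 2*sigma_r (m)

# Geometric limit: layers crossed before the beam walks out of the cloud.
N_s = two_sigma_r / (lam_dip * np.tan(beta_i))         # ~600

# Absorption limit on resonance: penetration depth through disordered atoms.
f_DW = 0.2
n = 3.0e17                                     # atoms per m^3 (3e11 cm^-3)
sigma0 = 3.0 * lam_brg**2 / (2.0 * np.pi)      # two-level resonant cross-section
z_pd = 1.0 / (sigma0 * n * (1.0 - f_DW))
N_s_pd = 2.0 * z_pd / lam_dip                  # a few tens of layers

print(np.degrees(beta_i), round(N_s), round(N_s_pd))   # ~15.6, ~620, ~35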
Note that the separation of the Bragg reflection into a perfectly ordered contribution and diffuse scattering corresponds to an ansatz frequently made in treating disorder and impurities in crystals [12,20].

A. Transfer matrix formalism

In the following we attempt to provide a more quantitative understanding of the observations by developing a simple theoretical model. We start by relating the atomic polarizability α (given in SI units) to the single-layer reflection coefficient ζ via

ζ = i k nδz α / (2 ε_0),    (2)

where nδz is the surface density estimated for a homogeneous atomic density n and a layer thickness δz. To account for the presence of several hyperfine transitions at frequency detunings Δ_F [cf. Fig. 2(d)], we build the weighted sum of the individual oscillator strengths. For the description of the collective influence of the atoms on the incident light, we use a generalization of the transfer matrix formalism presented in Ref. [1]. The generalization concerns two major points: First of all, in that reference the probe beam was assumed collinear with the optical lattice beams, while in our case the angle of incidence is significantly different from 0. In fact the deviation of the chosen angle from the Bragg angle constitutes an additional degree of freedom allowing us to tune frequency and quasi-momentum independently. Again, in practice we detune the lattice constant rather than the angle of incidence. A second generalization consists in the inclusion of diffuse scattering into the formalism, as detailed in the next section. All theoretical lineshapes shown in the figures are obtained from this transfer matrix model. We will now supply the basic ingredients of the model, skipping the details already reported in Ref. [1]. The in- and outgoing field amplitudes of the probe beam at any axial location z of the lattice (the model is one-dimensional) are labeled E+(z) and E−(z), respectively. Their variation from one location to another is described by a transfer matrix M, such that (in the complex representation)

(E+(z'), E−(z'))^T = M (E+(z), E−(z))^T.

The procedure consists now in dividing the atomic sample into layers. The transfer matrix for interaction of the probe light with a single infinitely narrow layer of the optical lattice characterized by the surface density nδz is that of a thin radiating sheet with amplitude reflection ζ and transmission 1 + ζ,

A_ζ = (1/(1+ζ)) [[1+2ζ, ζ], [−ζ, 1]].

The transformation of the field amplitudes between two such layers separated by Δz is described by the free propagation matrix

B_Δz = [[exp(i k_brg Δz cos β_i), 0], [0, exp(−i k_brg Δz cos β_i)]].

Hence the total transfer matrix for a lattice with N_s layers reads M = (A_ζ B_Δz)^N_s. Finally the reflection coefficient R = |r|² and the transmission coefficient T = |t|² are calculated via

r = −M_21/M_22,    t = M_11 − M_12 M_21/M_22,    (6)

while the phase shift in reflection follows from φ = arctan(Im r / Re r). If we identify a layer with an antinode of the standing wave, Δz = δz = λ_dip/2, we obtain from the Eqs. (6) Bragg spectra for the case of a perfect lattice, such as those shown in Figs. 3(a-c).
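The formalism above is straightforward to evaluate numerically. The Python sketch below builds M = (A_ζ B_Δz)^N_s for a perfect lattice at the Bragg-matched angle; the single schematic Lorentzian line (with the parameter od_layer, a per-layer resonant optical depth) replaces the weighted multi-hyperfine-line polarizability of Eq. (2), so the line shape and numbers are illustrative only. Consistent with the suppression of absorption discussed below, a Bragg-matched perfect lattice shows high reflection and low absorption near the centre of the stop band.

import numpy as np

Gamma = 1.0              # natural linewidth (detunings in units of Gamma)
od_layer = 0.05          # hypothetical per-layer resonant optical depth
N_s = 600                # effective number of layers

def zeta(delta):
    # Self-consistent thin-sheet reflection for a Lorentzian line: real and
    # negative on resonance (absorptive), imaginary far off resonance.
    z0 = -1j * (od_layer / 2) * (Gamma / 2) / (delta + 1j * Gamma / 2)
    return z0 / (1 - z0)

def layer_matrix(z):
    # Thin radiating sheet: amplitude reflection z, transmission 1 + z.
    return np.array([[1 + 2 * z, z], [-z, 1]], dtype=complex) / (1 + z)

def spectrum(detunings, phase):
    # `phase` is the interlayer propagation phase k_brg cos(beta_i) lam_dip/2,
    # equal to pi when the Bragg condition is fulfilled.
    B = np.diag([np.exp(1j * phase), np.exp(-1j * phase)])
    R, T = [], []
    for d in detunings:
        M = np.linalg.matrix_power(layer_matrix(zeta(d)) @ B, N_s)
        r = -M[1, 0] / M[1, 1]
        t = M[0, 0] - M[0, 1] * M[1, 0] / M[1, 1]
        R.append(abs(r) ** 2)
        T.append(abs(t) ** 2)
    return np.array(R), np.array(T)

deltas = np.linspace(-20.0, 20.0, 801)
R, T = spectrum(deltas, phase=np.pi)     # Bragg-matched angle
A = 1.0 - R - T                          # absorption spectrum
print(R.max(), A[np.argmin(np.abs(deltas))])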
B. Sequential density model

Instead of separating the atomic cloud into a perfectly ordered lattice and a homogeneous density distribution as proposed in Sec. II C, we may subdivide every layer into a number N_ss of sublayers, for which we evaluate the transfer matrices based on the local density,

n_loc(z) ∝ n e^{−U(z)/k_B T},    (7)

where U(z) = −U_0 cos²(k_dip z) is the trapping potential, or in the harmonic approximation U(z) = U_0 k_dip² z² = k_B T z²/(2σ_z²). Now we recalculate the local single-layer reflection coefficient ζ_loc as in Eq. (2) and set up the total transfer matrix, but using the local density n_loc(z) instead of a homogeneous density n and setting the layer thickness to δz ≡ λ_dip/(2N_ss),

M = [ Π_{m=1}^{N_ss} A_{ζ_loc(z_m)} B_{δz} ]^{N_s}.    (8)

At finite temperatures the atoms are distributed over the optical potential and thus experience individual dynamical Stark shifts of their resonances, varying with the atoms' locations. This leads to serious inhomogeneous broadening of the Bragg spectra, as shown in Ref. [13]. Cold atoms, which concentrate at the antinodes of the standing wave potential, are Stark-shifted by a large amount and form the blue edge of the line profile. The hot part of the cloud sees on average a shallower potential and forms the red tail of the profile.

A possible approach to describing the line broadening consists in building the convolution of the spectra calculated from Eqs. (6) and (8) with the probability density of finding an atom at a given potential energy. However, this approach does not account for the fact that in the thick grating limit cold (predominantly ordered) and hot (mostly disordered) atoms yield qualitatively different lineshapes of the Bragg reflection signal. In other words, the convolution procedure is incompatible with the fact that the contributions of cold and hot atoms to the Bragg-scattered light depend on the penetration depth, which itself varies with the frequency detuning of the probe beam. Fortunately, the local Stark shift is easily included in the sequential density model via the substitution ∆_brg → ∆_brg + U(z)/ℏ.

Figures 4(a,b) show calculated reflection spectra without and with Stark shifts for various detunings of the lattice constant from the Bragg condition. As in Figs. 2(b,c), the dip in the spectra corresponds to a joint impact of diffuse and multiple scattering and will be discussed in more detail in Sec. IV A. Here we just point out that Stark broadening obviously induces pronounced asymmetries with respect to ∆λ_dip. We verified that in the limit of a perfect lattice, obtained for T → 0, the line broadening disappears. The lines are just blue-shifted by an amount U_0/ℏ. In the thin grating regime, obtained for small densities n < 10^11 cm^−3, the convolution approach and the sequential density approach yield identical spectra.
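The sequential density model of Eqs. (7) and (8), together with the Stark-shift substitution, can be sketched as follows; the trap depth, temperature, and density values are illustrative assumptions, and the single-resonance polarizability again ignores the hyperfine structure.

```python
import numpy as np

# Sequential-density sketch of Eqs. (7) and (8): each period lam_dip/2 is
# split into N_ss sublayers with Boltzmann-weighted local densities and a
# local Stark shift Delta -> Delta + U(z)/hbar. Values are illustrative.
lam, lam_dip = 780e-9, 795e-9
k, k_dip = 2*np.pi/lam, 2*np.pi/lam_dip
Gamma = 2 * np.pi * 6.07e6
U0    = 5 * Gamma               # trap depth U_0/hbar (rad/s), assumed
U0_kT = 3.0                     # U_0 / (k_B T), assumed
eta0  = 5e12                    # total surface density per period (m^-2), assumed
N_s, N_ss = 200, 16

def layer(zeta):
    return np.array([[1 + 1j*zeta, 1j*zeta], [-1j*zeta, 1 - 1j*zeta]])

def reflectivity(delta):
    dz = (lam_dip / 2) / N_ss
    z  = (np.arange(N_ss) + 0.5) * dz - lam_dip / 4    # one period, antinode at z=0
    U  = -np.cos(k_dip * z)**2                         # U(z) in units of U_0
    w  = np.exp(-U0_kT * U)
    w /= w.sum()                                       # Boltzmann weights per sublayer
    B  = np.diag([np.exp(1j*k*dz), np.exp(-1j*k*dz)])
    M_period = np.eye(2, dtype=complex)
    for U_m, w_m in zip(U, w):
        d_loc = delta + U0 * U_m                       # local Stark shift
        zt = -(3*np.pi*eta0*w_m/k**2) * (Gamma/2) / (d_loc + 1j*Gamma/2)
        M_period = layer(zt) @ B @ M_period
    M = np.linalg.matrix_power(M_period, N_s)
    return abs(M[1, 0] / M[1, 1])**2

# For T -> 0 all atoms sit at the antinodes, so the line is simply
# blue-shifted by U_0/hbar; probe the reflectivity near that shifted line:
print(f"R(Delta = U_0/hbar) = {reflectivity(U0):.3f}")
```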
C. Suppression of absorption

The hallmark of 3D photonic crystals is the suppression of spontaneous emission. The models describing the propagation of light inside photonic crystals assign this suppression to a reduction of the density of optical modes available for spontaneous decay [8]. In fact, the frequency dependence of the density of states, sketched in Fig. 3(e), is characterized by forbidden bands. As pointed out in Ref. [1], 1D optical lattices exhibit similar band gaps. This can be seen in Figs. 3(a-c). For large numbers of scattering layers, N_s ≳ 1000, the lattice gets opaque. The transmission vanishes over a large range of frequency detunings ∆_brg, while the reflection is close to unity. Even more interesting is the feature that the absorption spectrum 1 − R − T splits into two peaks, and the absorption vanishes in the center of the band gap.

At first glance this may seem surprising. Since the geometry of our lattice is 1D, we would not expect a noticeable modification of the density of decay modes. This is similar to the inhibition of spontaneous emission inside linear optical cavities [9]: an excited atom may decay into all the transverse modes leading out of the cavity. If the cavity only covers a small solid angle of the radiating atom, its spontaneous decay rate will only be reduced by a small amount. The reduction of the absorption inside a band gap of an optical lattice has a different origin.

A deeper understanding of the system is gained by calculating the progression of the probe light intensity along the optical lattice using the above transfer matrix formalism under various conditions (cf. Fig. 5). First of all, we find that the standing wave formed by the incident probe beam and the Bragg-reflected light adjusts its phase such that its nodes coincide with the atomic layers. In that way absorption is minimized. If the length of the lattice is finite, the contrast of the standing wave is smaller than 1, i.e. the probe light intensity at the locations of the atomic layers does not vanish. Hence a finite absorption subsists even when the lattice is perfect and the Bragg condition is fulfilled.

Let us now study the response of the probe standing wave to variations of experimental parameters. Fig. 5(a) compares the cases of an aligned and a misaligned angle of incidence with respect to the Bragg angle. For a misaligned angle the periodicity of the probe standing wave does not coincide with the lattice constant, which results in a displacement of the nodes from the atomic layers and hence in enhanced absorption. The contrast of the probe wave is smaller than for an aligned angle. However, as the probe light penetrates deeper into the lattice, the displacement gets smaller and the probe wave contrast adopts the value of the aligned case.

The curves shown in Fig. 5(a) assume a perfectly ordered lattice. In the presence of disordered atoms, i.e. atoms which are not confined to the locations of the probe beam intensity minima, absorption can take place anywhere. We thus obtain an additional background of absorption, which in the extreme case of strong disorder leads to a fast exponential decay according to the Lambert-Beer law. The exponential curves in Fig. 5(b) are obtained under the same conditions as in (a), but with a finite Debye-Waller factor, f_DW = 0.03. Finally, when the probe beam is tuned off resonance, the absorption is smaller and the penetration depth is drastically increased. This is shown in Figs. 5(c-d), which correspond to (a-b) respectively, but with a probe laser detuning set to ∆_brg = Γ.

In summary, absorption occurs for two reasons: either the Bragg angle is mismatched, or the atoms are disordered. This is illustrated by calculations of reflection and absorption spectra shown in Fig. 6. For example, the spectrum in Fig. 6(d) corresponding to f_DW = 1 shows an absorption peak despite perfect atomic ordering, because the Bragg angle is not matched; and the spectra in Fig. 6(b) corresponding to f_DW < 1 show finite absorption although the Bragg angle is matched. Reduced absorption is only observed for a perfect lattice and a matched angle of incidence for the probe beam [see the curve corresponding to f_DW = 1 in Fig. 6(b)].
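The layer-by-layer intensity progression behind Fig. 5 can be traced with the same matrices: compute r from the total transfer matrix, then step the field pair (E+, E−) through the stack. The sketch below is purely illustrative (single resonance, assumed surface density) and reproduces only the qualitative decay of the probe field into the lattice, not the experimental curves.

```python
import numpy as np

# Trace the probe intensity layer by layer (cf. Fig. 5): obtain r from
# the total transfer matrix, then propagate (E+, E-) through the stack
# and record the local intensity at each layer position.
lam = 780e-9; k = 2 * np.pi / lam
Gamma = 2 * np.pi * 6.07e6
eta, N_s = 5e12, 600            # assumed surface density and layer number

def field_profile(delta, dz):
    zt = -(3*np.pi*eta/k**2) * (Gamma/2) / (delta + 1j*Gamma/2)
    A  = np.array([[1 + 1j*zt, 1j*zt], [-1j*zt, 1 - 1j*zt]])
    B  = np.diag([np.exp(1j*k*dz), np.exp(-1j*k*dz)])
    M  = np.linalg.matrix_power(A @ B, N_s)
    E  = np.array([1.0, -M[1, 0] / M[1, 1]])        # (E+, E-) at the entrance
    I  = np.empty(N_s)
    for m in range(N_s):
        I[m] = abs(E[0] + E[1])**2                  # local intensity at layer m
        E = A @ (B @ E)                             # step one period inward
    return I

I_on  = field_profile(0.0, lam/2)          # Bragg-matched spacing, on resonance
I_off = field_profile(0.0, 0.98 * lam/2)   # mismatched spacing
print("matched   :", I_on[[0, 50, 200]].round(4))
print("mismatched:", I_off[[0, 50, 200]].round(4))
```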
A. Signatures of multiple scattering

The appearance of signatures of multiple scattering depends very much on the available effective number of layers. As we have seen in Sec. II C, diffuse scattering drastically reduces the number of layers from N_s = 600 to N_s,pd = 37, which is clearly insufficient to produce dips in the reflection spectrum corresponding to destructive interference of different reflection paths. Fig. 3(a) reveals that for our atomic densities several hundred layers are necessary to significantly distort the spectrum near the photonic band edge. In order to observe signatures of a band edge, the angle of incidence of the probe beam must be chosen in such a way that these signatures appear outside the detuning range where diffuse scattering is dominant. In fact, already at |∆_brg| > 3Γ the penetration depth allows for an effective number of layers larger than 600.

Fig. 7(a) shows Bragg reflection spectra measured for various lattice constants. Fig. 7(c) shows simultaneously recorded absorption spectra. Figs. 7(b,d) represent corresponding calculations. Although the reflection spectra are in qualitative agreement, there is a striking difference: while for the measured spectra the dip is always close to the line center, the symmetry of the calculated spectra changes when going from negative to positive ∆λ_dip. An explanation for this is given in Sec. IV B. The amount of absorption measured in the experiment does not depend much on λ_dip. This is confirmed by the simulations [dotted lines in Fig. 7(d)]. The reason for this is diffuse scattering due to atomic disorder, as discussed in Sec. III C [21,22]. For comparison we have also plotted in Fig. 7(d) the spectra calculated for zero temperature (solid lines). The effect of absorption reduction expected for f_DW = 1 is not observed in the experiment.

Fig. 8(a) shows a set of spectra recorded for various atomic densities. Fig. 8(b) shows the corresponding calculations. The spectra exhibit a well-developed splitting of the dip structure. This splitting cannot be explained by diffuse scattering alone, which means that multiple reflections must play a role. The number of free parameters is large, and it is therefore difficult to pin down the precise values of the experimental parameters by fitting. However, there is no realistic parameter regime for which our theoretical model predicts splittings of the dip without the assumption of multiple scattering. Hence, we consider the double-dip feature of Fig. 8 as the first indication for the existence of a 1D photonic band gap in an optical lattice.

B. Experimental side effects

Various experimental deficiencies can make the quantitative interpretation of the spectra difficult. First of all, when the probe laser is detuned from resonance, its light will be bent by refraction when it enters the optically thick cloud of atoms. On one hand, this slightly modifies the angle of incidence and thus its deviation from the Bragg angle. Since this deviation depends on the probe beam detuning, the parameters ∆λ_dip and ∆_brg are intertwined in a complicated way. We have measured the reflection angle as a function of ∆_brg and found variations up to 0.1°, which corresponds to ∆λ_dip = 0.4 nm. Additionally, the optically thick cloud focuses or defocuses the incident beam depending on its detuning [23].

A second important issue is the impact of the finite radial extent of the atomic layers on the reflection angle. In Ref. [15] we have shown that, although the lattice consists of a stack of two-dimensional traps with an aspect ratio smaller than σ_z/σ_r ≈ 10^−3, with respect to Bragg scattering it behaves more like a chain of point-like scatterers than like a dielectric mirror. Thus the reflection angle is not equal to the angle of incidence, but adjusts itself in order to fulfill the Bragg condition.
This self-adjustment of the Bragg condition impedes a controlled tuning of ∆λ_dip and explains why the symmetry of the traces in Fig. 7(a) does not change when the lattice constant is varied.

Furthermore, the transfer matrix model is purely one-dimensional. It assumes not only radially infinite atomic layers, but also a homogeneous density distribution. In reality the radial density distribution is rather Gaussian, which implies a variation of the penetration depth with the distance from the optical axis. The observed reflection spectra thus represent an average of reflection spectra taken for different optical densities. Finally, optical pumping between the ground state hyperfine levels and heating due to resonant absorption may occur while scanning the probe beam frequency and distort the spectra. These effects can, however, be accounted for in a simple rate equation model, which yields very good agreement with our observations.

V. DISCUSSION AND CONCLUSION

The observations made with our apparatus clearly show features beyond Bragg scattering. These are due to two effects: disorder arising from the thermal distribution of the atoms introduces strong absorption close to resonance, which limits the effective number of layers; the second effect is multiple scattering between adjacent layers. Even though the above effects tend to distort or broaden the features which are characteristic of multiple scattering, we find unambiguous signatures of multiple reflections. At this stage, it is however difficult to exactly quantify the number of layers involved in multiple scattering. In any case, for our present parameters the absorption spectra do not show any significant reduction at resonance, so that the qualification as a photonic band gap does not seem adequate.

An interesting question concerns the signature of atomic ordering in the transmitted and the absorbed light. In the thin grating regime, one would not expect the atomic positions to influence the absorption: the behavior of an absorbing atom does not depend on the location of the other atoms. Moreover, unlike for the reflection signal, interference plays a role neither in forward scattering nor in diffuse scattering. Hence there is no signature of atomic ordering in T and A. The situation completely changes in the thick grating regime, where multiple scattering between subsequent atomic layers leads to interference between the light reflected from or transmitted through the layers. The globalization of the scattering process leads to interatomic correlation: now it matters how the atoms are arranged. Multiple reflections give rise to stopping band gaps for certain ranges of light detuning or angle of incidence. It might therefore be less ambiguous to look for signatures of photonic band gaps in transmission or absorption spectra.

To conclude, we have shown that long-range spatial order in atomic clouds can have a dramatic influence on the scattering of light. We have extended earlier studies on Bragg scattering into the regime of thick gratings characterized by multiple reflections. Although signatures of reduced absorption could not be found due to the fatal influence of diffuse scattering, this represents a first step towards the realization of photonic band structures in optical lattices. Differently from photonic crystals or solid state systems, the scattering off optical lattices is weak except near atomic resonances. Therefore photonic stopping bands are expected to be very narrow.
This bears the advantage that we can tune the optical density over a large range. However, the narrow resonance also implies that our system is intrinsically one-dimensional. An extrapolation to three-dimensional systems seems technically demanding, first of all because 3D optical lattices have low filling factors of typically 0.01. There are, however, examples of lattices with unity filling factor [24]. Bose-Einstein condensates in the Mott insulator phase may prove useful to guarantee a high and regular occupation of the lattice sites [6]. On the other hand, the sharpness of the resonance results in a very narrow tolerance angle for the stop band, which will make it difficult to obtain 3D PBGs. Van Coevorden et al. [16] did numerical calculations of the band structure of a 3D optical lattice. To obtain a 3D PBG around an atomic transition of frequency ω, they had to assume an excessively large spontaneous decay width, Γ > 0.01ω.

A major advantage of using ultracold atoms would be the total absence of diffuse scattering. Other possible technical upgrades include the use of standing waves with larger beam waists, thus extending the radial size of the atomic layers, and the choice of smaller Bragg angles, which could be achieved by operating the dipole trap a few nm red-detuned from the same transition to which the probe laser is tuned. Thus, although it seems difficult today to compete with photonic crystals in terms of manipulating the propagation of light just by choosing a smart arrangement of gaseous atoms, there is much room left for improvements.

We acknowledge financial support from the Landesstiftung Baden-Württemberg.
Coupling between the Basic Replicon and the Kis-Kid Maintenance System of Plasmid R1: Modulation by Kis Antitoxin Levels and Involvement in Control of Plasmid Replication

kis-kid, the auxiliary maintenance system of plasmid R1, and copB, the auxiliary copy number control gene of this plasmid, contribute to increasing plasmid replication efficiency in cells with lower than average copy number. It is thought that Kis antitoxin levels decrease in these cells and that this acts as the switch that activates the Kid toxin; activated Kid toxin reduces copB-mRNA levels and this increases RepA levels, which in turn increases plasmid copy number. In support of this model we now report that: (i) the Kis antitoxin levels do decrease in cells containing a mini-R1 plasmid carrying a repA mutation that reduces plasmid copy number; (ii) kid-dependent replication rescue is abolished in cells in which the Kis antitoxin levels or the CopB levels are increased. Unexpectedly, we found that this coordination significantly increases both the copy number of the repA mutant and of the wt mini-R1 plasmid. This indicates that the coordination between plasmid replication functions and the kis-kid system contributes significantly to the control of plasmid R1 replication.

Introduction

Plasmid R1 is an antibiotic resistance plasmid of enteric bacteria that has contributed important insights into plasmid replication and its control, as well as into the regulation and role of auxiliary plasmid maintenance systems [1]. R1 is maintained with a low copy number in the host. Its replication is initiated by specific interactions of a rate-limiting protein, RepA, at oriR1, the origin of replication [2]. The frequency of this process is regulated by the copy number control genes copA and copB. copA, the key regulator gene, encodes an unstable antisense RNA, CopA, that inhibits the synthesis of RepA at the post-transcriptional level. CopA RNA targets the polycistronic copB-repA mRNA at copT, its complementary sequence, and inhibits translation of the tap ORF, which is needed for RepA translation [3,4]. Inactivation of copA leads to uncontrolled amplification of the plasmid, or run-away replication [5]. CopB is a tetrameric repressor protein that inhibits transcription of repA from the internal and strong promoter Prep. The levels of this protein, which is transcribed from a constitutive promoter, decrease when plasmid copy number is reduced; at a very low copy number, the tetramer disassembles and the protein loses its activity as a repressor; this increases transcription of repA and, as a consequence, the copy number of the plasmid increases. In this way, copB acts as a proper copy number control gene that contributes by rescuing inefficient plasmid replication [6-8]. Once the plasmid copy number is restored, CopB levels increase and the protein multimerizes and recovers its repressor activity. Increasing CopB levels in trans favours the CopB repressor function but removes its potential to rescue very low plasmid copy number [9]. copB, copA, copT, repA and oriR1, the so-called "basic replicon" [10], constitute the essential maintenance module of the plasmid.

kis-kid, or parD, is an auxiliary maintenance module of R1 that is close to the basic replicon [11,12] (see Figure 1). This system contains two genes, kis and kid, encoding respectively two small proteins: an antitoxin, Kis (killer suppressor), and a toxin, Kid (killing determinant).
Kid is an RNase that cleaves RNA at sites containing the core sequence 5'-UA(A/C/U)-3'; flanking U residues increase the efficiency of cleavage at these core sequences [13-15]. Kis, the antitoxin, is a protein that interacts with Kid and neutralizes its activity. Kis is also a specific repressor of the operon, whose efficiency increases in complex with Kid [16,17].

kis-kid activity is functionally coupled to the efficiency of R1 replication. The first indication of this coupling was indirect: kis-kid interfered with the isolation of plasmid replication mutants; this was due to the activity of Kid, since mutations that inactivated the Kid toxin abolished this interference [18]. Since then, this phenotype, called the "interference" phenotype, has been used as one of the signatures of this coupling. It was later reported that the kis-kid system is activated in low copy mutants of the plasmid and that this partially recovers the plasmid copy number [19]. Two findings were key to explaining, in molecular terms, this new and intriguing signature of the coupling: (i) the identification of the RNase activity of the Kid toxin [13-15] and (ii) the finding that the copB-repA mRNA contains two sites in its intergenic region that are efficiently cleaved by Kid [20]; this cleavage reduces the CopB levels, activates the repA promoter and increases plasmid replication efficiency. It has recently been reported that Kid cleaves mRNAs of key cell division proteins; in this instance, Kid replication rescue occurs before cell division and effectively enforces plasmid retention by uncoupling plasmid replication and cell division [21]. We reported recently that increasing the Kis antitoxin levels in trans suppressed the "interference" phenotype; this suggested that Kis antitoxin levels could act as the switch connecting the replication and kis-kid toxin-antitoxin maintenance functions [22].

Coupling between replication functions and other maintenance modules has been reported in other plasmid systems: in pSM19035 of Streptococcus pyogenes a global regulator couples the replication, partitioning and toxin-antitoxin modules to achieve high plasmid stability [23]; in the broad host range plasmid RK2, a global regulator controls expression of replication and maintenance systems in different hosts [24]. In the repABC plasmid family of Rhizobiales, transcription of the gene of the replication initiation protein is controlled by proteins of the plasmid partitioning system [25]. In plasmid ColE1, a multimer resolution system, XerCD, couples replication and cell division to achieve plasmid maintenance [26] (see Discussion).

The results reported here support the role of the Kis antitoxin as the switch that couples replication functions and the kis-kid maintenance system; they also support the proposal that a Kid-dependent decrease in the CopB levels increases plasmid replication efficiency. In addition, we found that, beyond playing a role as a replication safety device, coupling between plasmid replication and the kis-kid system significantly increases the copy number of the wt plasmid R1. This implies that this coupling plays a significant role as part of the basic mechanisms that control plasmid R1 replication.
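As an aside, the cleavage specificity described above (core motif 5'-UA(A/C/U)-3', enhanced by flanking U residues) lends itself to a simple motif scan. The sketch below and its two-tier labelling are our illustrative simplification, not a published cleavage model, and the example sequence is a toy string rather than the copB-repA intergenic region.

```python
import re

def kid_sites(rna):
    """List candidate Kid cleavage sites: core motif 5'-UA(A/C/U)-3',
    labelled 'enhanced' when a flanking U is present (our two-tier
    simplification of the efficiency effect described above)."""
    rna = rna.upper().replace("T", "U")
    hits = []
    for m in re.finditer(r"(?=(UA[ACU]))", rna):     # overlapping matches
        i = m.start()
        flanked = (i > 0 and rna[i - 1] == "U") or \
                  (i + 3 < len(rna) and rna[i + 3] == "U")
        hits.append((i, m.group(1), "enhanced" if flanked else "core"))
    return hits

# Toy sequence (not the copB-repA intergenic region):
for pos, site, tier in kid_sites("GGUUACUAAUCGUAC"):
    print(pos, site, tier)
```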
2.1. Kis Antitoxin Levels Decrease in Cells Containing a pKN1562 Replication Mutant and Increase in a clpP− Background

To evaluate the proposal that a reduction in the efficiency of plasmid R1 replication reduces the Kis levels, we compared these levels in cells containing plasmid pKN1562 or its repA55 mutant. This mutation significantly reduces the copy number of the plasmid and activates the modular coupling (see Section 2.2). For these determinations, we tagged the kis antitoxin genes of the wt and rep mutant plasmids with a 3×FLAG epitope that is recognized efficiently by specific monoclonal antibodies. Kis antitoxin levels were subsequently evaluated by immuno-blotting. The data (Figure 2C,D) show that the Kis antitoxin levels decreased in cells containing the plasmid repA mutant. Consistent with the role of ClpAP as the specific protease cleaving Kis [27], the levels of this protein increased in a strain carrying a deletion of the gene of the ClpP protease; this increase occurred in cells containing the repA-wt mini-R1 plasmid pKN1562 or its repA55 mutant (Figure 2A,B).

Figure 2. Cells contained either the pKN1562 mini-R1 plasmid coding for RepA wt or its repA55 mutant. The repA55 mutation changes R for H in codon 55 of RepA and results in a thermosensitive replication protein that, at the permissive temperature, reduces plasmid copy number. Equal amounts of total protein extracts were loaded in each lane of the gels and the immuno-signal of the 3×FLAG-labeled Kis was quantified by densitometry. Values in B and D represent the average of seven independent densitometries and are corrected for the number of plasmid-containing cells. This percentage was 100% for cells containing the wt plasmid and 78% for cells containing the repA55 plasmid replication mutant. * or *** indicate differences whose p-values are 0.01-0.05 or <0.001, respectively.

2.2. Effects of Increasing the Stability of the Kis Antitoxin on the Modular Coupling

Coupling between replication functions and the kis-kid system drastically reduces the frequency of isolation of plasmid replication mutants (interference phenotype). This phenotype, a signature of the modular coupling, can be abolished by inactivating the gene of the Kid toxin or by overproducing the Kis antitoxin [22]. We now tested whether the increased levels of the Kis antitoxin shown in Figure 2, associated with its stabilization in a clpP− strain, abolished this phenotype. To this aim, we transformed C600 and its clpP− mutant with a preparation of pKN1562 mutagenized with hydroxylamine. The number of pKN1562 thermo-sensitive replication mutants (rep-ts) recovered in this transformation was compared with the number of kanamycin-resistant thermo-sensitive mutants (kmr-ts) rescued in the same screening. Note that the rescue of kmr-ts mutants is independent of the presence of a kis-kid system; therefore, they serve as a reference to determine the relative number of rep-ts mutants isolated under different conditions [22]. The analysis indicated that the rep-ts/kmr-ts ratio was 0.22 in the wild-type strain (2/9) and increased to 1.1 in the clpP− strain (10/9). The increased ratio obtained in this strain indicates that stabilization of the Kis antitoxin inactivates the interference phenotype, which implies inactivation of the modular coupling. Note that a similar increase in this ratio has been reproduced in experiments in which the modular coupling was inactivated either by inactivation of kid or by overproduction of Kis [22].
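For reference, the interference ratios quoted above can be recomputed from the raw counts; the Fisher exact test at the end is our illustrative addition to gauge the contrast between the two strains, not an analysis reported in the original screening.

```python
from scipy.stats import fisher_exact

# rep-ts and kmr-ts mutant counts quoted above.
counts = {"wild type (C600)": (2, 9), "clpP- (SG12050)": (10, 9)}
for strain, (rep_ts, kmr_ts) in counts.items():
    print(f"{strain}: rep-ts/kmr-ts = {rep_ts / kmr_ts:.2f}")

# The screening itself reports no significance test; a Fisher exact test
# on the raw counts is one illustrative way to gauge the contrast.
odds, p = fisher_exact([[2, 9], [10, 9]])
print(f"Fisher exact p = {p:.3f}")
```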
The experimental procedures are detailed in Ref. [22] and are only briefly described here (see Materials and Methods).

A second signature of the coupling is the replication rescue phenotype, meaning a partial recovery of the efficiency of plasmid replication of repA mutants mediated by kis-kid. Data in Figure 3A show that in a wt background (clp+), the repA55 mutation significantly reduced the copy number of pKN1562. In the isogenic clpP− strain (clp−), this value is further reduced. The difference found is statistically significant and indicates that stabilization of the Kis antitoxin protein in this strain abolishes the replication enhancement effect dependent on kis-kid. The results are consistent with the proposal that the clpP− background, which stabilizes the Kis antitoxin, inactivates the modular coupling. As a control, it is shown that inactivation of the coupling due to the kid75 mutation abolishes the replication rescue effect.

Importantly, during this analysis we realized that in the clpP− background the copy number of the wt plasmid pKN1562 is also significantly reduced. Furthermore, the kid75 mutation that inactivates the modular coupling reduces the efficiency of wt plasmid replication to a similar level. This result revealed that the communication between the two maintenance modules forms part of the basic mechanisms that control plasmid R1 replication. Results presented in the next two sections further support this conclusion.

Coupling between the replication and kis-kid modules is also associated with a de-repression of the kis-kid operon. To follow this correlation, we determined the kis-kid mRNA levels in cells containing the wild-type pKN1562 plasmid or its repA55 mutant (Figure 3B). An increase in the kis-kid transcription level is most clearly observed in the repA55 mutant and in the presence of a wt clpP gene (clp+). In the absence of ClpP (clp−), Kis repressor levels are stabilized and, accordingly, transcriptional levels of kis-kid are significantly reduced. In cells containing plasmid pKN1562 wt (kid+), kis-kid transcription is maintained at a basal level. Note that the kid75 mutation inactivates the modular coupling as it inactivates the RNase activity of the toxin; however, the mutation does not interfere with the co-regulatory potential of the toxin, and this allows evaluation of the effect of the kid mutation on kis-kid transcription. We noticed that the presence of the kid75 mutation significantly increases kis-kid transcription levels, implying a negative effect of the RNase activity of Kid on the levels of the kis-kid transcript.

2.3. Effects of Overproducing the Kis Antitoxin on the Modular Coupling

Previous results indicated that overproduction of the Kis antitoxin abolished the interference phenotype, implying inactivation of the coupling [22]. We now completed the analysis by testing the effect of this overproduction on the replication rescue phenotype and on the transcriptional level of the kis-kid operon. As activation of the modular coupling can also be detected in cells containing the wild-type plasmid pKN1562 (Section 2.1), we first evaluated the replication rescue phenotype by testing the copy number of this plasmid in the presence or absence of an excess of Kis antitoxin; this excess was provided in trans by the Kis overproducer pMLM126 in the presence of inducer. Control values were obtained in the presence of the empty vector pLNBAD. The data show that indeed the copy number of pKN1562 is substantially reduced in excess of Kis antitoxin (Figure 4A).
This result confirms the effect of the coupling on the basal efficiency of replication of pKN1562 and implies that an excess of Kis antitoxin inactivates the coupling. Data in Figure 4A further show that the kid75 mutation (pJLV01) abolishes the replication rescue effect, which is consistent with the dependence of this phenotype on an active toxin. Similar conclusions can be obtained when a pKN1562-repA55 mutant is used. As expected, the repA55 mutation significantly reduces plasmid copy number. Again, the kid75 mutation further reduces the copy number of the repA55 mutant. This reduction is similar to the one observed in excess of Kis, meaning that in both cases the modular coupling is lost.

Figure 4B shows the effect of an excess of Kis on kis-kid mRNA levels determined under the same conditions analyzed in Figure 4A. The results, best seen in the presence of the repA55 mutation, indicate that this mutation de-represses the kis-kid operon and that de-repression is substantially neutralized in excess of the Kis antitoxin. Although at a lower level, the effect is also seen for the pKN1562 wt plasmid. Note that, as previously shown (Figure 3B), the kis-kid mRNA levels increase in the presence of the kid75 mutation (kid−). The above results confirm that an excess of Kis abolishes the coupling between plasmid replication and kis-kid maintenance systems, both in the wt plasmid and in its repA55 mutant.

2.4. Coupling between Maintenance Modules and Plasmid Stability

Modular coupling influences plasmid copy number, and this has a direct effect on plasmid stability. We evaluated this correlation by studying plasmid stability in the presence or absence of a mutation in the host that stabilizes the Kis antitoxin (clp− or clp+), or in the presence or absence of an excess of Kis antitoxin (pLNBAD-kis + or −). As shown previously, both conditions inactivate the modular coupling. In the analysis we used the same strains and constructions used for copy number determinations (see Sections 2.2 and 2.3). The results (Figure 5) show that there is a correlation between plasmid copy number and plasmid stability. The effects on plasmid stability are seen more clearly after 60 or 90 generations of propagation in non-selective medium. Conditions that abolish the modular coupling reduce plasmid copy number, and this reduces plasmid stability. As predicted, the repA55 mutation that reduces the efficiency of plasmid replication results in all cases in less stable plasmids. The results of the stability analysis are consistent with the role of the coupling between maintenance modules in plasmid copy number, and are also consistent with the proposal that the Kis antitoxin is the switch that connects these modules.

2.5. Excess of CopB Abolishes Replication Rescue

The replication rescue phenotype is dependent on the cleavage of the copB-repA mRNA mediated by the RNase activity of the Kid toxin [20]. We aimed to test the effects of an excess of CopB protein on this phenotype. The analysis was done in the presence or absence of the kid75 mutation, both in the pKN1562 wt plasmid and in its repA55 mutant. As an exogenous source of CopB, we used a multi-copy pUC18-copB recombinant that greatly increases the levels of CopB [7]. The functional effect of this excess can be monitored by evaluating the complementation of a mini-R1 copB deletion mutation that removes the copB promoter and part of the copB gene, thus increasing the plasmid copy number.
The results (Figure 6A) show that, in the presence of the pUC18-copB recombinant, the copy number of copB deletion mutants of pKN1562 or of its kid75 mutant pJLV01 decreases from values close to 20-25 to a basal level. This clearly indicates that the excess of CopB provided in trans complements the copB mutation. The controls made using the pUC18 vector alone indicate that the complementation observed is a specific effect of CopB.

We then tested the effect of an excess of CopB on the replication efficiency of plasmids pKN1562 or pJLV01, both of them carrying a wt copB gene and therefore a low copy number. Note that pJLV01 carries the kid75 mutation that inactivates the modular coupling. Data in Figure 6B show that, in the presence of the pUC18-copB recombinant, the copy number of pKN1562 is significantly reduced to the value corresponding to pJLV01. This result indicates that the replication rescue is abolished in excess of CopB. As expected, the repA55 mutant of pKN1562 has a lower copy number than the wt plasmid; similarly, an excess of CopB reduces the plasmid copy number of this repA mutant, and this reduction is similar to the one observed in the repA mutant carrying the kid75 mutation. These data are consistent with the proposal that the replication rescue associated with the modular coupling is dependent on a reduction of the CopB levels due to the action of the RNase activity of the Kid toxin. The results confirm again that the modular coupling forms part of the basic mechanisms that control wt plasmid R1 replication.

3.1. Modular Coupling between Maintenance Modules and Efficiency of Plasmid wt Replication

The results of this work consistently support the model for the role of the Kis antitoxin as the switch that connects the replication and kis-kid toxin-antitoxin modules of the plasmid. The Kis antitoxin levels are clearly reduced when the copy number of the plasmid falls, and this activates the coupling. Conversely, it is shown that stabilizing the Kis antitoxin or increasing its levels prevents the modular coupling. This coupling was assessed by testing the interference and replication rescue phenotypes, as well as by studying the transcriptional levels of the operon. Modular coupling and effective replication rescue require normal levels of CopB. The analyses also show that increasing the CopB levels or inactivating the Kid toxin prevents the modular coupling.

Interestingly, we found that the different ways of uncoupling the plasmid replication and kis-kid modules consistently reduce the plasmid copy number not only of the repA55 mutant but also of the wt plasmid. This implies a more direct involvement of this coupling in the control of plasmid replication. The distribution of plasmid copy numbers in individual cells of the culture implies that a significant part of the population can reach the low level required to trigger the coupling (discussed in [9]). Improving copy number or partitioning at cell division has been pointed out to explain the unexpectedly high stability of low-copy plasmids [28]. It has been pointed out recently that Kid can inhibit cell division and that this toxin achieves plasmid retention by uncoupling plasmid replication and cell division [21]; in addition, the same study proposes that the hok-sok toxin-antitoxin system of this plasmid could eliminate plasmid-free cells that could arise due to failures in this process.
3.2. The Pathway: Kis Antitoxin Acts as the Switch Connecting Plasmid Replication and Toxin-Antitoxin Modules

The analysis reported here indicates that the coupling between the basic replicon of R1 and the kis-kid system is initiated when the copy number of the plasmid falls; this reduces the antitoxin level and activates the potential of the Kid toxin to induce the "interference" or the "replication rescue" phenotypes. This last phenotype is dependent on kid and copB; it is activated in cells containing the wt plasmid or its replication mutants and is well defined in molecular terms. The interference phenotype is a signature of the modular coupling and, like the replication rescue, is dependent on a wt Kid toxin, but it is poorly defined in molecular terms.

3.3. On the Inhibition of the Replication Rescue by Excess of CopB

The replication rescue phenotype is a late event in the pathway described above. It is dependent on Kid and CopB activities. Overproduction of CopB abolishes this phenotype, but in principle it should not affect the regulation of the kis-kid module, as the sequence targeted by this transcriptional repressor is not present in the promoter of the kis-kid operon. In addition, CopB overproduction should not affect the "interference" phenotype, as CopB does not affect the activity of Kid required for this phenotype. When we compared the plasmid copy numbers obtained in the presence or absence of an excess of CopB, we observed a significant replication enhancement in the presence of normal levels of CopB that was abolished in excess of CopB. This effect is measurable and can be assigned to the molecular coupling described here. The modular coupling forms part of the auxiliary machinery that controls plasmid R1 replication via CopB. Its action depends on a reduction of the antitoxin levels that activates the Kid toxin, and this can be affected by plasmid copy number but also by the ClpAP protease. Thus the activity of these effectors can influence the role of CopB as an auxiliary copy number control protein.

3.4. Coordination of Plasmid Maintenance Functions in Different Systems

Growing information underlines the relevance of connections between replication and other plasmid maintenance modules, and eventually with cell cycle events, in different plasmids. In pSM19035, a plasmid of Streptococcus pyogenes, the replication, partition and toxin-antitoxin systems act co-ordinately to achieve a high plasmid maintenance level with a minimal fitness cost [23]. In the broad host-range plasmid RK2, replication and partitioning functions act co-ordinately to achieve stable plasmid maintenance in different hosts. Multiple co-ordinately regulated operons contribute to this [24], and the basic mechanisms involved have been recently reviewed [29]. Another singular case of coordination between different functions related to plasmid maintenance has been reported for the XerCD multimer resolution system of plasmid ColE1. Multimers of this multicopy plasmid compromise cell growth and plasmid stability, but they can be resolved by XerCD-mediated site-specific recombination at cer [30]. In addition, multimer formation induces rcd-RNA, a singular component of the system that interacts with and enhances the action of tryptophanase, thus increasing the concentration of indole in the cells. This inhibits cell growth and division as well as plasmid replication, thus timing the resolvase to act before cell division can occur [31].
Similarly, in plasmid R1, coordination of the basic replicon functions and the kis-kid system activates the Kid toxin to rescue replication in cells with very low copy number [16]. Activated Kid toxin inhibits cell division, thus effectively achieving plasmid R1 retention by increasing plasmid replication and timing this replication rescue before cell division can occur [21]. Our report adds to this that the kis-kid system plays a more relevant role in the control of plasmid copy number than previously suspected.

4.1. Cell Cultures, Strains and Plasmids

Cells were grown at 30 °C in L-Broth (LB) and on L-Broth agar (LA) prepared as described [32]. Antibiotics were supplemented according to the resistances carried by the plasmids. The E. coli K12 clpP− mutant SG12050 [33] and its parental strain C600 [34] were used in this work. Plasmids used in this work were: pKN1562, a wild-type (wt) mini-R1 plasmid conferring resistance to kanamycin [10] and carrying the wt kis-kid system [11]; pJLV01, a pKN1562 derivative that carries the point mutation kidD75E, which inactivates the RNase function of the Kid toxin but not its co-regulatory activity [22,35,36]; variants of pKN1562 or pJLV01 carrying the repAR55H mutation [22]; pMLM126, a Kis overproducer inducible by arabinose, and pLNBAD, its empty vector [36]; finally, we also used a CopB overproducer, pUC18-copB, and its empty vector pUC18 [6].

4.2. Replication Interference Phenotype

Plasmid DNA extractions and transformations were performed essentially as described [37]. In vitro mutagenesis of plasmids was done with hydroxylamine as previously described [38]. This DNA was extensively dialysed against TE buffer (10 mM Tris-HCl pH 8.0, 1 mM EDTA) and used to transform the selected E. coli strains. The ratio of replication thermo-sensitive mutants (rep-ts) to kanamycin-resistant thermo-sensitive mutants (kmr-ts) rescued by transformation was used to define the replication interference phenotype. This value is very close to 1 when the coupling is inactivated by a mutation in the kid toxin gene, but it is reduced to 0.2-0.3 in the presence of a functional coupling [22].

4.3. Plasmid Stability and Statistical Analysis

The plasmid stability determinations were done in cells containing pKN1562 and its rep and kid mutant derivatives. For this purpose, the percentage of colonies carrying the kanamycin resistance marker was determined after cell propagation at 30 °C for 30, 60 and 90 generations in rich solid LA medium without kanamycin, as previously described [22]. The standard deviations corresponding to the different determinations were calculated using values obtained in at least three independent experiments. A paired Student's t-test using GraphPad Prism 5 for Mac software was used. For the statistical analysis of Kis antitoxin levels, a one-way ANOVA test and the non-parametric Wilcoxon test were used. Values linked by brackets and labelled with * or *** correspond to differences whose p-values are <0.05 or <0.001, respectively.

4.4. Plasmid Copy Number Determinations

Mini-R1 plasmid copy numbers relative to the chromosome were determined by quantitative PCR (qPCR) as described in [39]. In these determinations, the kis and lpp genes were selected as plasmid and chromosome markers, respectively. Specific protocols and primers used in these determinations have been described recently [22]. Plasmid copy number (PCN) per genome was calculated as the ratio

PCN = [size of chromosomal DNA (bp) × amount of plasmid DNA (ng)] / [size of plasmid DNA (bp) × amount of genomic DNA (ng)].

The PCN calculated for pKN1562 was given the value 1; the PCNs per genome were normalized to this value.
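A sketch of this calculation, including the normalization to the wild-type plasmid used throughout the Results, follows; the chromosome and plasmid sizes and the DNA amounts below are placeholder values for illustration only, not values from the paper.

```python
def plasmid_copy_number(chrom_bp, plasmid_bp, plasmid_ng, genomic_ng):
    """PCN per genome from the qPCR-derived quantities, following the
    ratio given above."""
    return (chrom_bp * plasmid_ng) / (plasmid_bp * genomic_ng)

# Placeholder numbers for illustration only (E. coli chromosome ~4.6 Mb;
# the mini-R1 plasmid size below is an assumed value, not from the paper):
pcn_ref = plasmid_copy_number(4.6e6, 11_000, plasmid_ng=1.2, genomic_ng=100.0)
pcn_mut = plasmid_copy_number(4.6e6, 11_000, plasmid_ng=0.5, genomic_ng=100.0)

# Normalization to the wild-type plasmid pKN1562 (value = 1), as in the text:
print(f"pKN1562 (reference): {pcn_ref / pcn_ref:.2f}")
print(f"repA55 mutant      : {pcn_mut / pcn_ref:.2f}")
```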
When the analysis was done in the presence of the overproducer of Kis, the primers used for the PCN and transcriptional determinations amplified the 5' region of the Kid toxin gene and were 5'-GGTCACGCGATTAAAGGC-3' and 5'-GGTCACGCGATTAAAGGC-3'. The PCN values were referred to the percentage of kanamycin-resistant cells evaluated when the sample was taken.

4.5. Quantification of kis-kid mRNA Levels

Transcriptional levels of the kis and lpp genes were evaluated by RT-qPCR [22]. The kis transcriptional levels were calculated as 2^(Ct_lpp − Ct_kis), where Ct_lpp and Ct_kis are the threshold cycle values corresponding to the PCR amplification of lpp and kis. When the analysis was done in the presence of the Kis overproducer, the primers used for the kis-kid mRNA determinations amplified the 5' region of kid (see Section 4.4). The relative transcriptional levels were referred in all cases to the plasmid copy number, a value that is already corrected for the contribution of plasmid-free cells.

4.6. Determination of Kis Antitoxin Levels by Western Blotting (3×FLAG)

Bacterial cultures were grown as indicated above. Equal samples of the cultures were taken at OD600 = 0.5, and the cells were collected and re-suspended in Laemmli buffer. The number of plasmid-containing cells was evaluated in these samples. Antitoxin expression levels were determined as the means of at least seven Western blotting experiments using a pKN1562 version with 3×FLAG-tagged Kis. Kis protein values were referred to the percentage of plasmid-containing cells; the average level determined in the wild-type strain was used as the reference (value = 1). Western blot analysis was done using anti-FLAG antibodies (Sigma-Aldrich, St. Louis, MO, USA) diluted 1:500 (2 h 30 min) and anti-mouse antibodies (Sigma-Aldrich, St. Louis, MO, USA) diluted 1:5000 (1 h 30 min) in TBS-Tween containing 3% non-fat milk. The westerns were developed using ECL (Thermo Scientific, Waltham, MA, USA); band intensity analysis, corresponding to samples from five independent cultures, was performed using Quantity One software (Bio-Rad, Berkeley, CA, USA, version 4.6.3).

Acknowledgments

This study was funded by projects CSD2008-00013 and BFU2011-25939 of the Spanish Ministry of Science and Innovation. Discussions related to this work with members of the group are kindly acknowledged. The comments and corrections to the manuscript by Elizabeth Diago-Navarro are kindly acknowledged.

Author Contributions

Juan López Villarejo conceived and performed the evaluation of the kis-kid mRNA levels, the copy number determinations and the plasmid stability analysis, and also made the relevant constructions needed; Damian Lobato-Marquez was involved in the determinations of the Kis antitoxin levels and made the constructions needed for this analysis; Ramón Díaz-Orejas conceived the first idea of the project and wrote the manuscript. All the authors were also involved in improving the successive versions of the manuscript.
Robust Kernel-based Feature Representation for 3D Point Cloud Analysis via Circular Convolutional Network

Feature descriptors of point clouds are used in several applications, such as registration and part segmentation of 3D point clouds. Learning discriminative representations of local geometric features is unquestionably the most important task for accurate point cloud analyses. However, it is challenging to develop rotation- or scale-invariant descriptors. Most previous studies have either ignored rotations or empirically studied optimal scale parameters, which hinders the applicability of the methods for real-world datasets. In this paper, we present a new local feature description method that is robust to rotation, density, and scale variations. Moreover, to improve representations of the local descriptors, we propose a global aggregation method. First, we place kernels aligned around each point in the normal direction. To avoid the sign problem of the normal vector, we use a symmetric kernel point distribution in the tangential plane. From each kernel point, we first project the points from the spatial space to the feature space, which is robust to multiple scales and rotation, based on angles and distances. Subsequently, we perform graph convolutions by considering local kernel point structures and long-range global context, obtained by a global aggregation method. We experimented with our proposed descriptors on benchmark datasets (i.e., ModelNet40 and ShapeNetPart) to evaluate the performance of registration, classification, and part segmentation on 3D point clouds. Our method showed superior performance when compared to the state-of-the-art methods, reducing 70% of the rotation and translation errors in the registration task. Our method also showed comparable performance in the classification and part-segmentation tasks with simple and low-dimensional architectures.

INTRODUCTION

Point cloud analysis is becoming a popular research area owing to the growth in the capability of 3D sensors to capture rich geometric 3D information. The applications of point cloud analysis include robotics, autonomous driving, and augmented/mixed reality. Extracting salient local geometric information is a fundamental task for analyzing point clouds to match correspondences between two objects [1] or to analyze the geometric information [2]. Recently, end-to-end learning based on point or graph convolutional networks has outperformed earlier works, which were primarily developed using hand-crafted feature descriptors [3], [4]. However, building rotation- or scale-invariant descriptors remains a difficult task in the field of computer vision research.

Descriptors for point cloud applications have been widely researched for point cloud registration, model segmentation, and classification. PointNet [5] shows a new paradigm for point cloud analysis by introducing a permutation-invariant method; however, it is difficult to encode the local geometric information. The point pair feature network (PPFNet) [6] encodes local features by employing PointNet [5] for local regions, and a deep graph convolutional neural network (DGCNN) [2] encodes the relative position of neighbors for each point. However, these methods are limited in extracting rotation-invariant features. In practice, it must be noted that the point cloud is not aligned to the same frame, indicating that random rotation of the input point cloud can significantly affect the representation of the descriptors.
Kernel point convolution (KPConv) [7] uses kernel points around each point to efficiently handle irregularly distributed point clouds. KPConv has demonstrated groundbreaking performance; however, this rotation-variant descriptor limits the performance for randomly rotated objects, which are obtained by multiview scans. 3DSmoothNet [1] extracts local region points and aligns the local points to the local reference frame of the center point. The primary limitation of 3DSmoothNet is that the sign of the normal axis and the directions of the other two axes are not unique in a planar region. Descriptors that are aligned by an inaccurate local reference frame may encode different geometric contexts. In the example of point cloud registration, if the corresponding points have different normal signs, descriptors from those points may hamper the identification of correspondences. Moreover, local descriptors only encode local geometric information, which results in difficulty in encoding the global geometry. Consequently, local descriptors of monotonous and repeating areas are typically considered to be nonsalient descriptors, which indicates that global registration can be mismatched.

To overcome these limitations, we propose a rotation- and scale-robust descriptor-generation method. Inspired by KPConv [7] and 3DSmoothNet [1], our proposed method aligns the kernels to the normal vector and extracts rotation-robust features. Owing to the nonuniqueness of the local reference frame in the planar region, we distribute kernels in the form of a cylindrical shape. This shape is symmetrical around the tangent plane to handle the sign problem and has a circular cross section to handle the other undefined reference axes. By employing this kernel structure, we apply convolution with adjacent kernels combined together, such that the descriptor is not affected by rotation. To make the descriptor robust to the scale of the local frame, we analyze the geometric information and rebuild the descriptor with a modified kernel size. In addition, to improve representations of the descriptor from monotonous and repeating areas, we aggregate all features based on the distances from each point to encode discriminative global features.

The major contributions of this work can be summarized as follows:
• The rotation-robust descriptor is developed based on the kernel alignment.
• The sign problem, which is caused by the normal direction of the vector, is resolved by the proposed angle-based convolution.
• The scale factor, which is derived from the size of a kernel, is automatically defined for each point using a scale adaptation module.
• Global context is effectively extracted by the proposed aggregation method together with each local context.

We analyzed the performance of our proposed method on three types of tasks: registration, classification, and part segmentation. We trained and tested our proposed method on the ModelNet40 dataset [8] for classification and registration and on the ShapeNet dataset [9] for segmentation.

The remainder of this paper is organized as follows. In Section II, several hand-crafted and deep-learning-based 3D features are reviewed. The proposed method is described in Section III. The experimental results, discussion, and conclusion are presented in Sections IV, V, and VI, respectively.

Hand-crafted 3D features

Before the advance of deep learning, 3D feature descriptors were developed based on hand-crafted methods.
Local descriptors were generated based on the relationship between a point and the spatial neighborhoods around the point. In addition, certain methods build a rotation-invariant descriptor based on a local reference frame. Spin-images [10] align neighbors using the surface normal of the interest point and represent the aligned neighbors in a cylindrical support region using radial and elevation coordinates. The 3D Shape Context descriptor [11] represents neighbors in the support region with grid bins divided along the azimuth, elevation, and radial values. The Unique Shape Context method [12] extends the 3D Shape Context method by applying a local reference frame based on the covariance matrix. Similarly, the signature of histograms of orientations algorithm [3] also calculates a local reference frame and builds a histogram using angles between point normal vectors. Point feature histograms [13] and fast point feature histograms [4] select neighbors for each point and build a histogram using pairwise geometric differences between the neighbors and the point of interest, such as relative distances and angles. Recently, with the advent of deep neural networks for point cloud data (e.g., PointNet [5] and DGCNN [2]), feature descriptors have shown groundbreaking results when compared to hand-crafted methods in several vision tasks.

Volumetric based Methods

The conversion of the point cloud to a volumetric data representation has been widely used to employ grid-based convolutions [14], [15]. However, the quantization of the floating-point data results in an approximation, such that the input data intrinsically contain discretization artifacts. Because the voxelization process consumes a large amount of memory, these methods typically approximate the input data into a coarse grid of volumetric representation. To overcome this problem, certain methods represent point cloud data by optimizing the memory consumption. OctNet [16] divides the space by employing a set of unbalanced octrees based on density. Certain methods use a sparse tensor that only saves the nonempty space coordinates and features [17], [18], [19]. To build a rotation-invariant descriptor, 3DSmoothNet [1] calculates the local reference frame based on the covariance of points and transforms the neighbor points within the spherical support area of the interest point using the local reference frame before voxelizing the points. However, the sign of the normal axis and the directions of the other two axes are not unique in a planar region. Descriptors that are aligned using an inaccurate local reference frame may encode different geometric contexts. Therefore, we assume that the sign of the normal vector is not unique and use a customized kernel similar to that of the KPConv [7] method to overcome the sign issue. SpinNet [20] aligns each point and the neighbors of the point (i.e., a patch) with the z-axis and maps the point patch to a cylindrical volume to build rotation-invariant descriptors. However, since the method uses a volumetric representation for each point patch, it requires a large amount of computational memory to build descriptors. Moreover, since each volume contains only one point patch, the user has to determine the optimal patch size to be trained (i.e., a fixed scale). In contrast, our proposed method consumes a relatively small amount of computational memory when compared to SpinNet [20] and automatically determines the kernel size using the scale adaptation module.
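The sign ambiguity discussed above is easy to reproduce: eigenvectors of a covariance matrix are defined only up to sign, so a covariance-based normal can come out as +n or −n with equal validity. A minimal sketch follows; the synthetic patch and parameter choices are ours, for illustration.

```python
import numpy as np

# Covariance-based normal estimation, illustrating the sign ambiguity:
# eigenvectors are defined only up to sign, so +n and -n are equally valid.
def estimate_normal(neighbors):
    """Unit eigenvector of the neighborhood covariance with the smallest
    eigenvalue -- the usual PCA surface normal."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]

rng = np.random.default_rng(0)
patch = rng.uniform(-1.0, 1.0, size=(64, 3))
patch[:, 2] *= 0.01                          # nearly planar patch (normal ~ z)
n = estimate_normal(patch)
print("estimated normal:", np.round(n, 3))   # ~(0, 0, +/-1); the sign is arbitrary
```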
Point-based Methods PointNet [5] and PointNet++ [21] are pioneering deep-neural-network approaches to point cloud analysis. These methods encode unstructured point clouds using a shared multilayer perceptron and build a permutation-invariant descriptor using a global max-pooling layer. Based on PointNet, various methods have been developed to improve performance. PPFNet [6] extended PointNet [5] to learn local geometric features: it builds local features with PointNet and subsequently fuses global information from the local features via max-pooling. PPF-FoldNet [22] used rotation-invariant features, such as the angles and distances between an interest point and its neighbors, and trained the descriptor with folding-based auto-encoding in an unsupervised fashion. DGCNN [2] selects k-nearest neighbors for each point and encodes the relative locations of the neighbors to encapsulate the geometric information. ShellNet [23] partitions the neighbors of each point into shells based on their distances from the point to resolve the point-order ambiguity. KPConv [7] proposed a kernel-based point convolution that places kernel points around each point to effectively handle irregularly distributed point clouds and aggregates the geometric information from the kernel points. We extend the KPConv method with normal kernel alignment and angle-based convolution. As described in KPConv, a normal vector is available for artificial data [7]. On real-world datasets, the local reference frame is inaccurate because of the sign problem (i.e., the uncertain direction of the normal vector; Fig. 1). To overcome the inaccuracy of the local reference frame, we align the kernels and extract features based on the unsigned normal axis, and subsequently apply convolution operations that are invariant to the sign problem. Various rotation-invariant methods have been studied based on rotation-invariant features such as relative distances, angles [24], [25], and quaternions [26]. RIF [27] represents the neighbors of an interest point using rotation-invariant features and constructs a point relation matrix to supplement the insufficient global information. The method showed better performance than other methods under rotations but inferior performance in non-rotated settings compared to non-rotation-invariant methods. It is challenging to develop a rotation-robust descriptor with good benchmark accuracy for two reasons: 1) it is hard to represent the accurate relationship between points with rotation-invariant features, and 2) convolving more than one point in a rotation-invariant order is a challenging problem. In that respect, our proposed method represents neighbors not only with rotation-invariant features but also with kernels that supplement the relationship representation. Furthermore, we use a circular convolution that processes adjacent kernels simultaneously to capture the relationships between points. Figure 2 illustrates an overview of the proposed descriptor-generation framework. The point descriptor is built using information obtained from the kernels around the point. To build rotation-invariant kernels, we use the normal vector of each point to align the kernels (Section A). Local information is extracted from the kernels to encode the feature descriptors (Section B). Subsequently, a circular convolution that is invariant to the sign problem is applied (Section C).
A scale adaptation module is employed in the network to resolve the scale issues (Section D). The convolutional neural network (CNN) encoder architectures used for the downstream tasks in this study are presented in Section E. Finally, the global context estimation employed in the encoder architectures is described (Section F). Kernel alignment To build a rotation-robust descriptor, we align the kernels around each point using the local reference axis, i.e., the normal vector. Inspired by 3DSmoothNet [1], we estimate the normal vector as an eigenvector of the neighbor-point covariance matrix. However, the sign of the normal vector and the remaining local reference axes are not unique if a point is located on a planar surface. To resolve these ambiguities, we use cylinder-shaped kernels in which the cylinder axis is aligned to the normal vector. Figure 3(a) illustrates the kernel distribution. The cross sections of the cylinder are circles stacked along the normal direction. We place the kernels on each circle (four to six kernels per circle) and group each circle as one layer. In total, we use three layers for the cylinder (i.e., additional upper and lower regions of the tangent plane). Rotation-robust feature projection Once all the kernels are aligned for each point, the k-nearest neighbor points of each kernel are extracted. The averaged location is then calculated based on their distances from the kernel, as in (1), where x̄_i^k ∈ R^(N×3) is the weighted-average location of the k-th kernel point of x_i and d indicates the distance from the center point to the kernel point. For the weighting term w_j, we use a Gaussian function to reduce the influence of outliers in (1). For rotation-robust representations, we estimate four types of features. We first estimate the angle between two vectors: one from the center point of the kernels to the weighted-average point, and the other the normal vector (angle f1 in Fig. 3(d)). However, because the normal vector has an orientation ambiguity (i.e., the sign problem), the normal vector is multiplied by a negative sign if the kernel is located below the tangent plane; here v_i indicates the normal vector of x_i and sign(k) returns a negative sign if the kernel is located below the tangent plane. This term makes the angle value independent of the normal sign. Next, we estimate the distances from the point to the averaged neighbors and from the center of the kernels to the averaged neighbors (distances f2 and f3 in Fig. 3(d)). To provide a direction toward the closest adjacent kernel points, we estimate the distance ratio from two adjacent kernel points to the averaged point (ratio f4 in Fig. 3(d)). The relative location of the averaged neighbor points can be successfully encoded with the presented angle- and distance-based descriptions. Figure 3 illustrates the entire feature-extraction process, and Figure 4 illustrates candidates of the neighbor points corresponding to the extracted features. Using these features, we can accurately represent the relative positions of neighbors (a minimal sketch of these steps follows).
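As a reading aid, the following is a minimal numpy sketch of the kernel-alignment and feature-projection steps as we read them; all names (estimate_normal, weighted_average, kernel_features), the Gaussian width, and the epsilon terms are illustrative assumptions rather than the reference implementation:

```python
import numpy as np

def estimate_normal(neighbors):
    """Normal vector as the eigenvector of the neighbor covariance matrix
    with the smallest eigenvalue; its sign is ambiguous (the sign problem)."""
    centered = neighbors - neighbors.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(centered.T @ centered)
    return eigvecs[:, 0]                  # eigh sorts eigenvalues ascending

def weighted_average(kernel_pt, neighbors, sigma=0.1):
    """Gaussian-distance-weighted average of a kernel point's nearest
    neighbors, reducing the influence of outliers (cf. Eq. (1))."""
    d = np.linalg.norm(neighbors - kernel_pt, axis=1)
    w = np.exp(-d ** 2 / (2.0 * sigma ** 2))
    return (w[:, None] * neighbors).sum(axis=0) / w.sum()

def kernel_features(x_i, v_i, kernel_pt, below_plane, xbar, adj_kernel_pt):
    """The four rotation-robust features f1..f4. `below_plane` flips the
    normal for kernels under the tangent plane, so f1 is independent of
    the normal sign."""
    n = -v_i if below_plane else v_i
    u = xbar - kernel_pt
    cos = u @ n / (np.linalg.norm(u) * np.linalg.norm(n) + 1e-12)
    f1 = np.arccos(np.clip(cos, -1.0, 1.0))   # angle to the (signed) normal
    f2 = np.linalg.norm(xbar - x_i)           # point -> averaged neighbors
    f3 = np.linalg.norm(u)                    # kernel -> averaged neighbors
    f4 = f3 / (np.linalg.norm(xbar - adj_kernel_pt) + 1e-12)  # adjacency ratio
    return np.array([f1, f2, f3, f4])
```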
Circular convolution Because the kernels are not aligned to a unique local reference frame, the order of the kernels may change depending on the point distribution. However, the adjacency of kernels within a cylinder layer is invariant to rotation. Therefore, we extend the 1 × 1 × 1 channel-wise convolution in (6) to (7), where c(k, +1) and c(k, −1) indicate the clockwise and counterclockwise adjacent kernels of the k-th kernel in the cylinder layer. Subsequently, to avoid the sign problem, the kernels are divided into two groups: the kernels above the tangent plane and the kernels below it. If a kernel belongs to the first group, we select its adjacent kernels in clockwise order; otherwise, we select them in counterclockwise order. In addition, we process the convolution across multiple layers if the layers belong to the same group, i.e., (7) is extended to (8), where adj(k) indicates the set of adjacent kernel points of the k-th kernel point in the same group. A circular-padding convolution operation was used to implement (8). Figure 5 illustrates the convolution process using the kernels. After convolution, the kernel features around the interest point are aggregated by summation and maximum-value selection. Scale adaptation module If a target object has an unusual shape compared to the training data, normalization might fail to resolve the scale problem. Because normalization alone cannot resolve the scale problem completely, we first normalize the target objects and subsequently apply the scale adaptation module to the normalized objects to add scale robustness. To develop a scale-robust descriptor, we adjust the kernel size based on an analysis of multiscale features (d in (1)). We first extract multiple features using multiple kernel sizes (feature extraction in Fig. 2). Subsequently, we concatenate the multiscale features and estimate interpolation weights between the kernel sizes. Simple convolution operations are employed for the scale analysis, as illustrated in Fig. 2. Finally, the resulting kernel size is used to encode the proposed descriptor for CNN encoding (Fig. 2). CNN encoder architectures The designed CNN encoders are illustrated in Fig. 6. For the registration task, four feature-extraction layers are used initially. Using a shortcut connection, the multiscale features are concatenated. Subsequently, global contexts are estimated from the concatenated features to improve the representations (as described in Section F below). Inspired by the deep closest point (DCP) method [28], a singular value decomposition (SVD) module is used to estimate the transformation matrix. For the classification and segmentation tasks, we use downsampling and upsampling operations, which reduce and increase the number of points via subsampling. Subsequently, additional fully connected layers (a multilayer perceptron) are used to estimate the scores. Aggregating global context Local descriptors of monotonous and repeating areas are typically considered nonsalient. To improve the representation of the descriptor, we estimate a global context from the local features (global context module in Fig. 6). Rather than estimating a single global context for all points using max-pooling, we estimate an adaptive global context for each point using distance-based weights. To estimate the global feature for the i-th point, weights w_ij are calculated based on the Gaussian distance between the i-th and j-th points (weights for an interest point in Fig. 7). Subsequently, the averaged local features are estimated using the weights w_ij for all j (see the sketch below).
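The two operations just described, circular convolution over a ring of kernels and distance-weighted global context, can be sketched in PyTorch as follows; this is an illustrative reading with hypothetical names, not the paper's code. Conv1d with circular padding realizes the wrap-around adjacency of (7)-(8) within one cylinder layer:

```python
import torch
import torch.nn as nn

class CircularKernelConv(nn.Module):
    """Mixes each kernel's features with those of its clockwise and
    counterclockwise neighbours on one cylinder layer; circular padding
    closes the ring so the first and last kernels are adjacent."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size=3,
                              padding=1, padding_mode='circular')

    def forward(self, feats):   # feats: (B, C, K) features of K ring kernels
        return self.conv(feats)

def global_context(local_feats, points, sigma=0.5):
    """Adaptive per-point global context: a Gaussian-distance-weighted
    average of all local features, instead of one max-pooled vector.
    local_feats: (N, C); points: (N, 3)."""
    d2 = torch.cdist(points, points) ** 2       # (N, N) squared distances
    w = torch.exp(-d2 / (2.0 * sigma ** 2))
    w = w / w.sum(dim=1, keepdim=True)          # normalized weights w_ij
    return w @ local_feats                      # (N, C)
```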
Once the global contexts are estimated for each point, they are concatenated to each local feature. Finally, a convolution operation is performed on the concatenated features. RESULT We implemented three tasks: registration, classification, and segmentation. For the registration and classification tasks, we used the ModelNet40 database [8]; for the segmentation task, we used the ShapeNetPart database [9]. For the registration task, we compared our method with PointNetLK [29] and DCP [28]. For the classification and part-segmentation tasks, we compared our method with several methods, such as PointNet [5], DGCNN [2], and KPConv [7]. We analyzed our method on the registration task using the ModelNet40 database [8]. ModelNet40 contains 12,311 meshed computer-aided design models from 40 categories. ModelNet40 is split by category into training and test sets; the first 20 of the 40 categories were used for training. For each model, 1,024 points were used for training and testing. The DCP method [28] presented an end-to-end network for rigid registration. Inspired by this method, we used the SVD module to compute a rigid transformation. Figure 6(a) illustrates the registration architecture. As evaluation metrics, the mean squared error, root mean squared error, and mean absolute error were used for rotation and translation. Registration As listed in Table 1, our method significantly reduced the registration errors compared to the other methods, since those methods use translation-invariant rather than rotation-invariant features. Even compared to the results trained with all categories, our method showed better performance. We conducted the partial registration task using sampled point clouds from ModelNet40, inspired by PRNet [32]. The overall architecture is similar to the registration architecture; the one difference is that we computed the rigid transformation using RANSAC [53] with the generated descriptors. Table 2 lists the partial registration results. SpinNet [20] aligns and maps the point patch to a cylindrical volume to capture detailed geometric information and subsequently uses a continuous convolution to capture the geometric structure in a rotation-invariant manner. However, it has three drawbacks: 1) severe memory consumption, 2) the need to choose an optimal patch size, and 3) the sign problem. In contrast, our proposed method consumes relatively little memory compared to SpinNet [20] and automatically determines the kernel size using the scale adaptation module. Moreover, our method addresses the sign problem through the symmetric circular convolution. As a result, our method significantly reduced the registration errors with a simple architecture and less computation memory. Figure 8 illustrates the registration results of the DCP [28] and proposed methods. The results of the DCP method [28] still showed a small error between the two point clouds, whereas our method showed superior matching performance. These results indicate that the proposed descriptor matches feature-based correspondences between the source and target points better than DCP [28].
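For reference, the SVD module used in the registration head above follows the standard least-squares (Kabsch-style) solution for a rigid transform between matched point sets; a minimal numpy sketch, with our own naming:

```python
import numpy as np

def svd_rigid_transform(src, tgt):
    """Least-squares rigid transform (R, t) mapping src onto tgt from
    matched (N, 3) point sets, as in the SVD head of DCP-style networks."""
    cs, ct = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - cs).T @ (tgt - ct)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ct - R @ cs
    return R, t
```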
Classification and Segmentation We analyzed the classification and part-segmentation performance of our method using the ModelNet40 [8] and ShapeNetPart [9] databases, respectively. ModelNet40 contains 12,311 models; 9,843 were used for training and the remaining 2,468 for testing. For each model, 1,024 points were used for training and testing. ShapeNetPart contains 16,681 models from 16 categories, and each point is annotated with a part label. For each model, we used 2,048 points for training and testing. Figures 6(b) and (c) illustrate the classification and part-segmentation architectures, respectively. Table 3 lists the classification and part-segmentation results. As evaluation metrics, the overall accuracy was used for ModelNet40 classification and the mean intersection over union for ShapeNetPart segmentation (see the sketch below). We compared our method with both non-rotation-invariant and rotation-invariant methods. The non-rotation-invariant methods typically represent the relationship between points based on point coordinates, whereas the rotation-invariant methods use rotation-invariant features (e.g., relative distances and angles) to achieve rotation invariance. However, it is hard to represent the accurate relationship between points with rotation-invariant features, and convolving more than one point in a rotation-invariant order is a challenging problem. Therefore, as listed in Table 3, the non-rotation-invariant methods showed better performance than the rotation-invariant methods in non-rotated settings. To address these problems, our proposed method represents neighbors not only with rotation-invariant features but also with kernels that supplement the relationship representation. Furthermore, we use the circular convolution, which processes adjacent kernels simultaneously to capture the relationships between points. Consequently, as demonstrated in Table 3, our proposed descriptor outperformed the rotation-invariant methods and achieved performance comparable to the non-rotation-invariant methods. Because the non-rotation-invariant methods typically use point coordinates as features, it is easy for them to learn geometric information from each point location. In contrast, to develop a rotation-robust descriptor, we use rotation-invariant features; moreover, we align our kernels to the normal vector, which means that the order of the kernels may change depending on the point distribution, unlike in the non-rotation-invariant methods. These properties affect performance, and thus our method showed inferior performance compared to the state-of-the-art non-rotation-invariant methods in non-rotated settings. However, rotation invariance is a desired property for real-world applications, so it is significant that our proposed method achieved the best accuracy among the rotation-invariant methods.
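The two metrics just mentioned can be computed as in the sketch below; the handling of part labels absent from a shape varies between codebases, so the convention here (skipping empty unions) is one common choice and not necessarily the authors':

```python
import numpy as np

def overall_accuracy(pred, gt):
    """Overall accuracy for ModelNet40 classification:
    the fraction of correctly classified shapes."""
    return float(np.mean(pred == gt))

def mean_iou(pred, gt, num_parts):
    """Mean intersection over union across part labels for one shape
    (ShapeNetPart). Parts absent from both prediction and ground truth
    are skipped here; some codebases count them as IoU = 1 instead."""
    ious = []
    for c in range(num_parts):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```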
Parameter and ablation study We conducted several parameter and ablation studies on the registration task to verify the effect of our method (Table 4), varying the following factors: the convolution operation, the number of nearest neighbors for each kernel, and the global context. First, we experimented with the convolution methods, i.e., the 1 × 1 × 1 channel-wise convolution and the circular convolution. The network with circular convolution significantly improved the performance in terms of both rotation and translation, indicating that the circular convolution successfully captured geometric features from adjacent kernels. Figure 9 illustrates the registration results for the different kernel-alignment and convolution methods; as illustrated, the network with aligned-kernel circular convolution showed better registration results. Second, we analyzed our method with different numbers of neighbors. As listed in Table 4, the errors did not depend significantly on the number of neighbors: because we use distance-based weights for each neighbor, closer neighbors have more influence, and averaging the weighted neighbors reduces the influence of the neighbor count. Third, we conducted registration with the global context. Consequently, the rotation error decreased significantly compared to the other experiments. These results demonstrate that the global context resolves the ambiguities of individual local descriptors. Robustness study In addition to the parameter studies on the registration task, we conducted scale- and rotation-robustness studies. First, we experimented with the scale adaptation module. The network was trained with the original scale (1.00) of the point clouds and tested with different scales (0.50, 1.50) to demonstrate scale robustness. Table 5 lists the results of the different scale tests for each model. The means and standard deviations are presented, and the proposed network showed stable results when it used the scale adaptation module. These results indicate that the scale adaptation module determined the optimal kernel size for capturing geometric information, improving the performance of global registration. In addition, to demonstrate rotation robustness, we trained the network with azimuthal rotations around the gravity axis (ZR) and with arbitrary rotations (AR), and tested with arbitrary rotations (ZR/AR, AR/AR) for classification and segmentation (Tables 6 and 7); (-/-) indicates which rotation setting was used for training/testing, respectively (the two augmentations are sketched after this section). We compared the results with state-of-the-art rotation-invariant methods to demonstrate rotation robustness. The rotation-invariant methods [24], [25] use rotation-invariant features to represent the relative positions of neighbors and process each point with an MLP to avoid processing in a non-rotation-invariant order. However, because there is no additional reference point, the rotation-invariant features they use are insufficient to completely represent the relative positions [24], [25]. RIF [27] uses additional reference points, but these reference points may change with variations in object shape, which can result in inconsistent descriptors between similar object parts. Moreover, the shared MLP simply processes each point feature without considering other points. In contrast, by using kernels (i.e., reference points) at fixed distances from an interest point, our proposed method can represent the relative positions of neighbors accurately. Moreover, by using the circular convolution, which processes adjacent kernels at once to capture the relationships between points, our descriptor improves the representation of geometric information. As a result, when trained and evaluated under arbitrary rotations (AR/AR), our method showed superior performance compared to the state-of-the-art methods. In addition, when trained under azimuthal rotations and evaluated under arbitrary rotations (ZR/AR), the accuracy losses of our method were not significant compared to those of the non-rotation-invariant methods, since our method uses rotation-invariant features.
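The ZR/AR protocols referenced above correspond to the two rotation samplers sketched below (an illustrative implementation; the paper does not give its sampling code): ZR rotates only about the gravity axis, whereas AR draws a rotation uniformly from SO(3).

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rotate_z(points):
    """ZR: random azimuthal rotation about the gravity (z) axis."""
    theta = np.random.uniform(0.0, 2.0 * np.pi)
    R = Rotation.from_euler('z', theta).as_matrix()
    return points @ R.T

def rotate_so3(points):
    """AR: arbitrary rotation drawn uniformly from SO(3)."""
    R = Rotation.random().as_matrix()
    return points @ R.T
```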
DISCUSSION Point cloud analysis requires rotation- and scale-robust feature representations, and it is challenging to develop a robust descriptor with good benchmark accuracy. In this paper, we proposed an aligned kernel-based feature representation to resolve these limitations. To make the descriptor robust to rotation, we aligned the kernels to the local reference frame and then applied normal-sign-independent convolutions to the descriptors, rather than using fixed kernels that are independent of rotations [7]. Instead of translation-invariant features [2], we used rotation-robust features from the aligned kernels. In addition, to improve the representation of the descriptor, we estimated an adaptive global context for each point rather than a single global context [6]. Because kernel-based descriptors are highly dependent on the size of the given kernels, we adjusted the kernel size based on trainable weights. The experimental results on various tasks (i.e., registration, classification, and part segmentation) demonstrated promising feature representations. In the registration task, the rotation and translation errors decreased significantly, which indicates that our descriptors successfully captured the salient, corresponding geometric information between the two transformed point clouds. In the classification and segmentation tasks, our proposed method showed the best performance under rotations. These results indicate that our method is not limited to transformed-point-cloud tasks but is also applicable to general feature representation. Several parameter and ablation studies further demonstrated that the proposed methods improved the feature representations and the stability of the descriptor. CONCLUSION Encoding rotation- and scale-robust features is a challenging task in point cloud representation, and robustness to each parameter is critical for successful application to various downstream tasks. In this paper, we proposed a CNN-based feature encoding method to address this task. The proposed kernel alignment, feature projection, and kernel-conscious convolution methods demonstrated superior performance on the registration task compared to previous methods. Moreover, the proposed scale adaptation and global aggregation methods successfully captured the optimal scale parameter and the global geometric features for each local descriptor, respectively.
Joint prosthetic infections: a success story or a continuous concern? In this issue of Acta Orthopaedica, there are 2 papers dealing with postoperative infections after joint arthroplasty. Stefansdóttir et al. (2009) discuss the timing of the preoperative prophylactic antibiotics and Dale et al. (2009) report a possible increase in the infection rate for total hip arthroplasty in Norway. These papers give us reason to reflect on the question of whether our efforts to prevent surgical site infections are sufficiently effective, and what percentage of infection we should try to achieve as a result of all our preventive measures. A deep postoperative infection in orthopedic surgery involves bone and biomaterials, and is difficult to heal without removal of the biomaterials.
Although the infection rate of 1-2% in clean orthopedic operations is low compared to other kinds of surgery, there is a constant need to maintain the best possible infection prevention. Now and then, there are outbreaks of surgical site infections (SSIs), sometimes with infection rates of more than 4-5%. The causes of such disastrous periods mostly remain unclear, but often the result is that the preventive measures are tightened by the orthopedic surgeons, which often causes irritation and resistance from other workers in the hospital. The necessarily authoritarian protocol control in a hospital is often violated. We have the same experience as Stefansdóttir et al. that hygiene standards seem to worsen. People have a tendency to do their work in the easiest and most convenient way, which may cause a regrettable relaxation of hygiene standards, as also mentioned by Hughes and Anderson (1999). Many preventive measures to reduce postoperative infections have been investigated. They are based on improving the resistance of the host to infection on the one hand (e.g. body temperature, glucose level, antibiotics, nutritional state), and reducing peroperative contamination of the wound on the other (e.g. disinfection, clean clothing, ultraclean air). The low infection rate in arthroplasties nowadays makes it almost impossible to perform further randomized trials on infection prevention. In the famous study by Lidwell et al. (1987), which investigated the usefulness of clean air as a preventive measure, more than 8,000 joint prostheses were needed. The Dutch randomized trial by Wymenga (1991) compared the deep infection rate between 1 dose versus 1 day (3 doses) of systemic cefuroxime prophylaxis in 2,651 total hip implantations. Even this number was not enough to achieve a statistically significant result (0.83% vs. 0.45%), although the trend was that the 1-dose regimen doubled the infection rate. With such low infection rates, prophylactic studies become so large that they can no longer be financed. The lack of a high level of evidence from a randomized trial is not, however, proof of ineffectiveness: the absence of evidence is not the evidence of absence. In national guidelines, the level of evidence should be given, as has been done, for example, in the CDC guidelines (Mangram et al. 1999). Evidence from experiments, and also theories based on the understanding of the "route of infection", should be taken into account as well. In the Netherlands, a quality improvement program run by the CBO (the Dutch Institute for Healthcare Improvement, Utrecht, the Netherlands) has existed for 15 years to reduce postoperative infections (CBO 2009). The "plan-do-study-act" (PDSA) cycle was used to improve process parameters without measuring the SSI rates. Accepted preventive measures were subjected to such PDSA approaches, such as preoperative shaving with clippers limited to the incision site, minimizing the number of door openings during operations (van Tiel et al. 2006), and the infusion of the prophylactic antibiotic at the right time, as now discussed in the Swedish study by Stefansdóttir. The acceptance of these hygiene improvements in daily OP practice is slow and takes years, but there is a clear tendency. Whether or not this does indeed result in a lower surgical site infection rate is not yet known, and it has now been seriously called into question by the Norwegian register data.
Systemic antibiotics are the best documented, and also the most effective, prophylactic measure to reduce surgical site infections. The reduction rate is about 80% (AlBuhairan et al. 2008). There is no doubt that the timing is crucial: antibiotics must be given intravenously 15-45 min before incision. The choice of antibiotic (narrow or broader spectrum) and the dose (1 dose vs. 1 day) is more controversial. In general, the 1-day regimen is better in arthroplasty (Wymenga 1991, Engesaeter et al. 2006), and 1 dose is only effective if the half-life of the antibiotic is more than 12 hours (Gillespie and Walenkamp 2001). The disappointing finding in the paper from Sweden (Stefansdóttir et al. 2009), that in almost 50% of the operations the timing was not correct, illustrates that there is an urgent need for an involved surgeon at each department who repeatedly checks whether the whole package of preventive measures is being applied and who motivates his or her colleagues to adhere to treatment protocols. When working on infection prophylaxis, one must know what SSI rate has to be achieved. The infection rate is one of the most important of the many quality parameters that are used for operations. Increasingly, hospital managers are using these data to judge whether departments are underperforming, and the data from the national arthroplasty registers can be used in the same way (Robertsson 2007). In the Netherlands, the government Inspectorate of Healthcare has made it obligatory for surgical departments to organize a reliable infection registration of their operations, and this information is made publicly available. Today, however, insurance companies also ask for data from the infection and complication registrations, and they use these data in deciding which orthopedics departments and hospitals are contracted to implant prostheses. However, isolated SSI data not related to patient mix may cause misjudgements and incorrect decisions. The question remains as to what SSI rate is acceptable, and where we can find the best benchmark data. Several national surveillance programs for nosocomial infections exist, which gather data on the incidence of SSI. In the Netherlands, the national PREZIES surveillance program has been recording postoperative infections from all types of surgery since 1996. The database for 1999-2008 covers 203,359 operations with 5,985 deep and superficial postoperative infections (2.9%). It includes 52,133 total hip operations, of which only 29,876 were adequately followed with post-discharge surveillance as advised (up to 1 year). The incidence of infection in these patients was 1.0% deep and 1.1% superficial (PREZIES National Surveillance Network for Hospital Infections 2009). In other countries, comparable incidence surveillance programs exist: Germany (NRZ-KISS), Belgium (NSIH), England (NINSS), Austria (ANISS), France, the US (Woodhead et al. 2002), and Australia (VICNISS). Comparisons of results between countries have been published: between the Netherlands and Germany (Manniën et al. 2007), and between England and the USA (Leong et al. 2006). In many countries, about 50% of the data collected apply to orthopedic operations, reflecting the relatively high degree of interest of orthopedic surgeons in infection surveillance.
Because superficial infections are difficult to distinguish from aseptic wound complications and are often treated by family doctors after discharge, their registration is not reliable and it is better to focus on deep infections only. Surveillance after discharge for up to 1 year, as suggested by the CDC, is important (Mangram et al. 1999). This minimum follow-up time at the outpatient clinic requires both involvement and organizational abilities on the part of orthopedic surgeons (Walenkamp 2003). In the several national incidence surveillance programs, there is no indication that the deep infection rate for total hip arthroplasty is increasing: for many years it has remained around 1%. The Dutch data show a statistically significant decrease of 60% (Manniën et al. 2008), as mentioned by Stefansdóttir. In this calculation, however, superficial and deep infections were pooled. If only the deep infections are considered in the Dutch PREZIES registration, there appears to have been no statistically significant change in the infection rate in 10 years (van Benthem and Manniën 2009). The question is whether arthroplasty registers can be used to analyze trends in postoperative infections. As with most other registers, the arthroplasty register in Norway gives information mainly based on the registration of revisions with removal or exchange of the whole or a part of a prosthesis (Engesaeter et al. 2006, Helse-Bergen 2008, Kärrholm et al. 2008, Hooper et al. 2009). If a reoperation is performed without removal or exchange, it is not recorded. Early postoperative prosthesis infections should be treated in the first postoperative weeks. With a combination of aggressive surgical debridement with the prosthesis in situ and high-dose antibiotics, most infected prostheses can be saved, in total hips nowadays up to 70% (Crockarell et al. 1998, Guilieri et al. 2004, Trebse et al. 2005, Toms et al. 2006). These infected but retained prostheses, treated in situ without removal, are not recorded as infected in a register that is based on removal or exchange of prosthesis parts. Thus, an additional registration of such early reoperations is necessary, as recently introduced in the Swedish and Finnish registers, for example. In the Swedish register, the reoperations are subdivided into 3 groups: (1) revision with replacement or extraction of implant components, (2) major reoperations without replacement or removal, and (3) minor reoperations without replacement or removal. In this register, the number of reoperations in 2006 and 2007 increased by 2.7%, and for deep infections the number increased by 6.6% (Kärrholm et al. 2008). The percentage of reoperations for infections within the first 2 postoperative years nationwide in Sweden was 0.6%, with a range between hospitals of 0.0-2.8%. These data do not, however, capture postoperative infections that were not treated surgically; if these were included, the total infection rate for total hips in Sweden would appear to approach the 1% level found in the surveillance programs mentioned. It has been stated that reoperation within 2 years "reflects mainly early and serious complications such as deep infections and revision due to repeated dislocation" and "is a quicker quality indicator and is easier to use in clinical improvement work than 10-year survival, which is an important but slow and historical indicator" (Kärrholm et al. 2008).
The Finnish Knee Arthroplasty Register (FAR) encountered the same problem in their study of knee prosthesis infections in the past few years. Jämsen et al. (2009) reviewed 38,676 knee prosthesis operations, but they used not only revisions but also reoperations as the endpoint. Because they supposed that many infection-related operations, such as debridement, amputation, and arthrodesis, were infrequently reported to the FAR, they collected parallel information from the Finnish Hospital Discharge Register (HDR), which gives better information based on diagnoses. Comparison of the 2 databases gave information about the reoperated infected prostheses, but not about the infections that were only treated with systemic antibiotics. These authors confirmed that the Finnish register underestimates the infection rate. In conclusion, there are 3 levels of registration available in large databases, with an increasing degree of reliability: firstly, registrations of revisions for infection with component removal or exchange; then reoperations for infection with retention of the prosthesis; and finally surveillance programs on the incidence of surgical site infection in departments, hospitals, or countries. These combined data should be used for reliable estimation of the true infection rate. The report from the Norwegian register of a probable increase in the percentage of total hip prostheses that had to be removed because of infection is interesting, but the reason for the increase is unclear. The authors' analysis is relevant, but I would like to add the possibility that the increase in more resistant germs such as MRSE and MRSA, and the technically more complex reconstructions, have resulted in infections that are more difficult to treat. So the question remains whether the infection rate in total hips is increasing. In 2001, Lidgren, co-author of the paper by Stefansdóttir et al., wrote a guest editorial in this journal on the same subject with the optimistic title: "Joint prosthetic infections: a success story" (Lidgren 2001). They now suggest in their own article that this statement is no longer true, and that the problem remains as before. There is an indication that prophylactic hygiene standards in hospitals should be improved. There is also a need for more exact data on infection rates, perhaps from a smart combination of data provided by the increasing number of arthroplasty registers and by national SSI surveillance programs. We must not be satisfied with a deep infection rate of more than 1% for clean orthopedic operations, and we must be able to demonstrate that relatively low infection rate using reliable surveillance. Geert H I M Walenkamp Professor of Orthopaedic Surgery, Caphri Research Institute, Maastricht University Medical Centre, the Netherlands g.walenkamp@mumc.nl; g.h.walenkamp@home.nl
Financial team incentives improved recording of diagnoses in primary care: a quasi-experimental longitudinal follow-up study with controls Background In primary care, financial incentives have usually been directed to physicians because they are thought to make the key decisions needed to change the functions of a medical organization. There are no studies regarding the impact that directing these incentives to all disciplines of the care team (e.g. group bonuses for both nurses and doctors) may have. Given the low frequency with which diagnoses were being recorded for primary care visits to doctors, this study tested the effect of offering group bonuses to the care teams. Methods This was a retrospective quasi-experimental study with before-and-after settings and two control groups. In the intervention group, the mean percentage of visits to a doctor for which a diagnosis was recorded was computed for each individual care team (mean team-based percentage of monthly visits to a doctor with recorded diagnoses), and simultaneously the same data were gathered from two different primary care settings where no team bonuses were applied. To study the sustainability of the changes obtained with the group bonuses, the respective data were derived from the electronic health record system for 2 years after the cessation of the intervention. The differences in the rate of marking diagnoses were analyzed with ANOVA and RM-ANOVA with appropriate post hoc tests, and the differences in the rate of change in marking diagnoses were analyzed with linear regression followed by t-tests. Results The proportion of doctor visits having recorded diagnoses in the teams was about 55% before the group bonuses were introduced and 90% after this intervention. There was no such increase in the control units. The effect of the intervention weakened slightly after cessation of the group bonuses. Conclusion Group bonuses may provide a method to alter clinical practices in primary care. However, the sustainability of these interventions may diminish after ceasing this type of financial incentive. Background Tailored payment systems have been used in attempts to achieve policy objectives, such as improving the quality of care or recruitment to under-served areas, because the method by which physicians are paid may affect their professional practice [1,2]. Conventionally, these financial incentives have been directed to physicians because they are thought to make the key decisions needed to change the functions of a medical organization [3-5]. There are ample recent studies concerning how delivering financial incentives to primary care physicians alone may alter the behavior of physicians and thereby the performance of the care system [6-12]. Yet, in modern multidisciplinary health care systems there are also other quite autonomous actors, such as nurses who specialize in the treatment of diabetes [4,5,13,14], who may influence the functions of their organization significantly and also improve the quality of care. Thus, disciplines other than doctors might well be considered as targets for financial incentives aimed at improving the quality of care. Improving the recording of diagnoses of acute and chronic diseases might theoretically serve as one of the most important targets [15-18], and would therefore be a suitable element to improve via financial incentives. The recording of diagnoses in only 40-60% of doctor visits in the care units was deemed insufficient by the administration of the primary health care of Espoo City.
A higher frequency of recorded diagnoses was deemed necessary for planning activities and managing the resources of primary health care. This led to the present intervention and study. The aim of the present study was to examine whether it is possible to improve clinical practice by increasing the recording of diagnoses through the use of financial incentives directed to all disciplines in the care team (i.e. group bonuses). We were also interested in how enduring these changes in the frequency of registering diagnoses would be after cessation of payment of these group bonuses. Methods This study was performed in Espoo city, which in 2006 had 230,000 inhabitants. As everywhere in Finland, primary care is non-profit and is maintained by municipalities, which fund this activity with taxes. In Espoo, there are five municipal health service areas, each containing 3-6 care teams; altogether the number of care teams was 23. There were 6-8 doctors and 6-8 nurses per team, and the precise numbers of doctors and nurses varied slightly over the study period. This is a retrospective quasi-experimental study. The executive of Espoo primary care defined the areas where improvement was desired and their goal levels at the start of 2005. Improvement of the recording of diagnoses in the patient charts was chosen as the main goal. In order to obtain the group bonus, teams had to record diagnoses for doctor visits at a significantly higher rate than before the intervention. The proportion of monthly doctor visits having recorded diagnoses was selected as the main measure to study the effect of implementing group bonuses. In practice, to obtain a group bonus a care team had to ensure that diagnoses were recorded in more than 75% of all doctor visits of that team. Diagnoses were recorded by the doctors using the ICD-10 or ICPC coding systems. Both electronic patient chart systems, Effica and Finstar, provided a similar dedicated place in the chart where appropriate ICPC-2 or ICD-10 diagnoses could be entered during the patient visit, and both assisted the GP in finding a proper diagnosis code or allowed the doctor to enter the desired code directly. To commit the staff to the change in function, a multidisciplinary team contract was signed with the members of the care teams. The contract defined the rules and approaches of the functions of the teams. The team contracts were signed by all five service areas between 1.3.2005 and 30.5.2005, which was considered the start of the intervention. The data were specifically derived from the electronic Effica patient chart system (Tieto LTD, Helsinki, Finland), from which data had been reliably obtainable since 1.5.2003. No ethical approval was required because this study was performed directly on the patient registry without identifying the patients. The registry keeper (the health authorities of Espoo and Vantaa) granted permission to carry out the study. The report generator provided figures for the total number of doctor visits, the number of recorded diagnoses, and thus a percentage for the recording of diagnoses for each individual doctor, and thereby also for the care unit per month. This allowed the calculation of a mean of these percentages for each individual care team (mean care unit-level percentage of doctor visits with marked diagnoses per care unit per month), which was thus the main measure for analysis in the present study.
The obtained data were analyzed by comparing the recording of diagnoses during similar time periods before and after the initiation of group bonuses to all nurses and doctors belonging to the care teams (intervention) in primary care in Espoo city. As control data, we had the corresponding single-doctor and care-team-level frequencies of monthly diagnosis recording from two different primary care units where no similar team incentives were applied: the dental primary care of Espoo and the Länsimäki-Hakunila primary care health center from the neighboring city of Vantaa. Vantaa resembles Espoo in its location (neighboring Helsinki) and number of inhabitants (about 200,000), and also in other factors such as age, sex, morbidity levels, deprivation, and other demographic factors as much as possible in Finland (see http://www.aluesaarjat.fi and http://pxweb2.stat.fi/database/StatFin/data-basetree_fi.asp), and therefore we have used Vantaa as a control in our former studies of primary care, too [19]. From Espoo dental care the data were analogously obtainable from 1.5.2003 (also using the Effica patient chart system, Tieto LTD, Helsinki, Finland). The data of the combined Länsimäki-Hakunila health center were obtained from the Finstar patient chart system (Logica LTD, Helsinki, Finland). To obtain reliable data from the Finstar system, the report generator requires precise pre-identification of the doctor under study at a given time point, and it is therefore not able to produce continuous monthly data throughout the whole system, unlike the Effica system. Therefore, the busiest month of the year (November) was chosen for the control data, and comparisons with the controls were made using this single time period. The effect of the intervention on the mean team-based percentage of monthly doctor visits with recorded diagnoses was monitored for a 2-year period before the intervention and 1.5 years after it. Since the group bonus system was altered in such a way that after 2010, i.e. during 2011 and 2012, the recording of diagnoses did not produce a financial bonus for the care team, we collected data on the same parameter from the teams existing in the Espoo health center in November 2011 and 2012, after the cessation of the intervention. In this way we hoped to gain some information about how enduring the changes obtained with the team bonus combined with the team contract were. The within-care-team variation in Espoo primary care, Espoo dental care, and Länsimäki-Hakunila primary care was analyzed using the mean team-based percentage of monthly doctor visits with recorded diagnoses over the whole study period. The comparisons were then performed using one-way repeated measures ANOVA with suitable corrections (Bonferroni) for multiple comparisons when following the development of the studied units as a function of time. One-way ANOVA on ranks followed by Dunn's test was used to compare Espoo primary care with the control units at corresponding time points. The rate of change in diagnosis marking was analyzed with regression analysis followed by t-tests (GLM procedure of SigmaPlot 10.0 Statistical Software, Systat Software Inc., Richmond, CA, USA) [20,21].
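As an illustration of the slope analysis described above (linear regression followed by a t-test), a minimal scipy sketch is given below; the function names and the degrees-of-freedom convention are our assumptions and do not reproduce the SigmaPlot GLM procedure:

```python
import numpy as np
from scipy import stats

def slope_with_se(months, pct_recorded):
    """Monthly rate of change in diagnosis recording (%/month) and its
    standard error, from an ordinary least-squares fit."""
    res = stats.linregress(months, pct_recorded)
    return res.slope, res.stderr

def compare_slopes(slope1, se1, n1, slope2, se2, n2):
    """t-test for the difference between two independent regression slopes
    (e.g., pre-intervention vs. the first 6 months of the intervention).
    Degrees of freedom: two parameters estimated in each of the two fits."""
    t = (slope1 - slope2) / np.sqrt(se1 ** 2 + se2 ** 2)
    df = n1 + n2 - 4
    p = 2.0 * stats.t.sf(abs(t), df)
    return t, p
```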
Results In Espoo primary care, the mean number of monthly visits in the office-hour services of primary care doctors was about 18,000 in 2003-2006, and in primary care EDs the mean number of doctor visits per month was about 4,000 in the same period, giving a total of about 22,000 monthly doctor visits in the whole public primary care system of Espoo city. In the complementary private-sector primary care, the mean number of monthly doctor visits was about 4,500 in 2006. The mean team-based percentage of monthly doctor visits with recorded diagnoses increased from about 55% to 90% after the application of group bonuses in Espoo primary care (one-way repeated measures analysis of variance, P < 0.001; Figs. 1, 2). There was already a slight increase (0.79 ± 0.12 %/month, mean ± SEM) in the rate of marking diagnoses before the intervention in Espoo primary care. However, during the first six months of the intervention this rate doubled, statistically significantly, to 1.65 ± 0.39 %/month (P = 0.005). After the first six months of the intervention, the increase slowed to 0.31 ± 0.07 %/month, which was statistically significantly less than before the intervention (P = 0.002) or during the first six months of the intervention (P < 0.001). While the group bonus system was associated with an increased proportion of monthly doctor visits having recorded diagnoses in the care teams of Espoo primary care, there were no increases in the same parameter in either of the controls (Fig. 2). The team-based mean percentage of recorded diagnoses did not differ statistically significantly between Espoo primary care and Länsimäki-Hakunila, but in both these units this frequency was statistically significantly higher than in Espoo dental care at the beginning of the follow-up period in 2003 (one-way ANOVA on ranks, P < 0.001; Dunn's test, P < 0.05). However, after the intervention, in 2005 and 2006, the mean frequency of recording diagnoses was higher in the Espoo primary care teams than in the teams of either control unit (one-way ANOVA on ranks, P < 0.001). The rate of marking diagnoses increased in Espoo primary care by 12.95 ± 1.13 %/year during the follow-up time. This was statistically significantly more (P < 0.001) than in either of the controls: in Hakunila-Länsimäki primary care (Vantaa) this rate (mean ± SEM) was 1.99 ± 3.05 %/year, and in Espoo dental primary care it was −0.53 ± 0.53 %/year. In practice, this means that the controls did not differ from each other statistically significantly and that there was no change in the rate of marking diagnoses in either of the controls during the follow-up. The mean team-based percentage of monthly doctor visits with recorded diagnoses started to decrease within 2 years of cessation of the team bonus (2010; one-way analysis of variance on ranks, P < 0.001; Fig. 3). Discussion A financial incentive, introducing a group bonus with team contracts, improved the recording of diagnoses in the patient charts. Neither in the dental unit of Espoo, which here represented a part of the same primary care organization where no intervention was applied (i.e. a different specialization in the same organization; an internal local control), nor in Länsimäki-Hakunila, which here represented a neighboring organization with the same specialization (i.e. somatic primary care in a different organization; an external peer control), were there similar increases in the studied parameters during the same time period.
Although differences across computer systems exist, and these systems may even be predictors of care quality outcomes [22], this does not explain the results of our study, because we had an internal control (the primary dental care of Espoo) that used the same computerized patient chart system as the Espoo primary care units that received the intervention. Before the intervention, it was expected that the low level of recording diagnoses was due to different administration and management cultures in the different health service areas, a lack of permanent doctors, and deficiencies in tutor services. However, financial incentives combined with team contracts with the care teams seemed to overcome these putative hindrances to the proper recording of patient data. The present finding with group bonuses is in line with a former study in which financial incentives to GPs increased the making and recording of diagnoses of certain diseases [7,9]. Altogether, our work, together with other recent studies, suggests that financial incentives may be used to alter the behavior of physicians towards improving the quality of care [8,23-26]. Furthermore, financial incentives may promote other important values in primary care, such as the reduction of inequalities in the delivery of clinical care related to area deprivation [6]. Whether rewarding staff with financial incentives leads to real quality improvement in care is still an open question [27]. Evidence about the effectiveness of this kind of intervention at the population level appears contradictory: in one former large-scale study there was no evidence of lower mortality rates in the population [12], but in another study of similar scale there was evidence of a reduction in emergency hospital admissions after a pay-for-performance intervention for GPs [10]. Altogether, this means that one must choose the measures of improved care carefully: a hard endpoint like mortality is not easily affected [15]. Furthermore, improved recording of diabetes and related parameters does not automatically guarantee better care of the disease itself [28], and improvements in quality aspects of care do not automatically result in better outcomes of care [8]. Nevertheless, recorded diagnoses make it possible to analyze data further and possibly find areas of improvement in the local quality of care [17]. Several questions, such as how reliable and valid the data obtained with the present intervention are, have to be answered before giving any recommendations about the usefulness of group bonuses in improving clinical practices. There appeared to be units where the frequency of recording diagnoses in doctor visits decreased after cessation of payment of the group bonuses. The latter is in line with former reports suggesting that those parameters in quality work that are not sustained with financial incentives to GPs may even weaken if improving other parameters is rewarded [6,29]. Yet, partial withdrawal of financial incentives did not largely hamper the results obtained with this type of intervention directed to GPs [11]. Altogether, financial incentives have been reported to provide large initial gains which, however, diminish over time [8]. This holds true for financial incentives directed to patients, too: only the part of behavior that is paid for is improved [30]. Thus, the eventual consequences of the behavior change are not necessarily in line with the original intention of the intervention driven by economic incentives [8,30].
The present follow-up time (2 years) is relatively short for answering the question of what will eventually happen in the long term when the group bonuses are totally withdrawn. Yet, the level of recording diagnoses in 2012 was still clearly superior to that of the years before the intervention (2003 and 2004). The present finding is thus in line with the hypothesis that financial incentives are effective primers in primary care interventions [6,8], even when group incentives to the care team are applied. Conclusion Group bonuses may provide a method to improve clinical practices in primary care. Yet the desired effects obtained with these financial incentives may slowly start to erode if the bonuses are withdrawn. Authors' contributions TL planned and supervised the intervention, designed the study and wrote the manuscript; TK analyzed the data, designed the study and wrote the manuscript; JK provided the dental control data from Espoo; MR provided the Vantaa control data; LS provided the study data from Espoo; and AMH designed the study and wrote the manuscript. All authors read and approved the final manuscript.
2016-05-04T20:20:58.661Z
2015-11-11T00:00:00.000
{ "year": 2015, "sha1": "5d848b79e324a4dcdc21b66a2356df68824e33f5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s13104-015-1602-1", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d8c864a4aa8110247168039b488bc42de92bb7f7", "s2fieldsofstudy": [ "Economics", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256600989
pes2o/s2orc
v3-fos-license
NUPR1 inhibitor ZZW-115 induces ferroptosis in a mitochondria-dependent manner Ferroptosis is an iron-dependent cell death characterized by the accumulation of hydroperoxided phospholipids. Here, we report that the NUPR1 inhibitor ZZW-115 induces ROS accumulation followed by ferroptotic cell death, which could be prevented by ferrostatin-1 (Fer-1) and ROS-scavenging agents. The ferroptotic activity can be enhanced by inhibiting antioxidant factors in pancreatic ductal adenocarcinoma (PDAC)- and hepatocellular carcinoma (HCC)-derived cells. In addition, ZZW-115 treatment increases the accumulation of hydroperoxided lipids in these cells. We also found a loss of activity and strong deregulation of key enzymes involved in the GSH- and GPX-dependent antioxidant systems upon ZZW-115 treatment. These results were validated in xenografts induced with PDAC- and HCC-derived cells in nude mice during treatment with ZZW-115. More importantly, we demonstrate that ZZW-115 induces mitochondrial morphological changes compatible with the ferroptotic process, as well as mitochondrial network disorganization and strong mitochondrial metabolic dysfunction, which are rescued by both Fer-1 and N-acetylcysteine (NAC). Of note, the expression of TFAM, a key regulator of mitochondrial biogenesis, is downregulated by ZZW-115. Forced expression of TFAM is able to rescue the morphological and functional mitochondrial alterations, the ROS production, and the cell death induced by ZZW-115 or by genetic inhibition of NUPR1. Altogether, these results demonstrate that the mitochondrial cell death mediated by the NUPR1 inhibitor ZZW-115 is fully rescued by Fer-1 but also via TFAM complementation. In conclusion, TFAM could be considered an antagonist of ferroptotic cell death. INTRODUCTION The concept of ferroptosis, an iron-dependent mode of cell death characterized by the accumulation of lipid reactive oxygen species (ROS), was first proposed in 2012 [1]. Morphologically, ferroptosis occurs mainly in cells with reduced mitochondrial size, increased bilayer membrane density, and reduction or disappearance of mitochondrial cristae [1-3]. Biochemically, ferroptotic cells usually present a strong depletion of intracellular glutathione (GSH) with a concomitant decrease in the activity of glutathione peroxidase 4 (GPX4), leading to the accumulation of Fe2+-dependent lipid hydroperoxidation and hence to a large amount of ROS, promoting ferroptotic cell death [2-4]. Ferroptosis may be induced by several mechanisms, such as (i) inhibition of cystine import via the cystine/glutamate antiporter (Xc- system), which reduces the GSH pool and the cell's antioxidant capacity, leading to the accumulation of hydroperoxided phospholipids and ultimately to oxidative damage and ferroptotic cell death via membrane disintegration [5]; and (ii) suppression of GPX4 activity, which normally catalyzes the GSH-dependent reduction of lipid hydroperoxides (L-OOH) into the corresponding alcohols (L-OH). Therefore, genetic or pharmacological inhibition of GPX4 leads to the accumulation of lipid hydroperoxides, which induces ferroptosis [6,7]. On the contrary, there are at least two key pathways antagonizing ferroptosis. The first one is controlled by the ferroptosis suppressor protein 1 (FSP1), a reductase of Coenzyme Q10. This gene was found to be strongly related to ferroptosis by using an expression cloning or metabolic method developed to identify regulators that complement the loss of GPX4 [8,9].
The second one was recently described by Liu et al. [10]. They identified that the stress-inducible protein NUPR1 is strongly activated in response to ferroptosis induction and, in turn, induces the expression of lipocalin 2 (LCN2), which blocks ferroptotic cell death by diminishing iron accumulation and the subsequent Fenton-dependent oxidative damage. NUPR1 was first described by our laboratory as a gene activated during the acute phase of pancreatitis [11]. It has since been shown that NUPR1 is expressed in most, if not all, cancerous tissues. At the cellular level, NUPR1 has been described as participating in many processes associated with cancer, including cell cycle regulation and apoptosis, senescence, cell migration and invasion, and the development of metastases [12]. Importantly, NUPR1 has recently received significant attention due to its role in promoting the development and progression of PDAC [13,14]. Some NUPR1-dependent effects are also involved in the resistance to some anticancer drugs [15-17]. The crucial role of NUPR1 as a potential therapeutic target has been previously reported, since its genetic inactivation completely prevented the growth of PDAC [18]. Remarkably, other laboratories have also shown that genetic inactivation of NUPR1 stops the growth of HCC [19], non-small cell lung cancer [20], cholangiocarcinoma [21], glioblastoma [22], multiple myeloma [23], osteosarcoma [24], and more recently ovarian [25] and gastric cancer [26]. These results prompted us to identify a small inhibitor of NUPR1 to be used for treating cancers. Unfortunately, NUPR1 is an 82-residue intrinsically disordered nuclear protein (IDP) [27]; consequently, a high-throughput screening based on the principles that apply to well-folded proteins for the selection of inhibitors is inappropriate for NUPR1. Therefore, we developed a small-molecule screen using a multidisciplinary approach combining biophysics, chemistry, bioinformatics, and biology, and we demonstrated that ZZW-115, a trifluoperazine-related compound, is more effective than trifluoperazine in vitro and in vivo, and without side effects [28,29]. Treatment of PDAC xenografts, but also of glioblastoma [30] and HCC [31], with ZZW-115 induces growth arrest followed by complete tumor regression. Mechanistically, ZZW-115 binds with strong affinity to Thr68, which is located in the nuclear localization signal (NLS) of NUPR1, hampering the interaction with importins and displacing them, and therefore preventing NUPR1 from translocating from the cytoplasm to the nucleus [30]. Treatment of cancer cells with siRNA directed against NUPR1 or with ZZW-115 induces a collapse of ATP levels, associated with a strong reduction in OXPHOS metabolism and an overproduction of ROS. The cells respond by activating glycolysis to compensate for this energetic deficit, which rapidly consumes all energy resources, triggering necroptosis and apoptosis simultaneously [29,32]. Altogether, these data indicate that NUPR1 inactivation with ZZW-115 is a promising anticancer strategy for PDAC, but also for other cancers, and characterizing its mechanism of action is therefore clinically relevant. In this study, we demonstrate that ZZW-115 induces a strong mitochondrial dysfunction with ROS overproduction in combination with a collapse of the antioxidant defense system, leading to combined cell death via apoptosis and ferroptosis. Importantly, this effect is in part rescued by forced expression of the mitochondrial factor TFAM.
ZZW-115-induced cell death is rescued by Fer-1 and antioxidants With the aim of determining whether ZZW-115 treatment induces ferroptosis in cancer cells, we tested the potential rescue effect of Fer-1, a specific ferroptosis inhibitor [33], in ZZW-115-treated cells. MiaPaCa-2 and HepG2 tumor cells, derived from PDAC and HCC, respectively, were challenged with increasing concentrations of ZZW-115 in the presence or absence of Fer-1 (1 µM), Z-VAD-FMK (20 µM), or Nec-1 (40 µM) at 24, 48, and 72 h. As shown in Fig. 1A and Supplemental Fig. 1A, Fer-1 treatment increased cell viability upon ZZW-115 treatment. In addition, Fer-1 reduced intracellular ROS as well as specific mitochondrial ROS production in ZZW-115-treated cells (Fig. 1B and Supplemental Fig. 1B). We then hypothesized that combining ZZW-115 treatment with drugs inducing ferroptosis or targeting antioxidant systems should increase its anticancer effect. In combination with ZZW-115, we used, as a proof of concept, L-buthionine-(S,R)-sulfoximine (BSO), a specific GCLC inhibitor; erastin, a small molecule capable of initiating ferroptotic cell death by activating the voltage-dependent anion channels (VDAC); and RSL3, a ferroptosis activator acting in a VDAC-independent manner, to treat PDAC- and HCC-derived cells. The results showed that the three drugs were able to improve ZZW-115 efficiency in both MiaPaCa-2 and HepG2 cells (Fig. 1C and Supplemental Fig. 1C). Finally, in order to determine whether the ROS accumulation induced by ZZW-115 treatment is involved in cell death, we performed experiments combining ZZW-115 with several unrelated antioxidant agents and measured their survival effect. Cells were treated with increasing concentrations of ZZW-115 in combination with subcytotoxic concentrations of butylated hydroxytoluene (BHT), a synthetic lipophilic organic compound, at 100 µM; NAC, a cysteine glutathione precursor, at 20 mM for MiaPaCa-2 and 15 mM for HepG2 cells; ascorbic acid, a natural reducing agent, at 100 µM for MiaPaCa-2 and 40 µM for HepG2 cells; Trolox, a vitamin E analog, at 100 µM; or MitoQ, a mitochondria-targeted antioxidant, at 1 µM for MiaPaCa-2 and 0.1 µM for HepG2 cells. Viability was systematically rescued when MiaPaCa-2 or HepG2 cells were co-treated with each of these antioxidants (Fig. 1D and Supplemental Fig. 1D). Altogether, the results confirmed that ZZW-115-induced ferroptosis is ROS-dependent, and that it may be prevented by ROS-scavenging agents and enhanced by inhibiting antioxidant factors. ZZW-115 reduces the antioxidant homeostasis in vitro and in vivo GSH plays an important role as an antioxidant molecule in cells; however, an imbalance in the GSH/glutathione disulfide (GSSG) ratio leads to an increased susceptibility to ROS accumulation, oxidative stress, and finally ferroptosis [10]. Taking this into account, we measured the GSSG level and calculated the GSH/GSSG ratio to study whether ZZW-115 treatment alters GSH homeostasis. The data presented in Fig. 2A, B and Supplemental Fig. 2A, B show that ZZW-115 induced, in a dose-dependent manner, a decrease in the reduced/oxidized glutathione ratio with a strong increase in the intracellular GSSG level in both cellular models. Moreover, we also studied the activity of GPX4, an antioxidant enzyme that neutralizes lipid peroxides and protects membrane fluidity. As shown in Fig. 2C and Supplemental Fig. 2C, a decrease in GPX4 activity was observed in both cell types in a dose-dependent manner.
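As a minimal illustration of the ratio calculation used here (the explicit relation is given later, in the Glutathione assay section of the Methods), the sketch below shows how a rising GSSG level at a constant total glutathione pool drives the reduced/oxidized ratio down; the readings are invented for illustration only.

```python
def gsh_gssg_ratio(total_glutathione, gssg):
    """Reduced/oxidized glutathione ratio, using the relation given in
    the Methods: GSH/GSSG = [Total GSH - (2 x GSSG)] / GSSG.
    The factor of two reflects the two glutathione equivalents
    contained in each GSSG molecule."""
    return (total_glutathione - 2 * gssg) / gssg

# Invented readings: a rising GSSG level at a constant total pool
# lowers the ratio, the oxidative shift described above.
print(gsh_gssg_ratio(100.0, 4.0))   # untreated-like: 23.0
print(gsh_gssg_ratio(100.0, 12.0))  # ZZW-115-like:   ~6.3
```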
Then, we studied by qRT-PCR analysis the mRNA levels of GPX4 and of key genes involved in ferroptosis, such as FSP1, which confers protection against ferroptosis elicited by GPX4 deletion [8]; PTGS2, the prostaglandin-endoperoxide synthase 2, a key enzyme in prostaglandin biosynthesis [34]; and SLC7A11, a member of the cystine/glutamate transporter system [1]. We found that ZZW-115 treatment dramatically dysregulated the expression of these genes in both cell lines (Fig. 2D and Supplemental Fig. 2D). The induction of ferroptosis could be considered a promising therapeutic approach for treating resistant tumors [35]. We have recently demonstrated a strong anticancer effect of ZZW-115 in a panel of xenografted human tumors in vivo [29,30]. However, whether the ferroptosis induced by ZZW-115 participates in this effect was unknown. We induced xenografts with MiaPaCa-2 and HepG2 cells in nude mice and treated them for 4 or 3 weeks, respectively, with vehicle alone or with 2.5 or 5.0 mg/kg/day of ZZW-115. Then, we measured the GPX4 activity (Fig. 2E and Supplemental Fig. 2E) and analyzed the mRNA levels of the key genes involved in ferroptosis by qRT-PCR analysis (Fig. 2F and Supplemental Fig. 2F). Consistent with the in vitro results, we found that GPX4 activity was significantly decreased and the mRNA expression was dysregulated upon ZZW-115 treatment. Together, these results suggest that the key antioxidant systems fail to protect cells against the oxidative damage induced by ZZW-115. Lipid peroxidation is increased upon ZZW-115 in vitro and in vivo To further explore the molecular mechanisms by which ZZW-115 induces ferroptosis, we analyzed lipid peroxidation, an important signaling event in activating ferroptosis. Lipid peroxidation is indispensable for ferroptosis, and GPX4 prevents ferroptosis through the clearance of lipid peroxides [34]. Malondialdehyde (MDA) is one of the most important end-products of lipid peroxidation; therefore, we tested whether ZZW-115 treatment increases MDA accumulation in PDAC and HCC cells. MDA accumulation also increased in the treated xenografts, as shown in Fig. 2I. Thus, we conclude that ZZW-115 also induces ferroptosis in vivo. In addition, previous studies demonstrated that the accumulation of iron is a key mediator of cytotoxicity in ferroptosis. We explored the intracellular iron concentration in MiaPaCa-2 cells treated with ZZW-115 for 24 h and found a significant increase in iron accumulation (Fig. 2J). ZZW-115 induces mitochondrial dysfunction by ROS overproduction In our previous studies, we demonstrated that NUPR1 inactivation was associated with a strong mitochondrial dysfunction [31,32,36]. Here we investigated the effect of antioxidants and Fer-1 treatment on the mitochondria of MiaPaCa-2 cells treated with ZZW-115. Using MitoTracker Red to visualize the cellular mitochondrial network, we observed that ZZW-115 treatment induces a strong disorganization, which agrees with the pictures obtained by transmission electron microscopy (TEM). Of note, this mitochondrial network disorganization was completely rescued by treatment with NAC or Fer-1, as presented in Fig. 3A. As shown in Fig. 3B, treatment with ZZW-115 induced strong morphological changes in these organelles, with an obvious decrease in their volume compared to normal mitochondria, an increased membrane density, and a marked reduction or
disappearance of mitochondrial cristae, as explored by TEM. Importantly, all these morphological features are usually observed in ferroptotic cells [34]. We then studied the OXPHOS activity of the mitochondria after treatment with ZZW-115 alone or in combination with NAC or Fer-1. As expected, ZZW-115 treatment induced a strong decrease in the oxygen consumption rate (OCR), particularly in the maximal respiratory capacity, which was rescued by NAC or Fer-1, as shown in Fig. 3C. A high mitochondrial membrane potential (MMP) is required for mitochondrial ATP production and OXPHOS, and it can be disrupted by lipid peroxidation or high ROS levels, thereby resulting in cascade amplification in cells [37]. To test this possibility, we monitored the MMP by TMRM staining upon ZZW-115 and Fer-1 treatment. As expected, ZZW-115 decreased the MMP in a dose-dependent manner, an effect that was inhibited by Fer-1, as shown in Fig. 3D. Furthermore, because glutamine metabolic reprogramming is required for the fuel supply of glutathione and redox homeostasis in cancer cells during ferroptosis [38,39], we studied the glutamine oxidation pathway in mitochondria after treatment with ZZW-115 alone or in combination with Fer-1. We found a dramatic decrease in glutamine capacity and dependency upon ZZW-115 treatment, an effect that was reversed by Fer-1, as shown in Fig. 3E. Altogether, these results demonstrate that ZZW-115 induces morphological changes in mitochondria compatible with ferroptotic features, as well as strong mitochondrial dysfunction, both of which can be rescued by Fer-1 and antioxidant agents. This suggests that these morphological and functional mitochondrial changes are, at least in part, downstream of the ROS production. ZZW-115 induces changes in mitochondrial master genes Among the mitochondria-related genes previously described as deregulated after inactivation of NUPR1, such as LONP1, PINK1, NRF1, and TFAM [32], TFAM downregulation seemed a promising candidate responsible for the mitochondrial dysfunction, since TFAM is a key regulator of mitochondrial biogenesis. In fact, TFAM is a core mitochondrial transcription factor, responsible for recruiting the mitochondrial RNA polymerase and the transcription factor TFB2M to activate transcription [40]. Additionally, TFAM is an abundant protein that coats and packages mitochondrial DNA, forming the mitochondrial nucleoid [41]. Remarkably, TFAM acts as an antioxidant factor under strong oxidative stress conditions in flies [42] and in mammalian cells through the inactivation of the proinflammatory factor NFAT [43]. We hypothesized that the downregulation of TFAM could be a mediator of the ferroptotic cell death induced by ZZW-115. First, we measured the TFAM protein levels in MiaPaCa-2 cells treated with ZZW-115 alone or together with Fer-1 or NAC. As presented in Fig. 4A, treatment with ZZW-115 decreased the TFAM protein to 53 ± 13% of the level in control cells. Importantly, treatment with Fer-1 or NAC did not prevent this decrease, indicating that the effect of ZZW-115 on the TFAM level is mediated by NUPR1 inhibition rather than by the ROS induced by ZZW-115 treatment.
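A figure such as "53 ± 13% of control" typically comes from band densitometry normalized to a loading control; the sketch below is a generic illustration of that normalization, not the authors' actual quantification pipeline, and the intensity values are hypothetical.

```python
def relative_protein_level(target, loading, control_target, control_loading):
    """Band intensity of a target protein normalized to a loading
    control (here beta-actin) and expressed as a percentage of the
    untreated control, the usual way a value such as '53% of control'
    is derived from densitometry. Inputs are arbitrary units."""
    return 100.0 * (target / loading) / (control_target / control_loading)

# Invented intensities consistent with the TFAM result above.
print(relative_protein_level(530, 1000, 1000, 1000))  # 53.0
```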
Then, we overexpressed TFAM by plasmid transfection, followed by challenge with a dose-response treatment of ZZW-115, and found a significant rescue in terms of cell survival, ATP production and OXPHOS capacity, as shown in Fig. 4B, C, and D. We also measured the MMP and the mitochondrial and total ROS production in response to increasing doses of ZZW-115 in TFAM-transfected cells, and found that all these biological parameters were strongly improved by TFAM, as presented in Fig. 4E, F, and G. Finally, we analyzed the mitochondrial network in both GFP- and TFAM-transfected cells in response to ZZW-115 treatment and found that the mitochondrial network disorganization induced by ZZW-115 was completely rescued by TFAM complementation (Fig. 4H). Moreover, in order to demonstrate that the previous results are a consequence of the inhibition of NUPR1 by ZZW-115, we used MiaPaCa-2 cells depleted of NUPR1 by a specific NUPR1 siRNA. Upon NUPR1 inhibition, TFAM expression was downregulated, as shown in Fig. 4I. Interestingly, TFAM overexpression in NUPR1-depleted cells was able to rescue the ATP content (Fig. 4J) and the MMP (Fig. 4K), as well as to decrease the mitochondrial and cellular ROS levels (Fig. 4L and M, respectively). Altogether, these results demonstrate that the mitochondrial cell death induced by ZZW-115 treatment or by siRNA-mediated NUPR1 inhibition is rescued by TFAM complementation. Consequently, TFAM could be considered an antagonist of the ferroptotic cell death regulated by NUPR1. DISCUSSION In this work, we describe a parallel induction of apoptosis (Z-VAD-FMK sensitive) and of a genuine ferroptotic cell death pathway (Fer-1 sensitive) launched by the NUPR1 inhibitor ZZW-115. This oxidative pathway induces downregulation of the biogenic mitochondrial factor TFAM, a strong mitochondrial dysfunction with high ROS production and lipid hydroperoxidation, and a concomitant failure of the key endogenous antioxidant systems, which can be reversed by TFAM complementation. Consequently, TFAM is in part an antagonist of ferroptotic cell death. A central role of the stress protein NUPR1 against ferroptosis, acting as a transcriptional inducer of LCN2, has recently been established [10]. Notably, NUPR1 is also involved in resistance to other cell deaths, such as apoptosis and necroptosis [29], and its expression is associated with resistance to several drugs [16]. In this work, we demonstrated that inhibition of NUPR1 by ZZW-115 induces ferroptosis, which is reversed by Fer-1 and ROS scavengers and, most importantly, by TFAM complementation. It is important to note that NUPR1 inactivation induced downregulation of TFAM expression. Because NUPR1 acts as a transcriptional regulator, we investigated whether the downregulation of TFAM was directly regulated by NUPR1 inactivation or indirectly through the consequent ROS production. Remarkably, this was undoubtedly a direct effect, since antioxidant treatment did not prevent this downregulation, as demonstrated in Fig. 4A.
It is important to note that the ROS overproduction induced by ZZW-115 treatment is responsible for the strong mitochondrial network disorganization and mitochondrial dysfunction, since both are reversed by Fer-1 and ROS scavengers. TFAM downregulation seems to play a key role in this process since, on one hand, it is directly induced by NUPR1 inactivation and, on the other hand, TFAM complementation reverses the mitochondrial dysfunction, the network disorganization, and the ROS production. All in all, our data indicate that the mitochondrial cell death mediated by TFAM downregulation is central in cell death by ferroptosis. Another important point to be noted is that, concomitantly with the increased ROS accumulation, lipid hydroperoxidation, and elevated iron levels found in cells treated with ZZW-115, we observed a dramatic failure of the key endogenous antioxidant systems, reflected in the GSH/GSSG ratio, the GPX4 activity, and the expression of key genes involved in ferroptosis. Altogether, we showed that NUPR1 inactivation induces the accumulation of ROS with a concomitant decrease in antioxidant mechanisms. Remarkably, in vivo treatment of PDAC- and HCC-derived xenografts showed a dose-dependent effect on GPX4 activity and lipid peroxidation, indicating that ZZW-115 induced tumor growth arrest, at least in part, through ferroptosis. Inducing ferroptosis is considered a promising strategy to treat aggressive cancers. For example, several ferroptotic agents are under evaluation for PDAC, such as artesunate (ART) [44], the combination of cotylenin A (CN-A) and phenylethyl isothiocyanate (PEITC) [45], or the combination of piperlongumine (PL), CN-A and sulfasalazine [46]. This strategy is also under evaluation in HCC. Sorafenib, a tyrosine kinase inhibitor widely used in the treatment of advanced HCC, induces ferroptosis of HCC cells as part of its biological effects [47]. In addition, inhibition of the sigma 1 receptor (S1R), which is abundantly expressed in hepatocytes, also promotes ferroptosis in HCC cells [48]. Other anticancer approaches are able to induce ferroptosis in HCC [49]. However, PDAC, as well as HCC, are resistant tumors that express a high level of NUPR1 [50], which may explain the failure of this approach. Therefore, a promising strategy to improve this treatment could be the association of ferroptosis-inducing agents with NUPR1 inhibitors like ZZW-115. Several defense mechanisms protecting against ferroptosis have been reported, but the one mediated by NUPR1 deserves particular attention. On one hand, its activation in response to ferroptotic agents mediates the activation of LCN2 [10], which acts directly against ferroptosis. On the other hand, an additional and complementary system is reported in this work, in which NUPR1 inactivation mediates the downregulation of TFAM. Interestingly, NUPR1 is a stress-induced protein, suggesting that its role is exerted exclusively under stress conditions. How TFAM acts against ferroptosis is suggested by its antioxidant effect, but we cannot exclude an additional effect at this time. Cell viability Cell viability was determined by crystal violet assay. Cells were plated in triplicate in 96-well plates and allowed to attach overnight, then incubated with various concentrations of ZZW-115 in the presence or absence of inhibitors for the indicated times. The medium was discarded, and cells were fixed with 1% glutaraldehyde solution, washed with PBS and stained with 0.1% crystal violet solution in 70% methanol.
After discarding the crystal violet solution, cells were washed with PBS three times and 1% SDS solution was added to solubilize the stain. Absorbance was read at 590 nm on an Epoch™ Microplate Spectrophotometer. AUC values were calculated from nonlinear regression curves with a robust fit using GraphPad software. GPX4 activity assay A glutathione peroxidase activity assay kit (Abcam, #ab102530, Cambridge, MA) was used to determine the activity of GPX4. The assay is based on the oxidation of glutathione (GSH) to oxidized glutathione disulfide (GSSG) catalyzed by GPX4, with GSSG then recycled back to GSH by glutathione reductase and NADPH. The oxidation of NADPH to NADP+ indicates GPX4 activity. In brief, 5 × 10^5 cells were reseeded in 10 cm cell culture dishes for attachment overnight and then treated with the indicated concentrations of ZZW-115 for 72 h. Cells or 100 mg of tumor tissue were harvested, washed, and resuspended in cold assay buffer. Cells were homogenized quickly by pipetting up and down, and tumors were homogenized with a Dounce homogenizer. Supernatants were collected after centrifugation and kept on ice. Samples were mixed with the reaction reagent following the manufacturer's instructions, and the OD at 340 nm was measured. Cumene hydroperoxide solution was then added to the samples. The enzymatic reaction was run in 96-well plates and NADPH oxidation was monitored by OD at 340 nm over 5 min at 25°C on a FLUOstar Omega plate reader. Measurement of ROS and mitochondrial ROS Cells were seeded at 8 × 10^4 cells per well in 24-well plates. The next day, cells were treated with the indicated concentrations of ZZW-115 alone or in the presence of 1 μM Fer-1 for 72 h. After that, cells were incubated with 5 μM CellROX Green Reagent (C10444, Thermo, USA) or 10 μM MitoSOX Red (M36008, Thermo, USA) at 37°C for 30 min in the dark. The unincorporated dye was then removed by washing with prewarmed PBS. Samples were harvested with accutase, centrifuged at 1500 rpm for 5 min, and the pellets were resuspended in 200 µL of prewarmed HBSS (Gibco, Life Technologies) for flow cytometry. 10,000 events per sample were collected on a MACSQuant-VYB, and data were analyzed with FlowJo software. Measurement of OXPHOS and glycolysis Cells were plated in 24-well plates (Seahorse) and incubated overnight in standard DMEM. Cells were treated with ZZW-115 (1 μM) alone or in the presence of Fer-1 (1 μM) or NAC (10 mM) for 72 h. The oxygen consumption rate (OCR, pmol O2/min) and the extracellular acidification rate (ECAR, mpH/min) were measured using the Seahorse Bioscience XF24 Extracellular Flux Analyzer. Before the measurement of OCR or ECAR, cells were incubated in XF assay medium supplemented with 2 mM L-glutamine, with or without 10 mM glucose, and with or without 1 mM pyruvate, in a 37°C non-CO2 incubator for 1 h. OCR was measured under basal conditions and in response to 1 μM oligomycin, 0.5 µM rotenone (Millipore Sigma), and 0.25 μM or 0.5 μM carbonyl cyanide p-(trifluoromethoxy)phenylhydrazone (FCCP) in MiaPaCa-2 or HepG2 cells, respectively. ECAR was measured under basal conditions and in response to 1 μM oligomycin, 10 mM glucose and 100 mM 2-deoxyglucose (2DG). The rate of glutamine fuel oxidation was determined by the Seahorse XF Mito Fuel Flex test. A glutaminase inhibitor (BPTES, 3 μM), a carnitine palmitoyl-transferase 1A inhibitor (etomoxir, 4 μM), and a glucose oxidation inhibitor (UK5099, 2 μM) were used in the test. Glutamine capacity and dependency were calculated according to the manufacturer's instructions.
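The dependency and capacity values deferred above to the manufacturer's instructions are commonly computed from OCR readings taken before and after sequential inhibitor injections. The sketch below uses the generic formulas for such fuel-flex tests; this is our assumption, since the text does not spell the formulas out, and all OCR numbers are hypothetical.

```python
def fuel_dependency(baseline_ocr, after_target_inhibitor, after_all_inhibitors):
    """Dependency (%): the OCR drop caused by inhibiting the pathway
    of interest first, relative to the drop when all three fuel
    pathways are blocked."""
    return 100.0 * (baseline_ocr - after_target_inhibitor) / (
        baseline_ocr - after_all_inhibitors)

def fuel_capacity(baseline_ocr, after_other_two_inhibitors, after_all_inhibitors):
    """Capacity (%): the ability to compensate when the other two
    pathways are inhibited first."""
    return 100.0 * (1 - (baseline_ocr - after_other_two_inhibitors) /
                    (baseline_ocr - after_all_inhibitors))

# Invented OCR values (pmol O2/min) for a glutamine fuel-flex run.
print(fuel_dependency(200.0, 150.0, 80.0))  # ~41.7 % dependency
print(fuel_capacity(200.0, 170.0, 80.0))    # 75.0 % capacity
```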
The OCR and ECAR values were normalized to the number of cells. Lipid peroxidation assay An MDA lipid peroxidation assay kit (ab118970, Abcam, Cambridge, UK) was used according to the manufacturer's specifications. For determining MDA production in MiaPaCa-2 and HepG2 xenografts, 10 mg of tumor tissue was used. For cell experiments, 5 × 10^5 cells were reseeded in 10-cm cell culture dishes and allowed to attach overnight. Then, cells were incubated with the indicated concentrations of ZZW-115 and Fer-1 for 72 h. Tumor tissues and harvested cells were homogenized in lysis solution and centrifuged to recover the supernatant. Equivalent amounts of protein were used; thiobarbituric acid solution was added and the samples were incubated at 95°C for 1 h. Then, samples were cooled in an ice bath for 10 min. Fluorescence was read at Ex/Em = 532/553 nm on a TECAN Infinite 96-plate reader. Detection of lipid hydroperoxides Cells were seeded in 12-well plates at a density of 2.5 × 10^4 cells per well. The next day, cells were treated with the indicated concentrations of ZZW-115 alone or in the presence of 1 μM Fer-1 for 72 h. After that, cells were incubated for 30 min in 200 µL of fresh medium containing 2 µM BODIPY™ 581/591 C11 (Invitrogen Molecular Probes, D3861) at 37°C. Then, cells were washed two times with PBS. For cytometry experiments, samples were harvested with accutase, centrifuged at 1500 rpm for 5 min, and the pellets were resuspended in 200 µL of prewarmed HBSS (Gibco, Life Technologies) for flow cytometry. 10,000 events per sample were collected on a MACSQuant-VYB, and data were analyzed with FlowJo software. For fluorescence microscopy experiments, image acquisition was performed directly on a Zeiss Axio Imager Z2 microscope. Glutathione assay The GSH/GSSG-Glo assay kit (V6611, Promega) was used following the manufacturer's protocol. In brief, 5000 cells per well were seeded overnight in 96-well plates. Cells were treated with the indicated concentrations of ZZW-115 for 72 h in triplicate. Total intracellular glutathione and GSSG were measured. Luciferin Generation Reagent and Detection Reagent were added to all wells, the assays were mixed, and luminescence was measured using a Tristar multimode microplate reader. GSSG and total glutathione concentrations were calculated using a glutathione standard curve and normalized to the cell number. GSH/GSSG ratios were calculated using the following equation: GSH/GSSG = [Total GSH − (2 × GSSG)]/GSSG. Electron microscopy Cells were prepared according to the NCMIR protocol for SBF-SEM. Seventy-nanometre ultrathin sections were cut using a Leica UCT ultramicrotome (Leica, Austria) and deposited on formvar-coated slot grids. Samples were observed in an FEI Tecnai G2 at 200 keV and image acquisition was performed on a Veleta camera (Olympus, Tokyo, Japan). Mitochondrial network Mitochondrial network localization was performed by incubating cells for 30 min at 37°C with MitoTracker DeepRed FM (200 nM, Molecular Probes). Subsequently, cells were washed and fixed with 4% paraformaldehyde for 10 min. Finally, samples were mounted using Prolong Gold antifade reagent with DAPI. Confocal images were acquired using an inverted microscope equipped with an LSM 880, controlled by Zeiss Zen Black software. Mitochondrial membrane potential assay The mitochondrial membrane potential assay was performed using the MitoProbe TMRM Assay Kit (M20036, Invitrogen) following the manufacturer's protocol.
After incubation, cells were dissociated using accutase and resuspended in 200 μL of PBS at a density of 1 × 10^6 cells/mL. 1 μL of 20 μM TMRM stock solution was added to the cells, which were incubated for 30 min at 37°C, 5% CO2. Data were acquired by flow cytometry with 561 nm excitation. Ten thousand events per sample were collected on a MACSQuant-VYB (Miltenyi Biotec, Surrey, UK). Data analysis was performed using FlowJo software. Western blot Protein extracts were resolved by SDS-PAGE and then transferred onto nitrocellulose membranes for 1 h. Membranes were blocked for 1 h at room temperature with TBST (Tris-buffered saline with Tween) containing 5% BSA, and incubated overnight at 4°C in TBST with 5% BSA containing the primary antibodies at 1:500. Subsequently, the blot was washed and incubated with an HRP-conjugated secondary antibody (Boster, Pleasanton, CA, USA) at 1:5000 for 1 h at room temperature before being revealed with ECL (enhanced chemiluminescence). Acquisition was performed with a Fusion FX7 imaging system (Vilber-Lourmat, Sud Torcy, France). The following primary antibodies were used: rabbit polyclonal TFAM (#7495, Cell Signaling) and mouse monoclonal β-actin (#A5316, Sigma). siRNA transfection Cells were plated at 70% confluence and INTERFErin™ reagent (Polyplus-transfection) was used to perform siRNA transfections, following the manufacturer's protocol. A scrambled siRNA that targets no known gene sequence was used as a negative control. The sequence of the NUPR1-specific siRNA was r(GGAGGACCCAGGACAGGAU)dTdT. Iron levels The intracellular iron concentration was measured using an iron assay kit (MAK025, Sigma-Aldrich) following the manufacturer's instructions with small modifications. Briefly, 5 × 10^6 MiaPaCa-2 cells were homogenized in 200 µl of Iron Assay Buffer and centrifuged at 16,000 × g for 10 min at 4°C to remove insoluble material. Then, 75 µl of sample was added to a 96-well plate and the volume was brought to 100 µl per well with Assay Buffer. Five microliters of Iron Reducer were added to each well to reduce Fe3+ to Fe2+. The plate was incubated for 30 min at room temperature, protected from light. After incubation, 100 µl of Iron Probe were added to each well and incubated for 60 min at room temperature, protected from light. The absorbance was measured at 593 nm. Iron concentrations were evaluated from an iron standard curve and normalized to the number of cells. The data are presented as total iron concentration (µM) per number of cells. Statistics Statistical analyses were conducted using the unpaired two-tailed Student's t-test, one-way ANOVA with Tukey's post-hoc test, or two-way ANOVA with Sidak correction. The results are expressed as the mean ± SEM of at least three independent experiments. A p-value of <0.05 was regarded as statistically significant. DATA AVAILABILITY All the data used during the study are available from the corresponding author on request.
2023-02-06T15:12:40.310Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "69917103245bfc950c155ba0819ee365088d47ed", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41420-021-00662-2.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "69917103245bfc950c155ba0819ee365088d47ed", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
260415498
pes2o/s2orc
v3-fos-license
EPENTHESIS IN ELEMENTARY SCHOOL The focus of this research was epenthesis in the written words of students at Pasir Luhur elementary school, Bandung. The writer aimed to identify [1] the way epenthesis appears in the students' writing and [2] which graphemes are most frequently added to the written words. The researcher used a descriptive qualitative research method in this study, which means that participants were involved in case studies that produced narratives and descriptive explanations about settings or practices. The objects of exploration that the writer analyzed in this study were the handwriting samples produced by 10 representative students in one classroom. From the analysis, the writer found a pattern that made the resulting written words highly similar across students. The results of the research showed that the students produced epenthesis by adding a certain letter in the middle of a word. Epenthesis occurred most frequently with consonant graphemes. This research can add to the literature for other researchers in the field of psycholinguistics and become a reference for other researchers in linguistic studies. Introduction The field of morphographemics, particularly computational morphology, is well known in linguistics. However, the author's present discussion of morphographemics does not fall under that subfield of research; rather, it is a component of psycholinguistics, a larger discipline. As a result, as opposed to computational morphology, which focuses solely on affixes, the author examines basic English words, English being the third language of the majority of Indonesians. The morphographemic changes in this psycholinguistic study must be investigated in light of the numerous phenomena that the author discovered in the children who are the subject of this study, but not in affixes. Morphographemic changes can take many forms: substitution, metathesis, addition, or deletion. However, this investigation concentrates solely on addition, specifically epenthesis. Epenthesis can occur in any child, regardless of whether they have a language disorder or not. The same mistakes can be made in writing by elementary students. The writer assumes that these writing errors are caused by psychological factors. In elementary school, students make mistakes in their writing due to psychological factors that are related to the brain. These factors are not caused by damage to the brain; rather, they are caused by other factors, such as delayed language processing in the brain, which causes students to fail to produce written language that they have actually mastered. Other psychological factors, such as limited memory or forgetfulness in processing a language, may also cause errors in writing. The author of this study focuses on examining student language difficulties, particularly those related to writing and spelling. The results of writing and spelling are the data used in this study; the participants are students enrolled at SDN Pasir Luhur Bandung. The author wishes to determine: [1] the way epenthesis appears in the students' writing and [2] which graphemes are most frequently added to the written words. Research Method In this study, handwriting was the object of investigation that the author examined. One group of research subjects, the experimental group of ten students at SDN Pasir Luhur Bandung, produced writing in their own hand.
The author asked the students to complete a 64-item pictorial questionnaire, with each image to be labeled in their native Indonesian and in English. In addition, in response to requests from the school and parents not to mention the students' real names, the author refers to the students with letters of the alphabet, such as student A, student B, and so on; what is of interest here is the level of the students' writing ability. In this study, qualitative research methods were used, and participants served as subjects in case studies that produced narratives and descriptive explanations about settings or practices (Nayak & Sing, 2015). Saldana (2011) described qualitative research as a collection of different methods and approaches used in different social science fields. One's comprehension of the various patterns and intricate meanings of social life improves as one gains experience with various field methods. According to Jain (2019), qualitative research may require the collection and analysis of non-numeric data or the examination of a single case study. The author employed a causal descriptive case study approach in this investigation. Result and Discussion A previous study by Atika (2021) identified all error types and all source types, with omission being the most frequent type of error found. The present research on the writings of SDN Pasir Luhur Bandung students found that several changes of words could be identified in their written language production, namely epenthesis. Below is a list of students who show epenthesis, a type of addition, in their handwriting. The difference between target and actual writing can be seen more specifically by drawing the constituent model of the written word proposed by Weingarten et al. (2004). There is a difference between the written word model of the word APPLE and the written word model of the word APPELE written by student G. Both have the same pattern from the graphemic word level to the syllable level. However, from the level of the syllable constituents to the lowest level, there is something different. The graphemic tier of the word APPLE is GV+GC+GC2+GV with the syllable constituents R+O+R. The graphemic tier of the word APPELE is GV+GC+GC+GV+GC+GV with the syllable constituents R+O+R+R. So a word that should consist of the letters A, P, P, L, and E instead becomes the letters A, P, P, E, L, and E. Thus, student G has added a letter to the word. Morphographically, a symptom was found in student G's writing in the word APPELE: a morphographemic phenomenon that indicates the addition of a letter in the middle of a word. This symptom is called "epenthesis". In the written word model of the word CAR and the written word model of the word CHAR written by students G and J, there are similarities and differences. The two words have the same pattern from the graphemic word level to the syllable constituent level. Each consists of one graphemic word, one lexical constituent, one syllable tier, and two syllable constituents. However, from the graphemic tier to the lowest level, something is different. The rhyme (R) at the level of the syllable constituents does not show any difference. However, it is different with the onset (O), which differs between the two words. The onset (O) in the word CAR consists of only one consonant letter, namely the letter C, with the graphemic tier GC. But in the word they wrote, the onset (O) has two consonant letters, the letter C and the letter H, with the graphemic tier GC2.
The graphemic tier of the word CAR is GC+GV+GC. The graphemic tier of the word CHAR is GC2+GV+GC. So a word that should consist of the letters C, A, and R instead becomes the letters C, H, A, and R. This causes the number of letters in the word to increase by one: from what should have been three letters, it turned into four letters. There is a difference of one letter between the word CAR and the word CHAR. Thus, the students have inserted a letter in the word. Morphographically, a symptom was found in the writing of students G and J in the word CHAR: a morphographemic phenomenon that indicates the addition of a letter in the middle of a word. This symptom is called "epenthesis". In the written word model of the word FLOWER and the written word model of the word FLOWWER written by students G and J, there are similarities and differences. Both have the same pattern from the graphemic word level to the syllable level. However, from the level of the syllable constituents to the lowest level, there is something different. The graphemic tier of the word FLOWER is GC2+GV+GC+GV+GC with the syllable constituents O+R+R. The graphemic tier of the word FLOWWER is GC2+GV+GC+GC+GV+GC with the syllable constituents O+R+O+R. So a word that should consist of the letters F, L, O, W, E, and R instead becomes the letters F, L, O, W, W, E, and R. Thus, the students have added a letter in the word. Morphographically, a symptom was found in the writing of students G and J in the word FLOWWER: a morphographemic phenomenon that indicates the addition of a letter in the middle of a word. This symptom is called "epenthesis". The written word model of the word MANGO and the written word model of the word MANGGO written by students D, E, and F show similarities and differences. Both have the same pattern from the graphemic word level to the syllable constituent level. However, from the grapheme level to the lowest level, something is different. The graphemic tier of the word MANGO is GC+GV+GCn+GC+GV. The graphemic tier of the word MANGGO is GC+GV+GCn+GC2+GV. So a word that should consist of the letters M, A, N, G, and O instead becomes the letters M, A, N, G, G, and O. In terms of the number of letters, there is also a difference: in the word MANGO there are five letters, whereas in the word MANGGO there are six letters. The number of letters between the two words differs by one. Thus, the students have inserted a letter in the word. Morphographically, a symptom was found in the writing of students D, E, and F in the word MANGGO: a morphographemic symptom indicating the insertion of a letter in the middle of a word. This symptom is called "epenthesis". In the written word model of the word ZEBRA and the written word model of the word ZEBBRA written by student F, there are similarities and differences. Both have the same pattern from the graphemic word level to the syllable constituent level. Each word consists of one graphemic word, one lexical constituent, two syllables, and four syllable constituents. However, from the grapheme level to the lowest level, something is different. The graphemic tier of the word ZEBRA is GC+GV+GC2+GV. The graphemic tier of the word ZEBBRA is GC+GV+GC+GC2+GV. Thus, student F has inserted a letter in the word ZEBRA. Morphographically, a symptom was found in student F's writing in the word ZEBBRA: a morphographemic phenomenon that indicates the addition of a letter in the middle of a word. This symptom is called "epenthesis".
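The comparisons above, between a target spelling and an actual spelling differing only by an inserted letter, can also be detected automatically. The sketch below is not part of the original study; it simply shows how the insertions in the five word pairs analyzed here could be located with Python's standard difflib module.

```python
import difflib

def find_epentheses(target, actual):
    """Return (position, letter) pairs for letters inserted into the
    actual spelling relative to the target spelling."""
    matcher = difflib.SequenceMatcher(None, target, actual)
    insertions = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "insert":
            insertions.extend((i1, ch) for ch in actual[j1:j2])
    return insertions

pairs = [("APPLE", "APPELE"), ("CAR", "CHAR"), ("FLOWER", "FLOWWER"),
         ("MANGO", "MANGGO"), ("ZEBRA", "ZEBBRA")]
for target, actual in pairs:
    print(target, "->", actual, find_epentheses(target, actual))
```

Running this prints one inserted letter per pair (E, H, W, G, B), four of the five being consonant graphemes, in line with the finding reported in the Conclusion.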
Conclusion Based on the analysis, the writer finally identified the way epenthesis appears in the students' writing and which graphemes are most frequently added to the written words. [1] From the analysis, the writer found a consistent pattern that made the resulting written words highly similar across students. [2] The results of the research showed that the students produced epenthesis by adding certain letters to words such as APPLE, CAR, FLOWER, MANGO and ZEBRA, turning them into APPELE, CHAR, FLOWWER, MANGGO and ZEBBRA. Epenthesis occurred most frequently with consonant graphemes. The writer suggests re-examining this finding, because there are many factors that influence students' writing errors, and hopes that this research can add to the literature for other researchers in the field of psycholinguistics and become a reference for other researchers in linguistic studies.
2023-08-03T15:08:51.901Z
2023-07-25T00:00:00.000
{ "year": 2023, "sha1": "ede4de81af454a9ac1a09cc5e7edd6a4332d4bbf", "oa_license": "CCBYSA", "oa_url": "http://ejurnal.budiutomomalang.ac.id/index.php/journey/article/download/3084/1742", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "22c92e3123a9794b2f9547ec7da1fbc70c9ca56d", "s2fieldsofstudy": [ "Education", "Linguistics" ], "extfieldsofstudy": [] }
260406615
pes2o/s2orc
v3-fos-license
Compass adjustment by GPS (or any other GNSS receiver) and a single visual reference Abstract This paper proposes a proper compass adjustment method using only a GPS (or any other GNSS receiver) and a single visual reference to enhance the efficiency of compass adjustment. During compass adjustment, the ship proceeds on magnetic courses using a gyroscopic or satellite compass and considering magnetic declination. However, non-magnetic compasses are only compulsory for ships of 500 gross tonnage or upwards (SOLAS V/19.2.5.1). Many ships of less than 500 gross tonnage have only a magnetic compass to indicate heading. In these cases, a minimum of five leading lines or a minimum of five bearings of conspicuous and distant points or sun azimuths are necessary to adjust the compass. This makes compass adjustment more laborious and time consuming. To expedite this process, a reliable and practical method was developed to use the courses over ground given by a GNSS receiver and a single visual reference instead of the headings provided by a gyroscopic or satellite compass. The method is valid for all ships, but is primarily intended for those equipped with only a magnetic compass to indicate heading. Introduction The objective of this paper is to propose a proper compass adjustment method using only a GNSS receiver and a single visual reference for ships equipped with only a magnetic compass to indicate heading. Compass adjustment is required for the correct operation of the magnetic compass, namely the first nautical equipment mentioned in SOLAS V/19. For many years, the process of compass adjustment remained stagnant. However, with the emergence of new technologies, magnetic compass applications and adjustment techniques have again become subjects of research. Several works focus on the improvement of magnetic compass performance. Recently, Androjna et al. published a compendium on the current use of the magnetic compass (Androjna et al., 2021). Other authors have taken a closer look at specific items. For example, Felski applies the least squares method to determine residual deviations (Felski, 1999); Basterretxea updates the residual deviations according to latitude (Basterretxea Iribar et al., 2014); Martínez-Lozares obtains the deviations in real time (Martínez-Lozares, 2009a, 2009b); and Lushnikov updates the table of residual deviations for any single course (Lushnikov, 2011). The present paper follows this line of research by tackling the efficiency of compass adjustment. The remainder of the paper is divided into seven sections. Section 2 explains compass adjustment on ships of less than 500 gross tonnage. In Sections 3-5, the proposed method is developed, discussed and verified, respectively, while in Section 6 the proposed method is completed by applying Lushnikov's method (Lushnikov, 2011). Section 7 describes the application of the complete method. Finally, conclusions are drawn in Section 8. Introduction to compass adjustment The process of compass adjustment has two phases: the actual compass adjustment or compensation, and the creation of the table of residual deviations (deviation table). Deviation equation The deviation equation commonly applied is

Δ = A + B·sin ζc + C·cos ζc + D·sin 2ζc + E·cos 2ζc   (1)

where Δ is the deviation, ζc is the compass course (ζm indicates the magnetic course, see Subsection 2.3 and Section 3), and A, B, C, D and E are the approximate coefficients, the exact coefficients being the sines of the approximate ones (Gaztelu-Iturri Leicea, 1999; National Geospatial-Intelligence Agency, 2004).
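To make equation (1) concrete, the short sketch below evaluates the deviation for an arbitrary compass course; the coefficient values are hypothetical, chosen only for illustration.

```python
import math

def deviation(compass_course_deg, A, B, C, D, E):
    """Deviation on a given compass course from the approximate
    coefficients of equation (1); all values in degrees."""
    z = math.radians(compass_course_deg)
    return (A + B * math.sin(z) + C * math.cos(z)
              + D * math.sin(2 * z) + E * math.cos(2 * z))

# Hypothetical coefficients, in degrees.
print(round(deviation(45.0, A=0.5, B=2.0, C=-1.0, D=0.8, E=0.3), 2))  # ~2.01
```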
Course deviation comprises three parts: the constant deviation, A, which does not depend on the course; the semicircular deviation, B·sin ζc + C·cos ζc, which depends on the course; and the quadrantal deviation, D·sin 2ζc + E·cos 2ζc, which depends on twice the course. The second and the third are called semicircular and quadrantal deviations because they are repeated with a different sign every 180° and 90°, respectively, where 180° corresponds to half a circle (semicircle) and 90° to a quarter of a circle (quadrant). Semicircular deviation depends mainly on the ship's hard iron, which has permanent magnetism and is corrected with magnets. On the other hand, constant and quadrantal deviations depend solely on the ship's soft iron, which does not have permanent magnetism, but in which magnetism is induced according to its orientation within the earth's magnetic field. Considering ζc = 0°, 90°, 180° and 270°, the expressions of the deviations on the cardinal courses are obtained as

Δn = A + C + E   (2)
Δe = A + B − E   (3)
Δs = A − C + E   (4)
Δw = A − B − E   (5)

Therefore, these deviations depend on the constant (coefficient A), the semicircular (coefficients B, C) and part of the quadrantal (coefficient E) deviation, with the semicircular one being the main deviation. Compensating device Semicircular deviation of magnetic compasses on ships of less than 500 gross tonnage can be compensated in two ways. Many compasses have a mechanism that adjusts the position of the longitudinal and transversal magnets by using an anti-magnetic screwdriver to turn one screw for the longitudinal magnets and one for the transversal ones. If the compass does not have this device, the magnets must be stuck directly on the compass or in its vicinity. Quadrantal deviation can be compensated using soft iron correctors, such as small spheres or cylinders, or boxes where several soft iron plates can be placed. However, as this is not a common practice, this paper does not consider the compensation of this deviation. Note, however, that the effect of the quadrantal and constant deviations is always included in the residual deviations. A common compensating device consists of one (or two) longitudinal and one (or two) transversal magnets that can rotate vertically around their centres, as shown in Figure 1. Note that the longitudinal magnets are inside the transversal rotating cylinder and the transversal magnets are inside the longitudinal rotating cylinder. The horizontal component of the magnetic moment of the magnets is used to adjust the compass. If a magnet is completely vertical, the horizontal component (longitudinal or transversal, depending on the type of magnet) of its magnetic moment is zero. If it is completely horizontal, the horizontal component is equal to its own magnetic moment, with a polarity that can be changed by turning the magnet 180°. On the other hand, if the magnet is fitted at a vertical angle of less than 90°, the horizontal component is smaller than its own magnetic moment, and the smaller the larger the angle. Traditional method of compensation Compensation is typically accomplished by proceeding on the four cardinal magnetic headings, a manoeuvre known as a swing (Gaztelu-Iturri Leicea, 1999; National Geospatial-Intelligence Agency, 2004). If the ship is equipped with a gyroscopic or satellite compass, a magnetic heading is followed by keeping the corresponding true course, TC (i.e. TC = ζm + δ, where ζm is the magnetic course and δ is the magnetic declination).
On the east (or west) magnetic heading, the deviation is nullified by setting ζc = 90° (or 270°) with the longitudinal magnets, because they are perpendicular to the earth's magnetic field and can alter the compass course, while the transversal magnets lie in the same direction as the earth's magnetic field and therefore cannot alter the compass course. Next, on the north (or south) magnetic heading, the deviation is also nullified by setting ζc = 0° (or 180°), but with the transversal magnets, which are now perpendicular to the earth's magnetic field. The effect of the ship's hard iron (coefficients B and C) is thus minimised but not completely eliminated, because the deviations on the cardinal courses also depend on the constant (coefficient A) and part of the quadrantal deviation (coefficient E) (see Subsection 2.1). Consequently, residual magnetic effects remain after the compensation, i.e.

Δ = A + B′·sin ζc + C′·cos ζc + D·sin 2ζc + E·cos 2ζc   (6)

where B′ and C′ are the new coefficients corresponding to the hard iron, smaller than the original coefficients B and C. Next, we continue the swing. On the west (or east) magnetic heading, equation (5) (or (3)) with the new coefficients gives

Δw = A − B′ − E   (7)   or   Δe = A + B′ − E   (7 bis)

and on the south (or north) magnetic heading we have

Δs = A − C′ + E   (8)   or   Δn = A + C′ + E   (8 bis)

Consequently, subtracting the deviations on opposite headings,

B′ = (Δe − Δw)/2   (10)
C′ = (Δn − Δs)/2   (11)

Assuming that Δe (or Δw) and Δn (or Δs) are exactly zero, expressions (10) and (11) show that only half the deviations on the west (or east) and on the south (or north) must be nullified with the longitudinal and transversal magnets, respectively, to eliminate coefficients B′ and C′. Adding instead of subtracting the same pairs of equations gives

Δe + Δw = 2A − 2E   (12)
Δn + Δs = 2A + 2E   (13)

Hence,

A = (Δn + Δe + Δs + Δw)/4   (14)
E = (Δn + Δs − Δe − Δw)/4   (15)

Thus, expressions (14) and (15) give coefficients A and E, respectively. If half the deviations on the west (or east) and on the south (or north) are not nullified, expressions (10) and (11) give coefficients B′ and C′, which are the residual coefficients B and C. By contrast, if half the deviations on the west (or east) and on the south (or north) are nullified or minimised, the coefficients corresponding to the hard iron are altered again, nullifying or minimising coefficients B′ and C′. In this case, new deviations and new coefficients are obtained, i.e.

Δ′w = A − B″ − E   or   Δ′e = A + B″ − E
Δ′s = A − C″ + E   or   Δ′n = A + C″ + E

where B″ and C″ are zero or have very small values. The magnets alter coefficients B and C but not the other coefficients. Hence, expressions (14) and (15) remain valid to determine coefficients A and E. Once coefficients A and E, and the deviations Δ′w (or Δ′e) and Δ′s (or Δ′n), are known, coefficient B″ is calculated from expression (16) or (16 bis),

B″ = A − E − Δ′w   (16)   or   B″ = Δ′e − A + E   (16 bis)

and coefficient C″ is analogously calculated from expression (17) or (17 bis),

C″ = A + E − Δ′s   (17)   or   C″ = Δ′n − A − E   (17 bis)

Thus, the residual coefficients B and C are B′ and C′ or B″ and C″, depending on whether the deviations on the third and fourth courses of the swing, i.e. west (or east) and south (or north), are altered. We now know coefficients A, B, C and E, but not coefficient D. To obtain coefficient D, it is necessary to complete the swing on a fifth course, which must be a quadrantal one, i.e. NE, SE, SW or NW, the deviation equation (1) for these courses (ζc = 45°, 135°, 225° and 315°) being

Δne = A + 0.707·B + 0.707·C + D   (18)
Δse = A + 0.707·B − 0.707·C − D   (19)
Δsw = A − 0.707·B − 0.707·C + D   (20)
Δnw = A − 0.707·B + 0.707·C − D   (21)

where 0.707 is a sufficient approximation of the sine and cosine of 45°. Note that sin 2ζc is always ±1 and cos 2ζc is always zero on these courses. For this reason, coefficient E does not appear in the deviations on the quadrantal courses. Expression (18), (19), (20) or (21) is used to obtain coefficient D, depending on which the fifth course is.
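The chain of expressions (10)-(21) reduces to simple arithmetic once the five swing deviations are recorded. The sketch below implements that arithmetic; it is only an illustration of the expressions above, and the deviation values are made up.

```python
def coefficients_from_swing(dev_n, dev_e, dev_s, dev_w, dev_ne):
    """Approximate coefficients from the deviations observed on the
    four cardinal compass courses and on NE, per expressions
    (10)-(18): B and C from half-differences, A and E from
    quarter-sums, D from the quadrantal course."""
    A = (dev_n + dev_e + dev_s + dev_w) / 4          # (14)
    E = (dev_n + dev_s - dev_e - dev_w) / 4          # (15)
    B = (dev_e - dev_w) / 2                          # (10)
    C = (dev_n - dev_s) / 2                          # (11)
    D = dev_ne - A - 0.707 * (B + C)                 # from (18)
    return A, B, C, D, E

# Hypothetical residual deviations in degrees after compensation.
print(coefficients_from_swing(0.0, 0.0, 1.2, -0.8, 1.5))
```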
Once all coefficients A, B, C, D and E are known, the deviation on different compass courses, typically every 10 or 15 degrees from north, is calculated by applying the deviation equation with a spreadsheet. Finally, the resulting deviation table is attached to the certificate of compass adjustment, in compliance with SOLAS V/19.2.1.3; IMO Resolution A.382(X), Annex I.3; ISO Standard 25862:2019, Annex G.7; and the corresponding national regulations (IMO, 1977; ISO, 2019). According to ISO Standard 25862:2019, Annex G.1, the deviation on any course must not exceed 4° for ships of a length less than 82.5 m.

3. Approach and development of the method
The method aims to determine coefficients A, B, C, D and E of the deviation equation (1) by comparing the compass courses with the courses over ground (COGs) indicated by a GNSS receiver. It is based on the types of courses and the triangle of speeds, as shown in Figure 2. References TN, MN and CN correspond to the true, magnetic and compass north (i.e. the origins of the true course, TC; the magnetic course, ζm; and the compass course, ζ, respectively), where dm is the magnetic declination, Δ the deviation, and S the vessel's speed through the water. The parameters of the external forces are the set and the drift, d, where the set is expressed as a magnetic course and β is the course difference due to the external forces, β = COG − TC (Moncunill Marimón et al., 2020).

3.1 Deviation equation referred to course over ground
According to Figure 2, by the law of sines (Moncunill Marimón et al., 2020),

tan β = (d/S)·sin(set − TC) / [1 + (d/S)·cos(set − TC)]

Developing this expression by multiplying the numerator and denominator by 1 − (d/S)·cos(set − TC), the denominator becomes 1 − (d²/S²)·cos²(set − TC). Because d² is much smaller than S², the denominator can be considered 1. Also, since β is a small angle, its tangent can be replaced by its sine, which in turn can be replaced by β·sin 1°. Thus,

β ≈ (d/S)·sin(set − TC)/sin 1°

with β in degrees. Consequently, and since the magnetic and compass courses are similar, set − TC can be replaced by set − dm − ζ. The deviation is the difference between the magnetic course and the compass course, i.e. Δ = ζm − ζ. On the other hand, the magnetic course is the difference between the true course and the magnetic declination, i.e.
ζm = TC − dm. Hence, Δ = TC − dm − ζ, and TC is the difference between COG and β, i.e. TC = COG − β. Therefore, Δ = COG − β − dm − ζ, and the deviation equation (1) becomes

COG − β − dm − ζ = A + B·sin ζ + C·cos ζ + D·sin 2ζ + E·cos 2ζ

Now let the pseudo-deviation, Ψ, be defined as the difference between the COG and the compass course, i.e. Ψ = COG − ζ. Then,

Ψ = Δ + dm + β (22)

3.2 Compensation and calculation of the coefficients of the deviation equation
Particularising expression (22) for the four cardinal compass courses (Moncunill Marimón et al., 2020), we obtain

Ψn = A + C + E + dm + βn
Ψe = A + B − E + dm + βe
Ψs = A − C + E + dm + βs
Ψw = A − B − E + dm + βw (23)-(26)

If we apply the traditional method of compensation (see Subsection 2.3) but use the COGs provided by a GNSS receiver instead of the true courses provided by a gyroscopic or satellite compass, the first COGs must be 90° + dm (or 270° + dm) and 0° + dm (or 180° + dm). Then, when the ship proceeds on these COGs, the compass course is altered with the longitudinal and transversal magnets to obtain the following compass courses, respectively: 90° (or 270°) and 0° (or 180°), resulting in Ψe = dm (or Ψw = dm) and Ψn = dm (or Ψs = dm) (expressions (27)-(30)). Next, we continue the swing on the other cardinal courses, but steering the ship on compass courses, which are easier to hold than COGs, and we observe the corresponding COGs to obtain the pseudo-deviations. Expressions (31), (32), (35) and (36) then give coefficients B′, C′, A and E, respectively. Coefficient A depends solely on the pseudo-deviations and the magnetic declination, which are known data. The other coefficients depend on the pseudo-deviations, which are known data, but also on parameters x and y, which are not known. However, the combinations of pseudo-deviations used for A, D and E involve x and y only through terms of order d²/S² (expression (37)). Finally, we complete the swing by proceeding on a quadrantal course, for example NE (expression (38)); replacing (39), (40) and (41) in (38), we obtain coefficient D up to a term of that same order (expressions (42)-(43)), and analogously for the other quadrantal courses.

3.3 Residual deviations and verification of compensation
The residual deviations cannot be determined because, except for A, the coefficients of the deviation equation depend on factors x and y, which are unknown. Consequently, we cannot check whether any residual deviation exceeds 4° (in accordance with ISO Standard 25862:2019, Annex G.1). If one does, coefficient B or C must be completely nullified. Section 7 explains how to nullify the coefficients. In Section 4, coefficients D and E are obtained, and in Section 6, residual coefficients B and C are calculated so as finally to check the deviation table and make the necessary readjustments.

4. Discussion of method
Expressions (31) and (32) are not reliable for the calculation of residual coefficients B and C, because an imprecise d/S ratio can lead to a considerable error. It is observed, however, that at a sufficient speed the d²/S² ratio is very small, so that expressions (37) and (43) can be considered zero. Thus, coefficients D and E can be determined solely from the pseudo-deviations, i.e. from expression (44) and its analogues for the other quadrantal courses. The maximum error in the calculation of coefficients D and E as a function of the S/d ratio is shown in Table 1. These results show that coefficients D and E can be determined solely from the pseudo-deviations, but with a small error that is negligible at sufficiently high speed, i.e. for a ratio S/d equal to or greater than 8, which causes an error of less than 0.5° in each coefficient. Assuming a drift of 1 knot or less, the minimum speed is 8 knots; assuming a drift of 0.5 knots or less, the minimum speed is 4 knots. A suitable minimum speed could be 7 knots.
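Because the external-force term β changes sign on opposite courses (βn = −βs, βe = −βw to first order), sums of pseudo-deviations over opposite cardinal and quadrantal courses cancel both β and, where appropriate, dm. The sketch below exploits that property to recover A, E and D from pseudo-deviations alone, which is our reading of expressions (35), (36) and (44); the numeric inputs are hypothetical:

def coeffs_from_pseudo_deviations(psi, dm):
    """A, D and E (deg) from pseudo-deviations on the eight main compass courses.

    psi: dict of pseudo-deviations keyed by course name; dm: magnetic
    declination (deg). Drift terms cancel pairwise on opposite courses,
    so d/S enters only at second order (cf. Table 1 of the paper).
    """
    A = (psi["N"] + psi["E"] + psi["S"] + psi["W"]) / 4 - dm
    E = (psi["N"] + psi["S"] - psi["E"] - psi["W"]) / 4
    D = (psi["NE"] - psi["SE"] + psi["SW"] - psi["NW"]) / 4
    return A, D, E

psi = {"N": 1.4, "NE": 2.0, "E": 0.3, "SE": -3.1, "S": 1.8, "SW": 3.2, "W": 2.5, "NW": 0.9}
print(coeffs_from_pseudo_deviations(psi, dm=1.1))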
It should be emphasised that the method is not based on an exact calculation of the effect of the external forces, which is variable and has no exact vector behaviour, but on determining which coefficients are affected by this effect and which are not.

5. Verification of the method
The method was verified by performing a swing on a recreational fishing ship in the Bay of Santoña (Cantabria, Spain). The swing was carried out on 24 May 2021, between 1255 and 1335 local time, once the ship was inside the bay (outside the harbour and its channel). The wind was from the west, force 5-6 on the Beaufort scale, generating waves of approximately 1.5 m in the bay. The cloud cover was 7 oktas, mainly stratocumulus together with clouds of greater vertical development, causing intermittent rain of moderate intensity. The ship was navigating at about 7 knots, an adequate speed as stated in Section 4.

5.1 Equipment
An integral magnetic compass, IMC (Martínez-Lozares, 2009a, 2009b), was used to find the deviations in real time. The true course input to the IMC was obtained from the ship's satellite compass (Figure 3 shows the satellite compass antenna with its clover-shaped base). The compass course input to the IMC was obtained using a magnetic sensor (Figure 4 shows the magnetic compass and the sensor being adjusted, and Figure 5 shows the magnetic compass with the sensor already adjusted). The IMC, installed on a PC with the inputs of both courses obtained from the magnetic compass, C, and the satellite compass, G, can be seen in Figure 6. The position obtained from the GPS by the IMC is used to calculate the magnetic declination, dm, with the US National Oceanic and Atmospheric Administration (NOAA) calculator (Figure 7). The true course, the compass course and the magnetic declination provide the deviation value at all times, i.e. Δ = G − dm − C.

5.2 Data collection
Using the satellite compass, the ship proceeded on the eight main true headings, i.e. N, NE, E, SE, S, SW, W and NW. For each heading, the COG was observed and the compass course was recorded by the IMC. The reading of the COGs followed the same technique as the observation of draughts in wave conditions or of the compass course in gyrocompass navigation: observation of the variations in the data (draught, compass course or COG in this case), estimation of an average value and, for the courses, observation of the values to be compared (gyroscopic and compass courses, or true course and COG in this case) at different times to check the average. To facilitate the comparison of headings, the IMC was set to G mode, i.e. showing the true course determined by the satellite compass as the main course information. Figure 8 shows the IMC and the GPS receiver while a COG was being obtained.
5.3 Data processing: obtaining the coefficients from the deviations
The 'arithmetic' column in Figure 9 corresponds to the deviations calculated by direct comparison between the true course and the compass course, taking into account the magnetic declination. The deviations on the intermediate courses (other than the eight main courses) could have been determined when the ship changed course during the swing, assuming sufficient course stabilisation or, more likely, previous records; therefore, only the deviations on the main courses are considered here. From the deviations on the cardinal courses, coefficients A, B, C and E are obtained through expressions (2)-(5) and (14)-(17). The 'deviation' column corresponds to the deviations determined by the deviation equation (1), which uses the calculated coefficients A, B, C and E and coefficient D obtained from a quadrantal deviation, in this case Δse = −4.279°. From expression (19), D = A + 0.707(B − C) − Δse. The coefficients obtained from the deviations are compared with those obtained from the pseudo-deviations in Subsection 5.5.

5.4 Data processing: obtaining the coefficients from the pseudo-deviations
Applying the magnetic declination, dm, and the deviations, Δ, obtained from the IMC to the true courses, TC, the compass courses, ζ, are determined, and with them the pseudo-deviations, i.e. Ψ = COG − ζ (see Table 2). From the pseudo-deviations in Table 2, coefficients A, D and E are obtained as described in Sections 3 and 4, analogously to expression (46).

5.5 Data analysis
1. Coefficients A, D and E determined from the pseudo-deviations are very similar to those obtained from the deviations (Table 3 shows the differences in absolute value). By contrast, the differences between coefficients B and C determined from the deviations and B′ and C′, respectively, are greater than the differences for coefficients A, D and E. The identical value of 0.25 is a coincidence. Note that if Dif = Ψ − Δ, the difference for coefficients A and E is negative, while for coefficient D it is positive.
2. Thus, we can compare each coefficient D calculated from the deviations by expressions (18), (19), (20) and (21) and from the pseudo-deviations by expressions (44), (48), (49) and (50). The deviations are obtained from the 'arithmetic' column in Figure 9 and the pseudo-deviations from Table 2, starting with the NE heading. Table 4 shows the values of coefficient D determined from the deviations and the pseudo-deviations for each quadrantal course, and their differences.
3. Coefficient D takes different values. As can be seen, it is similar for the opposite headings NE and SW, i.e. about 8°, while it differs for the opposite headings SE and NW, i.e. about 1° and −12°, respectively, and the mean coefficient D is about 1.5°. This is because the compass was not adjusted and, therefore, other higher-order deviations occurred, such as the sextantal and octantal deviations, which depend on the triple and quadruple of the course, respectively, in agreement with the observations in Smith and Evans (1861). However, if coefficients B and C are reduced beforehand, this should not happen.

5.6 Conclusion
However the method is applied, the results are satisfactory, i.e. coefficients A, D and E obtained from the deviations and from the pseudo-deviations are highly similar. Even when coefficient D varies with the quadrantal course, the results from the deviations and the pseudo-deviations are similar for the same heading, except for the SW, and even in that case the difference is not significant.
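With deviations recorded on all eight main headings, the five coefficients of equation (1) can also be recovered in a single least-squares step, a convenient cross-check of the heading-by-heading arithmetic above (this is not the paper's procedure, and the sample deviations are hypothetical):

import numpy as np

courses = np.radians([0, 45, 90, 135, 180, 225, 270, 315])      # N, NE, ..., NW
deltas = np.array([1.2, 0.9, -0.4, -4.3, 0.8, 2.1, 1.5, -0.2])  # deviations, deg

# Design matrix of equation (1): A + B sin z + C cos z + D sin 2z + E cos 2z
M = np.column_stack([np.ones_like(courses), np.sin(courses), np.cos(courses),
                     np.sin(2*courses), np.cos(2*courses)])
A, B, C, D, E = np.linalg.lstsq(M, deltas, rcond=None)[0]
print(f"A={A:.2f}  B={B:.2f}  C={C:.2f}  D={D:.2f}  E={E:.2f}")

A fit residual that is large compared with the instrument noise flags higher-order (sextantal, octantal) terms like those discussed in point 3 above.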
6. Complete method: obtaining residual coefficients B and C for any single heading
In addition to coefficients A, D and E, two further headings, ζ1 and ζ2, are in general necessary to obtain residual coefficients B and C. First, their deviations, Δ1 and Δ2, must be determined. The ship must head for two visual references with known true bearings or azimuths (for good discrimination, the angle between both headings must be between 60° and 120°):

Δ1 = A + B·sin ζ1 + C·cos ζ1 + D·sin 2ζ1 + E·cos 2ζ1

and, analogously,

Δ2 = A + B·sin ζ2 + C·cos ζ2 + D·sin 2ζ2 + E·cos 2ζ2

where both ζ1 and ζ2 are known data. Thus, we have a system of two equations whose solutions are

B = (R1·cos ζ2 − R2·cos ζ1) / sin(ζ1 − ζ2)
C = (R2·sin ζ1 − R1·sin ζ2) / sin(ζ1 − ζ2)

with Ri = Δi − A − D·sin 2ζi − E·cos 2ζi. By contrast, Lushnikov proposed a method in which, if coefficients A, D and E are known, only a single heading is required (Lushnikov, 2011). In Subsection 6.2, Lushnikov's method is applied to obtain residual coefficients B and C.

6.1 Obtaining the deviation on a visual reference heading
If the ship proceeds towards a shore point, its position can be determined from the chart or another source, such as Google Maps. This position is then entered as a waypoint into the GNSS receiver, and the GO TO function is used to obtain the true course and compare it with the compass course, determining the deviation with the magnetic declination taken into account.

If the ship heads towards the sun, its true azimuth, Z, is the true course, and it is calculated by one of the azimuth formulae, e.g.

tan Z = sin LHA / (cos φ·tan Dec − sin φ·cos LHA) (51)

where φ is the ship's latitude, Dec is the sun's declination and LHA is its local hour angle. Dec and LHA are taken from the nautical almanac and corrected as necessary, bearing in mind that a proper sign convention must be applied in the formula. For example, φ, Dec and Z are positive when they are north and negative when they are south and, regardless of the sign convention, Z is east before noon (LHA greater than 180°) and west after noon (LHA smaller than 180°).

The calculation of azimuths does not require great accuracy in the ship's position or the sun's declination. For example, the ship's position can be taken as that of the green light of the harbour breakwater, and the sun's declination as that of the estimated compensation time, in order to prepare the calculation data in advance. It is possible to determine LHA by considering only the UTC provided by the GNSS receiver when the ship is heading towards the sun, the time of meridian passage, MP, taken from the nautical almanac, and the longitude of the ship, L, which is positive when east and negative when west, i.e.

LHA = (UTC − MP)·15°/h + L

Note that in this expression LHA is negative before noon, but this does not affect (51), where the numerator must be taken in absolute value.
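The two-reference system above is linear in B and C, so it can also be solved numerically without the closed form; the time-azimuth expression used here for (51) is the standard four-parts formula, which we are assuming matches the paper's choice, and all numeric inputs are hypothetical:

import numpy as np

def residual_BC(z1, z2, d1, d2, A, D, E):
    """Solve B sin z + C cos z = d - A - D sin 2z - E cos 2z on two headings (deg)."""
    z1r, z2r = np.radians(z1), np.radians(z2)
    M = np.array([[np.sin(z1r), np.cos(z1r)],
                  [np.sin(z2r), np.cos(z2r)]])
    rhs = np.array([d1 - A - D*np.sin(2*z1r) - E*np.cos(2*z1r),
                    d2 - A - D*np.sin(2*z2r) - E*np.cos(2*z2r)])
    return np.linalg.solve(M, rhs)            # B, C in degrees

def sun_lha(utc_h, mp_h, lon_deg):
    """Local hour angle (deg); negative before local noon, east longitude positive."""
    return 15.0*(utc_h - mp_h) + lon_deg

def sun_azimuth(lat, dec, lha):
    """Four-parts time-azimuth formula, with |sin LHA| in the numerator as in (51)."""
    la, de, h = map(np.radians, (lat, dec, lha))
    return np.degrees(np.arctan2(abs(np.sin(h)),
                                 np.cos(la)*np.tan(de) - np.sin(la)*np.cos(h)))

print(residual_BC(70.0, 160.0, d1=2.1, d2=-1.3, A=0.4, D=0.9, E=-0.2))
print(sun_azimuth(43.4, 20.9, sun_lha(10.5, 12.1, -3.4)))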
6.2 Simplification of the method with a single visual reference: application of Lushnikov's method
Compass needles are oriented along the horizontal component of the magnetic field at the compass location. The horizontal component of the earth's magnetic flux density, H, can be found using an earth's magnetic field calculator, such as the NOAA calculator, or a map, such as the World Magnetic Model (WMM). However, the total magnetic flux density at the compass location, H′, includes not only that of the earth but also that of the ship's irons, which varies with the course. Each course therefore has a specific H′, and the directive force (strictly speaking, magnetic flux density) towards the magnetic north is H′·cos Δ, which is also written λH (Gaztelu-Iturri Leicea, 1999; Lushnikov, 2011), where λ is the mean directive force coefficient, specific to each ship; moreover, the sines of the approximate coefficients B, C, D and E are the exact coefficients of the deviation equation, as indicated in Subsection 2.1. Since the approximate coefficients are small angles, their sines can be replaced by their values multiplied by sin 1°, i.e. approximately 1/57.3. Then, if only a single visual reference is considered, and since the magnetic and compass courses are similar, the relations simplify accordingly, where Hv is the specific value of H′ when the ship proceeds on the magnetic course v. Let Θ1 and Θ2 be the resulting correction factors (expressions (56) and (60)).

4. The ship proceeds towards a visual reference and its deviation is calculated as described in Subsection 6.1. Residual coefficients B and C are then determined from expressions (58) and (59), respectively, where factors Θ1 and Θ2 are determined from expressions (56) and (60), respectively. To avoid the impact of errors on coefficient D, we recommend proceeding towards a visual reference within the same quadrant as the quadrantal course used to calculate this coefficient. However, in line with the recommendation in point 2, the choice of the visual reference fixes the quadrantal course and, therefore, the two cardinal courses on which the compass is adjusted (point 1). It is thus proposed that the ship proceed on the cardinal courses that delimit the quadrant containing the visual reference; next, the compass is adjusted on these courses and the ship proceeds on the quadrantal course or the visual reference course; finally, the ship proceeds on the other course within the quadrant, and then on the other two cardinal courses.
5. Once residual coefficients B and C are obtained, the deviation on various compass courses, typically every 10 or 15 degrees from north, is calculated by the deviation equation (1) with a spreadsheet.
6. The process can next be completed if no deviation exceeds 4°, whereas if a deviation exceeds 4°, the larger of coefficients B and C must be nullified. To increase accuracy, both coefficients can be nullified even if no deviation exceeds 4°. The process for nullifying a coefficient is described in point 7.

7. Application when the deviation exceeds 4° or when greater accuracy is required
7. To nullify coefficient B, we must remember that, if the other coefficients were equal to zero, the deviation would be Δe = B and the compass course would be ζ = ζm − Δe = 90° − B on the east magnetic heading and, analogously, Δw = −B and ζ = ζm − Δw = 270° + B on the west magnetic heading. The procedure is, therefore, to proceed on one of these courses, 90° − B or 270° + B, with the magnetic compass, observe the COG, proceed on this COG and nullify the coefficient by setting ζ = 90° or 270° with the longitudinal magnets. To nullify coefficient C, the procedure is to proceed on ζ = 0° − C or ζ = 180° + C, observe the COG, proceed on this COG and nullify the coefficient by setting ζ = 0° or ζ = 180° with the transversal magnets.
8. When a coefficient is nullified, the deviation table must be obtained as described in point 5, but without considering this coefficient. It is important to ensure that the coefficient is nullified exactly, and not simply reduced, to avoid errors in the deviation table. If the magnetic moment of the compensating device's magnets cannot nullify the coefficient completely, the procedure must be repeated, but this is uncommon.
9. If no deviation in the table obtained in point 8 exceeds 4°, the procedure can be completed; if a deviation exceeds 4°, the other coefficient must be nullified. With the help of the spreadsheet, we can know in advance whether one or both coefficients must be nullified, because the deviation table can be calculated using the deviation equation with all coefficients, without B, without C, and without both B and C. Finally, if coefficients B and C are nullified but some deviations in the table still exceed 4°, these deviations cannot be reduced unless the compass is fitted in a different location. In this case, the residual table shall be considered final, with the corresponding exemption from compliance with ISO Standard 25862:2019, Annex G.1, if necessary.

Reliability of the method
When the drift is less than one knot, we conclude that a suitable minimum speed could be 7 knots (see Section 4). Even if the drift were one knot, the maximum error of coefficients D and E would only be slightly greater than half a degree (0.585° exactly, as indicated in Table 1). Therefore, we can affirm that the method is reliable if the drift does not exceed one knot and the vessel's speed is 7 knots or more, a speed that most vessels can reach. Conversely, the method may not be reliable when the drift is greater than one knot, which can usually occur in any of the following cases:

Figure 2. Types of courses and triangle of speeds.
Figure 3. Ship on which the swing was carried out (the one with the blue hull).
Figure 4. Ship's magnetic compass while the IMC sensor was being adjusted.
Figure 5. Ship's magnetic compass with the IMC sensor already adjusted.
Figure 6. IMC with the compass course, the true course obtained from the satellite compass, the position obtained from GPS and the magnetic declination obtained from the NOAA calculator.
Figure 8. True course and COG comparison. The true course from the satellite compass is shown on the IMC.
Figure 9. Deviations recorded by the IMC after the swing.
Table 1. Maximum error of coefficients D and E as a function of the S/d ratio.
Table 2. Determination of the pseudo-deviations.
Table 3. Differences between coefficients A, D and E determined from the deviations and from the pseudo-deviations.
Table 4. Differences between coefficient D determined from the deviations and from the pseudo-deviations for each quadrantal course.
2023-08-03T15:19:53.489Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "778c1d8cc693e94b43a310c8c75efebebe601af9", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/6774FD45B08F0D4B5226DFC17712007C/S0373463323000176a.pdf/div-class-title-compass-adjustment-by-gps-or-any-other-gnss-receiver-and-a-single-visual-reference-div.pdf", "oa_status": "HYBRID", "pdf_src": "Cambridge", "pdf_hash": "1444aa3f41b52c1c6d10e7dc6dd630d3ae0dbe26", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
117853687
pes2o/s2orc
v3-fos-license
A revisit to the GNSS-R code range precision
We address the feasibility of a GNSS-R code-altimetry space mission and, more specifically, a dominant term of its error budget: the reflected-signal range precision. This is the RMS error on the reflected-signal delay, as estimated by waveform retracking. So far, the approach proposed by [Lowe et al., 2002] has been the state of the art for theoretically evaluating this precision, although it is known to rely on strong assumptions (e.g., no speckle noise). In this paper, we perform a critical review of this model and propose an improvement based on the Cramer-Rao Bound (CRB) approach. We derive closed-form expressions for both the direct and reflected signals. The performance predicted by the CRB analysis is about four times worse for typical space mission scenarios. The impact of this result is discussed in the context of two classes of GNSS-R applications: mesoscale oceanography and tsunami detection.

I. INTRODUCTION
GNSS-R, the use of Global Navigation Satellite Systems (GNSS) reflected signals, is a powerful and potentially disruptive technology for remote sensing: wide coverage, passive, precise, long-term, all-weather and multi-purpose. GNSS emit precise signals which will be available for decades as part of an emerging infrastructure resulting from the enormous effort invested in GPS, GLONASS, Galileo and augmentation systems. A key advantage of GNSS-R is its "multistatic" character: unlike monostatic systems, a single receiver collects information from a simultaneous set of reflection points associated with several GNSS emitters. A system in low Earth orbit capable of collecting GPS, Galileo and GLONASS data would potentially be combing the surface with more than a dozen reflection tracks at the same time (for a review, see [Ruffini 2006]). An important aspect is that GNSS signals are very weak, as they were not designed for radar applications; yet they contain a wealth of information. For this reason, signal processing plays an important role. The first detection of GNSS reflected signals from space was documented in [Lowe et al., 2002]. More recently, GPS-R L1 C/A signals have been successfully detected from a dedicated experiment in space using a moderate-gain antenna [Gleason et al., 2005], complementing a large number of experiments from aircraft and stratospheric balloons. The resulting data will be used to further validate models.

The reflection process affects the signal in several ways, at once degrading it (from the point of view of detection) and loading it with information about the reflecting surface. The waveform amplitude is normally reduced, the shape distorted and signal coherence mostly lost. While GNSS-R cannot provide the precision of dedicated radar altimetry missions, it offers a significant advantage thanks to its multistatic character. The impact of GNSS-R altimetry data on global circulation models has been studied through simulations, with very promising results [Le Traon et al., 2002]. Another recent impact study has focused on the potential of GNSS-R to detect tsunamis [Martín-Neira et al., 2005]. A dedicated GNSS altimetry system could provide timely warnings, potentially saving many lives. As described in [Soulat et al., 2005], simulations have indicated that a global 100% tsunami detection rate in less than two hours is possible with a ten-satellite GNSS-R constellation. Altimetry in GNSS-R can be carried out in two general ways, depending on the ranging principle used.
In code altimetry, our focus here, the code is used for ranging with the direct and reflected signals. In phase altimetry, the phase of the signal is used. All of this is rather similar to normal GNSS processing. The main difference is that the reflected signal is affected by the reflection process, which generally distorts the triangular waveform shape of the return and renders the reflected signal very incoherent. This makes the ranging task rather challenging.

II. RANGE PRECISION AND ALTIMETRY
Contrary to classical radar altimetry, range precision is a dominant factor in the error budget of a GNSS-R code-altimetry space mission, owing to the much lower modulation bandwidth (1 MHz or 10 MHz for the GPS C/A and P codes, respectively). If the direct-signal error is considered negligible compared with the reflected-signal error, the altimetric precision σh writes simply as a function of the reflected-signal range precision σR:

σh = σR / (2 sin ε) (Eq. 1)

where ε is the transmitter elevation angle. [Lowe et al. 2002] proposed a simple approach to assess σR, and since then the majority of space mission feasibility studies (e.g. the ESA PARIS and STERNA studies) have relied on this reference as an approximation. However, this model is known to neglect important aspects, notably speckle, and a re-evaluation of the matter is necessary. Section III presents a critical review of the state of the art and discusses the model's validity. Section IV introduces the Cramer-Rao Bound (CRB) theory, which constitutes the foundation of our analysis approach. This methodology is then applied to both the direct and reflected GNSS signals to derive closed-form expressions of range precision in Sections V and VI, respectively. Finally, the impact of the new performance predictions is illustrated in Section VII, where mission scenarios are discussed in the light of two classes of applications.

III. STATE OF THE ART REVIEW
The approach proposed in [Lowe et al. 2002] basically assumes that range precision for the reflected signal can be evaluated (to first order) in the same way as for the direct signal. The reflected waveform is assumed to be retracked using the algorithm of [Thomas, 1995]. This algorithm estimates the direct waveform's delay using three points (the peak and its two immediate neighbours) to determine the peak sub-sample position. In the limit of low thermal noise, the precision of this algorithm turns out to be given by Eq. 2, a function of the chip length τc, the correlation factor C(2) between amplitudes separated by two lags, and the signal-to-noise ratio snr, defined as the ratio between the average and the standard deviation of the peak amplitude.

The approach proposed by [Lowe et al.] suffers from several limitations. First, it is valid for relatively high SNR only. Second, the derived expression is tied to the choice of a particular estimator; it cannot be considered applicable to others and, as such, does not address the general case of retracking, where an arbitrary number of waveform points are fitted by a model. Third, the derived expression (and associated estimator) assumes a direct-signal statistical model, whereas the reflected signal is quite different: the waveform's fluctuations are caused not only by thermal noise but also by speckle, and the waveform's shape is far from the triangular shape of the direct signal. Finally, the retracking will presumably not be done on the peak of the waveform (which is known to be an unstable and badly localised feature of the reflected signal) but rather on its leading edge.
For these reasons, it appears necessary to reassess the matter in a more systematic fashion, using appropriate tools from Estimation Theory.

IV. CRAMER-RAO BOUND
The context of the present problem is Estimation Theory. The CRB methodology allows predicting the best achievable performance in estimation problems for which the stochastic nature of the observation can be described by a probability distribution function (PDF). Formally, the problem comes down to estimating a parameter θ (e.g., the delay) from a random observation X (the complex waveform, a vector), knowing its PDF p(X,θ). The variance of any unbiased estimator of θ then has a lower bound (see e.g. [Kay 1993]):

var(θ̂) ≥ CRB(θ) = [ −E( ∂²ln p(X,θ)/∂θ² ) ]⁻¹ (Eq. 3)

Focusing on complex, vectorial, Gaussian-distributed signals, the PDF is given by

p(X,θ) = π⁻ᴺ det(Γ)⁻¹ exp[ −(X−m)⁺ Γ⁻¹ (X−m) ] (Eq. 4)

where m = <X> and Γ = <(X−m)(X−m)⁺> are the mean vector and covariance matrix of the complex signal vector X, respectively. In this case, the CRB is given by the Slepian-Bangs formula:

CRB(θ)⁻¹ = tr[ (Γ⁻¹ ∂Γ/∂θ)² ] + 2 Re[ (∂m/∂θ)⁺ Γ⁻¹ (∂m/∂θ) ] (Eq. 5)

This expression is the starting point for evaluating the GNSS direct/reflected range precisions, as developed in the two following sections.

V. DIRECT SIGNAL RANGE PRECISION
The RF signal received by the direct antenna can be seen as an attenuated (factor α) and delayed (by θ) version of the GNSS code C emitted by the transmitter, corrupted by additive thermal noise σb (where b is a complex, zero-mean, unit-variance, white Gaussian random process and σ a real scaling factor). The waveform is produced by correlating this input signal with a clean replica of the GNSS down-converted signal, leading to the complex waveform with samples Xi = α·χ(τi − θ) + ni, where χ is the code autocorrelation function and τi the lag of sample i. It is then immediate to write expressions for the mean complex waveform and its covariance matrix:

mi = α·χ(τi − θ), Γ = σ²·I (Eqs. 8-9)

Having a Gaussian-distributed signal allows us to use Eq. 5, and plugging in these mean and covariance expressions leads to the CRB for the direct-signal delay estimation, that is, the best possible performance for direct-signal range precision:

CRB(θ) = [ 2·SNR²·Σi χ′(τi − θ)² ]⁻¹ (Eq. 10)

where we have introduced the one-shot thermal SNR, defined as the ratio of the mean peak amplitude to the thermal-noise amplitude STD (the so-called "grass" fluctuations):

SNR = α/σ (Eq. 11)

Note that for the direct signal this SNR definition can be linked to the previous one (Eq. 12). The CRB expression can now be compared with the state-of-the-art model. For this purpose, Eq. 10 is further simplified by adopting the assumptions that χ is a triangle function and that only three points of the waveform are retained for retracking (the peak and its two immediate neighbours). Doing this, we recover Eq. 2. This exercise illustrates the strength of the CRB approach for deriving generic performance expressions adaptable to a particular algorithm, and it also proves that the Thomas estimator is efficient (i.e. it reaches its Cramer-Rao bound) in the limit of high enough SNR.

Figure 1 gives values of the direct-signal (GPS C/A code) range precision as a function of 1/SNR (i.e. NSR). The waveform is sampled from -300 m to 300 m with a step of 15 m (i.e. 20 MHz). As expected, the CRB approach is in full agreement with the state-of-the-art model. To further validate these results, we have performed Monte-Carlo simulations: realisations of the signal model have been retracked using two estimators, the Maximum Likelihood Estimator (MLE), known to be efficient, and the Thomas algorithm. The MLE results match the theoretical CRB well, except at very low SNR, where a slight departure is observed. As expected, the Thomas algorithm is efficient at high SNR but deviates from optimality at severe noise levels.
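A minimal numerical sketch of Eq. 10, assuming an ideal triangular C/A autocorrelation (chip length ~293 m) and white thermal noise; the sampling grid matches the one quoted for Figure 1:

import numpy as np

def direct_crb(snr, chip_m=293.0, lo=-300.0, hi=300.0, step=15.0):
    """Numerical CRB (metres) on the direct-signal delay, per Eq. 10."""
    taus = np.arange(lo, hi + step, step)
    # derivative of the triangle chi(x) = max(0, 1 - |x|/chip): -sign(x)/chip on support
    dchi = np.where(np.abs(taus) < chip_m, -np.sign(taus) / chip_m, 0.0)
    fisher = 2.0 * snr**2 * np.sum(dchi**2)
    return 1.0 / np.sqrt(fisher)

for snr in (3, 10, 30):
    print(snr, round(direct_crb(snr), 2))   # precision improves as 1/SNR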
VI. REFLECTED SIGNAL RANGE PRECISION
The expression for the complex reflected waveform involves two contributions: one is the GNSS electric field scattered by the sea surface and the other is thermal noise, as for the direct signal:

Xi = ui + ni (Eq. 13)

where U is the scattered electric field and u the electric field after correlation with a signal replica. From space, and for the majority of sea states, it can reasonably be assumed that the sea-surface scattering contribution follows fully developed speckle statistics, that is, a complex, vectorial, zero-mean Gaussian PDF. Since thermal noise is also Gaussian, the reflected complex waveform is Gaussian-distributed with parameters

mi = 0, Γij = <ui·u*j> + σ²·δij (Eqs. 14-15)

The CRB expression immediately follows from the trace term of Eq. 5:

CRB(θ)⁻¹ = tr[ (Γ⁻¹ ∂Γ/∂θ)² ] (Eq. 16)

The tricky part is now to evaluate the covariance of the scattered filtered field <ui·u*j>. The starting point is the electromagnetic integral equation of [Zavorotny and Voronovich, 2000] modelling the scattered filtered field u. We emphasise that the critical feature for our purpose is the waveform leading edge, which is obtained by integration over the sea-surface scatterers in the vicinity of the specular point. In this regime, and from space, the signal covariance is largely dominated by the radar ambiguity function, i.e. by the GNSS autocorrelation. In other words, we assume that the antenna pattern and the glistening zone are much larger than the first-chip zone. In addition, we simplify the study further by limiting ourselves to reflections occurring at nadir. Under these assumptions, the covariance of the scattered filtered field, in the leading-edge regime, simplifies to the expression given in Eq. 17, driven by the code autocorrelation alone. Figure 2 illustrates the resulting Γ covariance matrix. We highlight again that this model is acceptable for the description of the leading edge but cannot render the behaviour of the waveform's trailing edge (which is affected by the finite size of the antenna beam and of the glistening zone).

Figure 3 provides values of the reflected-signal (GPS C/A code) range precision as a function of NSR. The waveform is sampled from -400 m to 300 m with a step of 15 m (i.e., ~20 MHz). The reassessment leads to more pessimistic results than previous analyses: the range precision computed with the CRB approach is typically predicted to be about four times worse. Besides, it is worth noting that the asymptotic range precision for infinite SNR is now predicted to be finite: even without thermal noise (e.g., with a very large antenna), the waveform is still degraded by speckle, and this remains a limitation for delay estimation.

VII. ALTIMETRY SCENARIO STUDY
The impact of this result is now discussed. A simple error budget is assessed for two generic space missions and compared with the requirements of space altimetry applications potentially suitable for GNSS-R. The two proposed missions receive the GPS C/A code and are characterised by their altitude (500 or 700 km) and antenna gain (28 or 34 dB). A link budget model developed elsewhere [CNES ALT GNSSR, 2006] allows computing the expected thermal SNR and the coherence time of the reflected signal, which is needed to compute the number of independent samples in one second. The altimetric precision is then derived according to Eq. 1.
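Before turning to the mission budgets, note that since the mean is zero, all the delay information sits in the covariance, and Eq. 16 can be evaluated numerically once a covariance model is chosen. The model below — speckle correlation built from the overlap of shifted code autocorrelations, plus a white thermal-noise floor scaled from the one-shot SNR — is our assumption standing in for Eq. 17, which is not reproduced here, so the numbers are indicative only:

import numpy as np

def reflected_crb(snr, chip=293.0, step=15.0, h=0.5):
    """One-shot CRB (metres) on the reflected-signal delay; zero-mean Gaussian model."""
    taus = np.arange(-400.0, 300.0 + step, step)
    tri = lambda x: np.maximum(0.0, 1.0 - np.abs(x) / chip)
    scat = np.arange(0.0, 3.0 * chip, step)   # scatterer delays behind the specular point

    def cov(theta):
        G = tri(taus[:, None] - theta - scat[None, :])        # assumed field correlation model
        C = (G @ G.T) * step / chip                           # speckle covariance
        return C + (C.max() / snr**2) * np.eye(len(taus))     # plus white thermal noise

    gamma, dgamma = cov(0.0), (cov(h) - cov(-h)) / (2.0 * h)  # numerical dGamma/dtheta
    A = np.linalg.solve(gamma, dgamma)
    return 1.0 / np.sqrt(np.trace(A @ A))                     # Eq. 16, trace term only

print(round(reflected_crb(snr=12), 1))   # C/A-like, Mission-1-like one-shot scenario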
Table 1 - Performance of two GNSS-R space missions using the GPS C/A code (nadir case).

Parameter | Mission 1 | Mission 2
Altitude (km) | 500 | 700
Antenna gain (dB) | 28 | 34
Waveform sampling step (m) | 15 | 15
One-shot thermal SNR (linear) | 12 | 22
Coherence time (ms) | 0.8 | 0.9
One-shot nadir range precision (m) | 32.6 | 20.8
One-second nadir range precision (cm) | 92 | 62
One-second nadir altimetric precision (cm) | 46 | 31

The same exercise has been conducted for a GNSS code with a ten times broader bandwidth, namely the GPS P code (Table 3). The performance improvement is clear, and it becomes compatible with the requirements of mesoscale oceanography.

VIII. CONCLUSIONS
In this paper, we have carried out a critical review of the state-of-the-art model for GNSS-R range precision. The goal was to revisit the baseline assumption (known to be incorrect) that reflected and direct signals can be treated in the same way. A rigorous evaluation of the problem, based on the Cramer-Rao Bound methodology, has been conducted. For the direct signal, we obtained results in agreement with the state of the art, as expected. For the reflected signal, we have shown that precision degrades, as suspected: for instance, a mission receiving the C/A code at 500 km with 28 dB gain would have a 1-second range precision of about 1 m at nadir. This is due to the impact of speckle noise and the shape change of the reflected signal. These results question the suitability of a C/A-code GNSS-R mission focusing on mesoscale altimetry. The use of 1-MHz codes (e.g. GPS C/A) remains acceptable for detecting strong tsunamis (20 cm over 100 km), but mesoscale oceanography (5 cm over 100 km) would be realistic only with 10-MHz codes (e.g., the GPS P code). The availability of such signals, as well as of signals with even higher bandwidth (up to 50 MHz with the E5 signal) provided by the European Galileo system, will further increase the potential of this technique [Galileo OS SIS ICD, 2006]. Future work should consolidate these results with more numerical simulations and experimental validation, either using space data or adapting the model to low altitudes to take advantage of the available airborne/coastal data. Finally, an in-depth study of the impact of the Galileo signal structure on GNSS-R is a very important future research line.
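The one-second figures in Table 1 follow from incoherent averaging over the independent speckle realisations accumulated during the integration time, with the altimetric line given by Eq. 1. The averaging model below is our reading of the budget, not a formula stated explicitly in the paper:

import math

def one_second_budget(sigma_one_shot_m, t_coh_s, elev_deg=90.0, t_int_s=1.0):
    """Range and altimetric precision (cm) after averaging t_int/t_coh independent shots."""
    n_independent = t_int_s / t_coh_s
    sigma_range = sigma_one_shot_m / math.sqrt(n_independent)
    sigma_alt = sigma_range / (2.0 * math.sin(math.radians(elev_deg)))   # Eq. 1
    return round(sigma_range * 100), round(sigma_alt * 100)

print(one_second_budget(32.6, 0.0008))   # Mission 1 -> approximately (92, 46)
print(one_second_budget(20.8, 0.0009))   # Mission 2 -> approximately (62, 31)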
2019-04-14T03:15:15.035Z
2006-06-20T00:00:00.000
{ "year": 2006, "sha1": "0bc336cd16f9756a34d1b1c6866ab13e436f2dc5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "2a87543f7c3e97eafada8cf5e34e1ddbd7e8f9e0", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics" ] }
217903313
pes2o/s2orc
v3-fos-license
WET SEASON CHLOROPHYLL a, b AND PHAEOPHYTIN a LEVELS IN THE WESTERN LAGOS LAGOON AND ITS CREEKS
Algae contain a wide array of pigments that absorb light for photosynthesis and that are useful in measuring primary production levels. In this study, the levels of the algal pigments chlorophyll a, chlorophyll b and phaeophytin a in the Lagos lagoon and adjoining creeks were investigated in relation to the water chemistry of the water bodies. Fifteen sampling stations, including four in the creeks, were investigated. For the creeks, chlorophyll a ranged from 9.6 to 23.3 µg/L; chlorophyll b from 0.4 to 1.1 µg/L; and phaeophytin a from 1.1 to 2.9 µg/L. For the open lagoon areas, chlorophyll a ranged between 8.9 and 11.8 µg/L; chlorophyll b between 0.6 and 1.1 µg/L; and phaeophytin a between 0.7 and 3.4 µg/L. The Pearson correlation coefficients among chlorophyll a, b and phaeophytin a were all positive (r = 0.28 to 0.35), pointing to their direct relationship. Chlorophyll a was negatively correlated with salinity. The levels of the algal pigments increased as nitrate (nutrient) and Total Suspended Solids increased; however, chlorophyll a was negatively correlated with salinity and was higher in the creeks than in the lagoon. Salinity was a limiting factor for algal production in the Lagos lagoon (especially in the wet season), chiefly because most phytoplankton forms at this time were freshwater forms. Chlorophyll b levels indicated the presence of green algae and euglenoids within the study area. Phaeophytin a also increased with increasing floodwater inputs, reflected in its correlation with nitrate and Total Suspended Solids or detrital materials (r = 0.44, 0.60). The measurement of phytopigment content could also be a useful tool in establishing eutrophic levels within aquatic ecosystems in Nigeria, particularly the Lagos lagoon complex.

INTRODUCTION
The algae are a very diverse group of simple, mostly aquatic (both marine and freshwater) photosynthetic organisms (Castro and Huber, 2005). According to Opute and Kadiri (2013), algae are chlorophyll-containing photosynthetic lower plants without true roots, stems or leaves, but with primitive reproductive structures. Algae contain a wide array of pigments that absorb light for photosynthesis (Kadiri, 1999). The major photosynthetic pigments are chlorophylls, carotenoids and phycobiliproteins (Nwankwo, 2004). Chlorophylls are green pigments with a porphyrin-like ring structure, a central magnesium atom and usually a long hydrophobic tail. Chlorophyll a is the specific form of chlorophyll used in oxygenic photosynthesis, whereas chlorophyll b is a form of chlorophyll that assists photosynthesis by absorbing light energy and is found chiefly in the green algae (Lee, 2008). Phaeophytin, on the other hand, is a natural degradation product of chlorophyll: an algal phaeophytin pigment is a chlorophyll molecule lacking the central Mg²⁺ ion (Lee, 2008). Chlorophylls channel the energy of sunlight into chemical energy through photosynthesis (Bold, 1967). In photosynthesis, the energy absorbed by chlorophyll transforms carbon (IV) oxide and water into carbohydrate and oxygen; hence the concentration of photosynthetic pigments is commonly used to estimate phytoplankton biomass (Onyema and Ojo, 2008). This feature makes chlorophyll a and the other algal pigments convenient indicators of algal biomass.
Photosynthesis in the algae and phytoplankton takes place in chloroplasts that contain photosynthetic pigments. The colour of algae is usually a result of these pigments and their concentrations (Bold, 1967; Castro and Huber, 2005; Lee, 2008). It is useful to measure primary production levels, especially in the phytoplankton, because the process supplies food at the base of the aquatic trophic pyramid (Thurman, 2007). According to Castro and Huber (2005), the standing stock of phytoplankton is the total amount in the water column. Studies on the Lagos lagoon and adjoining creeks since the 1950s have largely been on biomass measured in terms of numbers or phytoplankton cell counts (Fox, 1957; Hendey, 1958; Olaniyan, 1969; Nwankwo, 1988, 1996; Onyema, 2007; Onyema et al., 2003, 2007). This is also similar to studies in other parts of the country hitherto (Mills, 1932; Kadiri, 1999, 2005; Holden and Green, 1960; Chindah and Pudo, 1991; Erondu and Chindah, 1991). The measurement of algal biomass as algal pigments has received little attention, and only recently in the study area (Onyema and Ojo, 2008; Onyema and Nwankwo, 2009; Nwankwo et al., 2012, 2013). There is no previous study on the use of chlorophyll a and b, as well as phaeophytin a, to measure the standing crop of phytoplankton in the western Lagos lagoon and its connecting creeks in either the wet or the dry season. This study measured these algal pigments in relation to the water chemistry of the water bodies, as part of a much larger study on the production status of the Lagos lagoon.

Description of Study Area
The Lagos lagoon is located in Lagos State, Nigeria. It is one of the ten lagoons in south-western Nigeria (Onyema and Bako, 2015). It is a large, open, shallow and tidal coastal lagoon connected to a number of creeks (Fig. 1).
It covers an area of 208 km² (FAO, 1969) and has an average depth of less than 2 m (Ajao, 1996), except in areas that are often dredged for marine traffic or by sand-mining operations. The Lagos lagoon is connected to the Epe, Lekki and Mahin lagoons to the east, and it falls within the rainforest zone, which experiences well-marked dry and wet seasons with two peaks of rainfall. The area experiences the semi-diurnal tidal regime characteristic of the whole West African coast. The Lagos harbour is the only connection to the sea for nine of the ten lagoons in south-western Nigeria; the Onijegi lagoon is the only truly closed lagoon in the area (Onyema, 2013). The lagoon environment is largely influenced by rainfall and the associated floodwaters, which dilute the lagoon water, break down environmental gradients and enrich the environment, while marine influence from tidal incursion through the Lagos harbour is experienced inland, especially in the dry season. The Lagos lagoon is an estuarine lagoon which serves as a fertile ground for feeding and breeding and as a nursery area for a number of aquatic organisms (Nwankwo, 2004). It provides habitat for a number of anadromous, catadromous and estuarine fin and shellfish species (Solarin, 1998), and it is a site for finfish and shellfish capture and culture (Akpata et al., 1993). With regard to the capture and culture of fish in the Lagos lagoon, the brush parks or "acadja" and other semi-extensive systems in the lagoons of south-western Nigeria and adjoining creeks are noteworthy (Onyema, 2011; Onyema et al., 2011). The lagoon has also served as a dumping site for unmonitored and unregulated waste discharges at various points over the years; these points are concentrated on the more industrialised and more impacted western parts of the lagoon, which is why this area was the focus of this study. The study area specifically covers the western parts of the Lagos lagoon, stretching from the Ikorodu area through the Ajegunle and Agboyi creeks (Ogun River tributaries), the Oworonsoki and Bariga areas, the Abule-Eledu and Abule-Agege creeks, the western mid-lagoon points, and the Okobaba and Ebute-Metta areas. In this region, creeks, channels, storm-water drains and rivers flow into the lagoon, and semi-diurnal tidal seawater incursion enters from the Lagos harbour. Nutrient-rich water and pollutants flow into the lagoon through these points. Furthermore, poor sewerage systems are common among the dwellers of the immediate area, and direct dumping of domestic wastes is rampant in the region. According to previous authors, the biotic spectrum of the Lagos lagoon depends on the dynamic interplay between the volume of freshwater inflow and seawater incursion. Table 1 presents the names and approximate GPS grid coordinates of the fifteen sampling stations studied (including four in the creeks). All samples were collected once monthly.
Pigment Analysis
Chlorophyll concentrations in water samples were determined using a spectrophotometer with a 2 nm spectral bandwidth, following the guidelines of EPA Method 446.0 (Revision 1.2, 1997) and Standard Methods for the Examination of Water and Wastewater, 20th Edition, Method 10200H. By this method, a 200 mL aliquot of the water sample was filtered, in a dark room, through a membrane or glass-fibre filter. The pigment was extracted from the filter by maceration and centrifugation in 90% acetone. The extract was then analysed, before and after acidification, using a spectrophotometer; addition of acid converts chlorophyll a to phaeophytin a. The detection limit for this method is 5 µg/L for a 200 mL filtered sample volume and a 20 mL extract volume. The algal pigments were then calculated from the measured absorbances using the equations prescribed by the cited methods (see the sketch after the Discussion below).

RESULTS
The water chemistry data of the 15 sampled stations showed variation from station to station, even within the same creek or immediate area. Table 2 shows the minimum and maximum values, as well as the mean and standard deviation, of all the investigated parameters at the 15 stations. Table 3 shows the lowest and highest values of the chlorophyll a, chlorophyll b and phaeophytin a pigments, as well as the mean and standard deviation values. Over all 15 sampled stations, chlorophyll a ranged from 8.9 to 23.3 µg/L, chlorophyll b from 0.4 to 1.4 µg/L and phaeophytin a from 0.7 to 3.4 µg/L. For the four creek stations, chlorophyll a ranged from 9.6 to 23.3 µg/L, chlorophyll b from 0.4 to 1.1 µg/L and phaeophytin a from 1.1 to 2.9 µg/L, while for the open lagoon areas chlorophyll a ranged from 8.9 to 11.8 µg/L, chlorophyll b from 0.6 to 1.1 µg/L and phaeophytin a from 0.7 to 3.4 µg/L. The Ajegunle creek was the most productive of all the creeks studied, followed by the Agboyi creek. The Pearson correlation coefficients between chlorophyll a, b and phaeophytin a were positive in all cases, ranging from r = 0.28 to 0.35 (Table 4). The Pearson correlation coefficients between the water chemistry parameters and the three algal pigments are shown in Table 5. For instance, chlorophyll a, b and phaeophytin a were all negatively correlated with Dissolved Oxygen, Copper, Alkalinity and Chemical Oxygen Demand, whereas they were positively correlated with Iron, Nitrate, Phosphate and Total Suspended Solids. More specifically, chlorophyll a was negatively correlated with salinity and salinity-related parameters (Total Dissolved Solids, conductivity, Sodium, Potassium, Calcium and Magnesium). Phaeophytin a was also positively correlated with Total Suspended Solids (r = 0.60) and negatively correlated with Chemical Oxygen Demand (r = -0.71).

DISCUSSION
The chemical data obtained from this study show an estuarine zone stretching from low to high brackish-water conditions (0.11 to 22.9‰). Ecologists have attributed salinity gradients in the Lagos lagoon to two main factors, namely the influx of floodwaters from rivers, creeks and surrounding wetlands, and tidal seawater inflow through the Lagos harbour (Nwankwo, 1990; Onyema, 2009; Nkwoji et al., 2010).
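For concreteness, the monochromatic (Lorenzen) determination equations prescribed by Standard Methods 10200H for acidified 90%-acetone extracts, which the Pigment Analysis subsection cites by number only, can be scripted as below; we are assuming these standard forms are the ones the authors applied, and the absorbance values are hypothetical:

def chlorophyll_a(od664b, od665a, v_extract_ml, v_sample_l, cell_cm=1.0):
    """Chlorophyll a (ug/L); od664b/od665a: absorbances before/after acidification."""
    return 26.7 * (od664b - od665a) * v_extract_ml / (v_sample_l * cell_cm)

def phaeophytin_a(od664b, od665a, v_extract_ml, v_sample_l, cell_cm=1.0):
    """Phaeophytin a (ug/L); 1.7 is the maximum acid ratio for pure chlorophyll a."""
    return 26.7 * (1.7 * od665a - od664b) * v_extract_ml / (v_sample_l * cell_cm)

# Using the paper's stated volumes: 200 mL filtered sample, 20 mL acetone extract
print(chlorophyll_a(0.016, 0.010, v_extract_ml=20, v_sample_l=0.2))   # ~16 ug/L
print(phaeophytin_a(0.016, 0.010, v_extract_ml=20, v_sample_l=0.2))   # ~2.7 ug/L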
Lagoons and creeks are diluted considerably by freshwater from rainfall and river systems in the wet season, while in the dry season tidal seawater inflow becomes more prominent (Chukwu, 2002; Nwankwo, 2004). The positive correlation among all three algal pigments points to the connectedness of the pigments, which follow a similar and related trend. Increases in nutrient levels led to corresponding increases in all the algal pigments within the aquatic systems. Iron as a limiting factor in marine and oceanic environments has been researched (Lee, 2008; Sverdrup et al., 2003). Allochthonous (land-based) materials (Total Suspended Solids) are known sources of nutrients, heavy metals and even pollutants (Ajao, 1996; Nwankwo, 2004; Onyema, 2009). Chlorophyll a was inversely related to salinity, Total Dissolved Solids and cation levels. As reported by Onyema and Nwankwo (2009), the range of chlorophyll a values for the Iyagbe lagoon in a two-year study was between 12 and 55 µg/L, that is, between the mesotrophic and eutrophic productivity status (Suzuki et al., 2002; APHA, 1998). For the wet season in the western Lagos lagoon and its creeks, levels of 8.9 to 23.3 µg/L for chlorophyll a, 0.4 to 1.4 µg/L for chlorophyll b and 0.7 to 3.4 µg/L for phaeophytin a are reported here. Furthermore, Ogamba et al. (2004) reported a chlorophyll a range of 0.15 to 37.4 µg/L for the wet season and 0.10 to 40.28 µg/L for the dry season in the Elechi creek in the Niger Delta, and Kadiri (1993) reported a range of 4.20 to 35.20 mg m⁻³ for chlorophyll a in the Ikpoba reservoir in Benin. It is worthy of note that salinity acts as a limiting factor on algal production in the Lagos lagoon, especially in the wet season. This may result from the fact that most phytoplankton forms at this time are freshwater species that have drifted downstream into the Lagos lagoon with floodwaters from freshwater creeks and rivers, and it is evident in the negative correlation between salinity and the algal pigments: areas with higher salinities (in the wet season) had lower algal pigment concentrations, particularly chlorophyll a. According to Onyema (2008), reduced phytoplankton densities, as reflected in the wet-season chlorophyll a values, may be linked to the low water clarity, which reduces the amount of light reaching the planktonic algal component for photosynthesis; higher chlorophyll a values recorded in the dry season point to improved water clarity (higher transparency and lower total suspended solids), which probably allows greater light penetration. Similarly, Suzuki et al. (2002) associated low chlorophyll a values, reflecting limited phytoplankton growth in a Mexican lagoon, with dark water that reduced light penetration into the lagoon considerably. According to Sheath and Wehr (2003), among the algal photosynthetic pigments chlorophyll b is found only in the green algae and euglenoids, and green algal and euglenoid species have been recorded almost exclusively in the wet season and under fresh or very low salinity conditions in the Lagos lagoon system and creeks (Nwankwo, 1995; Nwankwo and Akinsoji, 1992; Onyema, 2008, 2010). This may explain why chlorophyll b concentration decreased with increasing salinity and vice versa; additionally, it indicates the presence of green algae and/or euglenoids within the study area. Green algae are widespread in inland habitats, but certain groups may have specific ecological requirements (Sheath and Wehr, 2003). The green algae and euglenoids are usually found in standing or slowly moving, nutrient-rich waters where light and temperature are usually high; they are also common in stagnant waters, ditches, streams and ponds, the littoral zones of lakes, and on soil and in sub-aerial habitats.
Phaeophytin a also increased with increasing floodwater inputs, reflected in its correlation with Total Suspended Solids or detrital materials (r = 0.60). Correlation trends among the water chemistry parameters are similar to those described by Onyema and Nwankwo (2009) for the Iyagbe lagoon. Conversely, according to Kowalewska et al. (2004), a lack of correlation between chlorophylls b and c and chlorophyll a indicated the intensive cyanobacterial blooms that occur in the Szczecin lagoon, characteristic of eutrophic zones. The measurement of phytopigment content could therefore also be a useful tool in the establishment of eutrophic levels (Kowalewska et al., 2004).

Table 5. Pearson correlation coefficient matrix between chlorophyll a, b and phaeophytin a at the western parts and creeks of the Lagos lagoon (June 2015).
2019-12-24T16:34:15.338Z
2016-12-01T00:00:00.000
{ "year": 2022, "sha1": "3163c70ad080c752ca57ce4b92804bad8a1c3a8c", "oa_license": "CCBY", "oa_url": "https://unibenlsj.org.ng/downloads/instruction_for_authors_njlsc.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "3163c70ad080c752ca57ce4b92804bad8a1c3a8c", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
232405088
pes2o/s2orc
v3-fos-license
The Epidemiology and Genetics of Hyperuricemia and Gout across Major Racial Groups: A Literature Review and Population Genetics Secondary Database Analysis
Gout is an inflammatory condition caused by elevated serum urate (SU), a condition known as hyperuricemia (HU). Genetic variations, including single nucleotide polymorphisms (SNPs), can alter the function of urate transporters, leading to differential HU and gout prevalence across populations. In the United States (U.S.), gout prevalence differentially affects certain racial groups. The objective of this analysis is to compare the frequency of urate-related genetic risk alleles between Europeans (EUR) and the following major racial groups from the 1000 Genomes Project: Africans in Southwest U.S. (ASW), Han-Chinese (CHS), Japanese (JPT), and Mexican (MXL). The Ensembl genome browser of the 1000 Genomes Project was used to conduct cross-population allele frequency comparisons of 11 SNPs across 11 genes physiologically involved in, and significantly associated with, SU levels and gout risk. Gene/SNP pairs included: ABCG2 (rs2231142), SLC2A9 (rs734553), SLC17A1 (rs1183201), SLC16A9 (rs1171614), GCKR (rs1260326), SLC22A11 (rs2078267), SLC22A12 (rs505802), INHBC (rs3741414), RREB1 (rs675209), PDZK1 (rs12129861), and NRXN2 (rs478607). Allele frequencies were compared to EUR using the Chi-Square test or, when appropriate, Fisher's Exact test. Bonferroni correction for multiple comparisons was used, with p < 0.0045 for statistical significance. Risk alleles were defined as the alleles associated with baseline or higher HU and gout risks. The cumulative HU or gout risk allele index of the 11 SNPs was estimated for each population. The prevalence of HU and gout in U.S. and non-U.S. populations was evaluated using published epidemiological data and a literature review. Compared with EUR, the SNP frequencies of 7/11 in ASW, 9/11 in MXL, 9/11 in CHS, and 11/11 in JPT were significantly different. HU or gout risk allele indices were 5, 6, 9, and 11 in ASW, MXL, CHS, and JPT, respectively. Out of the 11 SNPs, the percentage of risk alleles in CHS and JPT was 100%. Compared to non-U.S. populations, the prevalence of HU and gout appears to be higher in Western-world countries. Compared with EUR, the CHS and JPT populations had the highest HU or gout risk allele frequencies, followed by MXL and ASW. These results suggest that individuals of Asian descent are at higher HU and gout risk, which may partly explain the nearly three-fold higher gout prevalence among Asians versus Caucasians in ambulatory care settings. Furthermore, gout remains a disease of developed countries, with a marked global rise.

Introduction
Gout is an inflammatory arthritic condition caused by the deposition of monosodium urate (MSU) crystals in the distal joints and peripheral tissues. Elevated serum urate (SU) levels, a condition known as hyperuricemia (HU), generally precede the formation of MSU crystals. Developing HU can be driven by increased consumption of high-fructose corn syrup, a purine-rich diet, and high alcohol intake [1]. Additionally, certain medications, such as diuretics and low-dose salicylates, can decrease urate excretion, increasing SU levels and the risk of developing HU and gout [2]. Other risk factors for developing HU or gout include renal impairment, cardiovascular diseases, obesity, diabetes, and genetic factors affecting urate production, excretion, and reabsorption.
Urate concentrations greater than 6.8 mg/dL exceed urate solubility, leading to the formation of MSU crystals. The deposition of MSU crystals in the synovial fluid can trigger an inflammatory response in local joints. Gouty arthritis often presents as recurrent painful monoarticular flares, usually in the first metatarsophalangeal joint of the lower extremities. Further, the development of HU and gout is significantly associated with the development of cardiovascular diseases and with cardiovascular and all-cause mortality [3][4][5]. The 2007-2016 National Health and Nutrition Examination Survey (NHANES) estimated the prevalence of gout in the United States (U.S.) to be 3.9%, which corresponds to approximately 9.2 million people [6]. Stratified by race, the 2007-2016 NHANES data estimated the prevalence of gout in African Americans, Caucasians, and Hispanics to be 4.8%, 4%, and 2%, respectively [6]. It is important to note that gout incidence and prevalence across indigenous populations differ from the incidence and prevalence in the U.S. In the U.S., Asians are 2.7 times more likely to have a gout diagnosis compared with Caucasians [7]. Consistent with Asian subgroups being at a higher gout risk, a study of hospital charts for gout diagnosis found a 2.5% incidence of gouty arthritis in Filipino males versus a 0.13% incidence in non-Filipino males (p < 0.001) [8]. Similar to the hospitalization reports of gout incidence in Filipino men, other studies have also suggested that Filipinos could be genetically predisposed to a higher HU and gout risk, especially Filipinos living in the United States compared with the Philippines [9][10][11]. Moreover, studies in the Hmong population, a group commonly ascribed as Han-Chinese and residing in Minnesota, showed that gout prevalence could range from 5.1-6.5% [12,13]. In addition to the gender difference in disease risk, these epidemiological data suggest that acculturation to a Western lifestyle, a high-purine diet, and other socioeconomic factors, such as access to healthcare, may have a significant effect on the development of HU and gout [14,15]. Differential gout prevalence across racial populations has suggested that developing gout is compounded by genetics, which significantly modulates an individual's risk for HU or gout when exposed to select environmental or dietary factors [16,17]. Indeed, genetic variations in urate-related genes could lead to increased or decreased activity of urate transporters, thereby decreasing or increasing the disposition of SU [18]. Numerous studies have also characterized the effect of specific single nucleotide polymorphisms (SNPs) on SU levels and gout risk within different populations [19][20][21][22]. However, no studies have yet compared the epidemiology of HU and gout across ethnic populations and their relationship with the risk allele occurrence in the same populations. We hypothesize that populations enriched with gout risk alleles will have a higher gout prevalence. Furthermore, select genetic polymorphisms associated with developing gout have also been associated with the response to urate-lowering therapy. Therefore, the objective of this genetic analysis is to estimate the frequency of risk alleles associated with elevated SU levels or gout in select racial groups compared to Europeans. Some of these risk alleles are of specific interest, as they may play a role in personalizing diet and treatment in patients with gout. The ultimate goal of this genetic analysis is two-fold.
First, to interrogate genetics as a contributing factor to the racial health disparities of gout prevalence. Second, to elucidate possible genetic sources of differential response to urate-lowering therapy among U.S. populations.

SNP Selection

The candidate gene approach was employed to select the gene/SNP pairs in this genetic analysis. The gene/SNP pairs were known to be physiologically and significantly associated with urate disposition, the MSU-induced inflammatory response, and the risk of developing gout (Table 1). All gene/SNP pairs were previously validated across different populations and had a directionally consistent effect on urate levels or gout risk. Our targeted SNPs were predominantly identified from a meta-analysis study of over 28,000 individuals of European descent and a large genome-wide association analysis of over 440,000 individuals of European descent, with cross-validation in other ethnic groups [23,24]. When presented with multiple SNPs within the same locus, the polymorphism with the largest effect size was prioritized for inclusion in the genetic analysis [24]. (In Table 1, the bolded allele indicates the risk allele, defined as the allele associated with baseline or higher risk for HU or gout.)

Statistical Analysis

In this genetic analysis, the risk allele was defined as the allele associated with baseline or increased risk for HU or gout. The risk allele was noted for each SNP and then compared across the following populations: EUR, ASW, CHS, JPT, and MXL. The Chi-square or Fisher's exact test was used to test for differences in allele and genotype frequencies of the population of interest compared with EUR (see the code sketch below). A Bonferroni adjustment for multiple comparisons was used, with p < 0.0045 for statistical significance. The risk allele index was then estimated as the count of possible risk alleles that had significantly different frequencies between the target population and EUR. The risk allele index for a given population, in our genetic analysis, could range from 0 to 11.

Epidemiology of Hyperuricemia and Gout

A literature review using PubMed was conducted to gather the most recent global HU and gout prevalence in non-U.S. populations. These populations included Africans living in Africa, Asians living in Asia, Europeans living in Europe, and Hispanics living in Mexico. Additionally, we used the 2007-2016 NHANES to extract HU and gout prevalence across the United States for non-Hispanic Whites, non-Hispanic Blacks, and Hispanics.

Hyperuricemia and Gout Risk Allele Frequencies

Risk allele and genotype frequencies of the 11 SNPs in our targeted populations are summarized in Tables 2 and 3, respectively. In the African American (ASW) population, 7 out of the 11 SNPs were significantly different compared with EUR (Table 4). Among those seven significantly different SNPs, ASW had five risk alleles that were significantly more prevalent (71.4%) than in EUR. In the Han-Chinese (CHS) population, 9 out of the 11 SNPs were significantly different compared with EUR (Table 4). All nine alleles (100%) were considered risk alleles and were significantly more prevalent in CHS than in EUR; these risk alleles included rs2231142, among others. In the Japanese (JPT) population, 11 out of the 11 targeted SNPs were significantly different compared with EUR (Table 4). All 11 alleles (100%) were considered risk alleles and were significantly more prevalent in JPT than in EUR.
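The per-SNP comparison described under Statistical Analysis above can be sketched computationally. In this minimal sketch the allele counts are hypothetical placeholders (real counts would come from the Ensembl browser of the 1000 Genomes Project); the scipy functions and the Bonferroni threshold of 0.05/11 ≈ 0.0045 follow the procedure described in the text.

```python
from scipy.stats import chi2_contingency, fisher_exact

ALPHA = 0.05 / 11  # Bonferroni correction for 11 SNPs -> p < 0.0045

def compare_to_eur(target_counts, eur_counts):
    """Compare (risk, non-risk) allele counts of a target population with EUR.

    Falls back to Fisher's exact test when any expected cell count is < 5.
    """
    table = [list(target_counts), list(eur_counts)]
    chi2, p, dof, expected = chi2_contingency(table)
    if (expected < 5).any():  # small expected counts -> Fisher's exact test
        _, p = fisher_exact(table)
    return p, p < ALPHA

# Hypothetical counts loosely matching the rs2231142 frequencies quoted later
p, significant = compare_to_eur(target_counts=(67, 141), eur_counts=(95, 911))
print(f"p = {p:.3g}, significant after Bonferroni correction: {significant}")
```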
These risk alleles included rs2231142, among others. In the Mexican (MXL) population, 8 out of the 11 SNPs were significantly different compared with EUR (Table 4). Among those eight significantly different SNPs, MXL had six (75%) risk alleles that were significantly more prevalent than in EUR; these also included rs2231142. Among all studied populations, the Asian subgroups, JPT and CHS, had the highest risk allele indices, 11 and 9, respectively. The percentage of risk alleles was 100% in JPT and CHS, followed by the MXL and ASW populations at 75% and 71.4%, respectively (Table 4). In Table 4, the percentage of risk alleles was 71.4% (5/7) in ASW, 100.0% (9/9) in CHS, 100.0% (11/11) in JPT, and 75.0% (6/8) in MXL; a bolded allele indicates the risk allele, defined as the allele associated with baseline or higher risk for HU or gout, and an asterisk indicates statistical significance (p < 0.0045) between the population of interest and the reference group (EUR).

Global Gout Epidemiology

Recent reports of the prevalence and incidence of gout vary widely due to population demographics, regional differences, and the methods employed. Nonetheless, these reports range from a prevalence of <1% to 10% and an incidence of 0.58-2.89 per 1000 person-years [29,30]. The burden of gout is generally highest in developed regions and countries. The countries with the highest age-standardized point prevalence estimates of gout in 2017 were New Zealand, Australia, and the U.S. The countries with the highest increases in age-standardized point prevalence estimates of gout from 1990 to 2017 were the U.S., Canada, and Oman. Globally, the annual percent change in age-standardized prevalence of gout (males, 0.22%; females, 0.38%) increased every year from 1990 to 2017 [31,32].

U.S. Populations

The prevalence of HU and gout across racial groups in the United States is summarized in Table 6. According to NHANES 2007-2016, gout prevalence was 4.8% in African-Americans and 4% in Caucasians [6]. The most recent data (2015-2016) collected on Hispanics, which may include Mexican-Americans, reported a 2.1% gout prevalence [33]. This has increased from the 1.0-1.1% reported in 2008 by the Population Architecture using Genomics and Epidemiology (PAGE) study of over 3500 Mexican-Americans, as well as from previous prevalence data from NHANES 2009-2010 [17,33].

African Populations

Epidemiological studies of HU and gout prevalence in Africa are limited. However, these studies suggest that while the trends and patterns of gout remain similar to other populations, the incidence and prevalence of HU and gout are low [34,35]. Furthermore, one study found that the prevalence of gout was 14.1% among 85 African patients in Southeast Gabon (Table 5) [36]. This prevalence was likely high due to the study's inclusion criterion of participants who were requesting urate level tests, leading to a biased and unrepresentative sample of the African population. Another cross-sectional study prescribed anti-gout medications to 4.0% of 400 African patients seeking treatment for joint pain in Madagascar [37]. A further study characterizing the prevalence of rheumatic disorders in Africans found no cases of gout among 450 respondents in four South African populations [35].

Asian Populations

Gout prevalence among Asians living in Asian countries (Table 5) tended to be lower compared to Asians in the U.S. (Table 6), except for the aboriginal population of Taiwan [46,47].
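As a worked illustration of how the Table 4 summary is derived, the short sketch below recomputes the percentage of risk alleles from the counts quoted above; the figures are taken directly from the text.

```python
populations = {
    # population: (risk allele index, significantly different SNPs)
    "ASW": (5, 7),
    "CHS": (9, 9),
    "JPT": (11, 11),
    "MXL": (6, 8),
}
for pop, (index, sig_snps) in populations.items():
    pct = 100 * index / sig_snps  # percentage of risk alleles, as in Table 4
    print(f"{pop}: risk allele index {index}, {pct:.1f}% ({index}/{sig_snps})")
```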
The Community Oriented Program for the Control of Rheumatic Diseases (COPCORD) reported that gout prevalence in several Asian countries, including Bangladesh, China, India, the Philippines, and Thailand, remained low (<0.5%) [30]. In 2014, a database of health insurance claims in Japan reported gout prevalence to be 1.6% for men aged 20-64, while it remained constant for Japanese women at 0.09% over 2010-2014 [42]. Gout prevalence can also vary by region within the same population. For example, a study of approximately 5000 Chinese subjects estimated HU and gout prevalence at 13.2% and 1.1%, respectively, in the Shandong coastal cities of Eastern China, which is higher than in the rest of China [30,40]. Furthermore, the prevalence of either condition was significantly higher in Chinese men than in women (18.3% vs. 8.6% for hyperuricemia and 1.9% vs. 0.4% for gout) [40]. Other Asian countries, such as Indonesia and Kuwait, were reported to have gout prevalences of 1.7% and 0.8%, respectively [30].

Hispanic Populations

Gout prevalence in Mexicans was lower than in the other targeted populations in our analysis (Table 5). For example, a 2015 cross-sectional community-based study conducted in the Chontal and Mixtec indigenous communities of Oaxaca, Mexico, reported one gout case out of 1061 participants (0.09%) [44]. Another study, using the COPCORD questionnaires, estimated gout prevalence to be 0.3-0.4% in suburban communities in Mexico [45].

European Populations

Gout prevalence appears to be lower in European countries (Table 5) than among Caucasians living in the U.S. (Table 6). For example, European countries such as Germany, France, Portugal, Sweden, and the Czech Republic reported gout prevalence ranging from 0.3-1.8% [30,48]. The highest prevalences of gout recorded in Europe were in Greece and the United Kingdom, at 4.8% and 2.5%, respectively [30,48].

Discussion

Our genetic analysis identified the CHS and JPT populations as having the highest prevalence of validated HU and gout risk alleles compared with EUR. Specifically, all nine significantly different alleles in CHS were considered HU or gout risk alleles, as were all eleven significantly different alleles in JPT (Table 4). These results suggest a possible genetic basis for the documented higher prevalence of HU and gout in Asian populations compared to EUR [7]. Further discussion of the gene/allele pairs included in our analysis is therefore warranted.

The ABCG2 gene is strongly associated with SU levels, early-onset gout, and the progression from HU to gout [25,49,50]. The encoded protein, ATP-binding cassette superfamily G member 2 (ABCG2), is expressed in the gastrointestinal tract, kidney, and liver, and functions as a urate efflux transporter. The genetic polymorphism rs2231142 (G > T) in ABCG2 leads to a Glu141Lys amino acid change, which results in reduced ABCG2-mediated urate efflux activity and dysregulated inflammation via augmented IL-8 release (Table 1) [51,52]. Individuals with this polymorphism are at a higher risk for HU and gout. A genomic meta-analysis of SU levels in over 28,000 European individuals showed that the risk allele T of rs2231142 (G > T) was present in only 10.8% of individuals and was significantly associated with increased SU levels (effect size = 0.173, p = 3.10 × 10^−26) [23]. In our study, the risk allele T of rs2231142 (G > T) was present in 9.4% of Europeans, 25% of CHS, and 32% of JPT (Table 2).
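To make the allele-frequency contrast concrete, a small sketch can translate the quoted rs2231142 risk allele frequencies into expected genotype frequencies. Note that the Hardy-Weinberg equilibrium assumption is introduced here purely for illustration; the study compared observed genotype frequencies (Table 3) directly.

```python
def hwe_genotypes(q):
    """Expected genotype frequencies under Hardy-Weinberg equilibrium,
    for a risk allele frequency q (here, the T allele of rs2231142)."""
    p = 1 - q
    return {"GG": p * p, "GT": 2 * p * q, "TT": q * q}

# Risk allele frequencies quoted in the text above
for pop, q in {"EUR": 0.094, "CHS": 0.25, "JPT": 0.32}.items():
    g = hwe_genotypes(q)
    carriers = g["GT"] + g["TT"]  # anyone carrying at least one T allele
    print(f"{pop}: expected T carriers {carriers:.1%} (TT homozygotes {g['TT']:.1%})")
```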
The genetic polymorphism rs2231142 (G > T) in ABCG2 is strongly associated with an increased risk for HU and gout across different populations. A study of 1206 Chinese individuals found that the rs2231142 (G > T) polymorphism was associated with HU risk (OR = 1.63, 95% CI: 1.27; 2.11) and increased SU levels (effect size = 0.16, p = 6.75 × 10^−9) [53]. Additionally, a population-based study showed that rs2231142 (G > T) is a causal variant for gout in Whites and Blacks, with an OR of 1.68 per risk allele. Across the four major populations in the United States, the association between rs2231142 (G > T) and prevalent gout was significantly stronger in men (OR = 2.03, p = 1.53 × 10^−13) than in women (OR = 1.37, p = 0.03). Among women, the association was statistically significant only in postmenopausal women (OR = 1.45, p = 0.03), not in premenopausal women (OR = 0.96, p = 0.94) [17]. Collectively, the genetic polymorphism rs2231142 (G > T) in ABCG2 is believed to be the gene variant most significantly associated with HU and gout compared to the other risk alleles. These results support the view that rs2231142 (G > T) in ABCG2 may not only lead to a higher risk of developing HU and gout in Asian populations compared to EUR, but may also explain early-onset gout in select Asian subgroups.

SLC2A9 encodes GLUT9, a high-capacity transporter for fructose, glucose, and SU [16,54]. GLUT9 is expressed not only in the kidney and liver but also in the chondrocytes of human articular cartilage [55]. The rs734553 (G > T) in SLC2A9 is an intronic polymorphism that could result in an increased susceptibility to developing HU, gout, and diabetes due to altered transporter affinity [23,26,56]. Notably, this genetic polymorphism has one of the largest effect sizes on SU levels in EUR and could have a greater effect on SU in women (effect size = 0.315, p = 5.22 × 10^−201) [23]. Our analysis showed that the prevalence of the risk allele T in ASW and MXL (53.3% and 61.7%, respectively) was significantly lower than in EUR (75.5%). On the other hand, the frequency of the risk allele T was significantly higher in CHS and JPT (95.5% and 98.6%, respectively) than in EUR (75.5%). With such distinct differential prevalence and a large effect size on SU levels, our data suggest that the CHS and JPT populations are at greater risk of developing HU or gout compared to the other populations.

SLC16A9 encodes a monocarboxylic acid transporter, a significant urate transporter (Table 1) [24]. The genetic polymorphism rs1171614 (C > T) in SLC16A9 has been reported to influence SU levels and the risk of gout [24]. Genome-wide association analysis showed that the effect allele T of rs1171614 (C > T) was associated with lower SU levels, with a frequency of 22% in EUR (effect size = −0.079, p = 2.3 × 10^−28). Our analysis showed that the frequency of the risk allele C of rs1171614 (C > T) within SLC16A9 was 100% in CHS and JPT, and 89.9% in MXL, compared to 75.7% in EUR. However, the risk allele frequency difference between ASW and EUR was not significant (77% vs. 75.7%, p = 0.75, Table 2). This finding suggests that the polymorphism rs1171614 (C > T) in SLC16A9 may contribute significantly to the high gout prevalence among Asians and the increased risk of HU among the CHS and JPT populations.

SLC17A1 encodes the voltage-gated cotransporter protein NPT1, which is expressed on the apical side of the proximal tubule.
The genetic polymorphism rs1183201 (T > A) in SLC17A1 was found to be associated with decreased SU levels (effect size = −0.062, 95% CI: −0.078; −0.459), with the effect allele A being the protective allele in individuals of European descent [23]. For the intronic SNP rs1183201 (T > A), the A allele was associated with lower SU levels and had a 48.2% prevalence in individuals of European descent [23]. Our analysis showed a similar prevalence of the effect allele A, 46.1%, in EUR. In contrast, the A allele frequency was significantly lower in all our targeted populations, at 12.3% in ASW, 11.9% in CHS, and 16.3% in JPT (p < 0.005, Table 2). These results suggest that specific populations could be genetically predisposed to elevated SU levels. Specifically, the ASW, CHS, and JPT populations may garner less protection against HU or gout because of the lower frequency of the A allele of rs1183201 (T > A) in SLC17A1 compared to the European population.

SLC22A11 and SLC22A12 encode organic anion transporter 4 (OAT4) and urate transporter 1 (URAT1), respectively. These transporters are responsible for the majority of urate reabsorption in the kidneys and are the primary targets of urate-lowering therapies. Genome-wide association studies (GWAS) in different populations identified that the genetic polymorphisms rs2078267 (C > T) in SLC22A11 and rs505802 (C > T) in SLC22A12 could significantly modulate SU levels (Table 1) [23,24]. In particular, the T allele of rs2078267 (C > T) in SLC22A11 was associated with reduced SU levels (effect size = −0.073, p = 9.4 × 10^−38) in EUR, with a prevalence of 53.1%. Additionally, the T allele of rs505802 (C > T) in SLC22A12 was found to be associated with lower SU levels (effect size = −0.056, p = 2.04 × 10^−9) in EUR, with a prevalence of 70.7%. Consistent with previous GWAS, population studies reported that rs505802 (C > T) within SLC22A12 was associated with lower SU levels in Chinese and Japanese populations [21,57]. Our study showed that the frequency of the risk allele C at both loci, SLC22A11 and SLC22A12, was significantly higher in all targeted populations (ASW, CHS, JPT, MXL) compared to EUR (Table 2). Specifically, the frequencies of the risk allele C of both polymorphisms, rs505802 (C > T) and rs2078267 (C > T), were highest in the Asian subgroups CHS and JPT compared with the other populations.

The CHARGE meta-analysis, along with multiple GWAS, identified the RREB1 and INHBC loci as having genome-wide significant associations with SU levels [24,27,58]. RREB1 encodes a zinc finger transcription factor responsible for binding to RAS-responsive elements of gene promoters and for regulating the androgen receptor and calcitonin genes. INHBC encodes a member of the transforming growth factor β family [24,27]. The polymorphism rs675209 (C > T) in RREB1 was associated with increased SU (effect size = 0.061, p = 1.3 × 10^−23) and an increased risk of gout (OR = 1.09, p = 1.1 × 10^−2) in individuals of European ancestry [24]. In contrast, rs3741414 (C > T) within INHBC was associated with lower SU concentrations in individuals of European ancestry (effect size = −0.072, p = 2.2 × 10^−25) and a decreased risk of gout (OR = 0.87, p = 2.7 × 10^−4) [24]. Though the exact biological mechanism underlying the association of the aforementioned SNPs with the risk of HU or gout is inconclusive, it is presumed that these genetic polymorphisms may reduce the repressor activity of RREB1 and INHBC [27,60].
Compared to EUR, the CHS population had significantly higher frequencies of both risk alleles: rs675209 (C > T) in RREB1 (91.4% vs. 26.9%, p < 0.0001) and rs3741414 (C > T) within INHBC (91.4% vs. 80.5%, p = 0.0002) (Table 2). The JPT population also had significantly higher risk allele frequencies for both of these polymorphisms. Specifically, the frequency of the risk allele T of rs675209 (C > T) in RREB1 was 92.3% in JPT compared to 26.9% in EUR (p < 0.0001), and the frequency of the risk allele C of rs3741414 (C > T) in INHBC was 94.2% in JPT compared to 80.5% in EUR (p < 0.0001) (Table 2). The MXL population, however, showed mixed allele frequencies for the aforementioned polymorphisms: a higher frequency of the risk allele T of rs675209 (C > T) in RREB1 than EUR (47.7% vs. 26.9%, p < 0.0001), but a lower frequency of the risk allele C of rs3741414 (C > T) in INHBC than EUR (53.1% vs. 85.5%, p < 0.0001) (Table 2).

PDZK1 is expressed in the kidney and encodes PDZ domain-containing molecules, which act as scaffolding proteins for a variety of subcellular transport proteins [28]. The results of a case-control study suggest that the PDZK1 genetic polymorphism rs12129861 (C > T) is associated with reduced gout risk in male Han Chinese (OR = 0.727, 95% CI: 0.562; 0.940) [28]. A similar observation was reported in GWAS, where the T allele was significantly associated with lower SU levels compared with the C allele (effect size = −0.062, 95% CI: −0.083; −0.042). In our analysis, CHS had a significantly higher frequency of the risk allele C compared to EUR (78.1% vs. 54.1%, p < 0.0001) (Table 2). In the JPT population, the risk allele C was even more markedly elevated compared to EUR (91.3% vs. 54.1%, p < 0.0001). In contrast, the risk allele frequencies were not significantly different between ASW and EUR or between MXL and EUR (Table 2). Collectively, these results suggest that the CHS and JPT populations are enriched with HU and gout risk alleles, contributing to a higher prevalence of gout among Asians compared with EUR.

NRXN2 encodes a member of the neurexin gene family, which produces cell adhesion molecules and receptors in the nervous system. Nonetheless, this gene family has been linked to urate levels in multiple populations [24]. Although the mechanism remains elusive, a GWAS showed that the intronic genetic polymorphism rs478607 (G > A) in NRXN2 could affect SU levels and the fractional excretion of urate (FEUA). In particular, the A allele was associated with reduced SU levels (effect size = −0.047, p = 4.4 × 10^−11) and increased FEUA (effect size = 0.046, p = 0.046) [24]. Notably, except in the ASW population, the interrogated genetic polymorphism rs478607 (G > A) was in strong linkage disequilibrium with the missense genetic polymorphism rs12273892 (A > T). In our analysis, the ASW and JPT populations had a significantly higher frequency of the risk allele G compared to EUR (46.7% vs. 15.4%, p < 0.0001 and 24.5% vs. 15.4%, p = 0.0014, respectively) (Table 2). The risk allele frequency was not significantly different between the remaining selected populations and EUR.

The rise in HU and gout prevalence in specific populations in recent decades suggests substantial changes in lifestyle and a global rise in gout risk factors [38,48].
Moreover, gout prevalence can also differ between rural, urban, and coastal regions, reinforcing the interaction between social determinants of health, lifestyle factors, and existing comorbidities in gout development [40]. Indeed, nongenetic factors such as diet, obesity, physical activity, and other environmental factors could further modulate the risk of developing gout [61][62][63][64]. Developed countries accustomed to Westernized diets (overconsumption of purine-rich foods and alcohol), such as the U.S. (Table 6), have been shown to have a higher gout prevalence than non-U.S. countries (Table 5). Additionally, this might explain the health consequences of immigration and/or acculturation to a high-purine diet in the U.S. among population subgroups. While we recognize the critical role of nongenetic factors in the development of gouty arthritis, we believe our study provides evidence to support the view that population enrichment of HU or gout risk alleles could lead to a higher gout incidence, especially when a population is exposed and acculturated to a Western diet [15,40]. Indeed, our findings suggest that diet-gene interactions, rather than diet alone, may greatly modulate gout risk, which partly explains the low gout prevalence in select non-U.S. populations despite their having the highest risk allele indices. Therefore, a polygenic risk assessment for gout may provide a personalized approach and a more robust assessment than relying on racial stratification for disease risk or treatment selection. Additionally, this genetic information could be used for gout risk stratification and could potentially guide prescribers in choosing the most appropriate drug therapy for patients at risk of developing gout.

Limitations

Our analysis is not without limitations. While genetics could play a significant role in the development of elevated SU levels and gout, nongenetic factors such as diet, obesity, physical activity, and other environmental factors may also affect the risk of developing HU and gout. Nonetheless, nongenetic factors alone have thus far explained little of the variability in SU levels or gout risk. Additionally, our study lacked robust epidemiological data on HU and gout prevalence in the Asian American population. Currently, NHANES 2007-2016 does not report HU and gout prevalence for Asian Americans, which may have limited our ability to corroborate HU and gout prevalence with gout risk alleles in the U.S. We also focused on the Southern Han-Chinese and Japanese populations in our risk allele analyses. Genetic information on gout and HU from other major Asian subgroups, such as Vietnamese, Koreans, and Filipinos, was not assessed. Therefore, future studies in Asian subgroups are needed to validate our findings. Additionally, we limited our genetic analyses to 11 gene/SNP pairs. Both HU and gout are polygenic disorders and may involve genes beyond those studied in our genetic analysis; including additional genes could alter the risk allele index for each racial group. Finally, though the risk allele index approach may provide insight into the directionality of disease risk, it may not fully explain the racial disparities in gout prevalence, partly due to the variation in the effect sizes associated with the different alleles. However, our genetic results remain directionally consistent with a greater genetic predisposition to HU and gout in individuals of Asian descent than in Europeans.

Future Perspective

Genomic and personalized medicine is a growing field in health care.
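As a rough illustration of the polygenic risk assessment suggested above, a simple weighted genetic risk score can be sketched: a sum of risk-allele dosages weighted by per-SNP effect sizes on SU. This is a hypothetical sketch, not the authors' method; the two weights shown are the effect sizes quoted earlier in the Discussion, while the remaining entries and the example dosages are placeholders.

```python
EFFECT_SIZES = {
    "rs2231142": 0.173,  # ABCG2, effect size on SU quoted above
    "rs734553": 0.315,   # SLC2A9, effect size on SU quoted above
    # ... the remaining nine SNPs would be added with their published weights
}

def genetic_risk_score(dosages):
    """dosages maps SNP id -> number of risk alleles carried (0, 1, or 2)."""
    return sum(EFFECT_SIZES[snp] * dose for snp, dose in dosages.items())

# Hypothetical individual carrying one ABCG2 and two SLC2A9 risk alleles
print(genetic_risk_score({"rs2231142": 1, "rs734553": 2}))  # 0.173 + 2 * 0.315
```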
Evaluating an individual's genetic information may guide the choice of an appropriate diet and medications and the design of personalized risk-mitigation strategies for individuals at high risk of developing HU or gout. Consequently, a genetically guided approach may avert unnecessary drug therapy and reduce the risk of new disease onset. Not only does this approach have the potential to improve healthcare outcomes, but it could also address the existing health disparities associated with HU and gout across different racial populations, thereby improving health equity. Ultimately, relying on genetic information such as that assessed in our analysis may allow us to recommend personalized diet modifications and make personalized gout-related therapy adjustments. Employing a precision health approach, rather than using imprecise demographic information such as self-reported race and a one-diet-fits-all approach, should lead to improved clinical outcomes in managing chronic conditions such as HU or gout. Finally, while there is at present no evidence to suggest that treating idiopathic HU is warranted, studies investigating the effect of lowering urate levels in populations genetically enriched with HU or gout risk alleles may be worth further investigation.

Conclusions

Our genetic analysis suggests that population enrichment with HU or gout risk alleles may result in a higher prevalence of HU and gout in Asians versus Europeans. Overall, our results showed CHS and JPT to have the highest risk allele indices compared to Europeans. Our results are consistent with the limited reports that Asian subgroups have higher HU and gout prevalence compared to non-Asians. Although NHANES 2015-2016 does not report HU or gout prevalence for Asian-Americans, a patient claims study has shown that Asians are nearly three times more likely to have a gout diagnosis than Caucasians in ambulatory care settings. Validation of our SNP selections by multiple genome-wide association studies further supports our hypothesis that differences in allele frequencies could be responsible for the differential HU and gout prevalence across distinct racial groups.

Author Contributions: Y.R. conceived the study. F.B., A.A., and Y.R. were responsible for data acquisition and analysis. All authors were involved in drafting the article or revising it critically for important intellectual content. All authors have read and agreed to the published version of the manuscript.
The Understanding and Use of Reflection in Family Support Social Work

ABSTRACT
Previous research emphasizes the need for reflection in complex, dynamic practices like social work. However, increased governance of the public sector and welfare state has caused a reform, which in turn has affected the layout and conditions of work. Private-sector control ideals and ideas from the auditing system have led to a new focus. It is argued that we should subordinate practice approaches – characterized by reflection, proven experience, and tacit knowledge – to manual-based treatment, evaluations, and assessments. This study aims at understanding the role of reflection in social work by investigating its use and valuation by family support social workers. Opportunities and resources for reflection are another focus. Focus group interviews (n = 40) were used to produce data. The need for reflection in conducting high-quality social work became evident. The question is not whether to reflect, but rather how this can best be done given current time constraints. Reflection was considered a coping mechanism, offering a sense of professional legitimacy. Organizational changes seem to impact the time available for reflection. However, since it enables process, learning, and development, it can be argued that reflection benefits several parties. Therefore, reflection requires continued emphasis, highlighting its potential benefits.

KEYWORDS
Reflective practice; social work; professional development; workplace learning; parent support; human service organizations

Previous international research portrays reflection as both a vital part of, and a useful tool in, challenging, difficult practices like Western social work, in which practitioners are required to make uncertain judgements, to be flexible, and to deal with complex problem-solving (e.g., Gambrill, 2010; Ingram, Fenton, Hodson, & Jindal-Snape, 2014; Mantzoukas, 2008; Schön, 1991). Additionally, reflection is described as being useful for learning and development at work and for improving the quality of services provided in human services professions (e.g., Avby, 2015; Ellström, 2001, 2006; Kolb, 2015; Nilsen, Nordström, & Ellström, 2012). However, in times of increased governance of the public sector and welfare state, social work practice appears to be changing, in turn affecting practitioners' working conditions (e.g., Liljegren & Parding, 2010). It is furthermore argued that this change has led to an increased demand for external evaluations, the measuring of practice, an economically effective use of resources, and the favoring of manual-based approaches to practice, like evidence-based practice (EBP) (e.g., Gursansky, Quinn, & Le Sueur, 2010; Liljegren & Parding, 2010; Ponnert & Svensson, 2016; Webb, 2001). Those who argue for the use of EBP assert that it provides the best possible care with the client's interests and needs incorporated, improves practitioners' decision-making and effectiveness, fosters the use of evidence, and promotes knowledge acquisition and learning (e.g., Gambrill, 2010; Mullen, Shlonsky, Bledsoe, & Bellamy, 2005). Nevertheless, EBP is both criticized and questioned in the field of social work, and is a hotly debated concept (e.g., Avby, 2018; Ruch, 2002). Often EBP is accentuated as "best practice," due to its "base in knowledge."
Such a proclamation is, however, degrading to practitioners' proven experience, intuition, tacit knowledge/"know-how," and common sense ("gut feeling"), reducing the importance of these qualities and depicting them as "risky" and unreliable as the basis upon which decisions are made and practice is carried out (e.g., Avby, 2018; Otto, Polutta, & Ziegler, 2009). With proven experience and tacit knowledge often being grounded in reflective practice, the procedure of accentuating EBP as best practice also subordinates reflection. If one looks beyond the controversy, however, many of the (positive) characteristics attributed to EBP can be found among the positive attributes assigned to reflection. One example is its capacity for improving decision-making and actions as well as for avoiding risks and mistakes (e.g., Avby, 2018; Sicora, 2017). Just as theory-based knowledge and EBP are argued to promote life-long learning (e.g., Mullen et al., 2005), it is similarly argued that reflection is needed for effective learning from experience, in turn contributing to the identification of knowledge gaps for continuous competence development (e.g., Mann, Gordon, & MacLeod, 2009). Another similarity between EBP and reflection is that process is characteristic of both. Rather than one-time applications, both are iterative processes with a capacity for improvement through use. Previous and ongoing changes have evidently led to a differentiation between various approaches to practice. One example is organizational and occupational professionalism. With different views of knowledge, logic, tools, and grounds for control, bureaucratic contra professional legitimacy is being advocated (Liljegren & Parding, 2010). Another example is the "new" contra "traditional" professional style, found in a study by Barfoed and Jacobsson (2012), which follows the same logic. A gap appears to have emerged between formal theory/research on the one hand, and practice-developed theory and knowledge and the realities of practice on the other (creating inconsistency), where different parties tend to favor one over the other (D'Cruz, Gillingham, & Melendez, 2007; Nilsen et al., 2012). However, it can be argued that differentiating so strongly between practices and approaches might be problematic. Instead, the focus should be on combining both to create knowledge beneficial to social work practice (e.g., D'Cruz et al., 2007; Otto et al., 2009). This aligns with the idea of practitioners basing their thinking and acting on both practical experience and theory. Practice informs theory about as much as theory informs practice, and reflexive abilities are as important as, if not more important than, science-based knowledge for coping with everyday complexities (cf. Gursansky et al., 2010; Otto et al., 2009; Schön, 1991). The characteristics of one approach need not contradict another. For example, standardization is often mentioned in relation to, e.g., bureaucratic management, and is criticized as being a way of gaining control, limiting practitioners' autonomy and reflection, and affecting their professionalism in favor of industrial uniformity (e.g., Avby, 2018; Mäkitalo, 2012; Ponnert & Svensson, 2016; Timmermans & Epstein, 2010). But is it really that simple? Undoubtedly, standardization plays a pivotal role in the functioning of organizations; this is, however, of varying degree and character (cf. Timmermans & Epstein, 2010).
In contrast to more holistic and reflective approaches to work (Ingram et al., 2014; Ruch, 2002; Webb, 2001), standardization, it is argued, is cost-efficient, as is EBP (since it stems from a bureaucratic philosophy); both ideals have become contested concepts within human services professions. At the same time, though, reflection can also be considered a kind of standardized activity, since it is a systematic way of thinking (e.g., Watkins, 2016) and of dealing with experiences, although implicit and more personally designed in character. Consequently, similarities appear to exist between the aims of bureaucratic philosophy as a governing ideal within the public sector and those of reflective practice. Indeed, reaching and ensuring high quality and effectiveness in the treatment of clients is a goal for both. However, the way to achieve it is what separates them: practitioners and advocates of reflection emphasize its importance as an activity and practice, while other groups argue that there is a need for evidence to inform practice, preferably findings from RCTs, the top of the "evidence hierarchy" (e.g., Mantzoukas, 2008). But even when working according to a manual, some level of adjustment based on the current situation and the client's specific needs ought to be required. All clients, at least in the field of social work, cannot possibly be treated in exactly the same way, and situational knowledge is required to offer the best possible treatment. This points to the need for highlighting both practice- and evidence-based knowledge in practice and education. Serving different functions, they complement one another: practice enriches theory and vice versa. Together with more standardized approaches to practice, both are needed, and neither can be considered more important than the other (e.g., Avby, 2018; D'Cruz et al., 2007). Reflection is not a new field in research on social work. Previous research has focused on reflection in relation to both the education field and the preparation of students for their future practice, as well as on practitioners (cf. Nilsen et al., 2012; Sicora, 2017). There is research claiming the importance of reflection in social work; however, less research has focused on describing in more depth the content of reflection and practitioners' ability to reflect. Finding themselves in a complex and uncertain practice, characterized by increased demands from various stakeholders, and with reflection, proven experience, and tacit knowledge becoming subordinated, practitioners are an important group to study. Hence, this study is aimed at gaining better insight into social workers' understanding and valuation of reflection. Its purpose is to explore the possible impact of the changes going on in their field that could be affecting their opportunities for reflection. The participants in the study are social workers engaged in family support for children with mental health problems. Their work involves helping families in crisis or with social problems and includes counselling, support, and treatment, focusing on the family and its surrounding network. Family support is both preventive and treatment-focused, implying the broad scope of problems that practitioners encounter. The goal is to help the family find useful strategies to overcome difficulties, thus enabling the child to live at home and function at school and in leisure time. The purpose is to help the parents support and guide their children in their development and help them to feel better.
As social workers, they work either alone or in pairs with a colleague, in individually designed efforts (with the specific family) or in parent groups. The work is done both at the family's home and at the practitioners' office (i.e., the social services resource-unit) and takes the form of dialogue, network meetings, direct work in everyday situations, and other supportive activities. For handling the questions and complexities of practice, social workers are offered various types of supervision, often involving reflective practice, aimed at improving practice (cf. Bradley & Höjer, 2009).

The Meaning of Reflection and Reflective Practice

Dewey claimed that "while we cannot learn or be taught to think, we do have to learn how to think well, especially how to acquire the general habit of reflecting" (Dewey, 1998 [first published in 1933], p. 35). Thinking is an often-used synonym for reflection. However, reflection is often regarded as a more structured, disciplined, and rigorous way of thinking, caused by some doubt, hesitation, problem, or difficulty; a way to articulate thinking (Watkins, 2016). Both are, however, needed, and both are furthermore interconnected with doing. Doing, thinking, and acting are complementary: doing affects human thinking, a doing which, together with the carried-out actions, serves as fuel for human reflection (e.g., reflection-in-action, Schön, 1991). Reflection can be described as a process whereby practical experiences are elaborated and transformed into personal knowledge, aiding in developing professionalism (e.g., Avby, 2015; Kolb, 2015; Schön, 1991). In trying to resolve specific situations, a search process for cues is initiated, often comprising observation of, and conversation with, oneself, one's actions, and one's experiences. This forms part of learning and knowledge development (Dewey, 1998). Reflection is seen as necessary for sense-making as well as a constructive way to learn from and make use of the complexities, doubts, and hesitations that are common in social work practice (Gursansky et al., 2010). To understand these situations, and to decide upon the next step, questions are helpful and used for guidance. However, it is not about just any questions: the smarter the questions, the better the chance of reaching deeper and finding a solution capable of generating knowledge (Sicora, 2017). This can be related to the importance of why in making reflection critical, i.e., not only considering the how but also the why; that is, looking for reasons and examining consequences in relation to what we do (Watkins, 2016). In terms of reflection, an important and often-used concept is reflective practice. Lately, it has received increased attention in the field of social work, possibly because with it comes recognition of the field's challenging characteristics, serving as a response to the abovementioned changes (Ingram et al., 2014; Mattsson, 2017; Nilsen et al., 2012; Ruch, 2002). Reflective practice, it is argued, is a process of self-involvement and self-reflection, i.e., self-awareness, used to pinpoint important content in previous experiences and to use that information for adjusting behavior and actions (Dolan, Pinkerton, & Canavan, 2006; Ruch, 2002; Schön, 1991; Yip, 2006). By remembering and evaluating past and present experiences and actions, and dealing with present problems, emotions, and feelings, new perspectives on and solutions to situations are sought.
Thus, reflective practice is a process of considering the past, present, and future in evaluating one's own behavior, thoughts, and actions (Yip, 2006). Schön (1991) described reflective practice as a reflective conversation with the situation, implying that reflective practice is the very action, or practicing, of reflection, and thus related to both the design and the provision of reflective activities in practice. Reflective conversations can vary in form. They can be individual and/or collective, and they can be retrospective (reflection-on-action) or continuous (reflection-in-action) in character, thus being dialogic reflections on what one does before, while, and after doing it (cf. Schön, 1991). Being dialogic in character, either with oneself or with others, communication can be regarded as mediating reflection. Language, as a dynamic and powerful discursive tool in human activity, can mediate meaning, sense, and consciousness (Vygotsky, in Mäkitalo, 2012; Mäkitalo, 2012), not only in speech but also in written form. Reflection, as an activity, also occurs both verbally and in writing, implying the importance of language for making sense of reflections and creating knowledge. In terms of collective reflection, one hallmark is supervision. If critically examined, supervision, it is argued, can improve organizational learning (e.g., Bradley & Höjer, 2009).

The Present Study

The changes in the social work field, it is argued, reduce the value of professional and personal values and the provision of community services in favor of organizational and bureaucratic accountabilities, thus shifting from "people-changing" to "people-processing" (Björktomta & Arnsvik, 2016; Gibson, Samuels, & Pryce, 2018). These are changes that degrade professional expertise gained from experience, reflection, and an interpretative practice (e.g., Barfoed & Jacobsson, 2012). To avoid the risk that social work loses quality by becoming too bureaucratic and managerial, a relationship-based and reflective practice is stressed. Despite their frequent utilization and argued importance, reflection and reflective practice are often accused of being broad, multifaceted, and lacking conceptual clarity (Kinsella, 2010), a process which can "unearth any assumptions about anything" (Fook, 2004, p. 59). Due to this breadth and complexity, it is argued that reflection and reflective practice are often described in a general, simplistic way and used unreflectively (Bengtsson, in Kinsella, 2010). In view of the varying ways of understanding reflection, this study aims at contributing to research on reflection in social work. Swedish family support social workers were invited to participate, resulting in 12 focus groups (n = 40). By investigating how reflection is described and used, and by asking about opportunities and resources for reflection (together with obstacles and/or facilitators), the aim was to gain deeper insight into social workers' understanding of reflection and its role in everyday practice. The study was guided by the following research questions:
1) What is social workers' understanding of reflection, and what value is it given in everyday practice? How is reflective practice arranged/organized?
2) Given the current changes in the field, how are social workers experiencing the opportunities and resources provided for reflection?

Participants

This study, conducted during the spring of 2016, draws on data produced in focus groups with family support social workers.
The participants worked at the municipal social services in a city in western Sweden. They shared experiences of the same type of work and were trained in various intervention models related to the field of social work. The collaboration partner in this project was the Center for Progress in Children's Mental Health (UPH), a unit within Närhälsan (the public primary care provider) working with methods development, education, and research. For this reason, the selection criterion for inclusion in the study was employment in a city district in which UPH is represented. At that time, this applied to seven out of ten districts, meaning that social workers from three city districts were not invited to participate in the study. In total, 103 social workers were invited. Of those, 40 (39%) chose to take part, resulting in 12 focus groups (see Table 1). All participants belonged to pre-existing work groups at their respective workplaces. However, not all members of each pre-existing group participated in the focus group. Further information about the participants is shown below. The number of participants in each focus group was between two and five. The interviews lasted between 65 and 102 minutes (16 hours and 35 minutes in total). All focus group interviews were recorded with a Dictaphone and transcribed verbatim.

Procedure and Interview

The use of focus groups for producing data was considered an appropriate method, as it allows for group interaction. Allowing participants to discuss the topic in groups rather than individually enabled them to engage in both complementary and argumentative interaction (Kitzinger, 1994), offering the possibility to "reflect on reflection" and to reach a greater depth of analysis than in individual interviews. Focus groups can stimulate participants to share concrete, specific, but also personal answers, while simultaneously creating the possibility of revealing dimensions of variation in opinions and understandings (Hylander, 2001; Kitzinger, 1994). To recruit participants, an information letter about the study was sent by the first author to the resource-unit managers of each included city district. They were asked to provide the names of the social workers in their respective districts. The social workers were then contacted individually by email regarding participation in a focus group. All invitees received an information and consent letter informing them that participation would be voluntary and providing an assurance of confidentiality. Interest in participating was confirmed by email or telephone to the first author. A reminder was sent out prior to the focus group interview. The focus group interviews were conducted by the first author, either at the respective city district's resource-unit or at the office of UPH. The focus groups commenced with participants being asked to briefly describe their jobs. A few open-ended questions about reflection followed to facilitate a discussion among the participants, without interfering in the ongoing dialogue. At the end, participants were given the opportunity to add to or comment on the topics discussed. Prior to recruiting participants, a representative of the regional Ethical Board was consulted regarding the necessity of Ethics Board approval for the study. Given its specific aim and purpose, the representative did not consider approval necessary. Other Swedish rules and requirements relating to the conduct of research have been respected and applied (Vetenskapsrådet, 2017).
Analysis

A thematic analysis, in accordance with the principles outlined by Braun and Clarke (2006), was used for analyzing the data. The analysis was done by the first author and discussed continuously among the three authors. The aim of the analysis was, through an inductive approach, to provide rich and detailed descriptions, creating themes representing the participants' thoughts and opinions on the topics discussed. To obtain a good grasp of the data, the transcriptions were first read through. Important content and initial ideas about interpretations and possible connections were noted. In the next step, the software MAXQDA was used for coding and further analysis. A second reading of the transcriptions followed, encoding relevant content into categories. Introductory questions, used to help get to know the participants, were excluded from the analysis. The next step was organizing the coded categories into different themes, i.e., repeated patterns recurring in the data. During the process, initial codes and themes were reworked, renamed and, when necessary, organized into subthemes, to ensure consistency with the dataset as a whole. The final themes have been used to describe the findings. Excerpts are used to illustrate the content and meaning of each theme.

Findings

The thematic analysis resulted in four themes (see Table 2). Themes 2-4 had subthemes. The term participant/s has been used to ensure confidentiality. When the word client is mentioned, it can refer to either an individual or a family. When quoting, the focus group is abbreviated FG, followed by its number. The quotes have been translated from Swedish to English.

Theme 1: Reflection – A Meaningful but Diverse Concept

The participants were asked about the meaning of "reflection" and their thoughts when hearing the word. Although they agreed on its importance, its true/real meaning was frequently discussed and compared to various synonyms and metaphors, indicating a difficulty in defining it. One participant says: "It is an opportunity to process what you have been through, to ponder and discuss what you have experienced. And, thinking ahead, like very vast. I think" (FG5). In all focus groups, synonyms for reflection were discussed, along with its possible similarity to other mental activities (see Table 3). Thus, reflection was discussed in terms of both reflection as… and reflection for…. Due to the difficulty of defining it, discussions of what reflection "actually is" were deemed important. "It is important to talk about what it is precisely, we can say that we do it all the time and [that] we need to bandy it back and forth with each other and talk, but is this what we do when we reflect? Or is it reflection we do then? Or…?" (FG1) Despite the discussion of its meaning, reflection, the participants argued, is necessary and important for social work, even crucial and "obvious," meaning they were unable to understand how anyone could carry out social work without using reflection. Being more than interpretation and different from thinking, reflection was instead considered to be the twisting and turning of things, a kind of perspective-thinking not included in "regular" thinking. It offers a possibility to stop and think in various ways: one can reflect upon what happened in the past, what is happening here and now, and what will happen in the future.
Furthermore, many participants regarded reflection as never-ending. Regardless of its perceived importance, the participants could still understand the intangible nature of reflection. Since it is neither part of their work description nor measurable, and furthermore varies in meaning for different people, it was not difficult for them to understand why reflection is occasionally considered difficult to define, as something non-concrete and less important, especially by people who do not use reflection extensively. This issue became clearer after the discussions of the difficulty of defining and measuring reflection that arose in the focus groups.

Theme 2: An Asset in Everyday Practice

The discussions revealed the opinion that reflection is an asset. A strong connection was underscored between reflection and what the participants argue is the very "core" of their work: "Without reflection, I don't think we could work on bringing about any change" (FG3).

Subtheme 2.1: To Reflect Is to Be Professional

Reflection, it was argued, is a tool for visualizing both their own and others' behaviors, necessary for mapping, familiarizing themselves with, and understanding both cases and their own performance and role. The participants expressed a need for the twisting and turning of ideas, for creating awareness of thought patterns, values, preconceptions, and feelings. The quotation below illustrates the significance of reflection in aiding the practitioners in their efforts to perform well, and consequently its role as an asset in everyday practice: "You really must be listening closely, because it's easy to, to conceptualize, you get some information and conceptualize. Then you meet the parent and you must sort of…" When discussing reflection as an asset in practice, the role of questions in facilitating and obtaining different perspectives was frequently emphasized. Various types of questions that are helpful and provide guidance in the reflection process were mentioned: Where are we going? Is this meaningful/helpful, and how do we know? What is left before reaching our goal? What could I have done differently? What is my influence on them? By reviewing, attaching value to, and evaluating the situation more clearly, questions were considered helpful for creating awareness and putting into words what is happening. Questions are also useful for making explicit what needs to be improved, clarifying the client's wishes, needs, and actions, and reaching the stated goal, as well as for understanding the interaction between family members. These types of questions form the basis for their reflections; however, it seemed that most reflection takes place in retrospect, e.g., after a conversation or after observations from a meeting. Additionally, the participants claimed that reflection offers a sense of legitimacy in their professional role. Reflection offers an opportunity to create a knowledge base about the client on which to stand as a practitioner, and its supportive and legitimizing character was of use in future work. Reflection, it was argued, is a "tool" or "method" for gaining greater insight into the case; it is believed to increase professionalism compared to basing work solely on one's own intuitive feelings.

Subtheme 2.2: A Coping Mechanism

Reflection also seemed to serve as a protection for the practitioners. Social work occasionally being a challenging and difficult practice, reflection was considered to function as a coping mechanism and a way of sorting out emotions: "It can be a daily mental cleansing routine to have someone to reflect with …" (FG9).
Being recurrently in (close) contact with other people, a contact that is often energy-consuming and emotionally charged, practitioners also become part of the clients' systems. Reflecting upon the joint therapist-client process and its impact on them (as practitioners and individuals) was considered a way to handle potential complications and reduce the risk of taking on the clients' feelings. This protective function helps in separating what is "theirs" (i.e., the client's) and "mine" (the practitioner's). These types of reflections were considered helpful for reducing the mental burden their work involves: "If I didn't have the capacity or possibility to reflect upon it [the situation], or the permission, I would have, I think I would have driven myself into a ditch. It would have been really bad" (FG2).

Theme 3: The Structuring of Reflective Practice (See Figure 1)

During the focus groups' discussions, it became evident that reflection is structured in various ways. Discussions of place (where) and form (with whom) occurred frequently, indicating its variety as a practice. Regardless of structure, all variations were considered important, since they serve specific purposes and complement each other (see Figure 1):

In terms of reflection I think we are moving at very different levels, it is some type of structured and organized time for reflection and then there is the inner reflection where you…, but then I think that all of them play an important role sort of, or they are important. (FG10)

In terms of form, both individual and collective reflection were mentioned. Individual reflection is more about reasoning with oneself, about oneself, the client and the emotions that surface in connection with meetings. Collective reflection, on the other hand, is reflection with colleagues, clients or supervisors, constituting a complement to individual reflection.

I think that, I think more and more that reflection occurs in various contexts, I mean, I am reflecting when I am on my own, sitting by myself and my inner dialogue, or if you can call it analysis, that's a way for me to sort my thoughts and experiences and what I have been experiencing for example or what I personally need. But the other reflection, what we are doing now … you can reflect in a group and the group is different, if you are more than two you are a group, at least that's my opinion. And then you are also reflecting as a colleague, with each other, on the basis of our work roles and professional roles. But the other kind [of group reflection] is to reflect together with those we are here for, for families, and children and adolescents. (FG9)

The need to reflect on thoughts and experiences was emphasized, occasionally described as an inner dialogue. Using reflection to learn from experience was underscored and considered to be promoted by both the individual and the collective structure. Variations, however, seemed to exist regarding opportunities for engagement in the various structures. Individual, "informal" reflection was said to occur anywhere and at any time; it requires nothing more than the time to ponder situations and cases. It is a type of reflection that commonly appears in connection with more formal occasions like client meetings or supervision, and it is needed for handling feelings and emotions that arise.
However, due to a heavy workload and the provision of mainly formal reflection opportunities while at work, this type of reflection was frequently described as a companion on the way home from work or during evenings and weekends. One frequently mentioned type of formal and collective reflection was supervision. Supervision was described as highly reflective, positive and useful, providing encouragement and help in dealing with concerns at work. Unlike individual reflection, it offers the possibility of sharing and receiving both knowledge and experience, promoting professional learning and development.

And then we have supervision and it's really a reflective process. There, you try to describe as detailed as possible, or so, your experience of the client and the meetings and such, as emotionally as one can really, and there, you get a lot of reflection from your colleagues, in your supervision-group. And the supervisor then of course. (FG8)

Although offered various types of supervision (e.g., method and process supervision), the participants expressed a desire for additional resources for formal reflection, especially in conjunction with client meetings. A certain level of formality (or structure, as per the participants) was believed to be beneficial; it was thought that productive reflection would occur more frequently if scheduled, offering positive outcomes for several parties, like clients, the practitioners themselves and the organization.

Subtheme 3.1: For Making Progress and Bringing about Development

Collective reflection was accentuated as a way to "get help to think" (FG10). By offering diverse perspectives, it provides a more nuanced and complete picture of the situation. By enabling the sharing of impressions, thoughts and views on a situation, collective reflection was considered a way of taking one's own reflections further and of making progress on the case. The collective sharing of knowledge and experience, enabled in joint reflection, was also considered an element of professional development. However, it was stressed that to achieve such a productive situation, in which experiences are learnt from and a deeper understanding is gained, there must be an open and safe environment. In line with these discussions, the importance of collective reflection together with the client was raised. Bringing about change requires reflection, for which merely one reflective practitioner is not enough. Rather, the client and practitioner need to work and reflect together on stated goals and the progress they are making (or not making). By getting the client to reflect, the practitioner can help him/her to gain a better understanding of the situation. Joint reflection can enable a common understanding of the specific case and problem, creating favorable conditions for bringing about change, which is the very core of social work. Besides, joint reflection is not aimed at one-sided gain only. Offering substantial possibilities for professional learning and development, it was considered equally important for the practitioners themselves:

I think it's very important, it's important for the families. And to do it together with them is also a way to get them to develop and to see other perspectives, that you [the practitioners] learn a lot from that. That you… [it is] a way of avoiding doing something wrong. That you do a good job. (FG7)

Theme 4: Reflection - Important Yet not Prioritized

Discussing reflection raised another important matter, that of organizational support.
Although the participants experienced a positive attitude toward reflection on the part of management, the resources provided for it were not always considered sufficient. Consequently, the organization affects practitioners' opportunity to reflect, implying that a supportive organization is not enough if time for reflection is not provided; instead, the responsibility for finding time is placed on the employees themselves. It seems to be taken for granted (by the organization) that the individual practitioner, based on the prevailing work situation, will schedule and devote a certain amount of time to reflection when appropriate. The participants found this worrying and expressed a desire for more structure in relation to reflection. Since reflective activities are more implicit in character, and thus difficult to measure, calculate or evaluate, they argued that reflective activities should be both emphasized and equated with more "practical"/concrete work tasks. Consequently, it seems that the organization both supports and hinders reflection. Regarding social work as a field, the participants experienced what they call a move from a "reflection-" or "exploration-domain" to a "production-domain." They described a change of practice that appears to increase both pressure and the demand for "quick fixes," i.e., the fast and cost-effective treatment of clients, and an increase in tasks and duties interfering with their core work (bringing about change), like administration, meetings and other activities, a change also constituting a "threat" to reflection and other important elements.

And you must also, you must take a stand for reflection. Right now there is quite a lot of pressure on us to be inside the production-domain and to deliver quickly and that we should be able to accommodate families quickly and offer appointments quickly and sort of, these are things that can reduce the preparation time, reflection before is reduced, start-up meetings are reduced, the number of social-secretaries is reduced due to great pressure, now is the time for you to deliver, and then, then it is very important to safeguard, it is an actual and practical circumstance, but you have to safeguard reflection throughout the whole process because otherwise quality will be greatly diminished. (FG9)

This quote emphasizes the importance of "protecting" reflection, claiming that it is at risk in the seemingly prevailing drive to make social work more efficient.

Subtheme 4.1: The Managing of Time and Resources

This subtheme depicts the participants' experience of finding time to be as reflective as necessary. The social workers are constantly fully booked, with other tasks impinging on their time. It is difficult to find time for reflection, but they are left without a choice, since not enough resources are provided. Room for reflection must be made, especially in more difficult cases. If they fail to find time for reflection, they themselves and their well-being are affected, with stress and a negative impact on work being the probable results. The introduction of new work tasks that interfere and "steal" time is another aspect related to this topic. One explicit example is documentation. Even though several participants considered documentation to promote reflection, newer documentation systems appeared to be more about describing cases and their progress succinctly, without including the practitioner's thoughts, values or feelings, thus offering limited opportunity for reflection.
By making documentation impersonal, its dissemination can be widened, and it can form part of the material used for "measuring" their practice. Besides, documentation not only impinges on their reflection time; it also appears to take up a lot of their overall time. The managing of time and resources in relation to new work tasks that interfere with everyday practice is furthermore perceived to be controlled by management, and thus beyond their own influence. With time appearing to be equated with money, the participants are exposed to a form of external control affecting the layout and content of their everyday practice, resulting in the "squeezing in" of the more intangible tasks, like reflection, into an already busy schedule.

Discussion

The findings reveal an understanding of reflection as "self-evident" within social work and as an aid (or tool/method) for understanding the complex situations, reactions, and emotions one faces in practice. The participants found it hard to imagine social work without reflection playing an integral role. It also became evident that reflective practice occurs in various forms, for which opportunities for engagement seemed to vary. Despite being regarded as "obvious," the problem of defining reflection became apparent (as described in Theme 1). One possible reason could be its challenging character as a concept, possibly referring to a collection of attitudes and methods, both cognitive and philosophical (van Manen, 1995). Another possibly influential factor is experience, and level and type of education. However, whether differences in the participants' understanding of reflection as a concept are necessarily problematic is debatable. Being multi-faceted, appearing, e.g., as a concept, a word and a practice, it is not surprising that such a discussion arose. The participants were asked about their thoughts when hearing the word "reflection." Although the word is common and well known, there is no guarantee of equal interpretations or of similar conduct in the action (in this case, to reflect), thus leading to a varying range of interpretations and actions (cf. Archer, 2003). If the participants had, for example, been asked to distinguish between different practices of reflection, like reflection at work, as a professional, and reflection more in general, the discussions might have turned out differently. Furthermore, it was argued that the type of reflection needed for handling work, and which is capable of leading to professional learning and development, is different from "regular thinking" (cf. Mattsson, 2017; Watkins, 2016). Thus, the discussion related to reflection as a concept or word (What does it mean to reflect? Is it thought of as a concept, word, or practice?) could possibly also concern the objects the reflection is based upon, implying that everyday/casual reflections are possibly thought of as thinking, while "thinking at work," as an aid in complex situations, on the other hand, is considered reflection. Hence, reflection can be regarded as context- and content-dependent, in alignment with Vološinov, who argues that words have as many meanings as there are contexts, and that meaning and situation cannot be separated (in Mäkitalo, 2012). Additionally, investigating people's subjective experience of a mental activity, like reflection, can be compared to investigating attitudes and beliefs; both involve interpretation and a subjective report on an inner conversation (Archer, 2003).
The findings indicate that reflection is used to legitimize professional knowledge (i.e., what they are doing and why) (Theme 2; 2.1). With its origins in medicine and science, social work has long struggled for professional legitimacy, recognition and increased professionalization, built on professional knowledge and competence (e.g., Barfoed & Jacobsson, 2012; Bolin, 2011; Forenza & Eckert, 2017). Due to ongoing discussions of how to organize social work, which direction to adopt, and whether changes should occur at the micro, mezzo, or macro level, the struggle for a professional identity is also continuous (Forenza & Eckert, 2017). To reach a more coherent identity, and to raise and consolidate professional status, a new science-based professional style, including manual-based approaches and EBP, has been proposed (e.g., Barfoed & Jacobsson, 2012; Ponnert & Svensson, 2016). The potential of using standardized assessments (like EBP) for increasing professionalization within social work is, however, questioned (e.g., Barfoed & Jacobsson, 2012). A view of reflection as a coping mechanism emerged (Theme 2.2) and was described in terms of protection against clients' tragic fates, life stories, situations and the related emotions, which easily affect social workers both professionally and privately. Social work today, it can be argued, is contradictory in that it aims to be cost-effective, leading to a high workload for the practitioners, who are simultaneously expected to take on personal responsibility for the client (e.g., Astvik & Melin, 2012). Astvik and Melin's study aimed to identify various coping mechanisms for handling social secretaries' increasingly demanding work. According to them, coping is the "constantly changing cognitive and behavioral efforts to manage the internal and external demands of transactions that strain or exceed a person's resources" (2012, p. 341). This study's participants' understanding of reflection as a coping mechanism can thus be considered accurate, since it helps them to manage, or cope with, clients and organizational demands. This could be an indicator of the need for organizations and decision-makers to take reflection seriously, to prioritize it and to provide opportunities for it in everyday practice, especially since social work today is facing problems with recruitment and high staff turnover, something claimed to be related to the audit and managerial changes simultaneously identified (e.g., Björktomta & Arnsvik, 2016; Munro, 2004). The importance of reflection for making progress in cases and obtaining perspectives, especially through collective reflection, is described in Theme 3.1. The results imply that reflection is used for processing experiences. By doing so, lessons learned from experiences can be integrated into future work, enabling professional learning and development, in turn creating and expanding the individual "experience bank" of knowledge, which can be shared with others. This can be likened to the "naming and framing" of problematic situations for finding solutions, a process creating opportunities for the identification of situations and experiences of use in future practice (Schön, 1991). Through reflection, strengths, weaknesses, and gaps in terms of knowledge, skills, and values are revealed, facilitating an understanding of what worked well and less well, what needs to be adjusted, and how to bring about change and new goals (cf.
Mantzoukas, 2008), which is furthermore similar to what are described as the benefits of EBP (e.g., Gambrill, 2010; Mullen et al., 2005). This aligns with experiential learning theories, in which reflection (as a conscious thought/act/activity), through its advanced processing, is claimed to be a key mechanism for understanding experience and learning from it, important for individual learning, competence development and professional expertise (Billett & Somerville, 2004; Ellström, 2001, 2006; Kolb, 2015; Sicora, 2017). The processing function of reflection was also discussed by the participants, who saw it as contributing to the formation of a knowledge base as well as offering a sense of professional legitimacy. Collective reflection, emphasized and described as occurring in various forms, can be compared to so-called informal support networks, essential for professional growth (Forenza & Eckert, 2017). As it is collective, collaborative and participatory in character, the importance of communication, or dialogue, for sharing experiences, gaining understanding, and creating meaning is evident (e.g., Breidensjö & Huzzard, 2006; Ingram et al., 2014). Collective reflection was emphasized in relation to both colleagues and clients. According to Schön (1991), the "reflective client" is important for improving decision-making processes and avoiding mistakes, since this type of collective reflection offers opportunities for agreement and shared understanding. Consequently, reflection is not only important at the individual level, but also at the group level, as a "community of practice" (Lave & Wenger in Breidensjö & Huzzard, 2006). Indeed, collective reflection is informed by individual experiences, which, through dialogue, are compared to and affected by other information, informing practice and capable of leading to new collective knowledge (Breidensjö & Huzzard, 2006; Ingram et al., 2014). Forenza and Eckert (2017) stress the need for an improved understanding of informal support networks: their functions, members, content, and importance for social workers. If collective reflection, in its various forms, is to be considered an example of such a network, a small contribution is hereby deemed to have been made. When discussing the resources provided and opportunities for reflection, the support, interest and prioritization on the part of management were stressed (Theme 4; 4.1). The opinion emerged that management considers reflection important; this was nevertheless not always visible in the resources provided. The participants, however, stressed the importance of having sufficient resources, which aligns with the importance of appropriate conditions for assisting personal and professional development through individual reflection. An environment that is both intellectually and emotionally supportive includes factors such as context, colleagues, and supervisors, all of which can create such conditions. Enough inner space, time, workload, and readiness are other factors with possible effects on the self-reflection process (Mann et al., 2009; Yip, 2006). Support, mentoring, and a safe and respectful climate that allows for the expression of feelings are further examples (Mann et al., 2009).
Inappropriate conditions, on the other hand, like a demanding and/or oppressive work environment or the poor physical/mental health of the practitioner (e.g., a negative self-image or unresolved traumatic experiences), may be destructive and lead to self-reflection becoming a burden rather than being helpful (Yip, 2006). Clearly, the organization affects its practitioners. However, practitioners also constitute a part of the organization; employees and organization are not separate entities. Making up an integral part of the organization, practitioners form part of its culture, and can thus take advantage of opportunities to both influence and transform it (e.g., Billett & Somerville, 2004; Ingram et al., 2014). This is similar to Vygotsky and Luria's argument about humans transforming their environment (in Mäkitalo, 2012). Their own role and possibility of influence were, however, not discussed by the participants. One can imagine, though, that collective activities among practitioners, like collective reflection and supervision, are influential and can bring about development, thus exerting influence on the organization. In a group, power and influence grow stronger, which can have an impact on the organization. However, other factors also have an impact on organizations. One example is the application of ideas from the "audit society" to social work, in turn affecting practitioners' practice and placing them in a growing dilemma (Liljegren & Parding, 2010; Munro, 2004; Ponnert & Svensson, 2016). This is a matter that is relatable to child welfare systems as either "people-processing" or "people-changing," i.e., processing clients through the system, or changing, and improving, clients' lives through interventions (Hasenfeld in Gibson et al., 2018). Bureaucratic ideals, striving for standardization, evaluation and cost control of social work practice, appear to have caused a tension between social work and paperwork, with an increased amount of (digital) documentation being one result (Gibson et al., 2018; Liljegren & Parding, 2010; Mäkitalo, 2012). Similar to this study, the participants in that of Björktomta and Arnsvik (2016) considered their work situation to be characterized by an increased level of administrative work, creating a heavy workload and efficiency requirements. They did not discuss decreased opportunities for reflection; they did, however, mention the problem of increased staff turnover.

Limitations

First, the sample originates from one city only. Geographical breadth could have brought more varied insight into the matter. However, since the project's financial partner was located in this city, its social workers became the focus. Another limitation is the approach of self-selected participation, meaning that only practitioners with an interest in the topic participated, something that in turn could have affected the results. Despite these limitations, this study contributes important and useful insight into social work practitioners' understanding and use of reflection.

Conclusion and Future Implications

Despite having been a subject of interest for quite some time now, the importance of reflection in challenging and dynamic practices requires continued emphasis. With current changes in mind, affecting not only practitioners but also clients, reflection and reflective practice need to be prioritized. If not, the risk of facing other types of costly problems, like high staff turnover and sick leave among staff, might increase.
If governmental, political and organizational prioritizations are to be ruled by ideals like "quick fixes" and "people-processing," thus subordinating practitioners' proven experience and tacit knowledge to efficiency, the goal of working for change might be difficult to reach and the general perception of social work as a helping profession altered. Although difficult to measure (i.e., to prove its worth), reflection is, according to this study, highly valued and essential for coping with social work practice and for making progress in cases. The prevailing changes in the field tend to reduce resources for less measurable features and instead focus on making practice measurable. Potential organizational benefits need to be emphasized to make decision-makers aware of the usefulness and importance of reflection. For example, reflection has been described as a way to increase productivity within organizations (e.g., Yliruka & Karvinen-Niinikoski, 2013). Constituting a central part of organizational work, such as decision-making processes, evaluations, and sense-making, reflection serves an important function in everyday practice (Sicora, 2017). It enables the detection of errors and the avoidance of mistakes, at both the individual and organizational levels; it aids in the following of ethical guidelines, rules, and responsibilities; and it provides an opportunity to make social work more effective (Sicora, 2017). Furthermore, reflection is emphasized in relation to both workplace learning (e.g., Boud et al., 2006; Ellström, 2001, 2006) and organizational learning (e.g., Fook, 2004; Schön, 1991). As mentioned in the introductory section, reflection could be considered an asset in individualizing more general models/methods, like EBP, to meet the client's needs and wishes. EBP cannot be questioned if the evidence is strong, but believing that it can be applied directly to every client, without taking prevailing circumstances into account, might be a mistake. Reflection, it could thus be argued, is needed for the application and correct use of the model's general knowledge in individual cases. Reflection and reflective practice thus need to be re-thought and re-contextualized to gain both a new position and a new meaning in working life. If decision-makers are to see reflection as a learning strategy, rather than the mere activity of thinking, it might gain increased value and meaning, and thus perhaps also be provided with more resources (Mann et al., 2009). The results imply the importance of reflection for promoting learning and development. It can be argued that reflection is a form of advanced, continuous, and ongoing evaluation of one's work for improving the services provided as well as one's own competence. This points to the importance of not basing our views of reflection upon predefined categorizations, i.e., as either a tool for evaluation or as personal nonsense: reality is not so black and white. Based upon the participants' statements, reflection can be considered a process of constant self-assessment and quality assurance, highlighting its potential capacity as a useful tool in practice, which, put this way, can also be understood by managers and politicians. Thus, reflection needs to be studied in relation to workplace learning, investigating practitioners' views on reflection in relation to this area.
Like reflection as an activity, opportunities for professional learning and development also need to be studied to understand the possible effects of changes to the work environment in this specific area. Similarly, since it is claimed that EBP poses a threat to reflection and other more intangible features of practice, reducing the level of professional discretion (e.g., Ingram et al., 2014; Liljegren & Parding, 2010), studies are also needed to investigate the impact of EBP on social workers' everyday practice, and more specifically the impact of the evidence-based models that are part of their treatment repertoire. However, since it is also argued that EBP can reduce the theory-practice gap, helping clients in a non-harmful way (e.g., Gambrill, 2010), and can enable the acquisition of new knowledge, increase insight into research, and make social work more effective and equal (e.g., Barfoed & Jacobsson, 2012), the matter of EBP is seemingly controversial, similarly emphasizing the need for further studies.
2019-05-12T14:22:57.911Z
2018-06-18T00:00:00.000
{ "year": 2018, "sha1": "2a6968638913751bedc8357f5d7a8c38dc6bca8d", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/01488376.2018.1476300?needAccess=true", "oa_status": "HYBRID", "pdf_src": "TaylorAndFrancis", "pdf_hash": "28c0ba6c8674a4b77c65e7f1fe9e4276711323f2", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Sociology" ] }
235284423
pes2o/s2orc
v3-fos-license
Research and Design of Broadband Power Amplifier with Feedback Structure Based on ADS

The power amplifier based on the CGH40010 power tube often needs an improved circuit structure to ensure stability, because of the tube's poor inherent stability. However, this approach often causes problems such as reduced amplifier gain and reduced efficiency, and even after the design of the input matching circuit and the output matching circuit the performance still cannot be effectively improved. This paper proposes a power amplifier design scheme with a feedback structure, which improves the gain and efficiency and effectively broadens the available bandwidth.

Introduction

With the rapid development of wireless communication, people have higher requirements for the range of wireless communication, which makes the research of power amplifiers, as an important part of the wireless communication link, attract more attention from many scholars. At present, the main application frequencies of wireless communication are concentrated at the two frequency points of 2.4GHz and 3.5GHz. Research on 2.4GHz and its surrounding frequency bands has matured, and many researchers have set their sights on the new communication frequency band centered on 3.5GHz. At the same time, as wireless communication links require wider communication frequency bands, the research of broadband power amplifiers has gradually attracted attention. The performance of the RF power amplifier determines the overall performance of the communication link, so designing a broadband power amplifier with excellent performance has become the goal of scholars' research. RF engineers often use design software for simulation testing. Scholars generally use ADS software for the design of power amplifiers, so that when designing the amplifier circuit, they can accurately understand the circuit performance and optimize the circuit parameters in real time. At present, international research on the CGH40010 power amplifier is basically focused on the 2.4GHz frequency band. According to the parameters given in the Datasheet, the small-signal gain is about 16dB at 2.0GHz and about 14dB at 4.0GHz. It can be seen that the gain of the power amplifier in the 3.5GHz frequency band should be between 14-16dB. But in fact, because the stability factor of the CGH40010 power tube is less than 1 when the amplifier circuit is designed, it cannot serve as a qualified amplifier without adding an auxiliary circuit. After adding auxiliary circuits to increase stability, the gain and efficiency of the power amplifier are adversely affected, and the actual gain is only between 12-13dB. This paper presents a structural innovation and improvement of the power amplifier designed with the CGH40010 power tube. Through the optimization of the feedback structure, a higher gain is achieved at 3.5GHz, the available bandwidth is expanded, and the efficiency of the power amplifier is optimized.

Theoretical analysis of broadband power amplifier with feedback structure

The design of a power amplifier generally includes processes such as DC analysis, stability analysis, Load-Pull, output circuit matching, Source-Pull, input circuit matching, and microstrip conversion. The performance of the power amplifier depends on the gain and efficiency. For broadband power amplifiers, the width of the available bandwidth also needs to be considered.
For the amplifier circuit, a circuit structure containing a feedback network can effectively change the performance of the amplifier. After the feedback circuit and the amplifier circuit form a closed loop, the output signal is transmitted to the input terminal again to achieve forward feedback, thereby improving the gain and efficiency of the amplifier.

Power-added efficiency:

PAE = (P_out - P_in) / P_DC   (1)

Power amplifier total efficiency:

η_total = P_out / (P_DC + P_in)   (2)

This research adopts a feedback structure, so that the signal at the output end is returned to the input end through a feedback circuit containing a large resistance. Through parameter adjustment and optimization of the power amplifier circuit, the stability and gain can reach usable values at the same time, and the gain, efficiency and available bandwidth are all improved.

General broadband power amplifier design

For the design of the CGH40010 broadband power amplifier, we can follow the general power amplifier design flow: DC analysis, stability analysis, bias circuit design, input matching circuit and output matching circuit design, microstrip line conversion, and finally simulation experiments. Due to the poor stability of the CGH40010 power tube itself, the general method is to connect a small resistor in the circuit as a stability measure. Because these stability measures were added, the input impedance and output impedance values at 3.5GHz given in the Datasheet changed, so it is generally necessary to use Load-pull and Source-pull to obtain the actual impedances. In this process, the gain of the CGH40010 given by the Datasheet is about 16dB and the output power is 13W, so the input power is 25dBm. Load-pull and Source-pull can be used after determining the input power. Conversion of the output power value corresponding to P_1dB:

P_out (dBm) = 10 lg P_out (mW)   (4)

The experimental results obtained by simulation are shown below. If the available gain is specified as 12dB or more, the available bandwidth is 800MHz, and the highest-point gain is about 13.194dB, which is the normal gain achieved by a generally designed CGH40010 power amplifier in the 3.5GHz frequency band. After filter tuning optimization, the final gain fluctuates by no more than 0.2dB. Because the circuit structure at the input terminal was modified in order to increase stability, the actual gain cannot easily reach the 14-16dB given by the Datasheet. This amplifier is a class AB power amplifier with an added stabilizing circuit, so the efficiency is only 37.387%.

Feedback structure broadband power amplifier

This research not only carried out the actual design and study of the conventional CGH40010 power amplifier described above but, more importantly, proposed a feedback circuit structure which feeds the amplified signal from the output end back to the input end via the feedback circuit, so as to improve the gain and bandwidth of the overall amplifier circuit. A large resistance is added to the feedback circuit to ensure stability, and inductance and capacitance elements are added to other parts of the circuit to optimize gain and bandwidth.

Improved feedback-structure CGH40010 power amplifier circuit design

In order to make the CGH40010 power amplifier closer to the actual design, the ideal microstrip lines in the circuit are replaced with actual microstrip lines.
The material used is a Rogers RO4350 board with a dielectric constant of 3.66, a dielectric thickness of 0.762mm, and a loss tangent of 0.02. A microstrip line width of 1mm is chosen for the bias circuit, which can effectively prevent current breakdown. In order to save PCB space during production, 90° corners with a radius of 2.5mm can be used. The experimental results obtained by simulation are shown below. If the available gain is specified as 12dB or more, the available bandwidth is 1.08GHz, an increase of 280MHz. The highest-point gain reaches 15.206dB, which is 2.012dB higher than the previous power amplifier gain. This is the gain achieved by the CGH40010 power amplifier in the 3.5GHz band after the feedback structure improvement. After filter tuning optimization, the final gain fluctuates by no more than 0.2dB, and the stability factor exceeds 1, so the circuit can serve as a usable device. The efficiency of the power amplifier reaches 54.304%.

4. Conclusions

This design uses a feedback structure to improve the circuit structure of the power amplifier, so that the highest-point gain is increased by about 2dB, the available bandwidth is expanded by nearly 300MHz, and the efficiency is increased by nearly 20 percentage points, successfully achieving the goal of the improvement. The stability also meets the requirements of general power amplifiers, and the microstrip lines are designed with actual materials, meeting the design requirements of practical power amplifiers.
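As a quick numerical cross-check of the power-budget figures quoted above, here is a minimal Python sketch. The function names are illustrative rather than taken from the paper, and the PAE and total-efficiency definitions are the standard textbook ones, assumed here to correspond to Eqs. (1) and (2); Eq. (4) is the dBm conversion.

import math

def mw_to_dbm(p_mw):
    # Eq. (4): P(dBm) = 10 * lg P(mW)
    return 10 * math.log10(p_mw)

def pae(p_out_w, p_in_w, p_dc_w):
    # Power-added efficiency (standard definition, assumed to match Eq. (1))
    return (p_out_w - p_in_w) / p_dc_w

def total_efficiency(p_out_w, p_in_w, p_dc_w):
    # Total (overall) efficiency (standard definition, assumed to match Eq. (2))
    return p_out_w / (p_dc_w + p_in_w)

# Datasheet figures quoted in the text: P_out = 13 W at roughly 16 dB gain.
p_out_dbm = mw_to_dbm(13e3)   # ~41.1 dBm
p_in_dbm = p_out_dbm - 16     # ~25.1 dBm, i.e. the ~25 dBm drive level used for Load-pull
print(f"P_out = {p_out_dbm:.1f} dBm, P_in = {p_in_dbm:.1f} dBm")

# Hypothetical illustration only (the DC draw is assumed, not reported in the paper):
# 13 W out, 0.32 W (25 dBm) in, and an assumed 24 W DC supply power.
print(f"PAE = {pae(13, 0.32, 24):.1%}, total efficiency = {total_efficiency(13, 0.32, 24):.1%}")

Running this reproduces the 25dBm input-power figure used above; the efficiency lines merely illustrate how Eqs. (1) and (2) would be evaluated once the DC supply power is known from simulation.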
2021-06-03T01:37:29.008Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "60bde3a3499daf9be960047cbdfee335a7c3e76f", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1920/1/012056", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "60bde3a3499daf9be960047cbdfee335a7c3e76f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics" ] }
2458346
pes2o/s2orc
v3-fos-license
Magnetic and Natural Circular Dichroism of Metalloporphyrin Complexes of Human and Rabbit Hemopexin*

Magnetic circular dichroism (MCD) spectra of several metalloporphyrin complexes of rabbit and human serum hemopexins in the spectral region of 300 to 650 nm and natural circular dichroism (CD) in the 300 to 450 nm region are reported. The MCD spectra of the heme (iron-protoporphyrin IX) complexes of both proteins were essentially identical, suggesting similar iron coordination. The Soret region MCD spectrum of ferriheme·hemopexin has a shape and amplitude typical of other completely low spin (S = 1/2) ferric hemeproteins, and the temperature dependence of the MCD intensity indicates that it is composed predominantly of Faraday C-type terms. The visible region MCD spectrum of this complex closely resembles those characteristic of cytochrome b5 and other bisimidazole-coordinated heme derivatives. Under aerobic conditions, heme·hemopexin is in the fully oxidized state. The ferroheme hemopexins also exhibit MCD spectra similar to that of ferrocytochrome b5, consistent with a low spin state and histidyl side-chain coordination of the heme iron in the reduced as well as in the oxidized state. The deuteroheme derivatives of rabbit hemopexin exhibit MCD spectra similar to those of the heme complex except for the expected slight differences in wavelength extrema, indicating that the vinyl side chains of protoporphyrin have little influence on the coordination. In contrast, the natural CD spectra of the heme complexes of rabbit and human hemopexin do not resemble the CD of cytochrome b5, reflecting differences in the crevice regions of the different hemeproteins. Furthermore, the CD spectra of the ferroheme complexes of rabbit and human hemopexin point to differences in the local environments of the heme chromophores. The MCD spectra of cobalt- and nickel-deuteroporphyrin IX bound to hemopexin do not display the effects seen with iron-porphyrins. The Soret and visible MCD spectra observed arise predominantly from porphyrin bands. In the visible region MCD spectra, A terms associated with the Q bands are observed, but unlike iron-porphyrins, no evidence is found for additional transitions in the 440 to 490 nm region.

Hemopexin is a serum β-glycoprotein (1, 2) believed to function in the selective transport of heme to the liver parenchymal cells (3), where the heme is degraded to bilirubin.
The protein possesses a single binding site for heme (4-6) and has an affinity for heme, Kd < 10^-12 M (7), considerably greater than that of serum albumin, Kd near 10^-8 M (8). The protein also interacts with a wide variety of naturally occurring and synthetic porphyrins (5, 6, 9-11), but only the binding of iron-porphyrins induces changes in the tertiary conformation of hemopexin (5), which may lead to cellular recognition of the complex. Both the chemistry and physiology of this protein have been recently reviewed (2, 10). Several experimental approaches have been used to gain insight into the nature of the hemopexin-heme interaction. The absorption and EPR (12, 13) spectra of the protoheme·hemopexin complex are typical of low spin hemeproteins, unlike the high spin heme complex formed with serum albumin (12), and suggest that the heme iron is axially coordinated to two strong field ligands. The absorption spectra of the oxidized and reduced forms of heme·hemopexin resemble those of cytochrome b5 (14), which is known to have 2 histidines bound to the heme iron (15), and chemical (16) and photochemical (17) modification studies have implicated histidyl residues of hemopexin in heme binding. As shown below, the visible region MCD spectra of heme·hemopexin closely resemble those of bisimidazole-type hemeproteins such as cytochrome b5 (Fig. 6) and complexes such as imidazole·myoglobin (19). The spectra of the diagnostic charge-transfer bands of other heme-ligand systems, such as the methionyl-histidyl coordination of cytochrome c (20) or the proposed thiolate-histidyl coordination of low spin forms of cytochromes P-450 (24, 43, 44), are distinctly different. This assignment is also supported by previous work showing that chemical modification (16) of histidine residues of rabbit hemopexin prevents formation of a hemichrome complex, but does allow heme to associate with the protein, forming a complex with no apparent strong field iron axial ligands from the protein. The properties of the bromoacetate-modified hemopexin·heme complex are being examined in more detail using MCD techniques. Previous studies have established the utility of the MCD of hemeproteins in determining both the oxidation-reduction and spin states as well as the axial ligation of the heme (see, for example, Refs. 19 and 20). The advantages of this technique include the ability to examine ferrous as well as ferric heme at ambient temperature. In contrast to MCD, which is determined solely by the electronic state of the porphyrin and metal orbitals, the natural CD properties of these chromophores are strongly influenced by their local environment and symmetry. Free heme in solution has little optical activity, but in hemeproteins it derives its activity from the dissymmetric environment imposed by the polypeptide "solvent" and, hence, is sensitive to the protein structure. In addition, considerable rotatory strength in the Soret region is believed to arise via a coupled oscillator mechanism between the heme moiety and nearby aromatic residues (21). Thus, the use of MCD and CD techniques provides information on both the electronic state and the polypeptide environment of heme. We report here the MCD and CD spectra of several metalloporphyrin complexes of rabbit and human hemopexin. This information should prove useful for the comparison of the heme coordination sphere of rabbit hemopexin with that of human hemopexin as well as with other hemeproteins, such as cytochrome b5, for which MCD (20, 22-24) and CD (25-27) spectra have already been reported.
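For orientation, the Faraday A-, B-, and C-term terminology used throughout this paper can be summarized by the conventional first-order MCD expression (a standard textbook form, after the Stephens formalism; it is supplied here for the reader's convenience rather than reproduced from the original article):

\[
\frac{\Delta\varepsilon}{E} \;=\; \gamma\,\mu_B B \left[\mathcal{A}_1\!\left(-\frac{\partial f(E)}{\partial E}\right) + \left(\mathcal{B}_0 + \frac{\mathcal{C}_0}{k_B T}\right) f(E)\right]
\]

where f(E) is an absorption line-shape function, B is the applied magnetic field, and μ_B is the Bohr magneton. A terms give temperature-independent, derivative-shaped bands; B terms give temperature-independent, absorption-shaped bands; and C terms, which require a degenerate (paramagnetic) ground state, give absorption-shaped bands whose intensity varies as 1/T. This is why a 1/T dependence of the Soret MCD identifies a ferric (S = 1/2) heme signal as C-term dominated, and why no C terms are expected for the diamagnetic metalloporphyrin complexes discussed below.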
The data presented provide additional evidence for bishistidyl-heme coordination in the unmodified proteins and provide a basis for interpretation of the spectroscopic properties of modified derivatives which are currently being investigated.

EXPERIMENTAL PROCEDURES

Details of the experimental procedures are given in the miniprint supplement which follows.

RESULTS

A detailed description of the results is presented in the miniprint supplement which follows.

DISCUSSION

The MCD data presented here demonstrate that ferriheme·hemopexin is a fully low spin hemeprotein, as indicated by the shape and intensity of the MCD signal associated with the near-ultraviolet Soret band (19). The temperature dependence of the MCD of this derivative-shaped band (Fig. 2) establishes that it is composed predominantly of Faraday C terms, as expected for the S = 1/2 state. The MCD spectra in Fig. 1, as well as others not shown, establish the similarity of ferric human and rabbit heme· and deuteroheme·hemopexin at pH 7 and pH 9 with regard to spin state. This confirms and extends previous results using EPR in frozen solution (12, 13). In addition, earlier results indicate that no change in conformation or absorbance of heme·hemopexin ensues between pH 6.5 and 9.1 (5). The visible MCD spectra of the ferriheme complexes of rabbit and human hemopexin (Fig. 3) closely resemble those characteristic of cytochrome b5 and other bisimidazole-coordinated heme derivatives. The Soret and visible region MCD spectra of reduced heme·hemopexin resemble those of the low spin ferrocytochrome b5 (Fig. 6) but are unlike the low spin oxy- and carbonyl-derivatives of myoglobin or c-type cytochromes (19, 20). In the Soret region, the hemopexin complexes exhibit spectra very similar to the weak inverse derivative-shaped A terms seen in the spectra of cytochrome b5, with a zero crossing near the absorption band maximum (Figs. 4 and 6). In the visible region, the hemopexin MCD spectra display an intense A term associated with the Q0 transition, typical of low spin ferrohemoproteins (Figs. 5 and 6). This strongly implies that the reduced heme·hemopexin complex is low spin (S = 0), which is supported by the absorption spectrum of reduced heme·hemopexin, similar to those of other low spin hemeproteins. In the visible region, ferrodeuteroheme·hemopexin displays MCD generally similar to that of the protoheme complexes. However, there are several well resolved MCD bands in the Qv region and a distinct secondary MCD band in the Q0 region at 535 nm (Fig. 5). This complex MCD spectrum is reflected in the unusual absorption spectrum of ferrodeuteroheme·hemopexin, in which the β-band shows a distinct shoulder and the α-band presents a double maximum with an intensity near that of the β-band. This is not found in other reduced low spin deuteroheme complexes, e.g. ferrodeuteroheme·cytochrome b5 (14), in which the α-band has a single maximum of greater intensity than the β-band. However, "split" α-bands in hemeproteins at ambient temperature have been reported previously, e.g. ferromesoheme·cytochrome b5 (14), Pseudomonas cytochrome c peroxidase (45), and cytochrome oxidase (46). The splitting of the Q0 band could arise from an internal inequivalency of the heme x-y axes in the protein-bound state, but other causes can be envisioned and the basis of this effect is not clear at this time. The MCD spectra of cobalt- and nickel-deuteroporphyrin IX bound to hemopexin differ from those seen with iron-porphyrins (Figs. 7 and 8).
The Soret and visible MCD spectra observed arise predominantly from porphyrin π-π* bands, and no C terms are expected with diamagnetic porphyrins. In the visible region MCD spectra, A terms associated with the Q bands are observed, but unlike iron-porphyrins, no evidence is found for additional transitions in the 440 to 490 nm region. While the MCD and absorption spectra of the iron porphyrin·hemopexin complexes and cytochrome b5 show a high degree of similarity, as do their EPR spectra (12, 13, 47), their natural CD spectra do not. Both heme·hemopexins display ellipticity in this spectral region, with maxima near the absorption band maxima. Human ferriheme·hemopexin has a significantly weaker ellipticity in the Soret region than its rabbit counterpart (Figs. 1 and 4), and human and rabbit ferroheme·hemopexin have Soret CD of opposite sign (Fig. 4). Since the rotatory strength of the heme chromophore derives predominantly from interactions with close-lying tryptophan and tyrosine residues (21), these differences in CD reflect differences in the location or orientation, or both, of aromatic amino acid side chains near the heme-binding site of the two hemopexin species. Further differences are seen in comparing the CD spectra of the heme·hemopexins with cytochrome b5 (Figs. 1, 4, and 6), pointing to dissimilarities in their respective heme environments. These differences in CD are also reflected by the large accessibility of the heme chromophore to solvent in the rabbit heme·hemopexin complex (18), whereas the heme of cytochrome b5 lies in a restricted crevice (15).

FIG. 6. MCD and CD spectra of human heme·hemopexin and cytochrome b5. The concentrations of hemeproteins in 0.1 M sodium phosphate, pH 7, were near 7 × 10^-6 M for Soret region spectra and near 4 × 10^-5 M for visible region spectra. The path length was 1.0 cm; field, 1.4 T; average of four or more passes; temperature, 22°C. Absorption, CD and MCD spectra were obtained using the same sample. HHx, human hemopexin.

In summary, the work presented here shows that human and rabbit heme·hemopexin, like cytochrome b5, are low spin, bisimidazole heme complexes in both the oxidized and reduced oxidation states. However, the similarity does not extend to the local environments of their heme chromophores. The difference between human and rabbit hemopexin may indicate that the mechanism whereby hemopexin delivers heme to the liver for degradation may show at least slight species differences.
2018-04-03T03:18:47.645Z
1978-05-10T00:00:00.000
{ "year": 1978, "sha1": "4695a50ce0dfb29e349f419f19b41e3eeb335c98", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/s0021-9258(17)40786-1", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "2736aaab1efec9841627b06b84f05740cc10acb6", "s2fieldsofstudy": [ "Chemistry", "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
226324890
pes2o/s2orc
v3-fos-license
Late Diagnosis of Takayasu Disease in a 50-Year-Old African Black Woman with Repeated Episodes of Heart Failure: Seeing the Forest through the Trees - A Case Report

Background: First described in 1908, TAK has now been recognized as a non-specific inflammatory disease of unknown etiology, predominantly affecting young females. Sometimes, it progresses into relatively rare and potentially fatal scenarios such as heart failure. Case Presentation: Here, we present the case of a 50-year-old sub-Saharan female suffering from acute heart failure related to TAK. Despite constitutional symptoms (fever, malaise, weight loss) and more characteristic features such as claudication of the lower extremities, carotidynia, and pulseless syndrome, the diagnosis of TAK was delayed since the main presentation was heart failure. Immunosuppressive and anticoagulant therapies induced improvement in the cardiac manifestations. Conclusion: Early diagnosis and proper treatment can protect the patient from dangerous complications such as heart failure.

Background

Large vessel vasculitis (LVV), of which giant cell arteritis (GCA) and Takayasu arteritis (TAK) are the major subtypes, represents a group of diseases whose importance has been increasingly recognized over the years. Clinical manifestations of these diseases may vary from non-specific constitutional symptoms, such as fever, malaise and weight loss, to more characteristic features resulting from stenosis/occlusion of the vascular territories involved. Treatment of TAK consists of two strategies: immunosuppressive therapy for inflammation control and management of vascular disease, including control of blood pressure and surgical or interventional procedures [1]. First described in 1908, TAK is a rare chronic inflammatory disease of unknown etiology, predominantly affecting young females [2]. Its characteristic pathology is inflammatory infiltrates involving all arterial layers, including acute exudative inflammation and chronic granulomatous inflammation situated mainly in the media and adventitia, while hyperplasia and neovascularization are found in the intimal layer [3]. Sometimes, it progresses into relatively rare and potentially fatal scenarios including acute visual loss, myocardial infarction, heart failure, cerebral thrombosis, and malignant hypertension. Here, we present the case of a 50-year-old female suffering from acute heart failure related to TAK.

Case Presentation

A 50-year-old female residing in a neighboring country (Guinea Conakry) was admitted to our hospital (Military Hospital of Ouakam, Dakar, Senegal) for acute heart failure. Her medical history included hypertension for nearly 5 years, managed with amlodipine (10 mg/day). She had started to develop intermittent claudication 4 years prior to admission. She was also reported to have low-grade fever, night sweats, myalgias, apparent weight loss, and carotidynia. The heart failure first developed more than 3 years earlier with sudden dyspnea and palpitations, after which a local hospital made the diagnosis of heart failure and prescribed diuretics and an ACE inhibitor in addition to amlodipine. Although the dyspnea resolved slowly, the claudication kept worsening gradually. Over time, she developed more episodes of cardiac decompensation, which became closer together and more intense. One month before admission, her health deteriorated, with New York Heart Association (NYHA) class II dyspnea, decreased appetite, coughing, and pitting oedema of the lower limbs and eyelids. For further evaluation, she decided to come to our institution.
During her 5-day road trip, she omitted to take her diuretic treatment and was admitted on arrival in a state of acute heart failure. On admission, her height was 176 cm and her weight was 89 kg; her body mass index (BMI) was 28.7 kg/m². Physical examination revealed a right arm blood pressure of 120/90 mm Hg, a left arm blood pressure of 100/68 mm Hg and a 3/6 systolic murmur over the right cervical region. Her heart rate was 118 bpm with a regular rhythm. Jugular vein distension and pitting oedema of the lower limbs were also noted. Coarse crackles were prominent in both lungs. Pulses of the bilateral radial arteries were not present. The bilateral femoral, popliteal and dorsalis pedis arteries could be palpated. Her discomfort improved after diuresis was initiated, and she was admitted to the Cardiac Unit in our hospital for further investigation. Since our patient met 3 of the diagnostic criteria for TAK (symptoms of limb ischemia, physical findings of decreased pulses, and asymmetrical blood pressure), we decided to perform a Doppler ultrasound, which showed complete occlusion of the common carotid and subclavian arteries on the left, and thickening of the wall of the right subclavian artery. Computed tomography angiography (Figure 3) revealed diffuse thickening of the wall of the aorta, more marked in the arch and the descending thoracic aorta, reducing the aortic lumen to between 3.8 and 7 mm. The brachiocephalic trunk was stenosed with a 2.5 mm lumen, with enhancement of the right carotid and the right subclavian artery. There was an obstruction at the origin of the left common carotid artery and the left subclavian artery. The abdominal aorta and pulmonary arteries appeared normal. These angiographic findings classified our patient as type IIb. Furosemide boluses and an oral ACE inhibitor were administered to counter the heart failure, while low molecular weight heparin was injected subcutaneously for anticoagulation. Intravenous methylprednisolone 40 mg daily and cyclophosphamide 400 mg weekly were initiated to treat TAK. Ten days later, her symptoms began to resolve, and she was discharged after we gradually transitioned from low molecular weight heparin to warfarin and switched from intravenous methylprednisolone to oral prednisone. After five weeks, repeat echocardiography showed a reduction in heart dimensions with an LVEF of 49.2% and a significant lowering of the pulmonary pressure, which returned to 39 mm Hg. Clinically, she had no complaints and carried out daily activities without difficulty.

Discussion

Systemic vasculitides are pathologically characterized by inflammation of blood vessel walls, and cause various organ disorders depending on the size of the affected blood vessels. The first classification of vasculitis was proposed by Zeek in 1952 [4] [5]. In 1994, Jennette et al. published the results of the Chapel Hill Consensus Conference (CHCC) on the Nomenclature of Systemic Vasculitis [6]. They adopted names and definitions of vasculitides based on the size of the affected vessels, and categorized vasculitides into large-vessel vasculitis, medium-vessel vasculitis and small-vessel vasculitis. Large-vessel vasculitis, including TAK and GCA, was defined as vasculitis affecting the aorta and its major branches more often than other vasculitides, although arteries of any size might be affected. It was concluded that these types of arteritis could not be distinguished by pathological findings, except for the difference in the age of onset [7].
Takayasu arteritis is a rare, large-vessel vasculitis of unknown etiology that most commonly affects women, predominantly below 40 years of age. The disease was first described by Professor Mikito Takayasu during the 12th Annual Meeting of the Japan Ophthalmology Society held in 1905 [8]. At the same meeting, Onishi and Kagoshima reported similar cases, mentioning that the patients also had an abolished radial pulse, a finding overlooked by Takayasu [9]. The disease has a worldwide distribution, although it occurs more commonly in Asia [10]. It is now known that there are regional and ethnic differences in the clinical features of patients with vasculitis. Unfortunately, there is a paucity of data regarding these conditions in the developing world, including Africa, with most information arising from Western Caucasian populations. However, in recent years, there have been increasing reports indicating that these conditions do occur in Africa, although hitherto infrequently reported [11]. In addition to symptoms resulting from the vascular territories involved, TAK can present with systemic symptoms including fever, weight loss and malaise. Unlike GCA (where a classical cranial pattern of symptoms can be described), in TAK there is no clear pattern of presentation. However, some differences in disease manifestations may occur according to age and gender. Using an age between 12 and 35 years plus the 1990 American College of Rheumatology (ACR) classification criteria for TAK as inclusion criteria, Mont'Alverne et al. [12] studied 55 patients with TAK (17 males and 38 females). Multivariate analysis showed that male gender was a risk factor for the occurrence of abdominal pain (OR 18.75; 95% CI 2.89 to 121.54) and ascending aortic aneurysm (OR 9.51; 95% CI 1.94 to 46.70). There were no gender differences regarding the presence of constitutional symptoms, limb claudication, carotidynia, respiratory and articular manifestations, or the presence of comorbidities. Watanabe et al. [13] included 1372 patients (222 males and 1150 females) newly registered (<1 year) in a nationwide Japanese registry and analysed the data according to gender and age of disease onset (≤40 vs >40 years). Gender analysis (although limited given the number of males compared with females) showed that, overall, the most common complications were hypertension and aortic valve regurgitation, with males having more complications than females (ischaemic heart disease, funduscopic alterations, aortic aneurysm and dissection, renal disorders, renal artery stenosis, and hypertension). Most of the data from Africa has been published from Tunisia. Ghannouchi et al. [14], in a review of 37 Takayasu patients from 1985 to 2005, noted a mean age at presentation of 33.2 years (range 16 - 68 years) and found that 88.9% were female. Intermittent claudication was the most common presentation (81.5%) and hypertension was noted in 40.7% of cases. Mwipatayi et al. [15] also reviewed 272 cases from South Africa seen between the years 1952 and 2002. The mean age at presentation was 25 years (range 14 - 66 years) and 75% of patients were female. Interestingly, only 8% of the patients studied were Caucasian. Hypertension was the most common presentation (77%) and was usually a consequence of renal artery stenosis or aortic coarctation. Arnaud et al.
[16], in a single-center retrospective study of 82 cases comparing White, North African, and Black patients conducted in France, found a median age at diagnosis of 39.3 years (range, 14 - 70 years) in white patients, vs. 28.4 years (range, 12 - 54 years) in North African (p = 0.02) and 28.0 years (range, 13 - 60 years) in black patients (p = 0.08). The proportion of patients who had onset of TAK after 40 years of age was significantly higher in white than in non-white patients (40.0% vs. 18.6%, p = 0.03), suggesting that late-onset TAK may be a more specific feature of white patients. Constitutional symptoms and ophthalmologic manifestations were the most frequent presenting features in white patients (13.9% each). Claudication of the extremities was most frequent in North African (27.8%) and black (26.3%) patients [16]. Our patient presented with typical signs and symptoms of arterial occlusion, but her most striking manifestation was overt heart failure. The prevalence of LV dysfunction in TAK has not been clarified. Observational studies have reported a relatively high prevalence of LV dysfunction in TAK of 15% - 50% [17] [18] [19], including non-hypertensive dilated cardiomyopathies in 4% - 6%. Since HF symptoms may often be masked in TAK patients, allowing underestimation of HF prevalence in TAK, further larger-scale clinical studies are warranted for these patients. While multiple factors such as valvular regurgitation or coronary artery involvement are known to be related to LV dysfunction in TAK, vascular inflammation resulting in chronically elevated vascular resistance may be one of the underlying mechanisms of LV dysfunction. Elevation of vascular resistance should be considered as a possible and reversible cause of LV dysfunction in TAK without myocarditis, coronary artery involvement, or valvular regurgitation. Mwipatayi and Jeffery retrospectively analyzed 272 TAK patients and identified 90 individuals with cardiac failure, which accounted for 46% of all-cause mortality (29 out of 57) [15]. The main causes of heart failure in TAK patients were increased afterload due to renovascular hypertension and aortic regurgitation. Myocardial ischaemia induced by myocarditis, accelerated atherosclerosis, or severe pulmonary hypertension was also noted [18]. For our patient, the echocardiographic findings made us suspect myocarditis. Occurring not uncommonly in TAK patients, myocarditis tends to occur early in the disease course and appears to correlate with disease activity. In 1988 and 1991, Talwar et al. separately performed serial endomyocardial biopsies in TAK patients, and myocarditis was identified in 8 out of 18 and 24 out of 54 patients [18]. The pathophysiology was believed to be direct immune cytotoxicity towards the myocardium. The aorta can be involved along its entire length, and although any of its branches can be diseased, the most commonly affected are the subclavian and common carotid arteries [20] [21]. Although the most frequent pattern of disease varies geographically [21] [22], stenotic lesions, found in >90% of patients, predominate, whereas aneurysms are reported in approximately 25% [15] [23] [24] [25] [26]. Pulmonary arteries are involved in up to 50% of patients [27] [28], and it is important to look specifically for evidence of aortic valve regurgitation and coronary arteritis.
The distribution of vascular involvement also varies by region, with cervical and thoracic arterial lesions being more common in Japan and South America, in contrast to abdominal lesions in other Asian countries [29]. The coronary ostia, pulmonary arteries and renal arteries may also be involved. In one series, ninety-three percent of affected vessels included the aortic arch and its branches, while only 24% involved the renal arteries and 21% the abdominal aorta [30]. A retrospective study by Elasri et al. among 47 Moroccan patients reviewed between the years 1988 and 1999 reported that involvement of the aortic arch and its branches was more frequent than involvement of the abdominal aorta and its branches [31], as was the case in our patient. The diagnosis of TAK requires at least three of the American College of Rheumatology criteria, which have a sensitivity of 91% and a specificity of up to 98%. Our patient fulfilled all criteria except age at the onset of disease. Takayasu arteritis is usually diagnosed in young individuals in their second and third decades of life and affects females in most cases (82.9% - 97.0%) [32] [33] [34]. Generally, TAK has been defined arbitrarily as a disease with onset prior to the age of 40. An age <40 years was selected as a mandatory criterion in the original Ishikawa diagnostic criteria and is a non-mandatory criterion in the 1990 American College of Rheumatology (ACR) classification criteria. However, the occurrence of TAK in patients older than 40 years is not rare. Recent studies conducted in different populations indicate that the proportion of patients aged over 40 years at disease onset varies from 9% to 32% [16] [33] [35], and at the time of diagnosis from 15% to 71% [10] [36] [37]. Ideally, the diagnosis of Takayasu arteritis should be made early, in the prestenotic phase. In theory, this would allow initiation of treatment to suppress inflammation and prevent vascular injury. A delay in diagnosis of many months or years is typical, even in patients in whom diminished or absent pulsation has been recorded [23]. Although the reasons for the delay are multifactorial, important additional factors include the lack of easy access to biopsy material and, to some extent, the limitations of current diagnostic criteria. The American College of Rheumatology and Ishikawa criteria [38] favor the detection of established stenotic disease and have not yet been revised in response to the increasing sensitivity of noninvasive imaging for the detection of prestenotic disease. Moreover, the likelihood of early diagnosis is not improved by the variable nature of disease presentation and the lack of constitutional symptoms in 30% - 50% of patients at presentation [25] [26]. Imaging of the arteries is very useful in diagnosing TAK and for patient follow-up. Conventional angiography was formerly the gold standard, and it is now often replaced by computerized tomography (CT) or magnetic resonance (MR) angiography in routine practice. Imaging methods allow different types of TAK to be distinguished, depending on the location of vascular lesions. The disease can be classified into 5 types (Figure 4). To date, no established biological marker specific to the diagnosis of patients with TAK has been reported. Patients with TAK often present with increased inflammation markers, including C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR). However, the systemic inflammatory response does not always show a positive correlation with inflammatory activity in the vessel wall.
Therefore, TAK may be active without an increase in CRP or ESR, and vice versa. Regarding treatment and outcomes, early implementation of immunosuppressive therapy appears effective. Glucocorticoids remain the basic and most effective first-line TAK treatment. Initially, in the presence of active TAK disease, the treatment of choice is high-dose (0.8 - 1 mg/kg/day, p.o.) prednisolone or an equivalent. Generally, two thirds of the total daily dose is given early in the morning and the rest of the dose in the evening after meals. [Figure 4 caption: Type I: affects the branches of the aortic arch. Type IIa: affects the ascending aorta, aortic arch and its branches. Type IIb: affects the ascending aorta, aortic arch and its branches, and the descending thoracic aorta. Type III: affects the descending thoracic aorta, abdominal aorta and/or renal arteries. Type IV: affects the abdominal aorta and/or renal arteries. Type V: combined features of types IIb and IV.] This treatment is maintained until symptoms and laboratory evidence (ESR, CRP) of inflammation normalize (usually 4 - 6 weeks, sometimes up to 12 weeks). With control of inflammation, corticosteroid therapy can be tapered (typically by about 10 percent every week) to a maintenance dose of 0.1 - 0.2 mg/kg/day (≤15 mg/day). For the prevention of relapse, treatment is usually continued for 1 - 2 years. The response to high-dose prednisolone is generally favorable, but relapses may occur while gradually tapering the dose, and the adverse effects of long-term treatment can cause problems. Therefore, physicians sometimes start conventional immunosuppressant agents together with the initial glucocorticosteroid treatment or while tapering the steroid dose [39] [40] [41]. Talwar et al. chose combined therapy of prednisolone and cyclophosphamide over 12 weeks, and improvements were evident not only in clinical and haemodynamic status but also in myocardial morphology [18]. Takeda used steroid therapy for 2 months, and the patient's symptoms were markedly alleviated, with greatly improved cardiac function and morphology [42]. For our patient, we initiated a regimen of corticosteroids and cyclophosphamide, and her symptoms resolved after 5 weeks, together with structural and functional improvement of the heart. In cases where immunosuppressive drugs are insufficient, biological therapy should be introduced. Observational studies provide evidence that biological agents such as anti-tumor necrosis factor (anti-TNF), rituximab and tocilizumab are beneficial and could be used effectively in refractory TAK [43]. Biological agents are not recommended as monotherapy (i.e. without GC) nor as first-line, add-on therapy to GC in newly diagnosed TAK patients [44]. In the chronic stages of TAK, patients with ischemic symptoms need to be treated with endovascular revascularization or vascular surgery (such as balloon angioplasty or stenting). Procedures should be undertaken only after the suppression of inflammation in the affected arteries. Surgical procedures carry a risk of restenosis or occlusion, and the success rate depends upon the location and stage of stenosis of the blood vessel [45]. The presence of major complications, a progressive disease course and older age are unfavourable prognostic indicators, as Ishikawa and colleagues demonstrated in a series of prospective observational studies. In these studies, peak death rates occurred early, in the first year after diagnosis (n = 10/16), and late in the disease course, >10 years after diagnosis (n = 5/16).
Major causes of death were congestive heart failure, acute myocardial infarction, cerebrovascular accidents and postoperative complications [24]. These results are corroborated by other authors reporting that overall survival decreases in the first 5 years of disease, and that event-free survival rates decrease progressively over the years, even more so for patients with severe forms of disease (severe or multiple complications), a progressive course or carotidynia [35] [46]. In the study conducted by Mwipatayi, the most common cause of death was cardiac failure [15]. Conclusions TAK belongs to the rare, idiopathic diseases of the immune system affecting the aorta and its branches. Various presentations of TAK have been recorded, including cardiac involvement. Despite the disease having been recognized for more than 100 years, the outlook for patients with TAK remains relatively poor in Africa. This is the first case of TAK with heart failure reported in a black African patient. Clinical practitioners should be aware of this disease. Early diagnosis and proper treatment can protect the patient from dangerous complications such as heart failure.
A large sample theory for infinitesimal gradient boosting Infinitesimal gradient boosting is defined as the vanishing-learning-rate limit of the popular tree-based gradient boosting algorithm from machine learning (Dombry and Duchamps, 2021). It is characterized as the solution of a nonlinear ordinary differential equation in an infinite-dimensional function space, where the infinitesimal boosting operator driving the dynamics depends on the training sample. We consider the asymptotic behavior of the model in the large sample limit and prove its convergence to a deterministic process. This infinite population limit is again characterized by a differential equation that depends on the population distribution. We explore some properties of this population limit: we prove that the dynamics makes the test error decrease and we consider its long time behavior. Introduction Tree-based gradient boosting (Friedman, 2001) is one of the most successful algorithms in machine learning. It provides a powerful and versatile methodology in supervised learning and achieves excellent performance in prediction problems where one aims at understanding the relationship between a response variable (target) and explanatory variables (features). Its modern implementation in XGBoost (Chen and Guestrin, 2016) is involved in countless applications. Several theoretical and statistical works have been devoted to the understanding of the good performance of boosting, see e.g. Jiang (2004), Lugosi and Vayatis (2004), Blanchard et al. (2004), Zhang and Yu (2005), to cite only a few. The primary focus is to establish the consistency of the method, meaning that near optimal error rates can be achieved provided sufficiently large training data is available. On the other hand, theoretical results considering the time dynamics of gradient boosting are relatively scarce. A recent advance in that direction is the model of infinitesimal gradient boosting (Dombry and Duchamps, 2021), which provides a mathematical characterization of the vanishing-learning-rate limit of tree-based gradient boosting by a nonlinear ordinary differential equation in an infinite-dimensional function space. The underlying dynamics is characterized by the infinitesimal boosting operator (defined thanks to the training sample) and it ensures that the training error is non-increasing in time. This approach is very much related to the gradient flow approximation of stochastic gradient descent (Dieuleveut et al., 2020), with the additional difficulty that the dynamics takes place in a function space and that the structure of tree functions has to be handled. The purpose of this paper is to analyze the large sample theory of infinitesimal gradient boosting, and we prove that a deterministic infinite population limit exists. More precisely, the infinitesimal boosting operator driving the dynamics converges as the sample size goes to infinity, implying the convergence of the solutions of the corresponding ODEs. Once convergence is established, we study some properties of the limit and we prove in particular that the infinite population dynamics ensures that the test error is non-increasing. Furthermore, we explore the long-time properties of infinite population boosting. We expect and conjecture that as time goes to infinity, the test error converges to its minimum and the boosting predictor to the Bayes predictor.
Unfortunately, proving such results turns out to be surprisingly difficult and out of reach for the moment, in spite of substantial efforts, so that we provide only partial results in this direction. The structure of the paper is as follows. Section 2 is devoted to the presentation of the model, assumptions and results. We first recall the setting of infinitesimal gradient boosting and our main assumptions; then we state our results regarding the convergence of infinitesimal gradient boosting when the sample size goes to infinity (Theorem 2.13); finally we describe some important properties of the infinite population limit. Notions and preliminary results that play an important role in our analysis are introduced in Section 3. All the proofs are postponed to Sections 4 and 5. Setting and notation We introduce the setting of infinitesimal gradient boosting developed by Dombry and Duchamps (2021) and try to provide a short yet self-contained presentation. Further details can be found in Dombry and Duchamps (2021). Supervised statistical learning framework. We observe a response variable, or target, Y with state space Y ⊂ R, jointly with a vector of covariates, or features, X taking values in [0, 1]^p. We want to construct a model to predict the target Y in view of the features X. We denote by P the joint distribution of (X, Y) and let (X_i, Y_i)_{i≥1} be independent copies of (X, Y). We will also denote by P_X the marginal distribution of X. A predictor is a measurable function F : [0, 1]^p → R used to predict Y in view of X. A loss function L : Y × R → R compares the observation y and its prediction F(x); the loss L(y, F(x)) is interpreted as a prediction error that we want to minimize. The Bayes risk is defined as inf_F E[L(Y, F(X))], i.e. the infimum expected risk over all possible predictors. A predictor F* achieving the Bayes risk is called a Bayes predictor. The goal of statistical learning is to build a predictor F̂_n, using only the first n observations (X_i, Y_i)_{1≤i≤n} as a training set, that approaches the Bayes risk as the size of the training sample grows, that is, we want E[L(Y, F̂_n(X))] → E[L(Y, F*(X))] as n → ∞. Softmax gradient trees. Friedman (2001) introduces gradient boosting as an additive model that sequentially learns a sequence of trees in order to minimize the training error. The procedure is akin to gradient descent: at each step, a gradient tree is fitted and added to the current model; a shrinking factor called the learning rate is introduced that plays the same role as the step size in gradient descent. We detail in the following the construction of gradient trees. For more details on the gradient boosting algorithm, we refer to Friedman (2001) and Hastie et al. (2009, Chapter 10). Given a predictor F : [0, 1]^p → R, the gradient tree is obtained by fitting a (randomized) regression tree to the residuals and performing a line search approximation in the different leaves. More precisely, the residuals of a predictor F are defined as r_i = −(∂L/∂z)(y_i, F(x_i)), 1 ≤ i ≤ n. The line search's one-step approximation performs a single Newton-Raphson step, yielding the explicit update r̃(A) = Σ_{x_i∈A} r_i / Σ_{x_i∈A} (∂²L/∂z²)(y_i, F(x_i)),  (1) with the convention 0/0 = 0. The gradient tree finally writes T(x) = Σ_{v∈{0,1}^d} r̃(A_v) 1_{A_v}(x).  (2) We next provide some details on the construction of the partition (A_v)_{v∈{0,1}^d} associated with a (randomized) regression tree. Starting with the trivial partition A_∅ = [0, 1]^p into a single leaf (depth 0), binary splitting is applied recursively with depth d so as to obtain a partition into 2^d leaves. Binary splitting selects a covariate j ∈ {1, . . . , p} and a threshold u ∈ [0, 1] and then divides the leaf A into two sub-leaves A_0 and A_1 according to whether the j-th coordinate lies below or above the corresponding split point.
Different splitting rules may be used; they are generally defined via a score measuring the heterogeneity between the two leaves. For regression trees, the usual score for a split A = A_0 ∪ A_1 is the intergroup variance Δ = n(A_0) r̄(A_0)² + n(A_1) r̄(A_1)² − n(A) r̄(A)²,  (4) where n(A) and r̄(A) denote respectively the number of observations and the mean residual in leaf A, and similarly for A_0, A_1. In its original version (Breiman et al., 1984), the algorithm uses greedy binary splitting, meaning that the covariate j and threshold u that are selected maximize the score Δ. Another possibility, explored by Extra-Trees (Geurts et al., 2006), is to restrict the search of the best split to a subset of K randomly chosen proposals (j_k, u_k)_{1≤k≤K}. The proposals are independent and uniform on {1, . . . , p} × [0, 1]. When K = 1, the split (j, u) is chosen completely at random, whence the name completely random trees. Softmax regression trees (Dombry and Duchamps, 2021) were proposed for the purpose of regularizing the hard argmax in Extra-Trees. Given K random proposals (j_k, u_k)_{1≤k≤K}, the scores (Δ_k)_{1≤k≤K} corresponding to the different proposals are computed and the split (j, u) is randomly chosen according to the softmax distribution P((j, u) = (j_k, u_k)) = exp(βΔ_k) / Σ_{l=1}^K exp(βΔ_l), 1 ≤ k ≤ K.  (5) The parameter β ≥ 0 allows to interpolate between completely random trees (β = 0) and Extra-Trees (β = ∞). In order to ease the asymptotic analysis, it is useful to see the procedure as a function of the empirical distribution P_n = (1/n) Σ_{i=1}^n δ_{(X_i, Y_i)} associated with the sample. We use the short notation P_n[G(x, y)] = ∫ G(x, y) P_n(dx dy) to denote the integral of a function G : [0, 1]^p × Y → R with respect to P_n. The leaf values defined in Equation (1) can be rewritten as r̃(A) = P_n[−(∂L/∂z)(y, F(x)) 1_A(x)] / P_n[(∂²L/∂z²)(y, F(x)) 1_A(x)],  (6) and the score of a binary split A = A_0 ∪ A_1 defined in Equation (4) as Δ = P_n[r(x, y) 1_{A_0}(x)]² / P_n[1_{A_0}(x)] + P_n[r(x, y) 1_{A_1}(x)]² / P_n[1_{A_1}(x)],  (7) with r(x, y) = −(∂L/∂z)(y, F(x)). Note that the last term of (4) does not depend on the split but only on the original region and has therefore been omitted when looking for the best split. The stochastic algorithm associated with softmax binary splitting and the softmax gradient tree are summarized in Algorithms 2.2 and 2.3 respectively. We write T(x; P_n, F) to emphasize the dependency of the softmax gradient tree on the sample distribution P_n and the predictor F. Algorithm 2.2. Softmax binary splitting. • Input: sample distribution P_n, predictor F, region A. • Output: the partition A = A_0 ∪ A_1 obtained by softmax selection of the split (j, u) among K uniform proposals. Algorithm 2.3. Softmax gradient tree T(x; P_n, F). • Input: sample distribution P_n, predictor F. • Output: randomized tree function T(x; P_n, F). Infinitesimal gradient boosting. Infinitesimal gradient boosting is defined as the vanishing-learning-rate limit of gradient boosting and characterized by a nonlinear ordinary differential equation in function space. The existence of a limit for the algorithm of gradient boosting with learning rate λ, as λ → 0, is justified in Dombry and Duchamps (2021) for a fixed input (x_i, y_i)_{1≤i≤n}. Let B denote the space of measurable bounded functions F : [0, 1]^p → R endowed with the supremum norm ‖·‖_∞. The infinitesimal boosting operator is defined as the operator T_n : B → B, T_n(F)(x) = E[T(x; P_n, F)],  (8) where T(x; P_n, F) is the softmax gradient tree defined in Algorithm 2.3 and the expectation is taken with respect to the algorithm randomness (and not the sample randomness).
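To make the splitting rule concrete, the following Python sketch mimics the selection step of Algorithm 2.2. It is an illustrative reimplementation under our own naming (`split_score`, `softmax_split`), not the authors' code; it uses the regression score (4) and maps each uniform threshold into the extent of the current region along the proposed coordinate (one common convention, assumed here):

```python
import numpy as np

def split_score(x, r, j, u, lo, hi):
    """Intergroup-variance score (4) of splitting the region [lo, hi)
    at threshold u on coordinate j; x holds the points, r the residuals."""
    in_region = np.all((x >= lo) & (x < hi), axis=1)
    left = in_region & (x[:, j] < u)
    right = in_region & (x[:, j] >= u)
    score = 0.0
    for mask in (left, right):
        if mask.sum() > 0:
            score += mask.sum() * r[mask].mean() ** 2
    # the term -n(A) * rbar(A)^2 is constant over proposals, hence omitted
    return score

def softmax_split(x, r, lo, hi, K=10, beta=1.0, rng=None):
    """Draw K uniform proposals (j_k, u_k) and sample one according to
    the softmax distribution (5); beta = 0 gives completely random trees."""
    rng = np.random.default_rng(rng)
    p = x.shape[1]
    js = rng.integers(0, p, size=K)
    us = lo[js] + rng.random(K) * (hi[js] - lo[js])
    scores = np.array([split_score(x, r, j, u, lo, hi) for j, u in zip(js, us)])
    weights = np.exp(beta * (scores - scores.max()))  # numerically stable softmax
    k = rng.choice(K, p=weights / weights.sum())
    return js[k], us[k]
```

Taking beta very large recovers (up to ties) the greedy Extra-Trees choice, since the softmax weights then concentrate on the highest-scoring proposal.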
Under mild assumptions (discussed below), this operator is locally Lipschitz in F, and infinitesimal gradient boosting (F̂^n_t)_{t≥0} is defined as the unique solution in B of the differential equation d/dt F̂^n_t = T_n(F̂^n_t),  (9) with initialization at the constant function F̂^n_0 = argmin_{z∈R} P_n[L(y, z)]. Importantly, the training error t ↦ P_n[L(y, F̂^n_t(x))] is non-increasing, which is a natural property resulting from the fact that gradient boosting is meant to minimize the training error. Also, the mean residual P_n[(∂L/∂z)(y, F̂^n_t(x))] ≡ 0 is identically zero, which follows from the line search approximation in the definition of gradient trees. In order to consider random samples and measurability issues, it is useful to work on a complete and separable function space rather than on the non-separable space B. The path (F̂^n_t)_{t≥0}, a priori defined in B, remains in a function space with strong regularity properties. For q ∈ [1, +∞], let W^q denote the space of functions F : [0, 1]^p → R admitting an integral representation with density f_F ∈ L^q(π_0), where π_0 is a reference probability distribution on [0, 1]^p (see Section 3.1 for more details). The infinitesimal boosting operator has its image in W^∞, and this implies that infinitesimal gradient boosting (F̂^n_t)_{t≥0} can be seen as a smooth path in W^q for all 1 ≤ q ≤ ∞. For 1 ≤ q < ∞, the Banach space W^q is separable, and measurability (and even continuity) properties will be established in Section 3.3 that allow us to consider infinitesimal gradient boosting with a random sample. This paper studies the almost sure convergence of infinitesimal gradient boosting when the sample size goes to infinity. Assumptions We next specify our working assumptions, which are quite general and satisfied by the three cases considered in Example 2.1. We start with a convexity assumption on the loss function in its second variable. Assumption 2.4. The function L : Y × R → R is C², with (∂²L/∂z²)(y, z) positive and locally Lipschitz-continuous in z. Furthermore, for all z ∈ R, we have E[|L(Y, z)|] < ∞, and the map z ↦ E[L(Y, z)] has a unique minimizer. The next assumption requires some integrability of the residuals. Assumption 2.5. There exists q > 1 such that for any compact subset K ⊂ R, we have sup_{x∈[0,1]^p} E[sup_{z∈K} |(∂L/∂z)(Y, z)|^q | X = x] < ∞. This assumption is trivially satisfied for classification, either with cross-entropy or exponential loss. In the case of regression it is equivalent to sup_x E[|Y|^q | X = x] < ∞ for some q > 1. The two following assumptions are more technical and we believe they are not too stringent. The first one only concerns the loss function. Assumption 2.6. One of the following conditions holds. (i) For any compact subset K ⊂ R, we have sup_{(y,z)∈Y×K} |(∂L/∂z)(y, z)| < ∞. (ii) For any compact subset K ⊂ R, we have inf_{(y,z)∈Y×K} (∂²L/∂z²)(y, z) > 0. Note that the first point above covers the classification case, while the second point covers the regression case. Our last assumption involves the conditional distribution of Y given X. For i ∈ {1, 2}, we define h_i(x, z) = E[(∂^i L/∂z^i)(Y, z) | X = x]. Assumption 2.7. For any compact subset K ⊂ R, there exists C > 0 such that for all i ∈ {1, 2}, x ∈ [0, 1]^p and z, z′ ∈ K, |h_i(x, z) − h_i(x, z′)| ≤ C|z − z′|. In the regression case, Assumption 2.7 is a consequence of Assumption 2.5. In the classification case, it is trivially satisfied for the two cases we consider. Example 2.8. The different assumptions involve the loss function and its derivatives. We recall the corresponding formulas (with the usual conventions) for the three main cases from Example 2.1, for which the different assumptions are easily verified: for the squared loss L(y, z) = (y − z)²/2, ∂L/∂z = z − y and ∂²L/∂z² = 1; for the cross-entropy loss with y ∈ {0, 1}, L(y, z) = −yz + log(1 + e^z), ∂L/∂z = σ(z) − y and ∂²L/∂z² = σ(z)(1 − σ(z)), with σ(z) = e^z/(1 + e^z); for the exponential loss with y ∈ {−1, 1}, L(y, z) = e^{−yz}, ∂L/∂z = −y e^{−yz} and ∂²L/∂z² = e^{−yz}. Convergence to the infinite population limit Our main results give the large sample asymptotics for the infinitesimal boosting operator T_n and infinitesimal gradient boosting (F̂^n_t)_{t≥0}.
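Before stating the main results, it may help to read the ODE (9) operationally: gradient boosting with learning rate λ performs F ← F + λ T(·; P_n, F) with a freshly sampled tree at each step, i.e. a noisy Euler step for T_n, and replacing the random tree by a Monte Carlo average of many trees yields the deterministic Euler scheme whose λ → 0 limit is (9). The toy Python sketch below illustrates this reading; `fit_gradient_tree` stands for any user-supplied routine returning the predictions of one randomized softmax gradient tree on the training points (an assumption of this sketch, as is all naming):

```python
import numpy as np

def estimate_Tn(F_values, fit_gradient_tree, n_trees=100):
    """Monte Carlo estimate of the infinitesimal boosting operator (8):
    average the predictions of many independent randomized gradient trees,
    i.e. an empirical version of E[T(x; P_n, F)] over the algorithm randomness."""
    return np.mean([fit_gradient_tree(F_values) for _ in range(n_trees)], axis=0)

def euler_boosting_path(F0_values, fit_gradient_tree, horizon=1.0, lam=0.01):
    """Explicit Euler scheme F <- F + lam * T_n(F): gradient boosting with
    learning rate lam; as lam -> 0, its piecewise-constant path approximates
    the infinitesimal gradient boosting ODE (9) on [0, horizon]."""
    F = np.asarray(F0_values, dtype=float).copy()
    path = [F.copy()]
    for _ in range(int(horizon / lam)):
        F = F + lam * estimate_Tn(F, fit_gradient_tree)
        path.append(F.copy())
    return path
```

Here time t plays the role of the cumulative learning rate (number of boosting steps times λ), which is the natural time scale of the limit.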
Measurability issues with respect to the input sample (X_i, Y_i)_{1≤i≤n} will be considered in Section 3.3. We first define the limiting object corresponding to infinite population gradient boosting. Definition 2.9. • The infinite population softmax gradient tree T(x; P, F) is defined as the output of Algorithm 2.3 where the sample distribution P_n is replaced by the population distribution P. • The infinite population infinitesimal boosting operator T : B → B is defined by T(F)(x) = E[T(x; P, F)], with expectation taken with respect to the randomness of the stochastic algorithm. An equivalent, more formal definition of T is given in Section 3.1, where we also check that T(F) ∈ W^∞ for all F ∈ B. Let C_bb(B, W^q) be the space of functions from B to W^q that are bounded on bounded sets and endowed with the topology of uniform convergence on bounded sets. Theorem 2.10. Let q > 1 satisfy Assumption 2.5. We have the almost sure convergence, as n → ∞, T_n → T in C_bb(B, W^q). Next we consider the dynamics associated with the infinite population infinitesimal boosting operator, which defines the infinite population gradient boosting process. Proposition 2.11. The differential equation d/dt F̂_t = T(F̂_t)  (11) has a unique maximal solution started at F_0 at time 0. It is not clear under our general assumptions whether the solution of (11) is defined for all time t ≥ 0 or explosion may occur in finite time. We denote by t_max the maximal time of definition of the solution (F̂_t)_{t≥0}. A linear growth condition ensures that the solution is defined for all time t ≥ 0. Lemma 2.12. If the loss function satisfies a linear growth condition (12), providing control of the leaf values and guaranteeing ‖T(F)‖_{W^∞} ≤ A‖F‖_∞ + B for some constants A, B ≥ 0, then t_max = +∞. Note that Equation (12) holds in the case of regression (under Assumption 2.5, which implies that sup_x E[|Y| | X = x] < ∞) or classification with exponential loss. In the case of classification with cross-entropy, Assumption (12) is not fulfilled in general and we do not know whether t_max is finite or not. We finally consider convergence of the gradient boosting process (F̂^n_t)_{t≥0}. The space of continuous functions C([0, t_max), W^q) is endowed with the topology of uniform convergence on compact sets. Theorem 2.13. As n → ∞, we have the almost sure convergence (F̂^n_t)_{t≥0} → (F̂_t)_{t≥0} in C([0, t_max), W^q), where (F̂_t)_{t≥0} denotes the unique solution of (11) started from F̂_0. Properties of population infinitesimal gradient boosting Gradient boosting is designed to minimize the training error, and it is indeed proved that the training error t ↦ P_n[L(y, F̂^n_t(x))] is non-increasing, see Proposition 4.6 in Dombry and Duchamps (2021). In the infinite population limit, gradient boosting has the fundamental property that the test error is non-increasing. Proposition 2.14. For all initializations F̂_0 ∈ B, the test error t ↦ E[L(Y, F̂_t(X))] is non-increasing on [0, t_max). The specific initialization F̂^n_0 = argmin_{z∈R} P_n[L(y, z)] ensures that the finite sample model has centered residuals on the training set, that is, t ↦ P_n[(∂L/∂z)(y, F̂^n_t(x))] is identically null, see Proposition 4.6 in Dombry and Duchamps (2021). Interestingly, this property is preserved in the population limit. Proposition 2.15. For F̂_0 = argmin_{z∈R} E[L(Y, z)], the population gradient boosting has centered residuals on the population: E[(∂L/∂z)(Y, F̂_t(X))] = 0 for all t ∈ [0, t_max). We next focus on the long time behavior of population infinitesimal gradient boosting. It is related to the critical points of the ODE (11), which we first characterize. Proposition 2.16. Let F ∈ B. The following properties are equivalent: (i) T(F) = 0; (ii) E[(∂L/∂z)(Y, F(X)) 1_A(X)] = 0 for all A ∈ A_d; (iii) E[(∂L/∂z)(Y, F(X)) | X_J] = 0 almost surely, for all J ⊂ {1, . . . , p} with |J| ≤ d. Example 2.17. In the case of regression, (∂L/∂z)(y, F(x)) = F(x) − y.
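To see heuristically why the test error decreases (Proposition 2.14), one can run the computation in the regression case for a fixed partition (A_v); this is a sketch under these simplifying assumptions, whereas the actual proof, in Section 5, also integrates over the random splitting scheme. With ∂L/∂z(y, z) = z − y and T(F̂_t)(x) = Σ_v r̃(A_v) 1_{A_v}(x), the chain rule gives

```latex
\frac{d}{dt}\,\mathbb{E}\!\left[L(Y,\hat F_t(X))\right]
  = \mathbb{E}\!\left[(\hat F_t(X)-Y)\,T(\hat F_t)(X)\right]
  = \sum_{v\in\{0,1\}^d} \tilde r(A_v)\,\mathbb{E}\!\left[(\hat F_t(X)-Y)\,\mathbf{1}_{A_v}(X)\right],
\qquad
\tilde r(A_v) = \frac{\mathbb{E}\!\left[(Y-\hat F_t(X))\,\mathbf{1}_{A_v}(X)\right]}{\mathbb{P}(X\in A_v)},
```

so each leaf contributes −E[(Y − F̂_t(X)) 1_{A_v}(X)]² / P(X ∈ A_v) ≤ 0 and the test error is non-increasing; every leaf contribution vanishes exactly at the critical points described in Proposition 2.16.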
We consider L² = L²([0, 1]^p, P_X) and the subspace L²_d = span(1_A, A ∈ A_d); in other words, L²_d is the subset of functions f ∈ L² that can be written as finite linear combinations of indicator functions 1_A with A ∈ A_d. Writing P_d for the orthogonal projection onto (the closure of) L²_d, the set of critical points is exactly the affine subspace F* + (L²_d)^⊥, orthogonal to L²_d. For a general loss function and assuming d ≥ p, using point (iii) of Proposition 2.16 and the convexity of L in its second variable, we see that we have T(F) = 0 if and only if F minimizes the expected loss. In statistical learning, a desirable general property is consistency, which means that the test error converges to its minimum. Such consistency for population boosting is considered in Breiman (2004) for a version of Adaboost. Proposition 2.16 shows that the critical points of the ODE are exactly the minimizers of the expected loss, which is a first step toward consistency. Unfortunately, it seems difficult to prove consistency formally, and we are only able to prove a weaker statement. We focus on the case of regression, where consistency is equivalent to the convergence F̂_t → P_d(F*) in L² = L²([0, 1]^p, P_X), with the notation introduced in Example 2.17. We cannot prove strong convergence but weak convergence only. We recall that in the Hilbert space L², a sequence G_n converges weakly to G, noted G_n →(w) G, if the convergence of inner products ⟨G_n, H⟩ → ⟨G, H⟩ holds for all H ∈ L². Proposition 2.18. (i) In the case of regression, weak convergence holds: F̂_t →(w) P_d(F*) as t → ∞. (ii) For completely random trees (case β = 0), strong convergence holds: F̂_t → P_d(F*) in L² as t → ∞. Remark 2.19. We would expect that strong convergence holds in the case β > 0 as well; a hint in this direction is the following remark. Let us temporarily write T_β instead of T to highlight the dependence on the parameter β, and consider (F̂_t)_{t≥0} the solution of d/dt F̂_t = T_β(F̂_t), started from some F̂_0 ∈ L²_d rather than from the constant F_0. In the case of regression, for the proofs of the results above we will show and use an explicit expression for d/dt ‖F̂_t − P_d(F*)‖²_{L²}. Then one can show, after tedious calculations, that the latter quantity is decreasing when β increases. This suggests that, at least around t = 0, ‖F̂_t − P_d(F*)‖²_{L²} tends to 0 faster when β > 0 than when β = 0. However, it is not obvious to compare the whole trajectories of (F̂_t)_{t≥0} as a function of β and, with the techniques we used, we were not able to prove the convergence F̂_t → P_d(F*) in L² for β > 0. Similar results may be expected in the general case, for instance in classification, where we expect that the test error converges to the minimal risk over the space in which (F̂_t)_{t≥0} lives. This remark leads us to conjecture that strong convergence to P_d(F*) holds for all β ≥ 0. Preliminaries We now present some technical results that will be used for the proof of the main results from Section 2. All the proofs are postponed to Section 4. Explicit formulas associated with Algorithm 2.3 We recall some technical background and explicit formulas associated with Algorithm 2.3. We refer to Dombry and Duchamps (2021, Section 2.2) for more details. The binary rooted tree with depth d ≥ 1 (from graph theory) is defined on the vertex set T_d = ∪_{k=0}^d {0, 1}^k, where {0, 1}^0 = {∅}. The vertex set is divided into the internal nodes v ∈ T_{d−1} and the terminal nodes v ∈ {0, 1}^d, also called leaves. The construction of the partition in Algorithm 2.3 starts from the single component A_∅ = [0, 1]^p indexed by the root ∅ and performs softmax binary splitting recursively with depth d so as to end up with a partition (A_v)_{v∈{0,1}^d} indexed by the leaves. This can be encoded thanks to the notion of a splitting scheme, giving the covariate and threshold used at each internal node to perform the split.
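The completely random splitting-scheme distribution Q_0 introduced just below is straightforward to simulate. The following Python sketch (our own illustration; it assumes the common convention that the uniform variable gives the relative position of the split inside the current cell) grows the depth-d binary tree and returns the 2^d leaves (A_v)_{v∈{0,1}^d} as coordinate boxes:

```python
import numpy as np

def completely_random_partition(p, d, rng=None):
    """Sample a splitting scheme from Q_0 (beta = 0): each internal node of
    the depth-d binary tree splits on a uniform coordinate j at a uniform
    relative position, partitioning [0,1]^p into 2^d rectangular leaves
    indexed by binary words v in {0,1}^d."""
    rng = np.random.default_rng(rng)
    leaves = {(): (np.zeros(p), np.ones(p))}  # root region A_empty = [0,1]^p
    for _ in range(d):
        new_leaves = {}
        for v, (lo, hi) in leaves.items():
            j = rng.integers(p)                          # uniform coordinate
            u = lo[j] + rng.random() * (hi[j] - lo[j])   # uniform split point in the cell
            lo0, hi0 = lo.copy(), hi.copy(); hi0[j] = u  # child A_{v0}
            lo1, hi1 = lo.copy(), hi.copy(); lo1[j] = u  # child A_{v1}
            new_leaves[v + (0,)] = (lo0, hi0)
            new_leaves[v + (1,)] = (lo1, hi1)
        leaves = new_leaves
    return leaves  # dict: leaf word v -> (lower corner, upper corner)
```

For β > 0, the same tree-growing loop applies, but each split is drawn with the softmax rule of Algorithm 2.2 instead of a single uniform proposal, which is exactly what tilts Q_0 into the distribution Q_{n,F} below.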
When β = 0, the softmax distribution (5) is the uniform distribution, so that the splits (j_v, u_v) are independent and uniform on [[1, p]] × (0, 1). This situation corresponds to a completely random tree and we denote by Q_0 the distribution of the associated splitting scheme. When β > 0, the distribution of the splitting scheme ξ depends on the sample distribution P_n and the model F, and is denoted by Q_{n,F}. According to Proposition 2.1 in Dombry and Duchamps (2021), softmax binary splitting as defined in Algorithm 2.2 implies that Q_{n,F} is absolutely continuous with respect to Q_0, with Radon-Nikodym derivative dQ_{n,F}/dQ_0(ξ) given by a product, over the internal nodes, of K times the softmax weight of the selected split among its K proposals,  (13) with (A_v)_{v∈T_d} the different regions induced by the splitting scheme ξ and ∆(j, u; A) the score resulting from the split of region A at (j, u). In this formula, the different scores ∆ implicitly depend on F and on the sample distribution P_n according to Equation (7). Note that in Equation (13), each factor in the product is bounded by K, so that the Radon-Nikodym derivative (13) satisfies dQ_{n,F}/dQ_0(ξ) ≤ K^{2^d − 1}.  (14) Once the distribution of the splitting scheme is made explicit, one can easily deduce an integral formula for the infinitesimal boosting operator defined by (8). Indeed, Equations (2) and (6) together imply that, conditionally on the splitting scheme, T(x; P_n, F) = Σ_{v∈{0,1}^d} r̃(A_v) 1_{A_v}(x), where (A_v)_{v∈{0,1}^d} denotes the random partition driven by a splitting scheme with distribution Q_{n,F}. Integrating with respect to the splitting scheme, we deduce T_n(F)(x) = ∫ Σ_{v∈{0,1}^d} r̃(A_v) 1_{A_v}(x) Q_{n,F}(dξ).  (15) Remark 3.1. In Definition 2.9, Algorithm 2.3 is also considered when the sample distribution P_n is replaced by the population distribution P. Equations (13) and (15) still hold true with straightforward modifications (P_n replaced by P). Norm estimate and regularity in W^q The space W^q was introduced at the end of Section 2.1 with no details on the reference measure π_0. We now provide further details as well as useful properties. Recall that, for q ∈ [1, +∞], W^q denotes the space of functions F : [0, 1]^p → R admitting an integral representation with density f_F ∈ L^q(π_0), where π_0 is a reference probability distribution on [0, 1]^p. Naturally, W^q is endowed with the norm ‖F‖_{W^q} = ‖f_F‖_{L^q(π_0)}. The reference probability distribution π_0 is related to completely random trees and to their splitting scheme distribution, denoted by Q_0. A splitting scheme ξ induces a partition of [0, 1]^p whose corner points define a point measure π_ξ, seen as a point process on [0, 1)^p under the distribution Q_0(dξ). We denote by π̃_0 its intensity measure and by π_0 the normalized probability distribution, that is, π_0 = π̃_0 / π̃_0([0, 1]^p).  (16) Note that in Dombry and Duchamps (2021, Section 4.4) the normalization was not introduced, but it is useful to simplify some formulas. In view of Equation (15), the following technical lemma will be crucial to obtain norm estimates in W^q for the gradient boosting operator. Lemma 3.2. Let T be a tree function whose leaf values are uniformly bounded. Then, for all q ∈ [1, ∞], T ∈ W^q and ‖T‖_{W^q} is controlled by the bound on the leaf values, up to a constant depending only on p and d. Next we describe a regularity property of functions in W^q. Recall the definition of the measure π_0 in (16). It is interesting to note, as a result of Dombry and Duchamps (2021), that π_0 is absolutely continuous with respect to a measure built from the Lebesgue measures Leb_{J,ε}, where Leb_{J,ε} is the |J|-dimensional Lebesgue measure on the subspace of points whose coordinates outside J are fixed. This is the key to estimate the modulus of continuity of functions in W^q. Proposition 3.3. Let q ∈ (1, ∞] and F ∈ W^q. Then F is continuous on [0, 1]^p and its modulus of continuity satisfies ω_F(δ) ≤ C ‖F‖_{W^q} δ^{1/q′}, where q′ = q/(q − 1) and C is a constant that depends only on p and d.
Properties of infinitesimal boosting operators Recall that the infinitesimal boosting operator T_n is defined in Equation (8) and its infinite population version T in Definition 2.9. Explicit formulas involving the splitting scheme are provided in Section 3.1. Together with Lemma 3.2, Equations (13)-(15) are crucial in our analysis of the infinitesimal boosting operators. Proposition 3.4. For every F ∈ B, T_n(F) ∈ W^∞ and T(F) ∈ W^∞. Furthermore, when restricted to an arbitrary bounded set, the mappings T_n : B → W^∞ and T : B → W^∞ are bounded and Lipschitz-continuous. Note that the result for T_n was already stated in Dombry and Duchamps (2021, Lemma 5.4) and we extend it here naturally to T. In this former work, the gradient boosting operator T_n and the gradient boosting process (F̂^n_t)_{t≥0} were considered for a fixed input sample (x, y) = (x_i, y_i)_{1≤i≤n}. In this paper, we consider a random independent sample of size n and we denote by T_n(·; x, y) and F̂^n_t(·; x, y) the corresponding random operator and random process. Therefore we need to check the measurability (and even prove the continuity) of T_n(·; x, y) and F̂^n_t(·; x, y) as functions of the input sample (x, y). Recall that we denote by C_bb(B, W^q) the space of continuous functions B → W^q that are bounded on bounded sets, endowed with the topology of uniform convergence on bounded sets. We also endow C([0, ∞), W^q) with the topology of uniform convergence on compact intervals. Proposition 3.5. (i) The map (x, y) ↦ T_n(·; x, y) ∈ C_bb(B, W^q) is continuous. (ii) The map (x, y) ↦ (F̂^n_t(·; x, y))_{t≥0} ∈ C([0, ∞), W^q) is continuous. Glivenko-Cantelli classes Our main results, Theorems 2.10 and 2.13, state the almost sure convergence of the infinitesimal boosting operator and infinitesimal gradient boosting process as the sample size tends to infinity. The main technical tool for our proof is the notion of Glivenko-Cantelli classes of functions. Following van der Vaart and Wellner (1996, Section 19.2), a class F of measurable functions f : [0, 1]^p × R → R is called P-Glivenko-Cantelli if sup_{f∈F} |P_n[f] − P[f]| →(as*) 0, where P_n is the empirical measure associated with an i.i.d. sample (X_i, Y_i)_{i≥1} with distribution P. The notation →(as*) 0 stands for almost sure convergence in outer probability, which is introduced to handle the possible nonmeasurability of the supremum. In the context of gradient boosting, the following result will be useful. We denote by A the class of hyperrectangles of the form A = [a, b] for some a, b ∈ [0, 1]^p, a ≤ b. Proposition 3.6. Let q ∈ (1, ∞] and B ⊂ W^q a bounded set. The classes of functions F_1 = {(x, y) ↦ (∂L/∂z)(y, F(x)) 1_A(x) : F ∈ B, A ∈ A} and F_2 = {(x, y) ↦ (∂²L/∂z²)(y, F(x)) 1_A(x) : F ∈ B, A ∈ A} are P-Glivenko-Cantelli. Proofs related to Section 3 4.1 Proof of Lemma 3.2 and Proposition 3.3 Proof of Lemma 3.2. First consider a point measure ν supported by finitely many atoms and let us show that, for all q ∈ [1, ∞], its associated function belongs to W^q with a controlled norm. It suffices to show the result for q < ∞, since the case q = ∞ is then obtained by taking the limit q → ∞. Therefore, we fix q ∈ [1, ∞) and let q′ ∈ (1, ∞] be such that 1/q + 1/q′ = 1. We use the duality between L^q(π_0) and L^{q′}(π_0), more precisely a dual representation of the norm, where C is the total mass of π̃_0(dz) = ∫ π_ξ(dz) Q_0(dξ). For both of the inequalities involved, we used Hölder's inequality, applied to π_ξ and then to Q_0. By definition of π_ξ, for every splitting scheme ξ of depth d, π_ξ has total mass at most 2^{p+d}, therefore C ≤ 2^{p+d}. This proves (17). To prove the lemma, note that for a splitting scheme ξ with associated partition (A_v)_{v∈{0,1}^d}, there exists for each v a point measure ν_{A_v} supported by the corner points of A_v (see Dombry and Duchamps (2021), Proposition 3.3). By definition of π_ξ, ν_{A_v} is absolutely continuous with respect to π_ξ, and |dν_{A_v}/dπ_ξ(z)| ≤ 1 for all z in its support. Then, if T is a tree function written as a combination of the 1_{A_v}, the measure ν_T defined accordingly satisfies the assumptions above. Therefore, we can bound ‖T‖_{W^q}, which concludes the proof. For the proof of Proposition 3.3, the following technical lemma will be useful. Lemma 4.1. Let S_{a,b} denote the slab of points whose first coordinate lies in [a, b). Then there is a constant C′ that depends on d and p such that the expected number of atoms of π_ξ in S_{a,b} is at most C′(b − a).  (18) As a consequence, if x, y ∈ [0, 1]^p, we have π̃_0([0, x] △ [0, y]) ≤ C max_j |x_j − y_j|,  (19) where C = pC′ and △ denotes the symmetric difference. Proof.
Consider ξ a splitting scheme of depth d drawn according to Q_0, and let us fix v ∈ {0, 1}^d a leaf of the discrete binary tree. This leaf v corresponds to a unique chain of random rectangular sets that correspond to the subsequent splits along the branch ending in v. Let us define the following quantity: • Let N_v be the number of atoms of π_ξ in S_{a,b} that are caused by the splits along the branch ending in v. We want to bound E[N_v] from above, and since the number of atoms caused by d splits is at most d2^p, we have a crude a priori bound. The right-hand side (note that we have v = 0 there) will be easily bounded further in the proof. We prove this by induction on the decreasing value of k. First, for k = d − 1, note that the claim holds: the conditioning Π_1(A^{d−1}_v) = [r, s) implies that no splits up to stage d − 1 may have caused atoms in S_{a,b}, so the only possibility for an atom is for the last split to be on the first coordinate, with splitting value between a and b. This expression clearly shows that (20) is satisfied (actually with an equality) for k = d − 1. Let us now proceed with the induction: for 0 ≤ k < d − 1, we compute the conditional expectation by distinguishing according to the next split. The key to deriving this is to recall that for v_{k+1} = 1, if the split along the first coordinate arrives at r′ ≥ b, then Π_1(A^{k+1}_v) = [r′, s) and, conditional on that, the probability that N_v is positive is null. Now we can use the induction hypothesis: the first of the two integrals obtained can be bounded by the induction bound. The second integral can be bounded by the same term using the same technique, so regardless of the value of v_{k+1}, we get the desired estimate. To get the equality, we simply applied (21) to Q^k_0(0, b − a, 0, s − r). So the induction is proven and (20) follows. In particular, E[N_0] is bounded by a quantity involving only the splits of the first coordinate. This last quantity is finally bounded from above: in the worst case scenario, we split only the first coordinate, so after k splits A^k_0 has first coordinate of the form [0, U_1 · · · U_k), where the U_i are i.i.d. uniform on [0, 1]. Therefore, U_1 · · · U_k is distributed as e^{−Γ_k}, where Γ_d follows a Γ(d, 1) distribution. It follows that there is a constant C′′ that depends only on d such that (18) holds under Q_0. To show (19), consider x, y ∈ [0, 1]^p and set a_i = x_i ∧ y_i and b_i = x_i ∨ y_i; applying (18) to each of the S_{a_i,b_i} completes the proof. We can now prove Proposition 3.3. Proof of Propositions 3.4 and 3.5 In order to ease the proof of Propositions 3.4, 3.5 and Theorem 2.10, and to treat in a unified way the finite sample case (associated with the measure P_n) and the infinite population case (associated with the measure P), we introduce some further notation. Let M be the set of probability measures µ on the space [0, 1]^p × R satisfying Assumptions 2.4 to 2.7 (when seen as joint distributions for a pair (X, Y) of random variables). The key for factorizing the proof is that M contains any empirical distribution, so that P_n, P ∈ M. We use the short notation µ[G(x, y)] = ∫ G(x, y) µ(dx dy). Quite generally, we may consider Algorithms 2.2 and 2.3 when P_n is replaced by a generic measure µ ∈ M. The corresponding gradient tree is written T(x; µ, F) and the infinitesimal boosting operator T_µ (see Definition 2.9). In particular, T = T_P and T_n = T_{P_n}. All the quantities and notation introduced so far can be adapted in a straightforward way: the leaf values (6) become r̃_µ(F, A), where we have added the dependency on F (so that r̃(A) = r̃_{P_n}(F, A)); the score (7) of a region A along variable j at threshold u becomes ∆_µ(j, u; A), where the third, irrelevant term has been removed; the distribution of the splitting scheme is written Q^µ_F, and Equations (13)-(15) are readily modified.
In particular, Equation (15) becomes T_µ(F)(x) = ∫ Σ_{v∈{0,1}^d} r̃_µ(F, A_v) 1_{A_v}(x) Q^µ_F(dξ).  (22) We can see that this is exactly the form required in Lemma 3.2 to get W^q-norm estimates. Proof of Proposition 3.4. The statement concerns T_µ with either µ = P or µ = P_n, and our proof holds for a generic µ ∈ M. It is convenient to first state two technical lemmas. Lemma 4.2. When restricted to a bounded set, the map F ∈ B ↦ r̃_µ(F, A) is bounded and Lipschitz, with constants that do not depend on A. Proof. Let us fix M > 0. We note B_M := {F ∈ B, ‖F‖_∞ ≤ M} the ball with radius M and consider F, G ∈ B_M. With the notation of Assumption 2.7, we can express r̃_µ(F, A) in terms of the functions h_i integrated over A, with µ_X the marginal distribution of X. Using this and putting everything on the same denominator, we can bound the difference r̃_µ(F, A) − r̃_µ(G, A). By Assumption 2.7, there is a constant C > 0 that depends only on M such that for all i ∈ {1, 2} the relevant terms are within C‖F − G‖_∞. We thus easily obtain the Lipschitz estimate, which concludes the proof. Lemma 4.3. The Radon-Nikodym derivative dQ^µ_F/dQ_0(ξ) is bounded by K^{2^d − 1} and, when restricted to a bounded set, the map F ↦ dQ^µ_F/dQ_0(ξ) is Lipschitz-continuous, with a constant that does not depend on ξ. Proof. Recall the expression for dQ^µ_F/dQ_0 that can be deduced from (13): it is a product, over internal nodes, of softmax factors evaluated at the scores of the proposals, where A^k_{v0} and A^k_{v1} are the regions resulting from the corresponding proposal split; each softmax factor is bounded by 1. Hence the product in (23) has 2^d − 1 factors bounded by K, so that the bound K^{2^d − 1} is clear. On the other hand, the softmax function is 1/2-Lipschitz-continuous for the uniform norm. It is therefore sufficient to show that the maps F ↦ ∆_µ(j^k_v, u^k_v; A_v) are bounded and Lipschitz-continuous on bounded subsets of B, with constants that do not depend on (j^k_v, u^k_v) and A_v. This follows from Assumption 2.7 with a similar argument as in the previous proof. We are now ready to prove Proposition 3.4. Proof of Proposition 3.4. By Equation (22), Lemma 3.2 applies: let M > 0 and assume F, G ∈ B_M. Lemma 3.2, combined with the two lemmas above, implies that T_µ is also bounded and Lipschitz on B_M, thus concluding the proof. Proof of Proposition 3.5. We first state a deterministic convergence lemma for sequences of infinitesimal gradient boosting operators. This lemma will be key in the proof of Proposition 3.5 and also in the proof of Theorem 2.10 in an upcoming section. Lemma 4.4. Let µ, (µ_k)_{k≥1} be distributions in M. Assume that there exist a bounded subset B ⊂ B and q ≥ 1 such that: (i) the leaf values converge, sup_{F∈B} |r̃_{µ_k}(F, A_v) − r̃_µ(F, A_v)| → 0, for Q_0-almost every splitting scheme and every leaf v; (ii) the same convergence holds for the densities dQ^{µ_k}_F/dQ_0; (iii) the leaf values satisfy a uniform integrability condition. Then sup_{F∈B} ‖T_{µ_k}(F) − T_µ(F)‖_{W^q} → 0 as k → ∞. Proof. Similarly as for Equation (25), Lemma 3.2 entails the required bound. We can now use the result above to prove Proposition 3.5. Proof of Proposition 3.5 (i). Let n ≥ 1 be fixed, consider an input sample (x, y) and a sequence of input samples (x^k, y^k) → (x, y), and write µ for the empirical distribution associated with (x, y) and µ_k for the empirical distribution associated with (x^k, y^k). It is easily checked that µ, µ_k satisfy Assumptions 2.4 to 2.7 (with any q > 1 in Assumption 2.5). We want to show that for any q ≥ 1 and any bounded subset B ⊂ B, we have sup_{F∈B} ‖T_{µ_k}(F) − T_µ(F)‖_{W^q} → 0. This is easily proven thanks to Lemma 4.4. The assumptions (i)-(iii) of the lemma are easily verified. Point (iii) is satisfied because {(x, y)} ∪ ∪_{k≥1} {(x^k, y^k)} is necessarily a compact subset of [0, 1]^{pn} × R^n, so by a continuity argument the relevant suprema are finite. For (i)-(ii), it is clear that the convergence holds for splitting schemes ξ such that none of the (x_i)_{1≤i≤n} is at the frontier of a leaf. This event has null Q_0-probability since the splits of a completely random splitting scheme are uniform. Therefore the lemma applies, concluding the proof. Proof of Proposition 3.5 (ii). We write respectively (F̂^n_t)_{t≥0} and (F̂^n_t′)_{t≥0} for the infinitesimal gradient boosting processes based on the input samples (x, y) and (x′, y′). Since (F̂^n_t)_{t≥0} is the solution of the ODE (9), it satisfies the corresponding integral equation, and similarly for F̂^n_t′. We deduce a bound on their difference by the triangle inequality. Let the time horizon T > 0 be fixed, let B_T ⊂ W^q be a bounded set containing F̂^n_u and F̂^n_u′ for u ∈ [0, T], and let C_T be the Lipschitz constant on B_T of the locally Lipschitz map T_n.
These bounds together with Equation (26) imply a control of sup_{t≤T} ‖F̂^n_t′ − F̂^n_t‖_{W^q}, and Grönwall's Lemma finally yields that this supremum is bounded by a constant times sup_{u≤T} ‖T_n′(F̂^n_u) − T_n(F̂^n_u)‖_{W^q} plus ‖F̂^n_0′ − F̂^n_0‖_{W^q}. By point (i) proven above, the first term tends to 0 as (x′, y′) → (x, y). Recall that the initialization is constant and given by F̂^n_0 = argmin_{z∈R} P_n[L(y, z)]. Therefore, in a neighborhood of y, the implicit function theorem implies the continuity of the map y ↦ F̂^n_0. Note that the theorem can be applied since, by Assumption 2.4, L is C² with ∂²L/∂z² > 0. We deduce that ‖F̂^n_0′ − F̂^n_0‖_{W^q} = cst × |F̂^n_0′ − F̂^n_0| → 0, proving the result. Proof of Proposition 3.6 The following lemma will be useful for the proof of Proposition 3.6. Before stating and proving it, let us define the notions of brackets and envelope functions, as they are used in e.g. van der Vaart and Wellner (1996, Section 2.4). Consider X a separable complete metric space endowed with its Borel σ-field, and let P denote a probability measure on X. For any ε > 0, an ε-bracketing of a family F of measurable functions f : X → R is a collection of pairs (l_i, u_i)_{i∈I} of P-integrable functions X → R satisfying l_i ≤ u_i and P[u_i − l_i] ≤ ε for all i ∈ I, and such that each f ∈ F satisfies l_i ≤ f ≤ u_i for some i ∈ I. If for all ε > 0 such an ε-bracketing can be found with a finite index set I, we say that F has finite bracketing numbers for P. Finally, recall that A denotes the class of hyperrectangles of the form [a, b] ⊂ [0, 1]^p. Lemma 4.5. If G is a class of measurable functions g : [0, 1]^p × R → R with finite bracketing numbers for P and with a P-integrable envelope function G, then the class of functions G · A = {(x, y) ↦ g(x, y) 1_A(x) : g ∈ G, A ∈ A} has finite bracketing numbers and therefore is P-Glivenko-Cantelli. Furthermore, for a fixed ε > 0, if δ(ε) > 0 denotes a value such that for each Borel set B ⊂ [0, 1]^p with P_X(B) ≤ δ(ε) we have P[G 1_B] ≤ ε, then the brackets can be chosen accordingly. Proof. It is classical that the class of indicator functions of rectangles has finite bracketing numbers for any probability measure on [0, 1]^p, but we nevertheless recall the argument in order to fix some notation. For all ε > 0 and each coordinate j, one can choose finitely many points 0 = a^j_0 ≤ a^j_1 ≤ · · · ≤ a^j_{k_j} = 1 splitting the mass of the j-th marginal into pieces of size at most ε. Now consider the subset A_ε ⊂ A of rectangles A = [a, b] such that for each j, the j-th coordinates of a and b are among the (a^j_l, 0 ≤ l ≤ k_j). Note that the cardinality of A_ε is no greater than Π_j k_j(k_j + 1)/2 = O(ε^{−2p}). For each rectangle A ∈ A, let us define Ā as the smallest rectangle of A_ε containing A, and A̲ as the largest rectangle of A_ε included in A whose boundary is disjoint from that of A, with the convention A̲ = ∅ if there is no such rectangle in A_ε. Then it is clear that 1_{A̲} ≤ 1_A ≤ 1_{Ā}. In the rest of the proof, with a slight abuse, we call the pair (A̲, Ā) an ε-bracketing of A. For all ε > 0, let us choose G_ε a finite ε-bracketing of G. We fix ε > 0 and, using the fact that G is P-integrable, we define δ = δ(ε) as in the statement of the lemma. Consider g ∈ G and A ∈ A. Let (g̲, ḡ) ∈ G_{ε/3} be such that g̲ ≤ g ≤ ḡ, and let (A̲, Ā) be a δ-bracketing of A. Then one checks that suitable combinations of (g̲, ḡ) and (A̲, Ā) bracket g 1_A. Therefore, we have found a class of functions that bracket G · A with an L¹(P)-precision ε, and there are at most |G_{ε/3}||A_δ| such functions. It remains to argue that G · A is P-Glivenko-Cantelli: this is easily deduced from the fact that it has finite bracketing numbers; see van der Vaart and Wellner (1996, Theorem 2.4.1). Proof of Proposition 3.6. We aim to apply Lemma 4.5 twice. Indeed, note that, with the lemma's notation, the sets F_1 and F_2 are of the form F_i = F′_i · A, where F′_1 = {(x, y) ↦ (∂L/∂z)(y, F(x)) : F ∈ B} and F′_2 = {(x, y) ↦ (∂²L/∂z²)(y, F(x)) : F ∈ B}. It then suffices to show that the bracketing numbers of F′_1 and F′_2 are finite, and that these classes of functions have a P-integrable envelope. Since the proof is the same for the two classes of functions, we show this only for F′_1. Let us define M = sup_{F∈B} ‖F‖_{W^q} and q′ = q/(q − 1).
Note that for all F ∈ B we have ‖F‖_∞ ≤ ‖F‖_{W^q} ≤ M, and by Proposition 3.3 all F ∈ B have a common modulus of continuity, with a constant C that depends only on d and p. Since by Assumption 2.4 (∂L/∂z)(y, z) and (∂²L/∂z²)(y, z) are locally Lipschitz, the functions in F′_1 are uniformly bounded on [0, 1]^p × [−m, m] and, on this compact space, have a common modulus of continuity satisfying the same bound as in (27). Now it is readily checked that suitable discretizations define a finite class of functions such that for each g ∈ F′_1 there exist g̲, ḡ of the previous form satisfying g̲ ≤ g ≤ ḡ with small L¹(P)-gap. This concludes the proof. Proof of Theorem 2.10 Proof of Theorem 2.10. We aim to use Lemma 4.4 and need to check that the conditions (i)-(iii) of the lemma hold almost surely for µ_k = P_k and µ = P. We first show (i). Note that for any hyperrectangle A such that P(x ∈ A) > 0, Proposition 3.6 ensures the almost sure convergence r̃_n(F, A_v) → r̃(F, A_v), uniformly over F ∈ B. Furthermore, for any A such that P(x ∈ A) = 0, we have r̃_n(F, A_v) = 0 and r̃(F, A_v) = 0 almost surely. Therefore (i) is satisfied. The proof of (ii), which consists in showing the analogous convergence for the densities, is exactly the same. We now show (iii). This trivially holds under Assumption 2.6 (i), because it implies that the ratios r̃_n(F, A) are uniformly bounded. Under Assumption 2.6 (ii), instead, we can define a dominating quantity whose supremum is equal in distribution to (1/n) Σ_{i=1}^n g(Ỹ_i), where g(y) = sup_{z∈K} |(∂L/∂z)(y, z)| and the (Ỹ_i)_{i≥1} are i.i.d. with the conditional distribution of Y given x ∈ A. It is classical (see e.g. Durrett, 2010, Example 5.6.1) that ((1/n) Σ_{i=1}^n g(Ỹ_i))_{n≥1} is a backwards martingale, and Doob's inequality (Durrett, 2010, Theorem 5.4) gives a maximal bound. Now, by Assumption 2.5, we can bound E[g(Ỹ)^q] by a constant C that does not depend on A, so the maximal bound is uniform over regions. Taking the integral with respect to ξ, this shows that almost surely (iii) is satisfied, concluding the proof. Proof of Proposition 2.11 and of Lemma 2.12 Proposition 2.11 is immediate from the fact that T : W^∞ → W^∞ is locally Lipschitz, which is a straightforward consequence of Proposition 3.4. Proof of Lemma 2.12. By providing control on the leaf values of a softmax gradient tree, Equation (12) ensures that for F ∈ B we have ‖T(F)‖_{W^∞} ≤ A‖F‖_∞ + B. Therefore, from any initial condition F_0 ∈ B, a standard Grönwall-type argument shows that the norm of F̂_t, hence the norm of T(F̂_t), cannot explode in finite time; therefore the maximal time of definition of F̂_t is t_max = +∞. Proof of Theorem 2.13 Proof of Theorem 2.13. The proof shares similarities with the proof of Proposition 3.5 (ii) and relies on Grönwall's Lemma. First let us show that F̂^n_0 → F̂_0 almost surely. According to Assumption 2.4, the map z ↦ E[L(Y, z)] has a unique minimizer F̂_0, which must be the unique zero of the derivative z ↦ E[(∂L/∂z)(Y, z)] (note that Assumption 2.5 ensures that one can differentiate under the expectation). The maps z ↦ P_n[L(y, z)], n ≥ 1, are strictly convex and, by the law of large numbers, their derivatives converge almost surely, so that for all ε > 0 they change sign on ]F̂_0 − ε, F̂_0 + ε[ for n large enough. This ensures that, for n large enough, F̂^n_0 ∈ ]F̂_0 − ε, F̂_0 + ε[ and, ε > 0 being arbitrary, proves the almost sure convergence F̂^n_0 → F̂_0. Now, note that for all t ∈ [0, t_max), both F̂^n_t and F̂_t satisfy integral equations driven by T_n and T respectively. Taking the difference and applying the triangle inequality, we deduce a bound whose integrand in the right-hand side is controlled by the Lipschitz property of T. Let us now fix a time horizon T ∈ (0, t_max) and show that sup_{t∈[0,T]} ‖F̂^n_t − F̂_t‖_{W^q} → 0 almost surely.
Define: … where $B(R+1)$ denotes the closed ball of $W^q$ centered on $0$ and of radius $R+1$. For $n \ge 1$, define $S_n = \max\{s \in [0, T] : \forall m \ge n,\ \hat{F}^m_s \in B(R+1)\}$. Note that by definition the $(S_n)_{n \ge 1}$ are nondecreasing. We will show that: (i) …; (ii) for all $n \ge 1$, there almost surely exists $n' \ge n$ such that $S_{n'} \ge (S_n + \delta_M) \wedge T$, for $\delta_M = (2(M+1))^{-1}$. Since the second point implies that there almost surely exists $n \ge 1$ such that $S_n = T$, by the first point we will have: … which proves the proposition. Let us now show (i) and (ii). Let $C$ denote the Lipschitz constant on $B(R+1)$ of the locally Lipschitz map $T$, and for $n \ge 1$ let us define: … which tends to $0$ almost surely by Theorem 2.10. By (28) and (29), for $t \in [0, S_n]$ we have: … Therefore we have: … This shows (i). To prove (ii), note that for any fixed $n \ge 1$ there is a random index $n' \ge n$ such that: … This implies that for all $m \ge n'$, $\|\hat{F}^m_{S_n}\|_{W^q} \le R + \frac{1}{2}$, and furthermore that for all $t \in [S_n, S_n + \delta_M]$ we have $\|\hat{F}^m_t - \hat{F}^m_{S_n}\|_{W^q} \le \frac{1}{2}$. Therefore we have $\hat{F}^m_t \in B(R+1)$ for all $m \ge n'$ and all $t \in [0, S_n + \delta_M]$; in other words, $S_{n'} \ge (S_n + \delta_M) \wedge T$. So we have shown (ii), and this completes the proof.

Proof of Proposition 2.14 and Proposition 2.15

Proof of Proposition 2.14. We differentiate $E[L(Y, \hat{F}_t(X))]$ with respect to time. To see that we can differentiate under the expectation, note that for a fixed $t \in (0, t_{\max})$ and any $s \in [0, t]$ we have: … where the $(A_v)$ are the leaves of a regression tree $\xi$ based on $\hat{F}_t$, which concludes the proof.

Proof of Proposition 2.15. We leave the computation, which is very similar to the one in the proof above, to the reader; we get: … which implies the result.

Proof of Proposition 2.16 and Proposition 2.18

Proof of Proposition 2.16. First, note that (ii) clearly implies (i), because (ii) implies that the leaf values of a softmax gradient tree are always null. (i) $\Rightarrow$ (ii): if $T(F) = 0$, then $E[\frac{\partial L}{\partial z}(Y, F(X))\, T(X; F)] = 0$, and this expectation was computed in (30) to be equal to: … This shows that for $Q_0$-almost every splitting scheme $\xi$ and all $v \in \{0,1\}^d$, the non-negative value $P[\frac{\partial L}{\partial z}(y, \hat{F}_t(x))\, 1_{A_v}(x)]^2$ must be zero. Under $Q_0$, the splitting scheme $\xi$ is completely random, i.e., the regions $A_v$ are obtained by successively making a series of $d$ uniform splits. Therefore we have: … where Leb denotes the Lebesgue measure and $\triangle$ the symmetric difference. A standard continuity argument shows that it must also hold for all $A \in \mathcal{A}_d$. (ii) $\Leftrightarrow$ (iii): let $J \subset \{1, \dots, p\}$ with $|J| \le d$, and define $\mathcal{A}_J \subset \mathcal{A}_d$ as the sets $A$ of the form $[a, b]$ for which $(a_j, b_j) = (0, 1)$ for all $j \notin J$. Clearly the $\sigma$-field generated by $\mathcal{A}_J$ is the one that makes the map $x \in [0,1]^p \mapsto x_J$ measurable. Therefore, if $E[\frac{\partial L}{\partial z}(Y, F(X))\, 1_A(X)] = 0$ for all $A \in \mathcal{A}_J$, then $E[\frac{\partial L}{\partial z}(Y, F(X)) \mid X_J] = 0$ almost surely, and reciprocally.

In the following we focus on the context of regression. Let us recall that we consider $L^2 = L^2([0,1]^p, P_X)$, where $P_X$ denotes the distribution of $X$ under $P$, endowed with its usual scalar product $\langle \cdot, \cdot \rangle$. Recall also that we define $L^2_d = \mathrm{span}(1_A,\ A \in \mathcal{A}_d)$. The following lemma is key to proving Proposition 2.18; it shows a convergence in the weak topology of $L^2$, i.e., the coarsest topology for which, for each $g \in L^2$, the map $f \mapsto \langle f, g \rangle$ is continuous. Recall that we defined $F^* = E[Y \mid X] \in L^2$; this is the so-called target function of the problem of regression.

Lemma 5.1. The map … is weakly continuous on bounded subsets of $L^2$, and is null only on the affine space …

Proof.
The second part of the lemma is easily shown: similarly as in the proof of Proposition 2.16 above, it is clear that $\varphi(F) = 0$ if and only if $\langle F^* - F, 1_A \rangle = 0$ for all $A \in \mathcal{A}_d$, and this is equivalent to $F \in F^* + (L^2_d)^\perp$. To prove the first part, observe that the functions $1_A$ are in the unit ball of $L^2$, for any Borel subset $A \subset [0,1]^p$. Also, in the case of regression, note that for a Borel subset $A \subset [0,1]^p$ we have: … Therefore, the scores (24) used in Algorithm 2.2 can be written: … where the regions $A_0$ and $A_1$ are the result of the $(j, u)$-split of the region $A$, and where $\tilde{1}_A$ denotes the normalization of $1_A$ in $L^2$. Expressing $\frac{dQ_F}{dQ_0}(\xi)$ in terms of the scores, as in (23) with $\mu = P$, it is clear that the map $\varphi$ is of the form: … for some $k \ge 1$, where $\mu$ is a probability measure on $B^k$ and $\psi : \mathbb{R}^k \to \mathbb{R}$ is continuous. Let us fix a bounded domain $D$ of $L^2$ (recall that we are interested in showing that $\varphi$ is weakly continuous on bounded subsets of $L^2$) and consider our map $\varphi$ as a function of $F \in D$. We then have, almost surely, $\langle F^* - F, g_i \rangle \in [-C, C]$ for all $i$, for some constant $C > 0$, so we can assume without loss of generality that $\psi$ is bounded and uniformly continuous, with a modulus of uniform continuity defined by $\omega_\psi(\varepsilon) = \sup\{|\psi(x) - \psi(y)| : x, y \in [-C, C]^k \text{ with } \|x - y\|_\infty \le \varepsilon\}$. Let us now fix $\varepsilon > 0$ and consider a finite set $G = \{g_1, \dots, g_m\} \subset B$ such that: … Note that we can do this since $B$ is a Polish (metric and complete) space. Now, for any fixed $F \in D$, consider a weak neighborhood $V$ of $F$ such that for all $F' \in V$ and all $i \in \{1, \dots, m\}$, $|\langle F - F', g_i \rangle| < \varepsilon$. Then for all $F' \in V \cap D$ and $g \in G_\varepsilon \cap B$ we have $|\langle F - F', g \rangle| \le 3\varepsilon$, so that for all $F' \in V \cap D$: … This tends to $0$ as $\varepsilon \to 0$, so we have proved that $\varphi$ is weakly continuous on bounded subsets of $L^2$, and the proof is complete.

We state a final lemma before being able to prove Proposition 2.18; its second part is an interesting observation in itself, and it is convenient to prove it here, but it is the first part that will be useful for the following proof.

Lemma 5.2. In the context of regression, for all $F \in L^2$, $\|T(F)\|_{L^2} \le 2^d K^{2^d - 1} \|F - F^*\|_{L^2}$. In the Adaboost setting, for all $F \in B$: …

Proof. For the first part of the lemma, note that we can write: … Since we have $\frac{dQ_F}{dQ_0}(\xi) \le K^{2^d - 1}$, for any $g \in L^2$ we can bound: … Taking the supremum of this expression over $g$ in the unit ball of $L^2$ yields the result. For the second part of the lemma, note that in the context of Adaboost we have: … Therefore, similarly as above, we can directly bound $\|T(F)\|_\infty \le 2^d K^{2^d - 1}$.

Proof of Proposition 2.18, (i). By Proposition 2.14, $\|\hat{F}_t - F^*\|_{L^2}$ is decreasing in time, so we can fix $B \subset L^2$ a closed ball such that $\hat{F}_t \in B$ for all $t \ge 0$. Since closed balls in $L^2$ are compact for the weak topology, we know that $\hat{F}_t$ has weakly convergent subsequences. It is sufficient to show that any such subsequence must tend to $P_d(F^*)$. Let us fix $\ell$ an adherent point. We know that $\ell$ must be in $B$ because the norm is weakly lower semicontinuous, and furthermore it is readily seen that $L^2_d$ is weakly closed, so that $\ell \in L^2_d \cap B$. We assume by contradiction that $\ell \neq P_d(F^*)$, and compute: … By Lemma 5.1, the map … is continuous for the weak topology, nonnegative, and null only when $F = P_d(F^*)$; therefore $c := \varphi(\ell)$ is positive and there exists $V$ a weak neighborhood of $\ell$ such that for all $f \in V$, $\varphi(f) > c/2$. Up to taking a smaller neighborhood, we can assume that $V$ is of the form: … for some $g_1, \dots, g_k$ in the unit ball of $L^2$ and $\varepsilon > 0$.
Let us now define $W \subset V$ by: … Consider the set of times $T_V = \{t \ge 0 : \hat{F}_t \in V\}$, and define $T_W$ in an analogous way. Since $V$ and $W$ are both neighborhoods of $\ell$, these sets are unbounded. Furthermore, if $t \in T_W$, then for each $u \ge 0$ and all $i \in \{1, \dots, k\}$ we have $|\langle \hat{F}_t - \ell, g_i \rangle - \langle \hat{F}_u - \ell, g_i \rangle| \le |t - u|\, C$ for $C := \sup_{F \in B} \|T(F)\|_{L^2}$, which is finite by Lemma 5.2. Therefore, for all $t \in T_W$, if $|t - u| \le \frac{\varepsilon}{2C}$ then $u \in T_V$; in other words, $[t - \frac{\varepsilon}{2C}, t + \frac{\varepsilon}{2C}] \subset T_V$. Since $T_W$ is unbounded, this shows that the total time spent in $V$ is infinite. This is absurd, since we would have: … This concludes the proof.

Proof of Proposition 2.18, (ii). When $\beta = 0$ we have, for all $F \in B$: … and $(\hat{F}_t)_{t \ge 0}$ solves $\frac{d}{dt} \hat{F}_t = T(\hat{F}_t)$. It is clear that this extends to a continuous map $T : L^2 \to L^2$, and that: … is a bounded positive semi-definite linear operator. Furthermore, $G_t := \hat{F}_t - F^*$ satisfies $\frac{d}{dt} G_t = -L(G_t)$, so that $G_t = e^{-tL} G_0$, with $G_0 = \hat{F}_0 - F^*$. In the context of regression, notice that we can reformulate the equivalence of (i) and (ii) of Proposition 2.16 as: … In other words, we have $\ker(L) = (L^2_d)^\perp$. Therefore, if $P^\perp_d$ denotes the orthogonal projection on $(L^2_d)^\perp$, by decomposing $G_0 = P_d(G_0) + P^\perp_d(G_0)$ we get: … with $e^{-tL} P_d(G_0) \to 0$ in $L^2$, since the restriction of $L$ to $L^2_d$ is positive definite. Finally, we have: … in $L^2$, concluding the proof.
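As a closing remark, the mechanism behind this last argument is the standard behavior of a linear gradient flow; the following display is a generic recollection of that fact (with $L$, $G_0$, $P_d$ and $P^\perp_d$ as above), not a reproduction of the paper's omitted equations:

\[
\frac{d}{dt} G_t = -L G_t
\quad \Longrightarrow \quad
G_t = e^{-tL} G_0 = e^{-tL} P_d(G_0) + P^\perp_d(G_0),
\]

since $e^{-tL}$ acts as the identity on $\ker(L) = (L^2_d)^\perp$. Positive definiteness of $L$ on $L^2_d$ forces $e^{-tL} P_d(G_0) \to 0$, so that $G_t \to P^\perp_d(G_0)$, i.e., $\hat{F}_t \to F^* + P^\perp_d(\hat{F}_0 - F^*)$ in $L^2$.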
2022-10-04T06:42:08.599Z
2022-10-03T00:00:00.000
{ "year": 2022, "sha1": "1c7f333858563b0c62b6ef4a76ec4b1f0681cf24", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ba01bc2c5df8fafcc8e6bf01fb61496ce60ca902", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
250252914
pes2o/s2orc
v3-fos-license
Building a hybrid virtual cardiac rehabilitation program to promote health equity: Lessons learned

From the Ciccarone Center for the Prevention of Cardiovascular Disease, Division of Cardiology, Department of Medicine, Johns Hopkins University School of Medicine, Baltimore, Maryland, and The Welch Center for Prevention, Epidemiology and Clinical Research, Johns Hopkins University Bloomberg School of Public Health, Baltimore, Maryland.

Assemble a multidisciplinary team and technology platform

A diverse team with insightful perspectives was key to the design of our hybrid CR program. Our team included the CR medical director, program director, exercise physiologists, nurses, researchers, preventive cardiologists, engineers, compliance/legal teams, and frontline clinicians. Next, we identified the technology to deliver the virtual component of the CR program. We selected the Corrie Health digital platform (Corrie), 5 because it is a comprehensive, evidence-based, and health equity-focused 6 platform for patients with CVD. 7 Corrie is composed of a smartphone application (app) paired with a wireless blood pressure monitor and a smartwatch, connected to a clinician dashboard that supports GDMT for CVD secondary prevention. 1,5

Smartphone application

The app highlights 3 main pillars: (1) education on CVD risk factors, pathophysiology, and lifestyle modifications; (2) medication support with reminders and adherence tracking of GDMT; and (3) exercise and physical activity guidance designed to achieve individualized treatment plans. To promote app engagement, we provide motivational, weekly coaching check-ins where questions about educational content are addressed along with progress toward achieving healthy lifestyle goals. We also promote app engagement through an education feature where patients have the option to mark modules as completed once they have viewed the resources. Once all items are complete, patients are awarded a golden heart badge, acting as a gamification model for motivation.

Clinician dashboard

We developed a clinical dashboard that provides intuitive data visualization, including heart rate, blood pressure, steps, medication adherence, education completion, and exercise duration with pre- and post-exercise vitals. Patients are also able to view and share these data within the app.

Establish an equitable onboarding process

Patients are introduced to the app at the bedside, while inpatient, by a trained patient navigator. During an approximately 30-minute session, the patient and navigator download the app and complete basic setup together. Patients are asked to perform teach-back to ensure understanding. Navigators also assist with pairing devices and the patient's first vital signs measurement. Barriers to a hybrid CR model include socioeconomic status and technology and/or health literacy. We took steps to ensure equitable access by creating an iShare program, 8 which provides, at no cost, loaner devices to patients who do not own them. Health literacy was addressed by creating all educational materials at a sixth- or seventh-grade reading level. It was also supported at the start of the program and at in-center sessions, and reinforced virtually during weekly health coach check-ins. Patients were given access to technology tutorial videos, tailored to varying levels of digital literacy, that they could view at their own pace for supplementary support.
Throughout the 12-week program, starting from discharge, patients were offered additional technical support via weekly coaching calls or by e-mail. This flexible and dynamic approach (in-person instruction, instructional videos, and coaching check-ins) was designed to help patients get started quickly regardless of their technology or health literacy status. We learned this was crucial for increasing motivation and engagement.

Figure 1. Key steps to building a hybrid cardiac rehabilitation (CR) program to promote health equity.

KEY FINDINGS

Despite the overwhelming evidence in support of cardiac rehabilitation (CR) from the American Heart Association and the American Association of Cardiovascular and Pulmonary Rehabilitation, it has historically been underutilized. An innovative approach is needed to increase access to CR in an equitable and cost-effective manner. To increase CR access, we created a combination of center- and home-based sessions. We present lessons learned in developing and implementing our Hybrid CR program at Johns Hopkins. Key steps to building a hybrid CR program to promote health equity are as follows: (1) assemble a team and technology platform; (2) establish an equitable onboarding process; (3) gather feedback; (4) implement feedback; (5) evaluate clinical efficacy.

Gather feedback

To optimize user experiences, we engaged a diverse group of CR-eligible patients, caregivers, and clinicians, using purposeful sampling for recruitment in human-centered design (HCD) sessions. Our patient sample was 27% African American, 9% Asian, 18% Hispanic or Latino, and 55% female, with a median (interquartile range) age of 63 (56-66) years. The cohort of patients met 3 times over the course of 6 weeks for a total of 270 minutes. Session 1 focused on defining challenges patients and their caregivers faced after experiencing cardiac events, including barriers to CR participation. Session 2 featured brainstorming solutions to these challenges. In session 3, participants designed prototypes of top solutions. Between sessions 2 and 3, participants were asked to test the Corrie app and provide feedback using a written survey following the Systems Usability Scale. 9 To obtain clinician insights, we gathered 10 clinicians (nurse practitioners, cardiologists, exercise physiologists, and pharmacists) for a 90-minute roundtable discussion via Zoom on challenges encountered with engagement in CR. They addressed concerns about access, financial barriers, and limited patient education on the benefits of CR.

Implement feedback

HCD sessions provided structured feedback from patients that we implemented into our quality improvement (QI) program. We were successful in promoting equitable access within our QI program on hybrid CR, as demonstrated by the fact that the patients had a mean age of 59.2 (standard deviation: 10.4) years, 40% were female, 39% were of minority race/ethnicity, 58% were insured by Medicare/Medicaid, and 76% owned an Android. From these sessions many improvements were made, including creating a CR introductory video and digital instructional how-to videos, as well as implementing weekly coaching check-ins. We are creating a patient-centered animation to visually share the benefits of CR that will be presented to patients to introduce the program and promote participation. Similarly, clinicians provided feedback on the importance of making it easy to refer to the program. This prompted our development of an electronic health record-based order set to refer patients to hybrid CR.
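One concrete detail behind the usability testing described above: Systems Usability Scale responses reduce to a single 0-100 score by a fixed rule. The sketch below implements that standard scoring rule in Python; the example responses are invented for illustration and are not study data.

def sus_score(responses):
    """Score a 10-item Systems Usability Scale survey (each item 1-5).

    Standard rule: odd-numbered items contribute (value - 1),
    even-numbered items contribute (5 - value); the sum is scaled
    by 2.5, giving a score from 0 to 100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 10 responses, each between 1 and 5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Illustrative respondent only -- not data from the HCD sessions
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0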
Evaluate clinical efficacy

With support from the AHA Strategically Focused Research Network, we will be conducting a randomized clinical trial (Impact of a Mobile Technology Enabled Corrie Cardiac Rehabilitation Program on Cardiovascular Outcomes; mTECH REHAB) to test the efficacy of the Corrie Health digital platform in delivering a hybrid CR model. We will enroll 300 CR-eligible patients with CVD and evaluate the achievement of guideline-directed goals. Our primary outcome is the change in participants' functional capacity from discharge to 12 weeks post-discharge, as measured by the 6-minute walk test. At the completion of the program, both patients and clinicians will complete a survey to evaluate their satisfaction and the perceived burden of the intervention.

Conclusion

We are addressing CR underutilization by combining guideline-directed cardiovascular care and innovative technology to enable equitable access to CR. Learning from HCD and QI, we have optimized onboarding, app usability, and the delivery of coaching sessions to improve CR patient engagement. Adapting the program to scale requires a multidisciplinary team and easy-to-use, adaptable technology that delivers equitable and high-value care. These are key takeaways that have been important in creating a dynamic, patient-centered, and equitable hybrid CR program.

Funding Sources

This work will continue as part of the American Heart Association Strategically Focused Research Network (SFRN) on Health Technology and Innovation.

Disclosures

Erin Spaulding serves as a consultant to Corrie Health. Under a license agreement between Corrie Health and the Johns Hopkins University, the University owns equity in Corrie Health, and the University, Francoise Marvel, and Seth Martin are entitled to royalty distributions related to the technology described in the study discussed in this publication. Additionally, Francoise Marvel and Seth Martin are founders of and hold equity in Corrie Health. This arrangement has been reviewed and approved by the Johns Hopkins University in accordance with its conflict-of-interest policies.
2022-07-04T15:09:36.833Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "9f60cec22772aa7fd400beae1f23a4c107e1bdcf", "oa_license": "CCBYNCND", "oa_url": "http://www.cvdigitalhealthjournal.com/article/S2666693622000421/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fd39a74a3d942d385f8338543f1b69c63ec7c17a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
38712839
pes2o/s2orc
v3-fos-license
Computer-aided Drug Design Applied to Parkinson Targets

Background: Parkinson's disease (PD) is a progressive neurodegenerative disorder characterized by debilitating motor deficits, as well as autonomic problems, cognitive decline, changes in affect, and sleep disturbances. Although the scientific community has made great efforts in the study of PD, from the most diverse points of view, the disease remains incurable. The exact mechanism underlying its progression is unclear, but oxidative stress, mitochondrial dysfunction and inflammation are thought to play major roles in the etiology.

Objective: Current pharmacological therapies for the treatment of Parkinson's disease are mostly inadequate, and new therapeutic agents are much needed.

Methods: In this review, recent advances in computer-aided drug design for the rational design of new compounds against Parkinson's disease, using methods such as Quantitative Structure-Activity Relationships (QSAR), molecular docking, molecular dynamics and pharmacophore modeling, are discussed.

Results: In this review, four targets were selected: the enzyme monoamine oxidase, dopamine agonists, acetylcholine receptors, and adenosine receptors.

Conclusion: Computer-aided drug design enables the creation of theoretical models that can be used with large databases to virtually screen for and identify novel candidate molecules.

INTRODUCTION

Parkinson's disease (PD) is the second most frequent neurodegenerative disorder [1,2] after Alzheimer's disease (AD). PD can cause significant disability and decreases quality of life; clinical manifestations include tremor, rigidity, postural instability and bradykinesia [3]. As an example, PD patients carry a six-fold increased risk for dementia compared to the general population, with approximately 80% of patients developing dementia over the course of the disease [4]. Motor ability disruption in PD is due to decreased striatal dopamine levels, arising from selective and progressive loss of dopaminergic cells within the substantia nigra pars compacta and the formation of α-synuclein proteinaceous intraneuronal inclusions referred to as Lewy bodies and Lewy neurites [5,6]. These nigrostriatal circuits are an integral part of a complex basal ganglia network and are thought to be involved in a variety of complex functions [7]. The exact mechanism underlying this process is unclear, but oxidative stress, mitochondrial dysfunction and inflammation are thought to play major roles in the etiology [8].

*Address correspondence to this author at the Health Sciences Center, Federal University of Paraíba, Campus I, 58051-970, João Pessoa-PB, Brazil; Fax: 55-83-3291-1528; E-mail: luciana.scotti@gmail.com

Non-motor symptoms can also be observed, involving autonomic functions, sleep, cognition, mood and attention [3]; these symptoms can occur across all stages of PD and have been recognized as a key determinant of quality of life in PD patients [9]. Presently, there are neither medical treatments nor convincing neuroprotective agents to cure PD. Yet, there are a number of strategies that help to improve the dopamine deficiency and therefore PD symptoms [7]. Treatment strategies depend on several factors, including patient disability level, age of the patient, the desire to avoid response fluctuations, potential medication side effects, and affordability [10].
Motor symptoms that result from PD may be treated with dopaminergic agents and with functional neurosurgery, yet the currently available treatments typically fail to treat non-motor symptoms [11]. Non-motor symptoms and non-motor fluctuations can be minimized with dopaminergic treatment or with deep brain stimulation, and the dopaminergic pathophysiology of such non-motor symptoms likely involves brain areas other than the nigrostriatal system [12]. The main strategy in the treatment of PD is dopamine replacement using carbidopa, levodopa, dopamine agonists, monoamine oxidase type B inhibitors, catechol-O-methyltransferase inhibitors, anticholinergics and amantadine [13]. Levodopa has been the therapeutic mainstay for patients with idiopathic Parkinson's disease since the late 1960s and continues to be the primary treatment for the management of symptomatic PD [14]. On the other hand, dopamine agonists cause hallucinations, sleepiness and compulsive behaviors such as gambling, hypersexuality and excessive eating [15]. Other side effects of synthetic PD medications include ankle edema, diarrhea, dry mouth, tremor, dyskinesia, cognitive impairment and urinary retention. When drug therapy fails to successfully manage PD, surgical treatments are recommended. However, surgery for PD is not devoid of risks. It has been reported that surgery may increase morbidity and mortality as a result of intracerebral hemorrhages and thermolytic lesioning of structures adjacent to the target sites [16]. Although efforts continue to study PD from the most diverse points of view, the disease remains incurable. Consequently, the major objective is to design new and more potent compounds for targets associated with PD. Many molecular modeling methods and chemoinformatics techniques have been applied to differing targets in the study of PD. This review aims to examine a reasonable selection of QSAR analyses employed to develop drugs for PD, which include DA agonists, monoamine oxidase type B (MAO-B) inhibitors, levodopa or levodopa plus dopa-decarboxylase inhibitors (DDC-I), and catechol-O-methyltransferase (COMT) inhibitors.

DOPAMINE AGONISTS

Dopamine is an abundant neurotransmitter in the brain and plays an important role as a regulator of many physiological functions in the central nervous system. These functions include motor activity, cognition and positive reinforcement. Additionally, in the periphery, dopamine acts as a modulator of cardiovascular and renal functions, among others [17]. Dopamine is synthesized from the amino acid tyrosine in two steps that occur in the cytosol. The first step involves the hydroxylation of tyrosine to L-dihydroxyphenylalanine (L-dopa). This reaction is catalyzed by the enzyme tyrosine hydroxylase and requires oxygen. The second step is the decarboxylation of L-dopa to dopamine. This reaction is catalyzed by the enzyme aromatic amino acid decarboxylase and generates CO2 [18]. Dopamine receptors belong to a superfamily of G-protein-coupled receptors (GPCRs) and have been subdivided into two groups based on pharmacological behavior [19]. D1 and D5 receptors are members of the D1-like family of dopamine receptors and have in common the activation of the enzyme adenylate cyclase. D2, D3 and D4 receptors are members of the D2-like family, characterized by inhibition of adenylyl cyclase [20,21].
The D1-like group of receptors includes D1 (or D1a), D5 (D1b), D1c, and D1d [D1a (D1) and D1b (D5) being the principal ones], while the D2-like group of receptors contains D2L, D2S, D3, and D4 (or D2aL, D2aS, D2b, and D2c) [22]. Receptors belonging to the family of GPCRs have in common a characteristic bundle of seven transmembrane helices, each of which has 22-28 hydrophobic amino acids [20]. According to Sidhu and Niznik [23], the signal pathways involving central dopamine receptors are extremely complicated, considering that each of them can interact with more than one G protein. This leads to competitive activation in multiple directions. Dopaminergic system disorders are associated with Parkinson's disease, schizophrenia, mania and depression, among others [21,24].

Using CoMFA (Comparative Molecular Field Analysis) [25] and CoMSIA (Comparative Molecular Similarity Indices Analysis) [26] analyses, Modi et al. [27] studied the structural requirements of D2 and D3 receptor ligands for binding affinity and selectivity for D3 receptors (Fig. 1). The D3 receptor was chosen considering its predominantly limbic location in the central nervous system and expectations that it would cause fewer undesirable side effects [24,28]. To derive the 3D-QSAR models, a dataset of 45 structurally diverse molecules was selected from SAR studies focused mainly on optimization of the linker length and the arylpiperazine moiety. After identification of the linker length and of possible arylpiperazine moieties, the agonist portion of the molecule was varied. In this work, two different alignment methods, atom-based and flexible, were tried. Different training and test sets were used, since experimental activity varied significantly for D2, for D3, and for the selectivity ratio (D2/D3). The training sets were formed by carefully selecting 37 molecules that generated statistically significant CoMFA models. The remaining compounds (8) were used as the test set. The best CoMFA model for D2, obtained using flexible alignment and AM1 charges, gave an r²cv of 0.713 (4 components), a conventional r² of 0.920, and a standard error of estimate (SEE) of 0.234. The predictive capability was an r²pred of 0.926. For the dopamine D3 receptor binding affinity, the best CoMFA model, obtained using flexible alignment and Gasteiger-Hückel charges, gave an r²cv of 0.453 (5 components), a conventional r² of 0.941, a SEE of 0.169 and an r²pred of 0.710. The steric field described 41.5% and 63.6% of the variance for the dopamine D2 and D3 binding affinities, respectively, and the corresponding contributions from the electrostatic field were found to be 58.5% and 36.4%, respectively. The mean r²cv values of 0.731 and 0.472 obtained for the D2 and D3 binding affinities indicated that the derived models had good internal predictivity. The authors suggest that the greater contribution of the electrostatic field may indicate the importance of the 'solvation-desolvation' processes that are crucial for the observed differences in the D2/D3 receptor binding affinities. The CoMSIA models were generated using five fields: steric, electrostatic, hydrophobic, hydrogen-bond donor and hydrogen-bond acceptor. Initially, the analyses were performed using individual fields as well as various combinations of the different fields. For D2 binding affinity, the CoMSIA model, using atom-based alignment and AM1 charges, obtained an r²cv of 0.719 (4 components), a conventional r² of 0.912, a SEE of 0.245 and an r²pred of 0.911.
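A note on the cross-validated statistics quoted throughout this section: the leave-one-out q² (written r²cv above) is straightforward to compute with open tools. The sketch below is a minimal illustration using scikit-learn's PLSRegression on randomly generated placeholder data; the matrix dimensions, component count, and values are assumptions for demonstration, not the Modi et al. dataset.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Placeholder data: 37 training compounds x 100 field descriptors (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(37, 100))
y = rng.normal(size=37)              # stand-in for pKi values

pls = PLSRegression(n_components=4)  # component count would be chosen by cross-validation

# Leave-one-out cross-validated predictions
y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut())

# q2 (r2_cv) = 1 - PRESS / TSS
press = np.sum((y - y_cv.ravel()) ** 2)
tss = np.sum((y - y.mean()) ** 2)
q2 = 1.0 - press / tss
print(f"q2 = {q2:.3f}")

Because q² is computed on held-out predictions, it is expected to fall below the conventional r², exactly as in the models reported above.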
For D3 binding affinity, the best model, using flexible alignment and Gasteiger-Hückel charges, obtained an r²cv of 0.493 (6 components), a conventional r² of 0.898, a SEE of 0.227 and an r²pred of 0.465. Removal of compound 33 (an outlier) improved the r²pred value from 0.465 to 0.640. Once more, the derived models showed good internal predictivity, considering the mean r²cv values of 0.726 and 0.456, respectively, for D2 and D3. The CoMFA-generated plots revealed good correlation between the steric and electrostatic fields and the binding potencies at the D2 and D3 receptors, and selectivity at D3 (D2/D3), with a dominating contribution made by the steric field over the electrostatic counterpart (for D3 and D2/D3), or vice versa (for D2). The models obtained revealed the importance of the carbonyl group (which is likely involved in potential H-bonding interactions with the D3 target residues) and of a biphenyl substituent as important determinants for the D3 selectivity of the studied compounds.

A set of 45 novel iloperidone analogs, 3-[[(aryloxy)alkyl]piperidinyl]-1,2-benzisoxazoles (Fig. 4), for D2 antagonism, as selected from the literature [31], was studied by Dash et al. [32] using a 3D-QSAR approach. In this study, pharmacophore and 3D-QSAR modeling was carried out using the PHASE software [33]. This software identifies common spatial arrangements of functional groups that are essential to biological activity, considering a set of high-affinity ligands. PHASE provides a standard set of pharmacophore features: hydrogen-bond donor, hydrogen-bond acceptor, negatively ionizable, positively ionizable, hydrophobic group, and aromatic ring. The 1/log(IC50) value for D2 inhibition was used as the biological activity parameter. In the first step, the conformational space of all the molecules was explored through a combination of Monte-Carlo multiple minimum/low-mode sampling with a maximum of 2,500 conformers per structure and 100 minimization steps [34]. A pharmacophore model based on common molecular features was generated and validated by 3D-QSAR analysis. The 45 compounds were divided into a training set (34 compounds) and a test set (11 compounds) for the purpose of atom-based 3D-QSAR. Resulting from the 3D-QSAR approach, the H-bond donor map suggested that the presence of a primary amine and hydroxyl group near the 2- and 5-positions of the phenyl ring has a favorable effect on biological activity. The hydrophobic volume maps suggested that hydrophobic interactions at the 4-position of the aromatic ring increase biological activity. The electron-withdrawing volume maps suggested that the presence of a methoxy group at the 2-position of the aromatic ring decreases biological activity, and that the presence of a carbonyl group at the 4-position increases biological activity. Active compounds were docked into the 3D structure of the D2 receptor using Glide XP docking. The results suggest that H-bond-donating hydroxyl groups attached to the 2-position of the aromatic ring increase biological activity, and that hydrophobic benzisoxazole ring interactions occur with amino acids VAL79, ILE148, VAL154, PHE353, PHE354 and ILE358. Inhibitor piperidine rings display hydrophobic interactions with VAL75, TRP350 and PHE375, and inhibitor hydrophobic chains interact with the aromatic residues PHE74, TYR372, PHE375 and TYR380. In the final step, an in silico screening search for novel D2 antagonists was performed considering all four pharmacophoric features, obtaining 4,171 hits in the ZINC database.
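Feature-based screening of the kind performed with PHASE above can be prototyped with open-source tools. The sketch below uses RDKit's built-in pharmacophore feature definitions to count donor/acceptor/aromatic/hydrophobe features and apply a simple four-feature filter; the SMILES strings and the filter itself are illustrative assumptions, not the actual Dash et al. query.

import os
from rdkit import Chem, RDConfig
from rdkit.Chem import ChemicalFeatures

# RDKit's default pharmacophore feature definitions (Donor, Acceptor, Aromatic, Hydrophobe, ...)
fdef_path = os.path.join(RDConfig.RDDataDir, "BaseFeatures.fdef")
factory = ChemicalFeatures.BuildFeatureFactory(fdef_path)

def feature_counts(smiles):
    """Count pharmacophore features per family for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    counts = {}
    for feat in factory.GetFeaturesForMol(mol):
        counts[feat.GetFamily()] = counts.get(feat.GetFamily(), 0) + 1
    return counts

# Hypothetical screen: require at least one donor, acceptor, aromatic ring
# and hydrophobe, mimicking a four-feature pharmacophore query.
library = ["CC(=O)Oc1ccccc1C(=O)O",   # aspirin, illustrative only
           "CCO"]                      # ethanol, illustrative only
for smi in library:
    c = feature_counts(smi)
    ok = all(c.get(f, 0) >= 1 for f in ("Donor", "Acceptor", "Aromatic", "Hydrophobe"))
    print(smi, c, "PASS" if ok else "FAIL")

A production screen would, of course, also enforce the 3D geometry of the query, which is what PHASE's fitness score measures.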
The hits having fitness scores of less than 80% were discarded, and 119 compounds were further selected for cluster analysis. Taking into account a predicted 1/log(IC50) value greater than 0.411, 86 hits were chosen for Glide SP docking onto the active site of the D2 receptor, and 11 hits, with XP GlideScore ≤ −8.5, were considered to be potential D2 inhibitors. A summary of the dopamine agonist studies is presented in Table 1.

Table 1. Main results obtained for dopamine agonists by computer-aided drug design methods.

Modi et al. [27] (arylpiperazine derivatives):
- 'solvation-desolvation' processes are crucial for the observed differences in the D2/D3 receptor binding affinities;
- the carbonyl group and a biphenyl substituent are determinants for the D3 selectivity.

Dash et al. [32] (iloperidone derivatives):
- a primary amine and hydroxyl group near the 2- and 5-positions of the phenyl ring, hydrophobic interactions at the 4-position of the aromatic ring, and the presence of a carbonyl group at the 4-position will increase biological activity;
- a methoxy group at the 2-position of the aromatic ring decreases biological activity.

MONOAMINE OXIDASE

Monoamine oxidase (MAO) is a flavoprotein, localized in the mitochondrial outer membrane, that catalyzes the oxidative deamination of biogenic and xenobiotic amines. The enzyme has essential functions in the metabolism of neuroactive and vasoactive amines in the central nervous system and peripheral tissues [35]. MAO degrades dopamine excesses in the cytosol, catalyzing oxidative deamination of the dopamine amino group to 3,4-dihydroxyphenylacetaldehyde with concomitant formation of ammonia and
[45] considering that isoquinoline derivatives are structurally related to MPTP or MPP + , and may be endogenous neurotoxins contributing to cell death in PD, studied substrate affinities of 14 neutral and quaternary isoquinoline derivatives for their ability to inhibit the uptake of [3H] dopamine into rat striatal synaptosomes in the dopamine reuptake system. Using the selected compounds, QSAR and 3D-QSAR produced no statistically acceptable model. However, certain favorable or unfavorable functional group regions helped to contribute to the molecular modeling analyses. Thus, a 2-methyl group in 1,2,3,4-tetrahydroisoquinoline or a 7-methoxy group increased activity, whereas two -OH substituents in positions 6-and 7-were notably unfavorable for activity. Twenty phenyl alkylamine derivatives (Fig. 6), taken from literature, all having four MAO inhibitory activities, [53], were studied by Hasegawa et al. [54] through QSAR analysis. The first biological activity was in vitro MAO inhibitory activity. Other three biological activities were in vivo MAO inhibitory activities within respective noradrenergic (NA), dopaminergic (DA), and serotonergic (5-HT) neurons of the rat brain. The activity was expressed as the negative logarithm of the 50% inhibitory concentration (pIC 50 ). The relationship between MAO inhibitory activity and structural descriptors was analyzed using the nonlinear PLS method. Principal component analysis (PCA) was also employed to verify similarities and differences among the four MAO biological activities. Two significant components were obtained when PCA analysis was employed on the dataset. The first explained 83.7% of the total variance in the conventional and crossvalidated steps, and the second explained 82.1%. The loading plot of first against the second principal component showed that four biological activities were clustered into two groups, in vitro (biological variable 1) and in vivo MAO inhibitory activities (biological variables 2, 3 and 4). Hasegawa et al. [54] performed the PLS analysis separately. The MAO in vitro activity PLS model resulted in R 2 values of 0.988 and Q 2 values of 0.861. The PLS models developed for the three in vivo MAO inhibitory activities The QSAR analysis performed for the in vitro MAO inhibitory activity demonstrated that such activity is favored by large, electron-withdrawing and hydrophobic substituents at ortho positions. Meta positions were not significant. Electron-donating substituents at para positions increased biological activity. Considering the analysis performed for in vivo MAO inhibitory activity, the QSAR analyses demonstrated that electron-withdrawing substituents at ortho positions and electron-donating substituents at para positions increase biological activity. Substituents at meta positions are limited by small steric volume. The compounds were classified into four groups considering the IC 50 and pSI values, where pSI is selectivity to hMAO-B. They were divided into training and test sets to obtain validated QSAR models. The descriptors used in this work were available in the DRAGON [78], MOE [79] and MODESLAB [80] software. Linear Discriminant Analysis was employed to find classification models that best described biological activity, as a linear combination of predictor descriptors. 
The three descriptor sets used in combination with the Linear Discriminant Analysis classification method revealed that DRAGON and MOE descriptors-derived models displayed higher predictive powers than those using the TOPS-MODE approach. The most frequent shape-related descriptors were associated with van der Waals volumes or areas (i.e. MATS3v, PEOE_VSA+1 and Q_VSA_FNEG), and with self-returning walk count of order 5 (SRW05). The SRW05 descriptor is related to the presence of five member rings in the chemical structure. This result is in agreement with previous SAR results that describe decreases in potency and selectivity for hMAO-B activity when simultaneous substituents are present in position 3-and 5-of the pyrazoline ring [70]. The most frequent descriptor based on the counting of atom-centered fragments was the C-019, and the descriptors based on the chemical functional groups were nArCO and nCrs, and (b_1rotN) bonds. The descriptor (C-019) describes the =CRX fragment, where R represents any group linked through carbon; X represents any electronegative atom and = represents a double bond. It was observed that this fragment is mostly included in non-selective ligands. The descriptors: nArCO representing aromatic ketones, nCrs representing the number of secondary rings C(sp3), and b_1rotN representing rotatable single bonds can be found in both selective and non-selective molecules. The molecular descriptors: average molecular weight (AMW), Geary autocorrelation -lag 3 / weighted by atomic polarizabilities (GAT3p), and descriptors calculated from the eigenvalues of a modified distance adjacency matrix graph, weighted with partial charges (GCUT_PEOE_2) also appeared in the results with a high frequency. The authors suggest that results coming from further descriptor analyses may provide useful knowledge towards hMAO-B selective inhibitor design [81]. Pisani et al. [82] performed 3D-QSAR and docking simulations on a series of 7-metahalobenzyloxy-2Hchromen-2-one derivatives (Fig. 7), considering their rat monoamine oxidase A and B inhibition activity. The data of a series of MAO inhibitors was obtained in house [83]. The initial series, with 67 compounds, was split into a training set (58 compounds) and a test set (9 compounds), having similar coverage in terms of biological activity, range, and structural diversity. Biological activity towards MAO-B was associated via Partial Least Squares to the variation of electrostatic, steric, hydrophobic, hydrogen bond acceptor, and hydrogen bond donor fields using the Gaussian-based fields available in Phase [33]. Considering these characteristics, a new series of MAO inhibitors were designed and prepared. In this new series, substituents at position 4 were introduced with unhindered hydrophilic groups exhibiting hydrogen bond donor/acceptor properties. In the low nanomolar range these 4,7-substituted coumarin derivatives showed outstanding MOA-B selectivity for the MAO-A isoform, and for MAO-B inhibitory potencies. The resume of monoamine oxidade inhibitors studies are presented in Table 2. ACETHYLCHOLINE RECEPTORS The endogenous cholinergic neurotransmitter, acetylcholine, exerts its biological effect via two types of cholinergic receptors: muscarinic acetylcholine receptors (mAChRs), and nicotinic acetylcholine receptors (nAChRs). These two types of receptors are different in both structure and function. 
The muscarinic acetylcholine receptors belong to the class I subfamily of hepta-helical, transmembrane G-protein-coupled receptors (GPCRs) and were discovered through their ability to bind the alkaloid muscarine. Muscarinic receptor subtypes were initially classified pharmacologically as either M1 or M2, based on their differential sensitivity to pirenzepine, a selective antagonist of the M1 receptor [84]. Today, they are divided into five distinct subtypes, denoted M1, M2, M3, M4 and M5 [85]. Muscarinic M1, M3 and M5 receptors couple preferentially to the Gq/11 subunit type of G-proteins, activating phospholipase C-β and inducing a subsequent increase in intracellular calcium concentration [86]. On the other hand, M2 and M4 couple mainly to Gi/o G-proteins and typically lead to adenylate cyclase inhibition, with activation of an inward-rectifier potassium conductance [87].

Nicotinic acetylcholine receptors derive their name from their affinity for nicotine and are widely distributed in both the peripheral and central nervous systems [88]. Nicotine binds directly to the receptor α subunit and stimulates the opening of a nonspecific cation channel formed by various combinations of α2, β, γ, δ and ε subunits [84]. These nAChRs play a key role in signal transmission between cells at nerve/muscle synapses [89] and in neurodegenerative pathologies [90,91]. In the central nervous system, subunits α2-α7 and β2-β4 combine in a variety of different stoichiometries, resulting in the formation of receptors with distinct biological functions [92]. The α4β2 subtype is the most widespread heteromeric nAChR subtype in the central nervous system (CNS), involved in memory, drug addiction and excitement [93]. Reduced nAChR activity is implicated in a variety of neurological and psychiatric disorders such as Alzheimer's, Parkinson's, schizophrenia, hyperactivity, depression and even nociception [94-97]. Thus, pharmaceuticals that selectively target nAChRs might be valuable for the treatment of behavioral symptoms in PD [98].

Table 2. Main results obtained for monoamine oxidase inhibitors by computer-aided drug design methods.

McNaught et al. [45] (isoquinoline derivatives):
- a 2-methyl group in 1,2,3,4-tetrahydroisoquinoline or a 7-methoxy group will increase biological activity;
- -OH substituents in positions 6- and 7- will decrease biological activity.

Helguera et al. [55] (heterocyclic compounds, such as chromones, homo-isoflavonoids, coumarins, and their precursors):
- van der Waals volumes or areas and five-membered rings are important to the biological activity;
- substituents in positions 3- and 5- of the pyrazoline ring will decrease biological activity.

Pisani et al. [82] (coumarin derivatives):
- unhindered hydrophilic hydrogen-bond donor/acceptor groups at position 4 give selective, low-nanomolar MAO-B inhibition.
The full set of ligands was split into a training set, consisting of 206 compounds, and an evaluation set, consisting of 34 compounds. The good correlation between the binding affinities as calculated for the training set model, and those observed was obtained. Thus, proving the predictivity of the 3-D QSAR model generated. As observed in 2-D QSAR results, the models derived mostly showed the prevalent effects of steric features. Nielsen et al. [117] synthesized six novel series of potent ligands with nanomolar affinity for the α4β2 nAChR subtype, which is the major subtype found in brain tissue. The affinities of the compounds for the α4β2 subtype of nAChRs have been investigated in vitro using [ 3 H]cytisine binding to rat cerebral cortical membranes. The 3D-QSAR model was based on a training set of 25 compounds, and a test set composed of 4 compounds. All calculations were evaluated using the GRID [114,118] and GOLPE [116,119,120] 3D-QSAR approach. The compounds were aligned using (R)-epibatidine and the conformationally restricted nicotinic analogue 29 as templates (Fig. 9). The GRID was used to calculate the interaction energies between the compounds and the four probes (OH2, C3, O -, and N1 + ) in order to mimic possible interactions with the receptor. The final model was obtained using only two probes: OH2 and C3, considering that the N1+ probe reduced the predictivity of the model dramatically, and the O-probe had coefficient plots similar to the plots for the OH2 probe. The smart region definition (SRD) and the fractional factorial design (FFD) selection in GOLPE were applied to eliminate the noise variables. The SRD variable pre-selection reduced the number of variables from 15.155 to 2.169 without altering the quality of the model (Q 2 = 0.390). The FFD variable selection reduced the number of variables to 983 with a highly significant improvement in the quality of the model, with Q 2 from 0.38 to 0.81. The coefficient plots for the OH2 probe and for the C3 probe showed some identical regions. The identical regions with highest negative values were located around the 6position on the pyridine ring, and to a lesser extent around the 5-position. The negative coefficients indicate that bulk substituents in these positions reduce biological activity. The identical regions having the highest positive values are located around the protonated nitrogen. The introduction of substituents or bulky ring systems, which have unfavorable interactions with the C3 probe, increases biological activity. The coefficient plot for the OH2 probe differs from that for the C3 probe in the 6-position (and 5-position) of the pyridine ring. The results indicate that substituents with unfavorable electrostatic interactions with the water probe increase biological activity. Tønder et al. [121] published a pharmacophore model with similar results. The resume of acetylcholine receptors studies are presented in Table 3. ADENOSINE RECEPTORS Adenosine acts as an endogenous modulator in both the central and peripheral nervous systems by interacting with four transmembrane G protein coupled receptors (GPCRs) identified as adenosine receptors (ARs) A 1 , A 2A , A 2B , and A 3 [122,123] (Fig. 10). ARs (A 1 and A 3 ) are negatively coupled to adenylyl cyclase and exert an inhibitory effect on cyclic adenosine monophosphate (cAMP) production [123]. Adenosine A 1 receptors inhibit adenylate cyclase activity. 
Activation of these receptors results in the opening of several types of potassium channels and the closing of certain calcium channels. The adenosine A3 receptors are not as well understood as the others. Receptor stimulation leads to the formation of inositol triphosphate (IP3) and, consequently, to an increased calcium concentration in the cell [124]. The A2A and A2B ARs stimulate adenylyl cyclase activity, inducing cAMP level increases in cells [125]. The two subtypes differ in location and pharmacological properties. In the CNS, the adenosine A2B receptor is widely spread, yet adenosine A2A receptors are found only in dopaminergic regions of the brain [124]. Adenosine A1 and A2A receptors are characterized by high affinity for adenosine, while A2B and A3 receptors show significantly lower affinity for adenosine. Adenosine A2A receptors are primarily expressed in dopamine-rich areas of the CNS [126] and are located on the bodies of indirect-pathway medium spiny striatal neurons and on dopamine terminals. Currently, connections between A2A and D2 receptors are of great interest for Parkinson's disease (PD) treatment, which involves a decrease in dopamine levels [127]. Antagonism of the A2A AR reduces adenosine signaling, enhances the sensitivity of the dopaminergic neurons, and restores balance to the signaling pathway controlling muscle movement. Thus, an A2A receptor antagonist may be a beneficial monotherapy for the treatment of PD and could be a very interesting target in new drug design. There has been a significant effort over the past decade to synthesize novel and selective A2A receptor antagonists, and as a result, istradefylline (KW-6002) was launched under the name Nouriast® as the first antiparkinsonian agent based on A2A receptor antagonism [128].

Table 3. Main results obtained for acetylcholine receptors.

Nicolotti et al. [99] (nicotinic agonists):
- steric effects are unfavorable for biological activity;
- lipophilic effects are favorable to biological activity.

Nielsen et al. [117] (pyridine derivatives):
- bulky substituents around the 6-position, and to a lesser extent around the 5-position, will decrease the biological activity.

Khanfar et al. [129] employed a genetic function algorithm (GFA) to build predictive QSAR models for a collection of 188 adenosine A2A antagonists in order to generate differing pharmacophore binding hypotheses. The GFA method was employed to select differing combinations of pharmacophores and molecular descriptors. The pharmacophoric space of adenosine A2A antagonists was explored through eight HYPOGEN automatic runs performed on seven training subsets. Compounds in the training subsets were selected considering structural diversity and a wide range of bioactivities. The training subsets were chosen considering that differences in adenosine A2A bioactivity primarily result from the presence or absence of pharmacophoric features. In this work, Khanfar et al. [129] implemented the genetic function algorithm as a tool for selecting differing combinations of pharmacophores and molecular descriptors. The ability of the resulting pharmacophore(s)/descriptor(s) combinations to explain biological activity variations was explored using two methodologies: (a) multiple linear regression (MLR) analysis and (b) kNN regression. Unfortunately, the QSAR predictive models obtained using GFA/MLR-based QSAR analyses were statistically insignificant. In order to improve the results, kNN-based QSAR analysis was employed.
This approach relies on a distance-learning methodology, where the activity of an unknown member is predicted from the activity of a certain number (k) of nearest neighbors (kNNs) in the training subset. To validate the kNN-QSAR-selected pharmacophores, receiver operating characteristic (ROC) curve analysis was employed. Such analysis makes it possible to assess the ability to selectively capture diverse adenosine A2A antagonists from a large list of decoys [130]. The successful pharmacophores were complemented with exclusion spheres to improve their ROC profiles. The best QSAR models were used as 3D search queries to perform a virtual screen of the National Cancer Institute structural database, to identify novel adenosine A2A antagonist leads. The most potent hit yielded an IC50 value of 545.7 nM.

Using 3D-QSAR, molecular dynamics, and thermodynamic analysis, Zhang et al. [131] studied the interactions of 278 monocyclic and bicyclic pyrimidine derivatives with the human A2A adenosine receptor. The compounds were classified and separated into three sets, i.e., training set I: pyrimidine and triazine derivatives (97 compounds); training set II: pyrazolo[3,4-d]pyrimidines, pyrrolo[2,3-d]pyrimidines, triazolo[4,5-d]pyrimidines and 6-arylpurines (120 compounds); and training set III: thieno[3,2-d]pyrimidines (61 compounds) (Fig. 11). Two kinds of alignment were performed: (i) the most active compound in each dataset was taken as template and the Align-Database function in Sybyl [25] was executed; and (ii) the bioactive conformations of all compounds were first derived from docking and then processed using the initial method. Docking analysis was performed to verify the binding sites of the wild-type A2A AR (PDB code 3PWH) and a mutated form (PDB code 3EMS). To mimic the impact of receptor flexibility and water solvation effects on the ligand-receptor complex, a molecular dynamics simulation was carried out. Analysis of the docking results showed that the binding poses of the three kinds of derivatives maintained similar binding modes within the A2A AR. Interactions between the ligands and the active site of the A2A AR involve polar interactions with the GLU169 and ASN253 side chains, non-polar interactions with VAL84, LEU249, MET270 and ILE274, and π-stacking between aromatic moieties of the ligands and the conserved PHE168 side chain of the receptor. The docking results showed that ASN253 is capable of forming stable H-bonds with the ligands. This indicates that the residue is fundamental to maintaining binding poses with different heterocyclic compounds. After docking, the energetically favorable conformations from among the compounds were selected for CoMFA and CoMSIA modeling. Several molecular descriptors were included in the PLS analyses to derive more reasonable QSAR models. All of the statistical parameters obtained using the CoMFA and CoMSIA approaches were reasonably high, which confirms the stability and predictability of the models. Three models were generated. The maps of model I show that bulky groups at the C4 and C6 positions on the pyrimidine ring increase binding affinity, while bulky groups near substituents at the C2, C5 and C6 positions decrease binding affinity. The region above the group at C4 is favorable for hydrophobic interaction. Interactions of the ligands may occur with ALA63, ILE66 and ILE274. Hydrophilic groups at C2 and C4 increase binding affinities. At C2, a small, electronegative and hydrophilic substituent would increase binding activity.
To increase the binding affinity, a moderately bulky group at C4 should be both electronegative and capable of hydrophilic interactions, but should not act as an H-bond donor. Non-H groups at C5, such as methyl, would decrease inhibitory activity. The maps of model II indicate that, at C6 of the pyrimidine ring, a small electronegative group acting as an H-bond acceptor increases biological activity. At C2 of the pyrimidine ring, an H-bond-donating group increases biological activity, as can be observed for compounds substituted with an amino group at this position. At the nitrogen at position 3 of ring B (N3), a moderately bulky group acting as an H-bond acceptor increases biological activity. An aromatic ring attached to the nitrogen is essential for both affinity and selectivity of A2A antagonists. In the maps of model III, at position C6 of the pyrimidine ring, electronegative, hydrophilic and moderately bulky groups acting as H-bond donors increase inhibitory activity. At position C2 of the pyrimidine ring, a small electronegative group acting as an H-bond donor is well tolerated and delivers good potency. At position C2, a small lipophilic group plays an important role in A2A AR affinity and selectivity over the A1 AR.

A series of 4-arylthieno[3,2-d]pyrimidine derivatives was studied by Ahmed et al. [132] through QSAR analysis in order to evaluate antagonist activity towards both the adenosine A1 and adenosine A2A targets (Fig. 11). The biological data on adenosine A1 and A2A antagonism for the QSAR analysis were obtained from the literature [133]. The structures of the 4-arylthieno[3,2-d]pyrimidine derivatives (Fig. 12) were built using the INSIGHT-II software (Accelrys Software Inc., US). The Cerius2 package was used to calculate the molecular descriptors, which included 2D topological, thermodynamic, structural and charge-dependent descriptors. The physicochemical screening of the 4-arylthieno[3,2-d]pyrimidine derivatives was executed using FAF-Drugs [134]. This tool performs various physicochemical calculations, identifies key functional groups, and flags toxic or unstable molecules and functional groups. The QSAR models were generated using 19 and 21 4-arylthieno[3,2-d]pyrimidine derivatives, respectively, as training sets for the A1 and A2A inhibitors. The best models are presented in equations 4 and 5. The QSAR models were generated using the genetic function approximation (GFA). This algorithm is a useful technique for a database with a large number of descriptors and a small number of molecules; in this step, only 37 compounds were analyzed, given the method's poor scalability. For Eq. (5): n = 21, r² = 0.936, r²adj = 0.913, LOF = 0.668, q² = 0.881. The predictive ability of the QSAR models was further validated with test sets containing 12 compounds each for the A1 and A2A inhibitors. The r² values of the A1 and A2A antagonist models were above 0.7, indicating a good percentage of the total variance in biological activity explained, and the values of q² > 0.6 suggested that the models will be useful for meaningful predictions. The predictive power of the models was reasonably good, with predictive r² values of 0.961 and 0.914, and cross-validated r² values of 0.912 and 0.781, respectively. For equation 7, the DIPOLE MAG descriptor suggests that the strength and behavior of the molecule's orientation will increase A1 inhibitory activity. The molecular connectivity index CHI-V-3-P suggests that molecular bonds, clusters, rings and flexibility are less favored for A1 inhibitory activity. For equation 8, the topological descriptor SC-2 suggests that molecular branching is unfavorable for inhibitory activity. The molecular surface area (AREA) describes binding, transport, and solubility for a molecule, and its negative weight suggests a less favorable contribution to inhibition of the A2A receptor. The Wiener graph-theoretical descriptor represents the sum of the chemical bonds existing between all pairs of heavy atoms in the molecule.
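Both the GFA equations above and the kNN-QSAR strategy of Khanfar et al. reduce to searching descriptor subsets and scoring each subset by cross-validation. The toy sketch below makes that loop concrete, swapping the genetic algorithm for plain random-subset search and using a kNN regressor as the scorer; the dataset, subset size, and iteration count are illustrative placeholders, not values from either study.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(42)
n_compounds, n_descriptors = 40, 60          # placeholder dataset shape
X = rng.normal(size=(n_compounds, n_descriptors))
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=n_compounds)  # synthetic pIC50

best_subset, best_q2 = None, -np.inf
for _ in range(500):                          # random search stands in for the GA
    subset = rng.choice(n_descriptors, size=4, replace=False)
    model = KNeighborsRegressor(n_neighbors=3, weights="distance")
    # 5-fold cross-validated r^2 plays the role of q2 here
    q2 = cross_val_score(model, X[:, subset], y, cv=5, scoring="r2").mean()
    if q2 > best_q2:
        best_subset, best_q2 = subset, q2

print("selected descriptors:", sorted(best_subset), "q2 ~", round(best_q2, 3))

A real GFA implementation differs mainly in how candidate subsets are proposed (crossover and mutation over populations of "equations" rather than uniform random sampling); the fitness evaluation is the same in spirit.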
The molecular connectivity index CHI-V-3-P suggests that molecular bonds, clusters, rings and flexibility are less favored for A1 inhibitory activity. For equation 8, the topological descriptor SC-2 suggests that molecular branching is unfavorable for inhibition activity. The molecular surface area (AREA) describes binding, transport and solubility for a molecule, and its negative weight suggests less favorable inhibition of the A2A receptor. The Wiener graph-theoretical descriptor represents the sum of the chemical bonds existing between all pairs of heavy atoms. A summary of the adenosine receptor studies is presented in Table 4.

CONCLUSION

Application of computational methods is of great importance to drug discovery, and methods such as molecular docking, QSAR, pharmacophore modeling and molecular dynamics are being broadly applied in the development of drugs against Parkinson's disease. Computer-aided drug design enables the creation of theoretical models that can be used to virtually screen large databases and identify novel candidate molecules. The models obtained using congeneric series allow a more in-depth understanding of binding sites and permit modifications in ligand structures that enhance receptor binding. For 3D-QSAR, the alignment step is crucial to performing a correct study. Analysis, understanding and improvement of a pharmacophoric group can also aid in the development of new binders. However, a limited set of substituents and poor parameterization of uncommon functional groups can compromise QSAR models. Computational methods provide benefits to the drug discovery process, expanding and guiding all of its stages. The results presented in this review may help the development of new drugs against Parkinson's disease and promote a cure.

CONSENT FOR PUBLICATION

Not applicable.

CONFLICT OF INTEREST

The authors declare no conflict of interest, financial or otherwise.

Table 4. Main results obtained for adenosine receptors by computer-aided drug design methods.

Zhang et al. [131], monocyclic and bicyclic pyrimidine derivatives:
Model I
- at C4 on the pyrimidine ring, hydrophilic and bulky groups, and at C6 bulky groups, will increase binding affinity;
- hydrophobic interactions in the upper region of the group at C4 are favorable;
- at C2, a small, electronegative and hydrophilic substituent will increase binding activity;
- at C5 and C6, bulky groups will decrease binding affinity;
- at C5, non-H groups will decrease inhibitory activity.
Model II
- at C6, a small electronegative group should act as an H-bond acceptor and will increase biological activity;
- at N3 of ring B, a moderately bulky group acting as an H-bond acceptor will increase biological activity;
- an aromatic ring attached to the nitrogen is essential for both affinity and selectivity for A2A antagonists.
Model III
- at C6, electronegative, hydrophilic and moderately bulky groups acting as H-bond donors will increase inhibitory activity;
- at C2, a small lipophilic group plays an important role in A2A AR affinity and selectivity over the A1 AR.
Ahmed et al. [132], 4-arylthieno[3,2-d]pyrimidine derivatives:
- the strength and behavior of the molecule's dipole orientation will increase A1 inhibitory activity;
- molecular bonds, clusters, rings and flexibility are less favored for A1 inhibitory activity;
- molecular branching will decrease the biological activity;
- the molecular surface area describes binding, transport and solubility, and its negative weight suggests less favorable inhibition of the A2A receptor;
- an increase in the number of heavy-atom pairs will increase the biological activity;
- less flexibility of the compounds will enhance A2A receptor inhibition.
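Since the Wiener descriptor appears in the models above, a minimal sketch of how it can be computed for a heavy-atom molecular graph is given below. The bond list is a hypothetical example (the carbon skeleton of n-butane), not one of the studied derivatives.

```python
# Sketch: Wiener index = sum of shortest-path distances over all pairs of
# heavy atoms in the molecular graph (hypothetical n-butane example).
import itertools
import networkx as nx

g = nx.Graph()
g.add_edges_from([(0, 1), (1, 2), (2, 3)])  # C-C bonds of n-butane

dist = dict(nx.all_pairs_shortest_path_length(g))
wiener = sum(dist[u][v] for u, v in itertools.combinations(g.nodes, 2))
print(wiener)  # 10 for n-butane
```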
Two-dimensional fluctuations at the quantum-critical point of CeCu_{6-x}Au_x

The heavy-fermion system CeCu_{6-x}Au_x exhibits a quantum critical point at x_c ≈ 0.1 separating nonmagnetic and magnetically ordered ground states. The pronounced non-Fermi-liquid behavior at x_c calls for a search for the relevant quantum critical fluctuations. Systematic measurements of the inelastic neutron scattering cross section S(q, ω) for x = 0.1 reveal rod-like features in the reciprocal ac plane, translating to two-dimensional (2d) fluctuations in real space. We find 3d magnetic ordering peaks for x = 0.2 and 0.3 located on these rods, which hence can be viewed as 2d precursors of the 3d order.

Continuous quantum phase transitions, which occur in a strict sense only at temperature T = 0, are driven by quantum fluctuations instead of thermal fluctuations as for ordinary classical phase transitions [1,2]. This leads to unusual and rich behavior even at finite temperatures in the neighborhood of the critical point. Because of the uncertainty principle, the energy scale of fluctuations introduces a time scale, which leads to an intricate coupling of static and dynamic critical behavior. For instance, the critical behavior of the specific heat will depend on the dynamical critical exponent z relating the typical lifetime ξ_τ and correlation length ξ of critical fluctuations, ξ_τ ∝ ξ^z. Such a quantum phase transition can be achieved by changing a coupling parameter which plays a role analogous to temperature in ordinary phase transitions. In recent years, many physical realizations of quantum phase transitions have been found. The case of a magnetic-nonmagnetic transition in heavy-fermion metals is particularly interesting because of the involvement of itinerant electrons. Excitations of a system of interacting itinerant electrons in a metal, i.e. quasiparticles, are usually described within Fermi-liquid theory, with the specific heat C ∝ T, a Pauli susceptibility independent of T, and an electrical resistivity contribution Δρ ∝ T² due to quasiparticle-quasiparticle scattering. Interactions renormalize the quasiparticle masses with respect to the free-electron mass m_0. Even in heavy-fermion systems with quasiparticle masses as high as several 100 m_0, Fermi-liquid behavior is the rule rather than the exception [3]. In heavy-fermion systems, the coupling parameter tuning the magnetic-nonmagnetic transition is the (antiferromagnetic) exchange interaction J between 4f or 5f magnetic moments and conduction electrons [3]. If it is strong, a local singlet state is formed via the Kondo effect around each 4f or 5f site, leading to a nonmagnetic ground state. On the other hand, a weak (but non-zero) exchange interaction leads to a Ruderman-Kittel-Kasuya-Yosida coupling between moments and hence to magnetic order.
In the exemplary system CeCu_{6-x}Au_x, doping of CeCu_6 with the larger Au atom leads, via lattice expansion, to a weakening of the Kondo effect and hence to long-range antiferromagnetic order for x > x_c ≈ 0.1, with a linear increase of the Néel temperature T_N ∝ (x − x_c)^μ, i.e. μ = 1 [4]. At x_c, where T_N vanishes, i.e. around the quantum critical point, pronounced deviations from Fermi-liquid behavior occur. This non-Fermi-liquid (NFL) behavior is seen, e.g., in the specific heat, where C/T ∝ −ln(T/T_0) over nearly two decades, and in the resistivity, where Δρ ∝ T. It is precisely this NFL behavior at the quantum critical point that stirred a lot of interest [5], since it cannot be explained in terms of a transition driven by three-dimensional (3d) fluctuations, because for an antiferromagnet with d = 3 and z = 2, C/T ∝ 1 − B√T and Δρ ∝ T^{3/2} would be expected [2,6]. A step forward towards the solution of the NFL puzzle in CeCu_{6-x}Au_x was to realize [7] that 2d critical fluctuations coupled to quasiparticles with 3d dynamics will indeed lead to γ = C/T ∝ −ln(T/T_0), Δρ ∝ T and μ = 1, as experimentally observed. Elastic neutron scattering experiments at 0.07 K on CeCu_{5.8}Au_{0.2} with T_N = 0.25 K showed, in addition to peaks attributed to (short-range) antiferromagnetic order, broad maxima along the a* direction that were much sharper in the b* direction [7]. This latter feature was interpreted in terms of ferromagnetic planes perpendicular to the a direction (orthorhombic notation) and thus provided a possible scenario of the d = 2, z = 2 universality class [7]. Without any direct evidence it is certainly hard to believe that 2d correlations dominate an intrinsically 3d alloy, even if the thermodynamics strongly support such a picture. Therefore it is essential to investigate the quantum critical fluctuations directly by inelastic neutron scattering. The experiments were carried out at the triple-axis spectrometer IN14 at the Institut Laue-Langevin, Grenoble, with a fixed final neutron energy E_f = 2.7 meV (k_f = 1.15 Å⁻¹), giving an energy resolution (FWHM) of 0.07 meV. The CeCu_{6-x}Au_x single crystals were grown with the Czochralski method in a W crucible. The specific heat of the sample with x = 0.1 exhibits the NFL behavior C/T ∝ −ln(T/T_0) as measured down to 60 mK, in agreement with previous samples of the same Au concentration [4]. Fig. 1 shows q scans of the dynamic structure factor S(q, ω) in the reciprocal ac plane in two perpendicular directions at very low T < 100 mK at an energy transfer ω = 0.1 meV. The (h 0 0) scan (Fig. 1a) reveals a broad double maximum at (0.8 0 0) and (1.2 0 0). This double maximum is only resolved at small ω. For instance, for ω = 0.25 meV only a single broad feature centered at (1 0 0) is seen [8]. Hence it may be thought of as developing from the broad maximum observed at the same q for ω = 0.3 meV in CeCu_6 [9]. Upon entering the magnetically ordered state for the x = 0.2 alloy, the double-peak structure appears as a (quasi-)elastic feature that represents short-range ordering, evidenced by a width in q that is considerably larger than the q resolution [7,8]. Fig. 1b shows that for x = 0.1 there is a very rich structure of S(q, ω) in the a*c* plane, as derived from scans (h_0 0 l) along c* for fixed h = h_0. The peak at (1.2 0 0) splits when moving away from the a* axis. The solid lines represent Lorentzian fits with a width of (0.24 ± 0.02) Å⁻¹ for all scans shown.
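A minimal sketch of the Lorentzian line-shape fit used for such q scans is shown below. The data are synthetic, and the single-Lorentzian-plus-flat-background model is an assumption for illustration, not the resolution-corrected analysis actually performed.

```python
# Sketch: fit a Lorentzian plus constant background to a q scan
# (synthetic counts in arbitrary units; not the paper's resolution-
# convoluted analysis).
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(q, q_c, kappa, amp, bg):
    """Peak of height amp centered at q_c with half-width kappa."""
    return amp * kappa**2 / ((q - q_c) ** 2 + kappa**2) + bg

rng = np.random.default_rng(2)
q = np.linspace(0.8, 1.6, 81)
truth = lorentzian(q, 1.2, 0.12, 100.0, 5.0)
counts = rng.poisson(truth).astype(float)

popt, pcov = curve_fit(lorentzian, q, counts, p0=(1.2, 0.1, 80.0, 5.0),
                       sigma=np.sqrt(np.maximum(counts, 1.0)))
q_c, kappa, amp, bg = popt
print(f"center = {q_c:.3f}, full width = {2 * kappa:.3f} (scan units)")
```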
However, the main point is that the peak height remains roughly constant across the whole Brillouin zone (cf. Fig. 3a). The width along c* is comparable to the width of the (1.2 0 0) maximum along (h 0 0) (cf. Fig. 1a). This suggests a rod-like feature of the dynamical magnetic response. It does not, however, extend along the a* axis as previously assumed [7], but in an oblique direction.

Fig. 2 (caption): … from [10], for x = 0.5 from [11]. The vertical and horizontal bars indicate the line width for x = 0.1. The four rods are related by the orthorhombic symmetry (Pnma). Here we ignore a small monoclinic distortion (< 1%) at low temperatures. The open symbols for x = 0.2 represent the short-range ordering peaks [7,8]. The inset shows a resolution-limited magnetic Bragg peak for x = 0.2 at T = 50 mK (elastic scan at (1.375 0 l)).

Fig. 2 shows the peak positions in the a*c* plane for x = 0.1 and ω = 0.1 meV derived from Fig. 1 and from further measurements. To corroborate the rod-like nature of S(q, ω = 0.1 meV) for x = 0.1, further scans across the peaks were performed along independent directions, one in the a*b* plane along b* and one in the a*c* plane perpendicular to the rod-like feature. They, too, reveal a width of comparable magnitude, as can be seen from Fig. 3b and c. In order to interpret S(q, ω) of CeCu_{5.9}Au_{0.1}, we recall that a rod-like feature in q space is related to a 2d correlation between Ce atoms in real space. The shaded rods in Fig. 2 can therefore be identified with planes in real space. These planes extend into the b direction and into a direction in the ac plane given by next-nearest neighboring Ce atoms. Thus, the observed quasi-2d correlations strongly support the proposed scenario [7] of 2d spin fluctuations coupled to quasiparticles with 3d dynamics, although the quasi-2d correlations are not ferromagnetic as initially supposed. The 2d fluctuations apparently are the precursor of the 3d magnetic ordering. Indeed, the Bragg points for samples not too far from the magnetic instability, e.g. x = 0.2 and 0.3, are located on the rods for x = 0.1. For x = 0.2, in addition to the rather broad double maximum at q = (0.8 0 0) and (1.2 0 0) [7], we find resolution-limited peaks at (0.625 0 0.275) and at lattice-equivalent positions in reciprocal space. The inset in Fig. 2 displays a (1.375 0 l) scan of such a Bragg peak. The main frame of Fig. 2 shows that its position is indeed on one of the rods, as is the position of the short-range order peaks along a*. However, we have not observed a 3d precursor for x = 0.1, i.e. enhanced scattering intensity around the Bragg peak for x = 0.2. This is an important point in favor of the 2d scenario. For x = 0.3, the Bragg position remains almost unchanged, while no short-range order peaks on the a* axis were detected. For x = 0.5 a sudden reorientation of the magnetic ordering vector is observed, with incommensurate order along a* with τ = (0.59 0 0) [11], which is then roughly constant up to x = 1 [10]. The reorientation of τ occurring between x = 0.3 and 0.5 deserves further study. Returning to the quantum-critical point at x = 0.1, we recall that in the d = 2, z = 2 scenario we expect the following generic form of the ω- and q-dependent susceptibility describing magnetic fluctuations in the plane [7]: χ^{2d}_{q∥}(ω) ∝ [q∥²/q_0² + 1/(q_0 ξ)² − iω/ω_0]^{−1}. The imaginary part of the susceptibility is directly proportional to the magnetic structure factor S(q, ω) measured with inelastic neutron scattering, S(q, ω) = (1 + n_B(ω)) Im χ^{2d}_{q∥}(ω) f(q⊥), where n_B(ω) is the Bose function.
The smooth function f(q⊥) describes the weak q-dependence perpendicular to the planes, i.e. along the rod-like structures shown in Fig. 2. q∥ is the momentum in the plane, i.e. perpendicular to the rods. q_0 and ω_0 are constants that vary only slightly with temperature T and depend only weakly on the momentum along the rods. We expect that the above equation is valid (up to logarithmic corrections) for small momenta and frequencies, ω, ω_0 q∥²/q_0² ≪ k_B T_K, where T_K ≈ 6 K is the Kondo temperature of the system. We have neglected the small anisotropy within the planes. The effective correlation length is given by ξ. It is expected to vary strongly with T, ω_0/(q_0 ξ)² ≈ A k_B T, with A being a constant of order one varying only logarithmically with T [12,2]. The q-scans were performed with an energy transfer of 0.1 meV (= 1.2 K · k_B), which is large compared to the temperature of 70 mK. Therefore we expect that the width of the peaks shown in Figs. 1 and 3 does not measure the true correlation length ξ but determines the ratio q_0²/ω_0. The important question arises whether the observed magnetic fluctuations can be related to the NFL behavior of the thermodynamic quantities at the quantum critical point, i.e. to the logarithmically diverging specific-heat coefficient. Actually, the prefactor of the specific-heat coefficient per area of a plane is fully determined by the quantum critical theory [13], γ_2d = (n/12)(q_0²/ω_0) ln(T_0/T), where n is the number of spin components. We use n = 1, as the magnetic anisotropy [4] suggests an Ising system. T_0 is an unknown temperature scale of the order of T_K. To calculate the specific heat per volume one has to know the distance L of the planes; note that, especially in an incommensurate structure, such a distance is only an effective quantity. Then the molar specific heat is given by γ = 2 γ_2d V_M / L, where V_M is the volume per mol of CeCu_{5.9}Au_{0.1} and the factor 2 takes into account that the correlations show up in two different directions (see Fig. 2). This value has to be compared to the measured [4] specific-heat coefficient γ = 0.6 J mol⁻¹ K⁻² ln(T_0/T), which is indeed of the same order of magnitude as our estimate (3). From the crystal structure one would expect L to be of the order of 4 to 10 Å, while from (3) we obtain L ≈ 2−3 Å, which is somewhat too small. However, one has to take into account the considerable theoretical uncertainty, e.g. arising from the definition of L in an incommensurate system or the unknown effective number of spin components in this anisotropic system. In addition, the momenta and frequencies used in our analysis are quite large. Therefore, we think that the semi-quantitative agreement of the width of the rods in q space with the specific heat gives strong support for the idea that the 2d fluctuations are responsible for the observed NFL behavior. As a final point, we discuss the energy dependence of the critical modes. Fig. 4 shows energy scans at exactly the q value of the magnetic order in CeCu_{5.8}Au_{0.2} (cf. inset of Fig. 2). The solid lines indicate a fit comprised of an elastic Gaussian and a quasi-elastic Lorentzian (convoluted with the resolution) with full widths Γ = 0.62, 0.14 and 0.11 meV for the latter at T = 5, 0.5 and 0.07 K, respectively. The shift towards ω = 0 with decreasing T is clearly seen at this q, as opposed to the rather broad ω scan at q = (1.8 0 0) with Γ = 0.48 meV for T = 0.07 K (deduced from polycrystalline measurements).
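The energy-scan fit model described above can be sketched as follows. Resolution convolution and the detailed-balance factor are omitted for brevity, so this is a simplified illustration with synthetic data rather than the analysis actually used.

```python
# Sketch: elastic Gaussian plus quasi-elastic Lorentzian for an energy scan
# (synthetic counts; resolution convolution omitted for brevity).
import numpy as np
from scipy.optimize import curve_fit

def model(w, a_el, sigma, a_qe, gamma, bg):
    elastic = a_el * np.exp(-0.5 * (w / sigma) ** 2)  # resolution-limited line
    quasielastic = a_qe * (gamma / 2) / (w**2 + (gamma / 2) ** 2)  # FWHM = gamma
    return elastic + quasielastic + bg

rng = np.random.default_rng(3)
w = np.linspace(-0.5, 1.0, 121)  # energy transfer (meV)
counts = rng.poisson(model(w, 200, 0.03, 40, 0.14, 3)).astype(float)

popt, _ = curve_fit(model, w, counts, p0=(150, 0.03, 30, 0.2, 2))
print(f"quasi-elastic full width Gamma = {popt[3]:.3f} meV")
```

The degeneracy between the elastic line and a narrow quasi-elastic component is exactly why, as noted below, the background assumption changes the extracted Γ at the lowest temperatures.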
Due to the finite energy resolution and the large elastic background it is difficult to obtain reliable values for Γ at low temperatures. For instance, assuming a weakly ω-dependent background reduces Γ at T = 0.07 K by a factor of 2-3. Nevertheless, the observed decrease of Γ seems to be slower than Γ = ω_0/(q_0 ξ)² ∝ T, as predicted by the d = 2, z = 2 quantum critical theory for T ≪ T_K ≈ 6 K. It is important to note that the onset of 3d correlations would have the opposite effect at this q value, where 3d magnetic order is expected. We note that in measurements of S(q, ω) concentrated on q = (1.2 0 0), a similar decrease of Γ towards low T with apparent leveling-off is observed [14]. On the other hand, the thermodynamics, notably the specific heat, shows a logarithmic increase of γ down to at least 0.06 K without a signature of a corresponding energy scale. Therefore this point certainly requires further studies, e.g. on the role of disorder in our incommensurate system. In conclusion, we have identified the critical two-dimensional fluctuations leading to non-Fermi-liquid behavior in CeCu_{5.9}Au_{0.1} through a systematic study of quasielastic neutron scattering. From the observed dynamic susceptibility, semi-quantitative agreement with the prefactor of the logarithmic increase of the specific-heat coefficient is found. We acknowledge helpful discussions with A. Schröder and P. Wölfle. This work was supported by the Deutsche Forschungsgemeinschaft.
Differentiation of Cells Isolated from Human Femoral Heads into Functional Osteoclasts

Proper formation of the skeleton during development is crucial for the mobility of humans and the maintenance of essential organs. The production of bone is regulated by osteoblasts and osteoclasts. An imbalance of these cells can lead to a decrease in bone mineral density, which leads to fractures. While many studies are emerging to understand the role of osteoblasts, fewer studies address the role of osteoclasts. The present study utilized bone marrow cells isolated directly from the bone marrow of femoral heads obtained from osteoarthritic (OA) patients after undergoing hip replacement surgery. Here, we used tartrate-resistant acid phosphatase (TRAP) staining, Cathepsin K, and nuclei counts to identify osteoclasts and their functionality after stimulation with macrophage colony-stimulating factor (M-CSF) and receptor activator of nuclear factor kappa-β ligand (RANKL). Our data demonstrated that isolated cells can be differentiated into functional osteoclasts, as indicated by the 92% and 83% of cells that stained positive for TRAP and Cathepsin K, respectively. Furthermore, isolated cells remain viable and terminally differentiate into osteoclasts when stimulated with RANKL. These data demonstrate that cells isolated from human femoral heads can be differentiated into osteoclasts to study bone disorders during development and adulthood.

Introduction

The formation of the skeleton during embryogenesis and the early stages of development is crucial for proper protection, support, and function of human organ systems [1][2][3][4]. This process is governed by two major bone cell types: osteoblasts and osteoclasts [5][6][7][8][9][10]. Osteoblasts are mononucleated cells responsible for forming new bone and derive from mesenchymal stem cell (MSC) progenitors [11][12][13][14]. Contrarily, osteoclasts are multinucleated immune cells that derive from hematopoietic stem cells (HSCs) and resorb old or damaged bone [8,10,[15][16][17][18]. The crosstalk between these two cell types is critical for the formation of early structures, such as digits, and for bone homeostasis throughout adulthood [1,19,20]. Indeed, about 10% of the adult human skeleton is renewed each year, and this renewal is dependent on the balance between osteoblasts and osteoclasts [5,8,9,21]. However, as humans age, the balance shifts toward bone resorption, which may lead to bone disorders such as osteopenia and osteoporosis (OP) [21][22][23][24][25][26]. Thus, elucidating the precise mechanisms that are responsible for this shift in homeostasis is of the utmost importance. Osteoclastogenesis and the activity of osteoclasts are reliant on osteoblasts [16,[27][28][29][30]. Osteoblasts secrete two proteins that are necessary for osteoclast function and differentiation: macrophage colony stimulating factor (M-CSF) and receptor activator of nuclear factor
As osteoblasts secrete M-CSF, this ligand subsequently binds to colony-stimulating factor-1 (c-fms) receptors located on granulocyte/macrophage progenitors (GMPs) or peripheral blood mononuclear cells (PBMCs) [34][35][36]. Here, GMPs differentiate into monocyte/macrophage cells, which are considered osteoclast precursors [37]. Simultaneously, osteoblasts are secreting RANKL, which will bind to RANK receptors expressed on the cell surfaces of monocytes/macrophages [38,39]. The monocytes/macrophages can then terminally differentiate into functional osteoclasts, which will be responsible for resorbing bone (Figure 1). The M-CSF and RANKL signaling pathways promote cell differentiation, survival, and proliferation by activating the Akt, Erk, NF-κB, and MAPK pathways [18]. It has previously been reported that murine RAW264.7 cells of the monocyte/macrophage lineage respond to these factors and can be used as a model for studying osteoclastogenesis [16,27]. However, the effects of M-CSF and RANKL in bone marrow cells isolated directly from human femoral heads remains unclear. GMPs, which express c-fms receptors that M-CSF ligands can bind. After binding, GMPs differentiate into monocyte/macrophage precursors, that further differentiate into pre-osteoclasts and active osteoclasts when RANKL binds to RANK receptors. Active osteoclasts are multinucleated and express TRAP and Cathepsin K. Osteoclasts are expressed predominantly during bone resorption, making them difficult to isolate in high concentrations so as to study osteoclastogenesis [40][41][42]. The isolation process becomes more difficult when obtaining osteoclasts or pre-osteoclasts directly from the bone marrow of patients with bone disorders, such as OP or osteoarthritis (OA). Moreover, previous studies suggest that while osteoclasts may be isolated, they are obtained at low concentrations, especially when terminally differentiated [42][43][44][45]. Furthermore, as these cells do not proliferate, it is difficult to maintain and keep them viable [43][44][45]. These cells also undergo apoptosis rapidly when isolated, posing many challenges when maintaining them [46]. To combat these challenges, previous studies have utilized PBMCs obtained from human blood [32,36,37,41,47]. As these cells are consisted of monocytes/macrophages, this method is a powerful technique to study osteoclastogenesis. However, less data are available regarding cells isolated directly from the bone marrow of humans, which may provide additional insight into the function of osteoclasts in the bone microenvironment. Therefore, the development of a reliable and usable model that isolates osteoclast precursors that become functional osteoclasts directly from the bone Figure 1. Development of an active osteoclast. Hematopoietic stem cells (HSCs) differentiate into GMPs, which express c-fms receptors that M-CSF ligands can bind. After binding, GMPs differentiate into monocyte/macrophage precursors, that further differentiate into pre-osteoclasts and active osteoclasts when RANKL binds to RANK receptors. Active osteoclasts are multinucleated and express TRAP and Cathepsin K. Osteoclasts are expressed predominantly during bone resorption, making them difficult to isolate in high concentrations so as to study osteoclastogenesis [40][41][42]. The isolation process becomes more difficult when obtaining osteoclasts or pre-osteoclasts directly from the bone marrow of patients with bone disorders, such as OP or osteoarthritis (OA). 
Moreover, previous studies suggest that while osteoclasts may be isolated, they are obtained at low concentrations, especially when terminally differentiated [42][43][44][45]. Furthermore, as these cells do not proliferate, it is difficult to maintain them and keep them viable [43][44][45]. These cells also undergo apoptosis rapidly when isolated, posing many challenges to maintaining them [46]. To combat these challenges, previous studies have utilized PBMCs obtained from human blood [32,36,37,41,47]. As these cells consist of monocytes/macrophages, this method is a powerful technique for studying osteoclastogenesis. However, fewer data are available regarding cells isolated directly from the bone marrow of humans, which may provide additional insight into the function of osteoclasts in the bone microenvironment. Therefore, the development of a reliable and usable model that isolates osteoclast precursors that become functional osteoclasts directly from the bone microenvironment is crucial. A current option is human femoral heads obtained after hip replacement surgery, as they are readily available and are representative when studying human bone disorders. While RAW264.7 cells are efficient and reliable for studying osteoclastogenesis, they are a murine cell line and may not be representative of human disorders. Therefore, the utilization of cells isolated from human femoral heads may be a better model to study osteoclast activity. Here, we demonstrate that cells isolated directly from the bone marrow of OA patients after undergoing hip replacement surgery can be differentiated into osteoclasts that are viable and functional. We showed that pre-osteoclasts do not differentiate readily when stimulated with only M-CSF, but differentiate frequently when exposed to both M-CSF and RANKL. Furthermore, we demonstrated that isolated cells stained positive for tartrate-resistant acid phosphatase (TRAP) and Cathepsin K, which are enzymes expressed by osteoclast precursors and osteoclasts during bone resorption. While 60% of the cells stimulated with RANKL differentiated into multinucleated osteoclasts, 92% and 83% of the cells stained positive for Cathepsin K and TRAP, respectively. These data demonstrate that our cell population was predominantly osteoclasts and pre-osteoclasts, and indicate that cells isolated from the femurs of diseased patients can be grown and differentiated to better understand the imbalance between osteoblast and osteoclast activity. In summary, these results and methods will help future research uncover potential therapeutics that are desperately needed to treat bone disorders.

Femoral Head Retrieval

Human femoral heads were obtained from ChristianaCare hospitals (Wilmington, DE, USA and Newark, DE, USA). The femoral heads were collected from 10 male and 6 female OA patients who underwent total hip replacement surgery. The age range of the patients at the time of surgery was 49-83 years. Following surgical removal, the samples were stored in a 4 °C refrigerator and collected for this study on the same day.

Cell Isolation from Femoral Heads

Trabecular and cortical bone were extracted from the femoral heads and placed into a 50 mL conical tube containing 10 mL of Hanks Balanced Salt Solution (HBSS). Following bone removal, the bone marrow of the samples was washed with additional HBSS to collect bone marrow cells.
After the cells settled for 2-3 min, the solution was filtered through a 70 µm cell filter into a separate 50 mL falcon tube containing 5 mL of alpha modified Minimum Essential Medium Eagle (α-MEM; Caisson Labs, Smithfield, UT, USA, Cat# MEL08-500ML), supplemented with 10% fetal bovine serum (FBS; Gemini Bioproducts, West Sacramento, CA, USA), 1% penicillin/streptomycin (pen/strep; Fisher Scientific, Pittsburgh, PA, USA), and 1% antibiotic/antimycotic (anti/anti; Gemini Bioproducts, West Sacramento, CA, USA). The filtered solution was centrifuged at 1800 revolutions per minute (RPM) for 9 min at 4 °C. The cell pellet was resuspended in α-MEM and plated into 12-well or 24-well plates at a cell density of 1 × 10^5 cells/mL (Figure 2). Extracted cells from femoral heads were plated and supplemented with α-MEM, along with 10% FBS, 1% penicillin/streptomycin, 1% antibiotic/antimycotic, and 25 ng/mL M-CSF. The cells were incubated at 37 °C with 5% CO2 for five days. On day five, the media were replaced, and cells were stimulated with 50 ng/mL RANKL and 25 ng/mL M-CSF (Sino Biological, Beijing, China) or left unstimulated (M-CSF only) for five days. Five days later, the media were replaced, and cells were restimulated with RANKL or left unstimulated. On day 14, the stimulation was terminated, and the media were removed from the wells.

Optimization of Differentiating Pre-Osteoclasts into Osteoclasts

To determine the optimal conditions for differentiating pre-osteoclasts into functional osteoclasts, various cell densities and concentrations of RANKL/M-CSF were tested in the cell culture. After the primary cells were isolated from the human femoral heads, they were subjected to four experimental conditions. The cells were then stained for TRAP and imaged to identify osteoclasts.

Tartrate-Resistant Acid Phosphatase (TRAP) Staining

To determine whether the cells isolated from the human femoral heads could be differentiated into osteoclasts, they were stained for TRAP, an enzyme highly expressed by osteoclasts [48]. After removing the media from the wells, the cells were washed three times with 1X phosphate buffered saline (PBS). The cells were then fixed with 4.4% paraformaldehyde (PFA, pH 7.2; Sigma-Aldrich, St. Louis, MO, USA) for 15 min at room temperature. Afterwards, the cells were washed three times with diH2O and stained for TRAP using an acid leukocyte phosphatase kit (Cat# 387-1KT, Sigma-Aldrich, St. Louis, MO, USA) following the manufacturer's protocol. The cells were washed three times with diH2O, and the nuclei were counterstained with Hematoxylin Gill No. 3 (Cat# 387-1KT, Sigma-Aldrich, St. Louis, MO, USA) for 2-3 min. The wells were washed several times with alkaline water and left to dry in the dark for at least two days. Osteoclasts were identified as having at least three nuclei and visible TRAP staining. As macrophages can express TRAP, only cells with more than three nuclei and positive TRAP staining were included in the osteoclast count. At least 10-15 random images of each experimental condition were obtained using the Zeiss Axiovert 10 microscope (Nohe Laboratory, University of Delaware, Newark, DE, USA) with the 20×/12 Achrostigmat objective, providing at least 20 images for each OA patient (N = 10; 5 male and 5 female samples). The experiments conducted for each patient were performed in triplicate. Representative images of the total cell count and osteoclast count are displayed underneath each respective bar graph.
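A small, self-contained sketch of the resuspension arithmetic implied by the plating step described at the start of this section is given below; the hemocytometer count and volumes are hypothetical placeholders.

```python
# Sketch: volume of alpha-MEM needed to resuspend a counted pellet at
# 1e5 cells/mL, and the number of wells it fills (hypothetical numbers).
total_cells = 2.4e6          # hypothetical hemocytometer count of the pellet
target_density = 1.0e5       # cells per mL, as in the plating protocol
vol_per_well_ml = 1.0        # e.g., 1 mL per well of a 12-well plate

resuspension_volume_ml = total_cells / target_density
n_wells = int(resuspension_volume_ml // vol_per_well_ml)
print(f"Resuspend in {resuspension_volume_ml:.1f} mL -> fills {n_wells} wells")
```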
The images were processed and counted with ImageJ (NIH, Bethesda, MD, USA).

Cell Viability and Proliferation Assay

Primary cells isolated from human femoral heads were treated with a Green Live/Dead stain (Catalog #6342, Immunochemistry Technologies, Bloomington, MN, USA) for viability and a Calcein-AM-red-orange stain (Catalog #C34851, Thermo Fisher Scientific) for proliferation once a day for 5 days. The cells were counted each day and were considered viable if they stained positive for Calcein-AM but did not stain green, as the green stain can only penetrate the cell membranes of dead cells. Images and counts were collected using the Nikon Eclipse TE300 epifluorescence microscope (15 Innovation Way, University of Delaware, Newark, DE, USA). Experiments were conducted with three patients and were repeated in triplicate.

Immunofluorescence

To determine the activity level of osteoclasts, immunofluorescence was utilized. The cells were isolated from human femoral heads and plated at 1 × 10^6 cells/mL on 18 mm diameter round coverslips. On day 14, the media were aspirated, and the cells were washed with ice-cold 1X PBS and fixed with 4.4% PFA for 20 min at room temperature. The cells were washed with ice-cold 1X PBS and permeabilized for 10 min using 0.1% saponin (Sigma-Aldrich, St. Louis, MO, USA) diluted in 1X PBS on ice. Afterwards, non-specific binding was blocked by adding 3% bovine serum albumin (BSA; Fisher Scientific, Pittsburgh, PA, USA) diluted in 1X PBS and supplemented with 0.1% saponin for 1 h on ice. Cells from the control and M-CSF + RANKL groups were then treated with rabbit polyclonal anti-TRAP (Lot #C0314, Santa Cruz Biotechnology, Dallas, TX, USA) and goat polyclonal anti-Cathepsin K (Lot #J1613, Santa Cruz Biotechnology, Dallas, TX, USA) primary antibodies diluted 1:100 in 1X PBS supplemented with 3% BSA and 0.1% saponin for 1 h on ice. The secondary control group was not incubated with primary antibodies. After 1 h, the cells were washed with 1X PBS on ice. All of the experimental groups were then treated with chicken-anti-rabbit (Alexa Fluor™ 488, Catalog #A21441, Invitrogen, Eugene, OR, USA) and donkey-anti-goat (Alexa Fluor™ 568, Catalog #A11057, Invitrogen, Eugene, OR, USA) secondary antibodies diluted 1:500 in 1X PBS supplemented with 3% BSA and 0.1% saponin in the dark for 1 h on ice. The cells were washed for 5 min with 1X PBS on ice, and the nuclei were stained using Hoechst 33342 (Catalog #AR0039, Bolster Bio, Pleasanton, CA, USA) for 8 min at room temperature away from light. The coverslips were washed with 1X PBS on ice, mounted on glass slides with Cytoseal™ (Thermo Fisher Scientific, Waltham, MA, USA), and allowed to dry for two days. The slides were then imaged using the Zeiss LSM880 with Airyscan confocal microscope (Wolf Hall, University of Delaware, Newark, DE, USA) using 20× and 63× objective lenses. At least 10 representative images were obtained from each group and processed using ImageJ. All data were normalized to the secondary control. Experiments were conducted with three OA patients and were repeated in triplicate.

Statistical Analysis

Data are displayed as mean + standard error of the mean (SEM). Bar graphs were constructed to display the osteoclast percentage of the total cells present in each group. "*" denotes statistical significance, where p is set to 0.05. All of the statistical analyses were conducted using Student's t-test followed by the Tukey-Kramer HSD test. All outliers were removed using Chauvenet's criterion.
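The counting, outlier-rejection, and significance-testing steps above can be sketched as follows. The per-image osteoclast percentages are hypothetical, and the code illustrates the stated criteria rather than the actual ImageJ/analysis workflow.

```python
# Sketch: per-image osteoclast percentages, Chauvenet outlier rejection,
# and a two-sample t-test (hypothetical counts standing in for ImageJ data).
import numpy as np
from scipy import stats
from scipy.special import erfc

def chauvenet(x):
    """Keep points whose expected number of equally extreme values is >= 0.5."""
    x = np.asarray(x, dtype=float)
    z = np.abs(x - x.mean()) / x.std(ddof=1)
    return x[x.size * erfc(z / np.sqrt(2.0)) >= 0.5]

rng = np.random.default_rng(4)
pct_rankl = rng.normal(60, 8, size=20)   # % osteoclasts per image, M-CSF + RANKL
pct_ctrl = rng.normal(15, 5, size=20)    # % osteoclasts per image, control

pct_rankl, pct_ctrl = chauvenet(pct_rankl), chauvenet(pct_ctrl)
t, p = stats.ttest_ind(pct_rankl, pct_ctrl)
print(f"t = {t:.2f}, p = {p:.2e}")
```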
RAW264.7 Cells Do Not Differentiate Readily into Osteoclasts

RAW264.7 cells are murine monocytes/macrophages that can be differentiated into osteoclasts. However, the extent of osteoclastogenesis is not clear, and this cell line may be utilized to observe osteoclast activity [16]. To determine if RAW264.7 cells could be differentiated into osteoclasts using 10 ng/mL of RANKL, a TRAP assay was utilized. Compared to the control group that was not stimulated with RANKL, the RANKL-stimulated group was not significantly higher, as only 2.5% of cells stained positive for TRAP (Figure 3).

Optimal Conditions for Osteoclastogenesis Utilizing M-CSF and RANKL

Because RAW264.7 cells did not differentiate readily into osteoclasts in the present study, a different model was explored to observe osteoclast activity. While mouse models are available, a human model may be more reliable for studying human osteoclastogenesis. Therefore, to obtain a different model for observing osteoclastogenesis, osteoclasts were extracted from the femoral heads of patients diagnosed with OA. The cells were then plated at various concentrations and treated with differing amounts of M-CSF or RANKL to determine the optimum conditions for osteoclastogenesis. The optimized conditions included plating pre-osteoclasts at a density of 1.5 × 10^5 cells/mL with 25 ng/mL M-CSF in α-MEM for 3 days. After 3 days, the cells were given fresh α-MEM with 25 ng/mL M-CSF and 50 ng/mL RANKL for 11 days, and the media were refreshed every 3-4 days. TRAP staining demonstrated that 98% of cells stained positive for TRAP (Figure 4D), whereas in the other conditions only 6% of the cells stained positive (Figure 4A).

Cells Isolated from Female and Male OA Patients Differentiate into Functional Osteoclasts

As both osteoclasts and macrophages can express TRAP, multinucleated cells must be identified to confirm the formation of an osteoclast. Here, TRAP and hematoxylin were utilized to measure the formation of osteoclasts. Furthermore, as OA affects both men and women, we conducted experiments with both genders.
To determine the effectiveness of differentiating cells isolated from female and male OA patients into osteoclasts, cells were treated with M-CSF and RANKL or with M-CSF only (control). As indicated by TRAP-positive and multinucleated cells, we demonstrated that ~60% of cells stimulated with RANKL differentiated into osteoclasts, whereas only ~15% of control cells differentiated into osteoclasts and most remained monocytes/macrophages (Figure 5). Furthermore, isolated cells from both male and female patients responded similarly to treatment.

Cells Isolated from Female and Male OA Patients Are Viable and Do Not Proliferate

Osteoclasts are terminally differentiated cells derived from HSCs. Furthermore, data demonstrating the viability of bone marrow cells isolated from human femoral heads are unclear. Therefore, to assess viability and terminal differentiation into osteoclasts, cells were incubated with Green Live/Dead stains and Calcein-AM-red-orange stains. After five days, the cells in the control, M-CSF only, and M-CSF + RANKL experimental groups did not exhibit proliferation (Figure 6A). Furthermore, as displayed by the very low expression of the Green Live/Dead stain, the cells were viable and healthy in each group (Figure 6B).

Figure 6 (caption, partial): Cells were counted for 5 days, and immunofluorescence was captured using an epifluorescence microscope. Calcein-AM is indicative of the total number of viable cells, whereas green indicates a nonviable cell. Scale bars are set to 10 µm. All experiments were conducted in triplicate and images were processed using ImageJ.

Osteoclasts Isolated from OA Patients Express TRAP and Cathepsin K

Osteoclasts express active enzymes that are responsible for degrading old or damaged bone. The most notable osteoclast markers are TRAP and Cathepsin K, as both are
The most notable osteoclast markers are TRAP and Cathepsin K, as both as are highly expressed during resorption. Here, the osteoclast markers TRAP and Cathepsin K were immunostained in cells stimulated with RANKL or were left unstimulated. Cells were imaged using confocal microscopy at 20× and 63× magnification ( Figure 7A,B). Similar to the TRAP stain, the RANKL experimental group contained~60% osteoclasts while the control group displayed~15% osteoclasts (data not shown). Furthermore, cells stimulated with RANKL expressed high levels of both TRAP and Cathepsin K, while this expression was minimal in the control cells ( Figure 7C,D). At 63× magnification, high levels of TRAP and Cathepsin K were expressed in distinct domains within the RANKLstimulated osteoclast, but this expression was seen very minimally in the control group, which is consistent with previous results ( Figure 7B) [49][50][51]. the control group displayed ~15% osteoclasts (data not shown). Furthermore, cells stimulated with RANKL expressed high levels of both TRAP and Cathepsin K, while this expression was minimal in the control cells ( Figure 7C,D). At 63× magnification, high levels of TRAP and Cathepsin K were expressed in distinct domains within the RANKL-stimulated osteoclast, but this expression was seen very minimally in the control group, which is consistent with previous results ( Figure 7B) [49][50][51]. Immunostaining of cells isolated from male (N = 2) and female (N = 1) OA patients. Confocal microscopy was utilized to image cells at 20× (A) and 63× (B) magnification. The population of cells stimulated with M-CSF + RANKL was ~60%, whereas the control group had ~15% osteoclasts. As displayed in the figure, M-CSF + RANKL stimulated highly expressed TRAP and Cathepsin K cells, while the control cells produced very little of these osteoclast markers (C, D). Cells from 10 representative images obtained at random from all three patients were counted. The RANKL stimulated cells expressed significantly higher TRAP and Cathepsin K than the control groups (C, D). Experiments were completed in triplicate and processed using ImageJ. All fluorescence was normalized to the secondary control. "*" denotes statistical significance, where p is set to 0.05. Figure 7. Immunostaining of cells isolated from male (N = 2) and female (N = 1) OA patients. Confocal microscopy was utilized to image cells at 20× (A) and 63× (B) magnification. The population of cells stimulated with M-CSF + RANKL was~60%, whereas the control group had~15% osteoclasts. As displayed in the figure, M-CSF + RANKL stimulated highly expressed TRAP and Cathepsin K cells, while the control cells produced very little of these osteoclast markers (C,D). Cells from 10 representative images obtained at random from all three patients were counted. The RANKL stimulated cells expressed significantly higher TRAP and Cathepsin K than the control groups (C,D). Experiments were completed in triplicate and processed using ImageJ. All fluorescence was normalized to the secondary control. "*" denotes statistical significance, where p is set to 0.05. Discussion Proper formation of the skeleton during development is crucial for mobility and the protection of organs. The formation of new bone and the degradation of old or damaged bone is regulated by mononuclear osteoblasts and multinucleated osteoclasts [1,52,53]. Irregular activities of these bone cells can lead to juvenile osteoporosis in childhood, or osteopenia and osteoporosis later in adulthood [5,54]. 
While it is suggested that these bone disorders may arise due to hyperactivity of osteoclasts, the function of these bone-resorbing cells isolated directly from humans is not established [1,18]. Therefore, there is an urgent need to establish a reliable method of isolating viable bone marrow cells that can be differentiated into osteoclasts. It has been demonstrated previously that RAW264.7 cells can differentiate into osteoclasts [16]. However, the applications and the extent of osteoclastogenesis of these cells are unclear. Here, we first demonstrated that RAW264.7 cells can be differentiated into osteoclasts, but only very minimally (Figure 3). When stimulated with RANKL, only 2.5% of these cells expressed TRAP, indicating that most cells remained monocytes/macrophages (Figure 3). These cells may continue proliferating into additional monocytes/macrophages, but do not readily terminally differentiate into osteoclasts at day 5, which could explain the minimal TRAP expression [55]. To differentiate cells into osteoclasts to study bone disorders, we utilized femoral heads from patients diagnosed with OA after undergoing total hip replacement surgery. Human femoral heads are readily available due to the increasing occurrence of hip replacement surgery. These femoral heads provide an opportunity to study osteoclastogenesis with cells isolated directly from the bone microenvironment. Isolating cells from this microenvironment provides a useful tool to study the functions of osteoclasts within their natural bone environment. We observed that removing the trabecular bone from the femoral heads and washing the interior surface provided cells that could be differentiated into osteoclasts [56,57]. While this method has been used in previous research, it was unclear whether these cells isolated directly from human bone could be differentiated into functional osteoclasts after being stimulated with M-CSF and RANKL [40,58,59]. Previous studies demonstrate that PBMCs can be obtained in large quantities to study osteoclastogenesis; however, utilizing cells directly from the bone marrow may be a useful alternative approach. Here, we provide optimal conditions for osteoclastogenesis that produced functional osteoclasts (Figure 4). This method provides an isolation technique directly from femoral heads that can produce enzymatically active osteoclasts, which may be applied to other bone disorders such as osteoporosis. While 98% of the isolated cells stained positive for TRAP, it has been illustrated in previous research that both monocytes/macrophages and osteoclasts can express this protein [51,60]. For nuclei identification, we utilized hematoxylin, which binds to DNA and stains nuclei blue [61,62]. In the current study, the population of cells stimulated with RANKL was 60% osteoclasts (Figure 5). This group displayed a significantly higher percentage of osteoclasts when compared to the control group (~15% osteoclasts; Figure 5). Notably, although not shown, there was no significant difference in osteoclast percentage between the genders (N = 10). Thus, it is suggested that the mechanisms of osteoclastogenesis in OA patients are not affected by gender. Moreover, these data suggest that cells isolated from human femoral heads can be differentiated into functional osteoclasts, and this method may be utilized to study their activity in OP. Finally, to assess the functionality of the differentiated osteoclasts, cells were immunostained for the osteoclastic markers TRAP and Cathepsin K [63].
Both enzymes are highly expressed during bone resorption and are key players in bone turnover [64]. We utilized confocal microscopy to obtain images of the cells stimulated with RANKL or left unstimulated. Here, we showed that distinct osteoclasts formed and expressed high levels of TRAP and Cathepsin K compared to the control groups (Figure 7). Furthermore, it has been demonstrated that monocytes/macrophages express low levels of TRAP and Cathepsin K, which is consistent with the current study (Figure 7A) [50,51,59,60]. These results indicate that cells isolated directly from human femoral heads can be treated with M-CSF and RANKL to become functional osteoclasts. In conclusion, the objective of the current study was to obtain a reliable model of osteoclastogenesis to study bone disorders. Our data demonstrated that cells isolated from human femoral heads are superior for studying osteoclastogenesis when compared to RAW264.7 cells (Figures 3 and 4). Furthermore, we demonstrated that cells isolated from human femoral heads became multinucleated and stained positive for TRAP (Figure 5). Isolated cells were viable and did not proliferate, indicating that the RANKL-stimulated cells were terminally differentiated into osteoclasts (Figure 6). Finally, RANKL-stimulated cells stained positive for both TRAP and Cathepsin K and contained more than two nuclei, indicating that they were functional osteoclasts (Figure 7). Taken together, our data provide a reliable method for obtaining functional osteoclasts from human femoral heads. While this study has provided an optimized method for generating osteoclasts from human femoral heads, future studies should delineate the resorptive activity of these cells to corroborate the Cathepsin K and TRAP expression. These data can assist future work in elucidating the precise role of osteoclasts in bone disorders, such as osteopenia and osteoporosis.
Development and Validation of an E2F-Related Gene Signature to Predict Prognosis of Patients With Lung Squamous Cell Carcinoma

Background

Lung squamous cell carcinoma (LUSC) generally correlates with poor clinical prognoses due to the lack of available prognostic biomarkers. This study was designed to identify a potential biomarker significant for the prognosis and treatment of LUSC, so as to provide a scientific basis for clinical treatment decisions.

Methods

Genomic changes in LUSC samples before and after radiation were first examined to identify the E2 factor (E2F) pathway as having prognostic significance. A series of bioinformatics analyses and statistical methods were combined to construct a robust E2F-related prognostic gene signature. Furthermore, a decision tree and a nomogram were established according to the gene signature and multiple clinicopathological characteristics to improve risk stratification and quantify risk assessment for individual patients.

Results

In our investigated cohorts, the E2F-related gene signature we identified was capable of predicting clinical outcomes and therapeutic responses in LUSC patients and was discriminative in identifying high-risk patients. Survival analysis suggested that the gene signature was independently prognostic for adverse overall survival of LUSC patients. The decision tree confirmed the strong discriminative performance of the gene signature in risk stratification for overall survival, while the nomogram demonstrated high accuracy.

Conclusion

The E2F-related gene signature may help distinguish high-risk patients so as to formulate personalized treatment strategies for LUSC patients.

INTRODUCTION

Lung cancers remain the leading cause of cancer-related death worldwide (1). Non-small cell lung cancer (NSCLC) is the predominant subtype of lung cancer, accounting for approximately 85% of cases, of which more than 30% are lung squamous cell carcinomas (LUSC) (2). LUSC, as compared with lung adenocarcinoma (LUAD), correlates with more adverse clinical prognoses, and there is a lack of available targeted drugs. Radiotherapy and chemotherapy are traditional treatment strategies (3), but there is a high risk of treatment failure in patients with advanced LUSC due to the development of treatment resistance (4). Despite the fact that immunotherapy has shown great potential in the treatment of LUSC over the past years, it brings benefits to only a limited population (5). It was reported that the 5-year overall survival (OS) rate in patients with stage I/II LUSC was about 40%, and even as low as 5% when stage III/IV LUSC was present (6). Currently, basic biomarkers and precise targets for the prognosis and treatment of LUSC are still unclear. In this setting, further research into potential prognostic biomarkers of LUSC is required, so as to provide better prognostic prediction and individualized treatment. Similar to many other carcinomas, the initiation and progression of LUSC are closely related to dysregulation of the cell cycle (7,8). The timing of the cell to proliferate, to enter a reversible quiescent phase, to differentiate, or to undergo apoptosis is controlled by the cell cycle clock apparatus (9). Dysregulation of the cell cycle process is a necessary step in malignant transformation (10). The E2 factor (E2F) pathway is a major pathway involved in the cell cycle in mammals, and the E2F family of transcription factors plays various biological roles including cell cycle control (11).
Research has found that the cell cycle-related E2F genes are significantly associated with the prognosis of lung cancer patients and provide a potential therapeutic strategy (12). Nevertheless, to our knowledge, there has been no study reporting the discriminative role of the E2F family in identifying high-risk LUSC. In this study, we explored the genomic changes in LUSC samples before and after radiotherapy to identify the E2F pathway as a potential risk factor for prognosis in LUSC patients. An E2F-related prognostic gene signature was then established and further validated in additional independent cohorts. Finally, a decision tree and a nomogram were established according to the gene signature and multiple clinicopathological characteristics to improve risk stratification and quantify risk assessment for individual patients.

Data Processing The microarray dataset GSE42172, which contained paired normal A549 lung cancer cells (n = 6) and radiation-exposed A549 cells (n = 6), was selected to explore the genomic changes before and after radiation. Also, the clinical annotations and follow-up information of 916 LUSC patients across different platforms were included in this study. The datasets GSE29013, GSE30219, and GSE37745, all generated on the Affymetrix Human Genome U133 Plus 2.0 Array (GPL570), were downloaded, and the expression data of these datasets were integrated using the R package ComBat to eliminate batch effects (Supplementary Figures S1A, B). After integration, the 166 patients in this cohort were enrolled as the training set. The datasets GSE14814, GSE17710, GSE42127, and GSE74777 from different platforms were used as validation set 1 after integration using the ComBat package; this set contained 266 patients. In addition, RNA-Seq data in FPKM for 499 patients who met the criteria were obtained from TCGA, and the expression data were taken as validation set 2 after normalization to transcripts per kilobase million (TPM).

Signature Establishment Gene set variation analysis (GSVA) was conducted to evaluate changes in cancer biomarkers obtained from the Molecular Signatures Database (MSigDB) before and after radiotherapy in the dataset GSE42172 (13). Markers with significant changes in the training set (t > 1) were quantified using single-sample gene set enrichment analysis (ssGSEA) (14). A univariate Cox proportional hazards (COX-PH) regression model was utilized to assess the prognostic value of diverse cancer biomarkers for LUSC patients. Multiscale embedded gene coexpression network analysis (MEGENA) (15), an R package with performance superior to conventional coexpression network analysis, was performed on the genes with standard deviation >0.9, and the planar filtered network (PFN) was plotted based on the gene expression correlations. A LUSC-specific gene network composed of interconnected subnetworks or modules was constructed using the multiscale clustering method, and the module feature genes were identified using the moduleEigengenes R function to calculate the correlation between the modules and the E2F signaling pathway and to determine the most relevant module. With a COX-PH p-value <0.05 as the threshold, 53 candidate genes from the E2F-related module were screened out. Then, a least absolute shrinkage and selection operator (LASSO) regression model was employed to further screen reliable prognostic indicators (16).
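As a loose illustration of this two-stage screening, the sketch below uses the Python lifelines package as a stand-in for the R survival/glmnet workflow described above: univariate Cox filtering at p < 0.05 followed by an L1-penalized Cox fit whose non-zero coefficients define the signature. Column names, the penalizer value, and the coefficient threshold are assumptions for the example, and the 10-fold cross-validated choice of the penalty used in the paper is omitted.

```python
# Minimal sketch of univariate Cox screening + LASSO-like Cox selection.
import pandas as pd
from lifelines import CoxPHFitter

def screen_candidates(df, genes, duration_col="time", event_col="event"):
    """Keep genes whose univariate Cox p-value is below 0.05."""
    kept = []
    for g in genes:
        cph = CoxPHFitter()
        cph.fit(df[[g, duration_col, event_col]],
                duration_col=duration_col, event_col=event_col)
        if cph.summary.loc[g, "p"] < 0.05:
            kept.append(g)
    return kept

def lasso_cox(df, genes, duration_col="time", event_col="event", penalizer=0.07):
    """L1-penalized Cox fit; genes with non-zero coefficients form the signature."""
    cph = CoxPHFitter(penalizer=penalizer, l1_ratio=1.0)  # pure L1 penalty
    cph.fit(df[genes + [duration_col, event_col]],
            duration_col=duration_col, event_col=event_col)
    coefs = cph.params_
    return coefs[coefs.abs() > 1e-8]  # surviving (non-zero) coefficients
```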
The standardized gene expression values weighted by the corresponding LASSO coefficients were combined, and a risk score related to the E2F signaling pathway, the E2F-related score (ERS), was established as follows: $\mathrm{ERS} = \sum_{i=1}^{n} \beta_i \times \mathrm{Expr}_i$, where $\beta_i$ is the LASSO coefficient of the $i$-th signature gene and $\mathrm{Expr}_i$ is its standardized expression value.

Bioinformatics and Statistics GSEA was implemented to verify the enrichment of the E2F signaling pathway in the high-ERS group using the E2F-target gene set from MSigDB (17). Data analysis and graph plotting were carried out using R software (version 4.0.4, http://www.r-project.org). The survival analysis was completed with the Kaplan-Meier method along with the log-rank test. Additionally, the prognostic value of each parameter for OS was evaluated using a COX-PH model. A time-dependent receiver operating characteristic (tROC) curve was drawn to assess the predictive value of ERS with the help of the R package "survivalROC," followed by comparison of the areas under the curve at different time points (AUC(t)). Meta-analysis (I² < 30%, fixed-effects model) was carried out to assess the prognostic significance in the merged cohort. Afterwards, consensus clustering of patients was conducted using the R package "ConsensusClusterPlus" based on the expression of the candidate genes, thereby evaluating the discriminative performance of the candidate genes (18). A decision-making tree was created for risk stratification with recursive partitioning analysis (RPA) using the R package rpart (19). Two independent datasets, IMvigor210 and a dataset containing 47 melanoma responders to immunotherapy, were downloaded and analyzed (20). The IMvigor210 dataset was derived from the freely available, fully documented software and data package under the Creative Commons Attribution 3.0 license from http://research-pub.gene.com/IMvigor210CoreBiologies. A total of 298 patients with urothelial carcinoma who had complete clinical data and 47 patients with skin melanoma who had undergone immunotherapy were integrated to identify the value of ERS for immunotherapy. The Tumor Immune Dysfunction and Exclusion (TIDE) algorithm was utilized to evaluate the value of ERS in clinical immunotherapy. The R package "rms" was utilized to draw the nomogram and calibration curves (21). Decision curve analysis (DCA) was carried out with the DCA package (22), and the Wilcoxon test was used to test differences between two groups. Differences among multiple groups were examined by the Kruskal-Wallis test, and differences among categorical data were assessed by the Chi-square test.

Workflow of the Study First, E2F was one of the significantly changed pathways after radiation, and the E2F signaling pathway was demonstrated to be the main risk factor for the prognosis of LUSC patients (Figure 1A). Then, MEGENA, univariate COX-PH, and LASSO analyses were conjunctively employed to filter candidates and to construct an E2F-related gene signature of survival significance (Figure 1B), which was further assessed using the training set and the two external validation sets. Additionally, its prognostic capability was verified, and the response to treatment was evaluated by meta-analysis to determine its potential as a promising prognostic marker (Figure 1C). Finally, a decision tree was established to improve risk stratification, along with a nomogram generated to quantify the risk evaluation and survival probability of individuals on the basis of ERS and multiple clinicopathological characteristics (Figure 1D).
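As a concrete illustration of the ERS definition above, the sketch below computes the weighted sum of standardized expression values and dichotomizes samples at the median, as done for the group comparisons in the paper. The gene names and coefficient values are hypothetical placeholders, not the paper's fitted values.

```python
# Minimal sketch of the ERS computation: ERS_i = sum_g beta_g * z_{i,g}.
import numpy as np
import pandas as pd

coefs = pd.Series({"GENE_A": 0.21, "GENE_B": -0.08, "GENE_C": 0.35})  # hypothetical

def ers(expr: pd.DataFrame, coefs: pd.Series) -> pd.Series:
    """expr: samples x genes expression matrix (log-scale)."""
    cols = coefs.index
    z = (expr[cols] - expr[cols].mean()) / expr[cols].std()  # standardize per gene
    return z @ coefs  # weighted sum over signature genes

# Median split into high- and low-ERS groups:
# risk = ers(expr, coefs); high = risk > risk.median()
```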
The E2F Signaling Pathway Is a Major Risk Factor for Radiotherapy Response in LUSC Analysis of the radiation dataset GSE42172 showed that 18 cancer-related pathways were markedly changed after radiation (t > 1), of which two pathways, including the E2F signaling pathway, were notably downregulated, and 16 pathways, including p53 signaling, were notably upregulated (Figure 2A). According to the ssGSEA scores of the 18 changed pathways and the OS data in the training set, each pathway was assigned a Cox coefficient. Accordingly, the E2F signaling pathway exerted a greater effect on survival than other cancer-related pathways (such as cell cycle, signal transduction, EMT, angiogenesis, and apoptosis) (Figure 2B). During the follow-up period, remarkably higher E2F ssGSEA scores were observed in the patients who died as compared with the surviving patients (Figure 2C). In the training set, two groups were formed according to the median E2F ssGSEA score. The results showed a lower OS rate (Figure 2D) and a shorter average survival time (Figure 2E) in the high-score group.

Establishment of the E2F-Associated Prognostic Gene Signature In the training set, MEGENA analysis was conducted with whole-transcriptome profiling data and the E2F ssGSEA score. We observed a minimum error rate of the model when scale = 7 (Supplementary Figures S2A-D). A LUSC-specific gene network with 70 modules was generated (Supplementary Figure S3A). Among these modules, module 25 and its submodule 71 shared the closest association with the E2F ssGSEA score (r = 0.52, p = 5e−13 / r = 0.53, p = 2e−13) (Supplementary Figures S3B and S4A). The genes extracted from modules 25 and 71 were subjected to univariate COX-PH analysis, and 53 promising candidate factors (47 risk factors and six protective factors) were identified with the threshold of p < 0.05 (Supplementary Figure S3C). Next, the LASSO regression model was utilized to determine the most reliable prognostic factors. Using 10-fold cross-validation to avoid overfitting, the optimal λ value (0.06779023) was selected (Supplementary Figures S3D and S4B). The remaining 11 genes each had non-zero coefficients (Figure 3A). Finally, the ERS was calculated for each patient according to the formula defined above.

ERS Is a Risk Factor for OS in Each Set In the training set, most risk factors exhibited positive correlations with the E2F transcription factors (Figure 3B). With the E2F target gene set from MSigDB, GSEA results demonstrated more abundant enrichment of the E2F signaling pathway in the high-ERS group (Figure 3C). The patients who died during the follow-up period exhibited notably higher ERS compared with the surviving patients (Figure 3D), and the patients in the high-ERS group showed markedly poorer survival (Figure 3E). Results of Kaplan-Meier analysis showed worse prognoses for patients with higher ERS scores versus those with lower scores (Figure 3F). Among a variety of clinicopathological variables, the multivariate COX-PH model identified the American Joint Committee on Cancer (AJCC) TNM staging and ERS as two independent risk factors for OS in the training set. In addition, tROC analysis demonstrated ERS to be the most accurate predictive biomarker for OS (Figure 3G). Furthermore, the patients were assigned into two groups by consensus clustering with the optimal k value as the threshold, and these groups showed remarkably different prognoses, indicative of the good potential of the ERS to distinguish patients with different prognostic risks (Figure 4I and Supplementary Figure S5A).
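A minimal sketch of the Kaplan-Meier comparison between high- and low-ERS groups described above, again using the Python lifelines package as a stand-in for the R workflow; the column names are illustrative assumptions.

```python
# Median-split survival comparison with Kaplan-Meier curves and a log-rank test.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_groups(df, score_col="ERS", time_col="time", event_col="event"):
    high = df[score_col] > df[score_col].median()
    for label, mask in [("high ERS", high), ("low ERS", ~high)]:
        km = KaplanMeierFitter()
        km.fit(df.loc[mask, time_col], df.loc[mask, event_col], label=label)
        km.plot_survival_function()
    res = logrank_test(df.loc[high, time_col], df.loc[~high, time_col],
                       df.loc[high, event_col], df.loc[~high, event_col])
    return res.p_value  # log-rank p-value between the two risk groups
```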
To validate the prognostic robustness of the E2F-associated gene signature in diverse sets, two external sets were selected for validation. Similarly, in validation sets 1 and 2, greater enrichment of the E2F signaling pathway was verified in the high-ERS group with the E2F target gene set by GSEA (Figures 4A, B). The patients who died had noticeably higher ERS than the surviving patients in validation set 1, yet no significant difference was noted in validation set 2 (Figures 4C, D). The patients with high scores had markedly poorer survival (Figures 4E, F). The results of the Kaplan-Meier analysis further revealed that the OS rate predicted by high ERS was lower than that predicted by low ERS (Figures 4G, H). Each cohort was grouped into different subtypes by consensus clustering with the optimal k value as the threshold, and the prognosis differed between subtypes (Figures 4J, K and Supplementary Figures S5B, C). In addition, multivariate COX-PH analysis suggested that ERS was independently prognostic for adverse OS (Figure 4L).

ERS Indicates Poor Survival in the Pooled Cohort and Can Be a Potential Biomarker for Therapeutic Resistance Meta-analysis was conducted to assess the prognostic significance of the E2F-related gene signature in the pooled cohort of one training set and two validation sets. Consequently, patients with high ERS showed worse prognoses than patients with low ERS (Figure 5A). In total, 916 patients from the three sets were integrated for further investigation. The ERS was significantly upregulated in patients who died during follow-up, and was even higher in those with a shorter survival time (Figure 5B). ERS could also distinguish high-risk patients suffering adverse outcomes within subgroups defined by different clinicopathological characteristics, including gender, age, and TNM stage (Figure 5C). Considering that the E2F signaling pathway may enhance resistance to treatment, we probed whether ERS is a biomarker of therapeutic resistance. GSEA predicted that higher ERS was strikingly correlated with resistance to diverse treatments (such as chemotherapy, radiotherapy, and targeted therapies) (Figure 6A). Subsequently, therapeutic information and clinical outcomes were downloaded from TCGA to verify the prediction. Following primary surgical treatment, the ratio of patients with progressive disease to patients with partial remission or stable disease was prominently higher in the high-ERS group than in the low-ERS group (Figure 6B). Subsequently, we assessed the value of the ERS in predicting the therapeutic outcomes of patients. To this end, patients receiving anti-PD-L1 immunotherapy in the IMvigor210 cohort were assigned into high-score and low-score subgroups. It is worth noting that in the IMvigor210 cohort, the patients with low ERS had significantly longer survival times than those with high ERS (Figure 6C). In addition, lower ERS was associated with an objective response to anti-PD-L1 treatment (Figure 6D), and the objective response rate of anti-PD-L1 treatment was higher in the low-ERS group than in the high-ERS group (Figure 6E). The SubMap module in GenePattern was utilized for evaluation and comparison of the patients in the training set against the 47 responders to immunotherapy. As compared with the high-score group, anti-CTLA-4 treatment was more effective for the low-score group (p = 0.036) (Figure 6F).
With the response to immunotherapy predicted by the TIDE algorithm, the low-score group was more likely to respond to immunotherapy, although the difference between the two groups did not reach significance (Figure 6G) (Chi-square test, p > 0.05).

The Combination of ERS and Clinicopathological Characteristics Contributes to Improving Risk Stratification and Survival Prediction Four parameters were available for the 916 LUSC patients, namely age, gender (male or female), TNM stage, and ERS. After risk stratification using the decision tree, only the TNM stage and ERS remained in the decision tree, and three different risk subgroups were identified (Figure 7A). It was noteworthy that the ERS was the optimal stratification factor. The OS rates showed noticeable differences among these three risk subgroups (Figure 7B). Multivariate COX-PH analysis indicated ERS to be the optimal prognostic indicator (Figure 7C). In order to quantify the risk assessment and survival probability of LUSC patients, a nomogram was generated using ERS and other clinicopathological characteristics (Figure 7D). According to the calibration analysis, the 1-, 3-, and 5-year survival probabilities predicted by the nomogram closely approached the ideal results (Figure 7E), indicating high accuracy of the nomogram. Furthermore, the 3-year DCA revealed that the nomogram had the optimal decision benefit at most thresholds (Figure 7F). In comparison with other characteristics, the nomogram exhibited the most powerful and stable capability for predicting survival, with an average area under the curve above 0.6, considerably superior to pathological TNM staging (Figure 7G).

DISCUSSION Surgery is the main treatment strategy for NSCLC, with chemoradiotherapy, targeted therapy, and immunotherapy as adjuvants (23). However, it is estimated that more than 85% of patients with NSCLC have missed the optimal window for surgical treatment by the time of first diagnosis, and only 25% to 30% can be treated by traditional surgical resection (24). With the continuous development of computer technology, radiobiology, and functional imaging in recent years, radiotherapy has shown considerable advantages in the treatment of patients with locally advanced NSCLC (25). Existing research has shown that radiotherapy is safe and effective for patients with stage I NSCLC; hence, radiotherapy, rather than surgery, is the primary choice for patients with early lung cancer who are elderly or have poor cardiopulmonary function (26). Due to the demands of precision medicine, the importance of radiotherapy has been highlighted, but sensitivity to radiotherapy is a limiting factor for its therapeutic effect (27). Besides, few reports focus on the changes in pathways before and after radiotherapy for NSCLC. Identification of biomarkers to estimate the prognosis of patients electing to receive radiotherapy is of importance in the clinical management of NSCLC (28). The E2F transcription factor family plays a crucial role in regulating cell cycle progression, and the E2F-RB1 pathway is dysregulated in approximately 90% of lung cancers (29). It has been uncovered that enhanced E2F activity contributes to the activation of nAChR (encoded by CHRNA5) by its ligands (such as nicotine) in neurons, thereby promoting radioresistance by facilitating cell cycle progression (30). Radiotherapy is commonly used in the clinical treatment of LUSC, so we aimed to identify whether the E2F signaling pathway can serve as a prognostic indicator of LUSC.
In this study, we first identified the E2F pathway as the most markedly changed pathway after radiation using the GSVA algorithm in the GSE42172 dataset. We then applied Cox regression to all of the changed pathways together with the clinical data in the training set, and found that the E2F pathway was the best prognostic factor. Therefore, we chose the E2F pathway for subsequent analysis. MEGENA was performed to identify LUSC-specific E2F-related gene modules based on whole-transcriptome profiling data, and then univariate Cox and LASSO regression models were used to screen prognostic biomarkers, which were used to establish an E2F-related gene signature of prognostic value. A risk scoring system based on the signature, called ERS here, was then constructed. Survival analysis identified ERS as a risk factor for the OS of patients in each cohort, and a higher ERS was associated with a worse survival outcome. The prognostic value of the gene signature was further validated in two independent cohorts derived from different platforms. In the meta-analysis and subgroup analyses, ERS remained capable of discriminating high-risk patients, suggesting that the performance of ERS is reliable in pooled populations and similar-stage subgroups. In the adjuvant therapy subgroups, patients with higher ERS suffered worse survival outcomes compared with those with lower ERS. Patients with lower ERS gained more benefit from anti-CTLA-4 and anti-PD-L1 treatments, which might be associated with gene signature-derived resistance to therapies, indicating the potential role of the gene signature as a promising marker of therapeutic resistance in LUSC patients. Moreover, a decision tree combining the ERS and multiple clinicopathological characteristics was constructed to improve risk stratification. We found that only the TNM stage and ERS remained in the decision tree, and three different risk subgroups were identified. Among the three subgroups, significant differences were noted regarding OS. The ERS was identified as the predominant discriminative factor, which was further validated by the multivariate COX-PH analysis. These findings collectively suggest that the E2F-related gene signature is potentially a powerful risk factor for the OS of LUSC patients. In subsequent work, a nomogram was generated to quantify the risk assessment for individual patients, incorporating the ERS and other clinicopathological characteristics. On calibration curves, the predicted results closely approached the actual outcomes, indicative of the high accuracy of the nomogram in prognosis prediction. In addition, tROC analysis demonstrated that the nomogram performed best in survival prediction at different time points during follow-up, as compared with other variables. Of the biomarkers involved in the gene signature, some have been studied in many cancers, while most of them are rarely investigated in LUSC. It has been proven that E2F-related genes have great implications in the cell cycle, proliferation, differentiation, and apoptosis, and they are regarded as determinants of the timing of the G1/S transition. An animal experiment demonstrated that increased expression of E2F activators may result in upregulation of E2F target genes and a risk of spontaneous cancer formation. Studies have reported the dysregulated expression of E2F activators in multiple human malignancies, such as bladder, breast, ovarian, prostate, gastrointestinal, and lung cancers.
Although high levels of E2F activators and their associations with clinicopathological characteristics and prognosis have been partly reported in human NSCLC, to the best of our knowledge, their role in LUSC has not been explored. In this setting, we here developed a risk scoring system, ERS, to improve prediction of the survival of LUSC patients, and further validated its performance in external independent cohorts, where it outperformed conventional immunotherapeutic biomarkers. The retrospective nature of our study is an inevitable limitation. Although we included as many datasets as possible for rigorous validation and combined multiple approaches to reduce batch effects, sampling bias caused by tumor genetic heterogeneity and cross-platform integration could only be reduced, not completely eliminated. Meanwhile, further experimental studies are required to elucidate the tumor E2F-related biological functions underlying the gene signature in LUSC.

CONCLUSION To sum up, a novel E2F-related gene signature was established here to discriminate high-risk LUSC patients with radioresistance. Combining multiple clinicopathological characteristics, a decision tree and a nomogram were further built, respectively, to optimize risk stratification for OS and to quantify risk assessment for individual patients. The E2F-related gene signature could provide a useful tool to distinguish high-risk LUSC patients with radioresistance who may benefit from adjuvant therapies, thus facilitating personalized management.

DATA AVAILABILITY STATEMENT Publicly available datasets were analyzed in this study. These data can be found here: GSE42172, GSE29013, GSE30219, GSE37745, GSE14814, GSE17710, GSE42127, and GSE74777. In addition, RNA-Seq data in FPKM for 499 patients who met the criteria were obtained from TCGA, and the expression data were taken as validation set 2 after normalization to transcripts per kilobase million (TPM). AUTHOR CONTRIBUTIONS CW conceived and designed the whole project and drafted the manuscript. XG and XZ analyzed the data and wrote the manuscript. MZ carried out data interpretation and helped with data discussion. YC provided specialized expertise and collaboration in data analysis. All authors contributed to the article and approved the submitted version.
2021-10-22T13:23:41.668Z
2021-10-22T00:00:00.000
{ "year": 2021, "sha1": "c598f83a4f087b157a60b1962a40ff10f7f06d7e", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2021.756096/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c598f83a4f087b157a60b1962a40ff10f7f06d7e", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
88520113
pes2o/s2orc
v3-fos-license
An efficient particle-based online EM algorithm for general state-space models Estimating the parameters of general state-space models is a topic of importance for many scientific and engineering disciplines. In this paper we present an online parameter estimation algorithm obtained by casting our recently proposed particle-based, rapid incremental smoother (PaRIS) into the framework of online expectation-maximization (EM) for state-space models proposed by Cappé (2011). Previous such particle-based implementations of online EM typically suffer from either the well-known degeneracy of the genealogical particle paths or a quadratic complexity in the number of particles. However, by using the computationally efficient and numerically stable PaRIS algorithm for estimating smoothed expectations of time-averaged sufficient statistics of the model, we obtain a fast algorithm with very limited memory requirements and a computational complexity that grows only linearly with the number of particles. The efficiency of the algorithm is illustrated in a simulation study.

Introduction This paper deals with the problem of online parameter estimation in general state-space models (SSM) using sequential Monte Carlo (SMC) methods and an expectation-maximization (EM) algorithm. SSMs, which are also referred to as general hidden Markov models, are currently applied within a wide range of engineering and scientific disciplines; see e.g. [3, Chapter 1] and the references therein. In its most basic form, as proposed by [5], the EM algorithm, which is widely used for estimating model parameters in SSMs, is an offline algorithm in the sense that every recursive parameter update requires the processing of a given batch of data. When the batch is very large or when data is received only gradually in a stream, this approach may be slow and even impractical. In such a case, using an online EM algorithm is attractive, since it generates a sequence of parameter estimates converging towards the true parameter by processing the data recursively in a single sweep. The algorithm we propose is a hybrid of the online EM algorithm proposed by [1] and the efficient particle-based online smoothing algorithm suggested recently by [11]. In contrast to previous algorithms of the same type (see, e.g., [4]), which have a quadratic computational complexity in the number of particles, our algorithm stays numerically stable with a complexity that grows only linearly with the number of particles.

Preliminaries We will always assume that all distributions admit densities with respect to suitable dominating measures, and we will also assume that all functions are bounded and measurable. In SSMs, an unobservable Markov chain $\{X_t\}_{t \in \mathbb{N}}$ (the state process), taking values in some space $\mathsf{X}$ and having transition density $q_\theta$ and initial distribution $\chi$, respectively, is only partially observed through an observation process $\{Y_t\}_{t \in \mathbb{N}}$, whose values are, conditionally on the states, independent with emission density $g_\theta$. The likelihood of the observations $y_{0:T}$ is then
$$L_\theta(y_{0:T}) = \int \chi(x_0)\, g_\theta(x_0, y_0) \prod_{t=1}^{T} g_\theta(x_t, y_t)\, q_\theta(x_{t-1}, x_t)\, dx_{0:T}.$$
Unless we are operating on a linear Gaussian model or a model with a finite state space $\mathsf{X}$, this likelihood is intractable and needs to be approximated. If we wish to infer a subset of the hidden states given the observations, the optimal choice of distribution is the conditional distribution $\phi_{s:s'|T;\theta}$ of $X_{s:s'}$ ($s \le s'$) given the observations $Y_{0:T}$. This is obtained by marginalizing, over the remaining states, the conditional density of $X_{0:T}$ given $Y_{0:T} = y_{0:T}$, which is proportional to $\chi(x_0)\, g_\theta(x_0, y_0) \prod_{t=1}^{T} g_\theta(x_t, y_t)\, q_\theta(x_{t-1}, x_t)$. We refer to $\phi_{T;\theta} = \phi_{T|T;\theta}$ as the filter distribution and to $\phi_{0:T|T;\theta}$ as the joint smoothing distribution.
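As a running example for the sketches that follow, the snippet below simulates a trajectory from an SSM of the kind just defined, using the linear Gaussian parameterization of Example 1 below; the particular parameter values and the standard Gaussian initial distribution are illustrative assumptions.

```python
# Simulate a trajectory (x_0:T, y_0:T) from a linear Gaussian SSM.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ssm(T, a=0.8, sigma_v=0.4, sigma_u=0.9, x0_std=1.0):
    x = np.empty(T + 1); y = np.empty(T + 1)
    x[0] = x0_std * rng.standard_normal()          # X_0 ~ chi (assumed Gaussian)
    y[0] = x[0] + sigma_u * rng.standard_normal()  # Y_0 | X_0 via g_theta
    for t in range(T):
        x[t + 1] = a * x[t] + sigma_v * rng.standard_normal()  # q_theta
        y[t + 1] = x[t + 1] + sigma_u * rng.standard_normal()  # g_theta
    return x, y
```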
The EM algorithm computes the maximum-likelihood estimator starting from some initial guess $\theta_0$ of the same. It proceeds recursively in two steps. First, in the E-step, given some parameter estimate $\theta_i$, it computes the intermediate quantity
$$Q(\theta, \theta_i) = \mathbb{E}_{\theta_i}\left[\log p_\theta(X_{0:T}, Y_{0:T}) \mid Y_{0:T}\right],$$
where $\mathbb{E}_{\theta_i}$ denotes the expectation under the dynamics determined by the parameter $\theta_i$; in the second step, the M-step, it updates the parameter fit according to $\theta_{i+1} = \arg\max_\theta Q(\theta, \theta_i)$. Under weak assumptions, repeating this procedure produces a sequence of parameter estimates that converges to a stationary point of the likelihood. If the joint distribution of the SSM belongs to an exponential family, then the intermediate quantity may be written as
$$Q(\theta, \theta_i) = \langle \phi(\theta), \phi_{0:T|T;\theta_i}(s_T) \rangle + c(\theta),$$
where $\langle \cdot, \cdot \rangle$ denotes the scalar product, $\phi(\theta)$ and $c(\theta)$ are known functions, and $s_T$ is a (vector-valued) sufficient statistic of additive form. Given the smoothed sufficient statistic $\phi_{0:T|T;\theta_i}(s_T)$, the M-step of the algorithm can typically be expressed in closed form via some update function $\theta_{i+1} = \Lambda(\phi_{0:T|T;\theta_i}(s_T)/T)$. Hence, being able to compute such smoothed expectations is in general crucial when casting SSMs into the framework of EM. As mentioned, the sufficient statistic $s_T(x_{0:T})$ is of additive form, i.e.,
$$s_T(x_{0:T}) = \sum_{t=0}^{T-1} \tilde{s}_t(x_{t:t+1}),$$
where all functions are possibly vector-valued. We denote vector components using superscripts, i.e., $\tilde{s}_t(x_{t:t+1}) = (\tilde{s}_t^1(x_{t:t+1}), \ldots, \tilde{s}_t^m(x_{t:t+1}))$. Example 1 Consider the linear Gaussian SSM
$$X_{t+1} = a X_t + \sigma_V V_{t+1}, \qquad Y_t = X_t + \sigma_U U_t,$$
where $\{V_t\}_{t \in \mathbb{N}}$ and $\{U_t\}_{t \in \mathbb{N}}$ are mutually independent sequences of independent standard Gaussian variables. The parameters of this model are $\theta = (a, \sigma_V^2, \sigma_U^2)$, and the model belongs to an exponential family with sufficient statistics built from the sums $\sum_t X_t^2$, $\sum_t X_t X_{t+1}$, and $\sum_t (Y_t - X_t)^2$. In terms of the time-averaged smoothed statistics $s^1, s^2, s^3, s^4$ (the averages of $X_t^2$, $X_t X_{t+1}$, $X_{t+1}^2$, and $(Y_{t+1} - X_{t+1})^2$, respectively), the M-step update function $\Lambda$ then returns, in closed form, $a = s^2 / s^1$, $\sigma_V^2 = s^3 - (s^2)^2 / s^1$, and $\sigma_U^2 = s^4$. When computing smoothed expectations of additive form it is advantageous to use the backward decomposition of the joint smoothing distribution. This decomposition comes from the fact that the state process is, conditionally on the observations, still Markov in the forward as well as the backward direction. The backward kernel $\overleftarrow{q}_{\phi_{t;\theta};\theta}$, i.e., the distribution of $X_t$ conditionally on $X_{t+1}$ and $Y_{0:t}$, is given by
$$\overleftarrow{q}_{\phi_{t;\theta};\theta}(x_{t+1}, x_t) = \frac{\phi_{t;\theta}(x_t)\, q_\theta(x_t, x_{t+1})}{\int \phi_{t;\theta}(x)\, q_\theta(x, x_{t+1})\, dx}.$$
Using the backward kernel, the joint smoothing distribution may be written as
$$\phi_{0:T|T;\theta}(x_{0:T}) = \phi_{T;\theta}(x_T) \prod_{t=0}^{T-1} \overleftarrow{q}_{\phi_{t;\theta};\theta}(x_{t+1}, x_t).$$
The backward decomposition can also be used effectively when estimating expectations of additive form; indeed, defining the auxiliary functions $\tau_t$ as the conditional expectations of the time-averaged statistics given $X_t$ and the observations, they satisfy the recursion
$$\tau_{t+1}(x_{t+1}) = \int \left( (1 - \gamma_{t+1})\, \tau_t(x_t) + \gamma_{t+1}\, \tilde{s}_t(x_{t:t+1}) \right) \overleftarrow{q}_{\phi_{t;\theta};\theta}(x_{t+1}, x_t)\, dx_t, \qquad (1)$$
where $\gamma_t = t^{-1}$. The recursion is initialized by $\tau_0 \equiv 0$. The fundamental idea of the online EM algorithm is to update sequentially the smoothed sufficient statistics using (1) and to plug these quantities into the updating procedure $\Lambda$. In practice one uses, rather than $\gamma_t = t^{-1}$, some step size $\gamma_t$ satisfying the regular stochastic approximation requirements $\sum_t \gamma_t = \infty$ and $\sum_t \gamma_t^2 < \infty$. Nevertheless, since the backward kernel involves the filter distributions, the recursion (1) cannot be computed in closed form, and we are hence forced to approximate the same. We will use particle methods for this purpose.
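To make the example concrete, here is a minimal sketch of one consistent choice of the per-transition statistic $\tilde{s}_t$ and the M-step map $\Lambda$ for Example 1. The component ordering and the use of time-averaged statistics are conventions of this sketch, not necessarily those of the paper.

```python
# Additive statistics and closed-form M-step for the linear Gaussian SSM.
import numpy as np

def s_tilde(x_t, x_tp1, y_tp1):
    # Per-transition statistic; ordering matches the unpacking in Lambda below.
    return np.array([x_t ** 2, x_t * x_tp1, x_tp1 ** 2, (y_tp1 - x_tp1) ** 2])

def Lambda(s):
    s1, s2, s3, s4 = s
    a = s2 / s1                   # least-squares slope of x_{t+1} on x_t
    sigma_v2 = s3 - s2 ** 2 / s1  # residual variance of the state equation
    sigma_u2 = s4                 # observation noise variance
    return a, sigma_v2, sigma_u2
```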
Particle Methods A particle filter updates sequentially, using importance sampling and resampling techniques, a set $\{(\xi_t^i, \omega_t^i)\}_{i=1}^N$ of particles and associated weights targeting a sequence of distributions. In our case we will use the particle filter to target the filter distribution flow $\{\phi_{t;\theta}\}_{t \in \mathbb{N}}$ in the sense that
$$\sum_{i=1}^N \frac{\omega_t^i}{\Omega_t}\, \delta_{\xi_t^i} \approx \phi_{t;\theta}, \qquad \text{where } \Omega_t = \sum_{i=1}^N \omega_t^i.$$
Notice that the particles and weights depend on the parameters even though this is implicit in the notation. In the bootstrap particle filter, the sample $\{(\xi_t^i, \omega_t^i)\}_{i=1}^N$ is updated by, first, resampling the particles multinomially according to weights proportional to the particle weights; second, propagating the resampled particles forward in time using the dynamics of the state process; and, third, assigning the particles weights proportional to the local likelihood of the new observation given the particles. The update, which is detailed in Algorithm 1, will in the following be expressed as $\{(\xi_{t+1}^i, \omega_{t+1}^i)\}_{i=1}^N \leftarrow \mathsf{PF}(\{(\xi_t^i, \omega_t^i)\}_{i=1}^N; \theta)$.
Algorithm 1 Bootstrap particle filter. Require: parameters $\theta$ and a weighted particle sample $\{(\xi_t^i, \omega_t^i)\}_{i=1}^N$. For $i = 1, \ldots, N$: draw $I_{t+1}^i \sim \Pr(\{\omega_t^\ell\}_{\ell=1}^N)$; draw $\xi_{t+1}^i \sim q_\theta(\xi_t^{I_{t+1}^i}, \cdot)$; set $\omega_{t+1}^i = g_\theta(\xi_{t+1}^i, y_{t+1})$.
In the previous scheme, $\Pr(\{\omega_t^\ell\}_{\ell=1}^N)$ refers to the discrete probability distribution induced by the weights $\{\omega_t^\ell\}_{\ell=1}^N$. As a by-product, the historical trajectories of the particle filter provide jointly an estimate of the joint smoothing distribution. These trajectories are constructed by linking up the particles with their ancestors. However, this method suffers from a well-known degeneracy phenomenon in the sense that the repeated resampling operations collapse the particle lineages as time increases. Consequently, the weighted empirical measures associated with the paths degenerate in the long run; see [10] for some discussion. A way to combat the degeneracy is to use instead the backward decomposition presented above. Using the output of the bootstrap particle filter we obtain the particle approximation
$$\overleftarrow{q}^N_\theta(\xi_{t+1}^i, \xi_t^j) = \frac{\omega_t^j\, q_\theta(\xi_t^j, \xi_{t+1}^i)}{\sum_{\ell=1}^N \omega_t^\ell\, q_\theta(\xi_t^\ell, \xi_{t+1}^i)}$$
of the backward kernel. Plugging this into the backward decomposition we arrive at the forward-filtering backward-smoothing (FFBSm) algorithm, in which $\phi_{0:T|T;\theta}(f)$ is approximated by the expectation of $f$ under the occupation measure formed by the filter particles and the particle backward kernels. For a general objective function $f$, this occupation measure is impractical since the cardinality of its support grows geometrically fast with $T$. In the case where $f$ is of additive form the computational complexity is quadratic, since the computation of the normalizing constants is required for each particle and each time step. Consequently, FFBSm is a computationally intensive approach. In the case where the objective function is of additive form we can use the forward decomposition presented earlier to obtain an online algorithm; see [4]. We denote by $\{\tau_t^i\}_{i=1}^N$ the auxiliary statistics, initialized by setting $\tau_0^i = 0$ for all $i \in \{1, \ldots, N\}$. When a new observation becomes available, an update of the particle sample is followed by a recursive update of the auxiliary statistics $\{\tau_t^i\}_{i=1}^N$ according to
$$\tau_t^i = \sum_{j=1}^N \frac{\omega_{t-1}^j\, q_\theta(\xi_{t-1}^j, \xi_t^i)}{\sum_{\ell=1}^N \omega_{t-1}^\ell\, q_\theta(\xi_{t-1}^\ell, \xi_t^i)} \left( \tau_{t-1}^j + \tilde{s}_{t-1}(\xi_{t-1}^j, \xi_t^i) \right). \qquad (2)$$
After this, the FFBSm estimate is formed as $\sum_{i=1}^N (\omega_t^i/\Omega_t)\, \tau_t^i$. Appealingly, this approach allows for online processing of the data and requires only the current particles and auxiliary statistics to be stored. Still, the computational complexity of the algorithm grows quadratically with the number of particles, since a sum of $N$ terms needs to be computed for each particle at each time step.
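A numpy sketch of one bootstrap particle filter update (Algorithm 1) for the linear Gaussian running example; $q_\theta$ and $g_\theta$ are hard-coded for brevity, and the constant factor of the Gaussian density is dropped since only weight proportionality matters.

```python
# One step of the bootstrap particle filter: resample, mutate, reweight.
import numpy as np

rng = np.random.default_rng(1)

def pf_update(xi, w, y_next, a=0.8, sigma_v=0.4, sigma_u=0.9):
    N = xi.size
    # 1) multinomial resampling according to the normalized weights
    idx = rng.choice(N, size=N, p=w / w.sum())
    # 2) mutation: propagate through the state dynamics q_theta
    xi_next = a * xi[idx] + sigma_v * rng.standard_normal(N)
    # 3) weighting: local likelihood g_theta of the new observation
    #    (proportional to the Gaussian density, constants dropped)
    w_next = np.exp(-0.5 * ((y_next - xi_next) / sigma_u) ** 2) / sigma_u
    return xi_next, w_next
```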
To speed up the algorithm, [11] propose, in the particle-based rapid incremental smoother (PaRIS), to replace (2) by a Monte Carlo estimate. Given $\{\tau_{t-1}^i\}_{i=1}^N$, we update the auxiliary statistics by drawing indices $\{J_t^{(i,j)}\}_{j=1}^{\tilde{N}}$ according to the backward dynamics governed by the particle filter, i.e., drawing
$$J_t^{(i,j)} \sim \Pr\left(\{\omega_{t-1}^\ell\, q_\theta(\xi_{t-1}^\ell, \xi_t^i)\}_{\ell=1}^N\right).$$
After this, each auxiliary statistic is updated through the Monte Carlo estimate
$$\tau_t^i = \tilde{N}^{-1} \sum_{j=1}^{\tilde{N}} \left( \tau_{t-1}^{J_t^{(i,j)}} + \tilde{s}_{t-1}(\xi_{t-1}^{J_t^{(i,j)}}, \xi_t^i) \right), \qquad (3)$$
and the estimate of $\phi_{0:t|t;\theta}(s_t)$ is obtained as $\sum_{i=1}^N (\omega_t^i/\Omega_t)\, \tau_t^i$. Again, the procedure is initialized by setting $\tau_0^i = 0$ for $i \in \{1, \ldots, N\}$. In this naive form the computational complexity is still quadratic; however, this approach can be furnished with an accept-reject trick found by [6], which drastically reduces the computational work. The accept-reject procedure can be applied when the transition density of the hidden chain is bounded, i.e., there exists some finite constant $q_\theta^+$ such that $q_\theta(x, x') \le q_\theta^+$ for all $(x, x') \in \mathsf{X}^2$. This is a very weak assumption which is generally satisfied. In this scheme, an index proposal $J^*$ drawn from $\Pr(\{\omega_{t-1}^\ell\}_{\ell=1}^N)$ is accepted with probability $q_\theta(\xi_{t-1}^{J^*}, \xi_t^i)/q_\theta^+$, and the procedure is repeated until acceptance. Under additional assumptions it can be shown that the expected number of proposals is bounded; see [6, 11]. Consequently, the overall computational complexity of PaRIS is linear, and the algorithm can be proved to be numerically stable in the long run for any fixed $\tilde{N} \ge 2$; see [11, Theorems 8 and 10]. In addition, the same article shows that the sample size $\tilde{N}$ should be small, say, less than 10. Now, we may cast the PaRIS algorithm into the framework of the online EM algorithm of [1] by simply replacing (3) by the updating formula
$$\tau_t^i = \tilde{N}^{-1} \sum_{j=1}^{\tilde{N}} \left( (1 - \gamma_t)\, \tau_{t-1}^{J_t^{(i,j)}} + \gamma_t\, \tilde{s}_{t-1}(\xi_{t-1}^{J_t^{(i,j)}}, \xi_t^i) \right),$$
where again the sequence $\{\gamma_t\}_{t \in \mathbb{N}}$ should satisfy the usual stochastic approximation requirements. A standard choice is to set $\gamma_t = t^{-\alpha}$ for $0.5 < \alpha \le 1$. The algorithm is summarized in Algorithm 2. As mentioned, the standard batch EM algorithm updates, at iteration $i + 1$, the parameters using $\theta_{i+1} = \Lambda(\phi_{0:T|T;\theta_i}(s_T)/T)$, and it can be established, under additional assumptions, that every fixed point of EM is indeed a stationary point of the likelihood. The online EM algorithm updates instead, at iteration (i.e., time step) $t$, the parameters on the basis of a weighted average of the smoothed statistics, in which the influence of the statistic at time $\ell$ is damped by the factor $\prod_{i=\ell+1}^{t-1}(1 - \gamma_i)$ and the subscript $\theta_{0:t-1}$ indicates that the expectation involving time step $i + 1$ is taken under the model dynamics governed by $\theta_i$. Thus, if the sequence $\{\theta_t\}_{t \in \mathbb{N}}$ converges, we may expect, since the factor $\prod_{i=\ell+1}^{t-1}(1 - \gamma_i)$ reduces the influence of early parameter estimates, that a fixed point of the online EM procedure coincides with a stationary point of the asymptotic contrast function, i.e., by identifiability, the true parameter value; see [7]. At present, a convergence result for the online EM algorithm for SSMs is lacking (a theoretical discussion is however given by [1]), but for independent observations (e.g., mixture models) convergence is shown in [2]. Another algorithm that is worth mentioning here is the block online EM algorithm [9], where the parameter is only updated at fixed and increasingly separated time points. This algorithm can be shown to converge; however, simulations indicate that this block-processing approach is less advantageous than updating the parameter at every time step. An overview of parameter estimation methods is given by [8].
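The sketch below implements one PaRIS-based online EM sweep for the linear Gaussian example: for each particle, $\tilde{N}$ backward indices are drawn by accept-reject under the bound $q_\theta^+$, and the $\gamma$-weighted update of the auxiliary statistics is applied. The model-specific pieces (the transition density, its bound, and the statistic ordering) follow the earlier sketches; the new parameter is then obtained as $\theta_t = \Lambda(\sum_i \omega_t^i \tau_t^i / \Omega_t)$.

```python
# One PaRIS/online-EM update of the auxiliary statistics tau.
import numpy as np

rng = np.random.default_rng(2)

def q_density(x, x_next, a, sigma_v):
    # Gaussian transition density up to the 1/sqrt(2*pi) constant, which
    # cancels in the acceptance ratio below.
    return np.exp(-0.5 * ((x_next - a * x) / sigma_v) ** 2) / sigma_v

def paris_em_step(xi_prev, w_prev, tau_prev, xi, y, gamma, theta, N_tilde=2):
    a, sigma_v2, _ = theta
    sv = np.sqrt(sigma_v2)
    q_plus = 1.0 / sv                       # uniform bound q_theta^+ on q_density
    p = w_prev / w_prev.sum()
    tau = np.zeros_like(tau_prev)
    for i in range(len(xi)):
        acc = np.zeros(tau_prev.shape[1])
        for _ in range(N_tilde):
            while True:                     # accept-reject backward sampling
                j = rng.choice(len(xi_prev), p=p)
                if rng.random() * q_plus <= q_density(xi_prev[j], xi[i], a, sv):
                    break
            s = np.array([xi_prev[j] ** 2, xi_prev[j] * xi[i],
                          xi[i] ** 2, (y - xi[i]) ** 2])
            acc += (1 - gamma) * tau_prev[j] + gamma * s
        tau[i] = acc / N_tilde
    return tau  # then: theta_next = Lambda(average of tau under filter weights)
```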
Simulations We test the algorithm on two different models: the linear Gaussian model in Example 1 and a stochastic volatility model. With these simulations we wish to show that the PaRIS-based algorithm is preferable to the FFBSm-based version. In the implementations we start updating the parameter only after a few observations have been processed, in order to make sure that the filter estimates are stable.

Linear Gaussian model For the linear Gaussian model in Example 1 we compare the parameter estimates produced by the PaRIS-based algorithm with those produced by the FFBSm-based algorithm of [4]. The observed data are generated by simulation under the parameters $\theta = (0.8, 0.4^2, 0.9^2)$. We tune the number of particles of both algorithms and the number of backward draws $\tilde{N}$ in the PaRIS-based algorithm such that the computational times of both algorithms are similar. This implies 250 particles for the FFBSm-based algorithm and 1250 particles and $\tilde{N} = 5$ for the PaRIS-based algorithm. We also restrict ourselves to estimation of the parameters $a$ and $\sigma_V$. In Figure 1 we present output of the algorithms based on 10 independent runs on the same simulated data, where $\theta_0 = (0.1, 2^2, 0.9^2)$ and where we update only the $a$ and $\sigma_V^2$ parameters. We set $\gamma_t = t^{-0.6}$ and start updating the parameters after 60 steps. As is clear from the plot, both algorithms tend towards the true parameters. In addition, the PaRIS-based algorithm exhibits, as a consequence of the larger particle sample size, a lower variance.

Stochastic volatility model The stochastic volatility model is given by
$$X_{t+1} = \phi X_t + \sigma V_{t+1}, \qquad Y_t = \beta \exp(X_t / 2)\, U_t,$$
where again $\{V_t\}_{t \in \mathbb{N}}$ and $\{U_t\}_{t \in \mathbb{N}}$ are independent sequences of mutually independent standard Gaussian noise variables. The parameters of the model are $\theta = (\phi, \sigma^2, \beta^2)$; the sufficient statistics are again of additive form, and the parameter updates are obtained in closed form from the smoothed statistics. We generate the data through simulation using the parameters $\theta = (0.975, 0.16^2, 0.63^2)$. Again, we set the parameters of the algorithms in such a way that the computational times of the two algorithms are similar. This implies 110 and 500 particles for the FFBSm-based and PaRIS-based algorithms, respectively. In addition, the latter used $\tilde{N} = 4$ backward draws. In Figure 2 we present the output of both algorithms from 20 independent runs using the same data input for each run. We initialize the algorithms with $\theta_0 = (0.5, 0.8^2, 1^2)$, use $\gamma_t = t^{-0.6}$, and start the parameter updating after 60 observations. We notice, as for the previous model, that both algorithms seem to converge towards the correct parameters and that the FFBSm-based algorithm exhibits the higher variance. Finally, to show that the algorithm indeed converges, we perform one run of the algorithms on the stochastic volatility model parameterized by $\theta = (0.8, 0.1, 1)$, using $\theta_0 = (0.1, 0.1^2, 2^2)$ and as many as $T = 2{,}500{,}000$ observations. For the FFBSm-based algorithm we used $N = 125$ particles, and for the PaRIS-based algorithm we used $N = 500$ and $\tilde{N} = 2$. Both algorithms use $\gamma_t = t^{-0.6}$ and do not update the parameters for the first 60 observations. The results are reported in Figure 3, which indicates convergence for both algorithms. Taking the mean of the last 1000 parameter estimates yields the estimates (0.802, 0.093, 1.01) and (0.807, 0.084, 1.03) for the PaRIS-based and FFBSm-based algorithms, respectively.
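The step-size choice $\gamma_t = t^{-0.6}$ used in both experiments satisfies the stochastic approximation requirements stated earlier, since $0.6 > 0.5$ makes $\sum_t \gamma_t^2$ finite while $\sum_t \gamma_t$ still diverges. A quick numerical check of the partial sums:

```python
# Verify the stochastic approximation conditions for gamma_t = t^{-0.6}.
import numpy as np

t = np.arange(1, 1_000_001, dtype=float)
g = t ** -0.6
print(g.sum())          # partial sum of gamma_t: grows without bound
print((g ** 2).sum())   # partial sum of gamma_t^2: converges towards zeta(1.2) ~ 5.59
```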
Conclusions We have presented a new particle-based version of the online EM algorithm for parameter estimation in general SSMs. This new algorithm, which can be viewed as a hybrid of the PaRIS smoother proposed by [11] and the online EM algorithm of [1], has a computational complexity that grows only linearly with the number of particles, which results in a fast algorithm. Compared to existing algorithms, this allows us to use considerably more particles and, consequently, to produce considerably more accurate estimates for the same amount of computational work.
Fig 1. FFBSm-based (left boxes) and PaRIS-based (right boxes) estimates of $a$ (panel a) and $\sigma_V^2$ (panel b) in the linear Gaussian model. Dashed horizontal lines indicate true parameter values.
2016-02-24T17:01:30.000Z
2015-02-17T00:00:00.000
{ "year": 2015, "sha1": "87b2b1b659dcbf7776c67543ab642c28ccd30cd0", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.ifacol.2015.12.255", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "87b2b1b659dcbf7776c67543ab642c28ccd30cd0", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
14372869
pes2o/s2orc
v3-fos-license
Pulmonary Leukocytoclastic Vasculitis as an Initial Presentation of Myelodysplastic Syndrome Systemic vasculitis involving the lung is a rare manifestation of myelodysplastic syndrome (MDS), and secondary vasculitis is considered to have a poor prognosis. A 44-year-old man presented with fever and dyspnea of 1 month duration. A chest radiograph revealed bilateral multiple wedge-shaped consolidations. In addition, the results of a percutaneous needle biopsy for non-resolving pneumonia were compatible with pulmonary vasculitis. Bone marrow biopsy was performed due to the persistence of unexplained anemia, and the patient was diagnosed with MDS. We report a case of secondary vasculitis presenting as non-resolving pneumonia, later diagnosed as a paraneoplastic syndrome of undiagnosed MDS. The cytopenia and vasculitis improved after a short course of glucocorticoid treatment, and there was no recurrence despite the progression of the underlying MDS.

Introduction Myelodysplastic syndrome (MDS) is a clonal stem cell disorder that is characterized by ineffective hematopoiesis and which can progress to acute leukemia. Since Dreyfus et al.1 described six cases of MDS associated with cutaneous leukocytoclastic vasculitis, various autoimmune manifestations, including vasculitis, have been reported in association with MDS.

Case Report A 44-year-old man presented with fever and dyspnea of 1 month duration. On physical examination, abnormal breath sounds were noted over both lungs, and pitting edema was observed in both legs. Chest radiography and chest computed tomography (CT) showed bilateral multiple wedge-shaped consolidations in the subpleural areas, with a small amount of pleural effusion (Figure 1). Laboratory tests revealed a white cell count of 7,600/μL (neutrophils, 74%; eosinophils, 0.7%), hemoglobin concentration of 5.6 g/dL (reticulocytes, 1.2%; mean corpuscular volume, 114 μm³), platelet count of 182,000/μL, erythrocyte sedimentation rate of 46 mm/hr, and C-reactive protein level of 13.6 g/dL. Analysis of the patient's arterial blood gases indicated a PaO₂ of 58 mm Hg, PaCO₂ of 33 mm Hg, HCO₃ of 31 mm Hg, and SaO₂ of 94%. Except for aspartate aminotransferase (91 U/L) and alanine aminotransferase (64 U/L), results of liver and renal function tests, including bilirubin (0.6 mg/dL), were within the normal range. Owing to the possibility of community-acquired pneumonia, a course of empirical antibiotics was initiated, and the patient received a transfusion of packed red blood cells to relieve symptomatic anemia. However, cultures for common bacteria, acid-fast bacilli, and fungi were all negative; findings on plain chest radiography worsened; fever up to 40°C persisted despite antibiotic therapy; and he developed multiple painful, erythematous, palpable rashes on both lower legs (Figure 2). We performed a skin biopsy to differentiate conditions such as a drug rash or a transfusion reaction, but microscopic examination revealed neutrophilic infiltration in the perivascular and interstitial areas, which was compatible with cutaneous leukocytoclastic vasculitis (Figure 3). Percutaneous needle biopsy was then performed to rule out organizing pneumonia or a consolidative lung cancer, including lymphoma. However, the results of the needle biopsy indicated necrotizing vasculitis with perivascular infiltration of neutrophils and lymphocytes, granuloma formation, and intraluminal fibrosis, which were compatible with a diagnosis of leukocytoclastic vasculitis (Figure 4). Serologic tests were all negative for venereal disease, hepatitis B surface antigen, hepatitis B and C antibodies, human immunodeficiency virus antibodies, anti-nuclear antibody (ANA), anti-neutrophil cytoplasmic antibody (ANCA), rheumatoid factor (RF), and cryoglobulin. There was no specific finding on abdominal CT scan.
Since anemia persisted even after transfusion, we performed bone marrow aspiration and biopsy to identify the cause of the unexplained macrocytic anemia. Evaluation of the bone marrow revealed hypercellularity (80%) that was abnormal for his age, a blast count of 2.6%, decreased erythropoiesis, and dysmegakaryopoiesis, with a karyotype of 46,XY,+1,der(1;7)(q10;q10). Finally, he was diagnosed with MDS, specifically the refractory cytopenia with multilineage dysplasia subtype. Treatment with intravenous methylprednisolone (1 mg/kg) was started for the immunologic manifestations, and the fever subsided immediately. After 3 days, the patient's skin and lung lesions began to improve (Figure 5A), and his anemia improved. Since the identified karyotype was associated with a poor prognosis and a potential to progress to acute leukemia, we recommended that he undergo allogeneic hematopoietic stem cell transplantation (allo-HSCT) as treatment for MDS. However, he refused our suggestion and insisted on being discharged from our hospital, so we prescribed an oral corticosteroid for 1 month, serially tapering the dosage every week. After 1 year, he returned to our emergency department complaining of left flank pain. CT scans of the patient's abdomen revealed splenomegaly, and leukemic transformation was suspected on peripheral blood smear. His chest radiograph, however, showed improvement of the pulmonary vasculitis compared to 1 year before, despite no further therapeutic intervention (Figure 5B). We then transferred him to another facility to undergo bone marrow transplantation.

Discussion In MDS patients, systemic autoimmune diseases such as vasculitis, arthritis, inflammatory bowel disease, pulmonary infiltrates, and peripheral neuropathy can occur as paraneoplastic syndromes [2][3][4][5]. The prevalence of autoimmune disease in MDS is generally reported to be between 10% and 20% 6,7. The median survival of MDS patients is reported as 25 months; however, the median survival of MDS patients with autoimmune disease is only 9 months 2. Even though there is some disagreement about the prognosis of MDS accompanied by other autoimmune manifestations, MDS with vasculitis has consistently been reported to have a poor prognosis with high mortality rates 4,8,9. Deaths from vasculitis-related disease described in previous reports include hemorrhage, embolism, infection associated with immunosuppressive therapy, or unknown causes 2,10. Laboratory abnormalities such as hemolytic anemia, thrombocytopenia, and the presence of ANA, ANCA, RF, and cryoglobulin can occur 2,8. The median age at diagnosis of MDS is older than 70 years, and the incidence of MDS increases with age 11. However, our patient was 44 years old, which is lower than the median age of de novo MDS. Furthermore, the karyotype was 46,XY,+1,der(1;7)(q10;q10), which can be seen in treatment-related MDS/acute myeloid leukemia. We investigated his socio-occupational and medical history, but he had no exposure to chemicals such as benzene or pesticides. He had been a heavy drinker for more than 20 years and was a smoker as well. There are a few studies on the association between alcohol consumption and the risk of MDS, but the results are inconclusive 12,13.
Figure 4. Neutrophilic infiltration and fibrinoid necrosis of the blood vessel wall are shown, and the alveolar space is filled with fibrinoid exudate on lung biopsy (H&E stain, ×200).
Considering his age and clinical manifestations, our patient is somewhat different from the general case of de novo MDS; however, there was no evidence of therapy-related or secondary MDS. The mechanism of this immunologic phenomenon in MDS is not clear, but autoimmunity is believed to be triggered by increased apoptosis in the dysplastic bone marrow. Kiladjian et al. 14 discovered that patients with MDS and autoimmune disease had significantly fewer gamma-delta T-cells than MDS patients without autoimmune disease. Gamma-delta T-cells play an important role in the immune reaction against tumor cells by producing tumor necrosis factor α and interferon γ 14. Several studies have reported that T regulatory cells (Tregs), which have a role in the maintenance of immune tolerance, are involved in the etiology of MDS, and that their expansion results in suppression of host anti-tumor responses in malignant disease. Kordasti et al. 15 reported that MDS patients who scored highly on the International Prognostic Scoring System had higher Treg percentages. In this report, we have described a case of concurrently diagnosed MDS and leukocytoclastic vasculitis involving the lung and skin, which improved after high-dose corticosteroid therapy. The patient's anemia also resolved, which may be explained by improvement of anemia of chronic disease rather than a bone marrow response of the MDS to steroid therapy. Since we predicted that the immunologic manifestations would be aggravated without MDS treatment, we recommended early allo-HSCT. However, his vasculitis did not recur after a short course of steroid therapy, despite progression of the MDS. In conclusion, non-resolving pneumonia can be an initial manifestation of MDS, and treating the vasculitis prior to the MDS may be a better approach to avoid fatal complications of this paraneoplastic syndrome. Conflicts of Interest No potential conflict of interest relevant to this article was reported.
Figure 5. Chest radiographs after steroid therapy at 3 days (A) and 1 year (B).
2017-10-29T05:33:11.879Z
2016-10-01T00:00:00.000
{ "year": 2016, "sha1": "31b7aa7aaca6d5a4e2c65bd15ac8a734a9dd0af4", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4046/trd.2016.79.4.302", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "31b7aa7aaca6d5a4e2c65bd15ac8a734a9dd0af4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
204841786
pes2o/s2orc
v3-fos-license
Efficacy of plant leaf extracts against mustard aphid Lipaphis erysimi (Kalt.) under field condition The bio-efficacy of five plant leaf extracts was tested on the Morang-2 variety of Rapeseed against Mustard aphid (Lipaphis erysimi Kalt.) from November to March, 2016/17, at the research field of the Institute of Agriculture and Animal Science (IAAS), Lamjung Campus, Lamjung, Nepal. The plant leaf extracts were prepared by decomposing chopped leaves of Neem, Bakaino, Hattibar, Khirro, and Bojho in cow urine for a one-month period. In total, five extracts were prepared: one by mixing all the leaves, while the remaining four extracts were prepared by excluding one ingredient from each mixture but keeping Tobacco and Bojho in all five extracts. The experiment was laid out in a Randomized Complete Block Design using the five botanical extracts, a chemical (Cypermethrin 10% EC), and a control, each replicated thrice. The Rapeseed plants were sprayed with the prepared extracts at 30 days after sowing (DAS), 45 DAS, and 60 DAS, and aphid numbers were counted 5, 10, and 15 days after each spray on the 10 cm apical shoot. The greatest reduction of the aphid population was found in the chemical-treated plots, followed by the complete-mixture-treated plots, but their reductions were not statistically different, while the control plots and the plots treated with the extract without Neem showed the least reduction in aphid numbers. Grain yield was also highest (1,436.75 kg/ha) in the complete-mixture-treated plots, indicating that the complete mixture of plant extracts might be the best alternative for aphid management in Rapeseed. It is concluded that all the plant extracts showed insecticidal properties against aphids in the rapeseed crop and can successfully be integrated as a part of Integrated Pest Management.

INTRODUCTION Rapeseed (Brassica campestris L. var. tori; Family: Brassicaceae) is the leading oilseed crop and has the highest acreage among all the oilseed crops grown in the country, i.e., 85% (Ghimire et al., 2000). In Nepal, the total area under Rapeseed cultivation is 173,254 ha, its production is 152,263 mt, and its productivity is 879 kg/ha. Many oilseed crops are grown in Nepal; among them, Rapeseed is the main one and supplies almost 80% of the vegetable oil in the Nepalese diet (Dhakal, 1985-1991). In spite of the importance of oilseed crops, the average productivity (0.87 t/ha) in Nepal is low compared to the world average of 1.28 t/ha. Among the many factors responsible for low yield, insect pests play a significant role in reducing the yield, and this crop is attacked by about 25 species of insect pests, resulting in both quantitative and qualitative losses varying from 45-50% (Pradhan et al., 1960). Among them, Mustard aphid is the most destructive insect pest (Biswas et al., 2000), causing yield losses of 27-69% (Bakhetia and Brar, 1983) and a 15% reduction in oil content (Verma and Singh, 1987). Chemical insecticides still remain the key tool for the control of this pest. Farmers spray insecticides in their fields indiscriminately, which causes phytotoxicity, resistance in pests, destruction of beneficial organisms, disruption of the agro-ecosystem, human health hazards, and environmental pollution (McIntyre et al., 1989). Several investigations have recommended the application of traditional organic insecticides as the best alternative to control Mustard aphid (Bakhetia, 1984 and Khurana et al., 1989). Botanicals are comparatively less toxic, less expensive, and also safe for beneficial organisms.
Among the 2,400 species of bioactive plants in the world, almost 324 are found in Nepal. These abundant, naturally occurring, biologically active plants appear to have a prominent role in the development of future commercial pesticides in Nepal, not only for increased productivity but also for the safety of the environment and public health. Therefore, the present investigation was undertaken in this direction to assay the insecticidal properties of different plant leaf extracts against mustard aphid.

MATERIALS AND METHOD The experiment was conducted at the horticultural farm of the Institute of Agriculture and Animal Science (IAAS), Lamjung Campus (mid-hills), during the winter season of 2016/17. Seeds of the Rapeseed variety Morang-2 were sown on 21st November in 2 m × 2.1 m plots following an RCB design with 7 treatments and 3 replications. 1. Preparation of extract The leaves of Neem (Azadiractin indica), Bakaino (Melia azedirach), Hattibar (Agave americana), Khirro (Sapium insigne), Bojho (Acorus calamus), and Tobacco (Nicotiana tabaccum) were chopped separately into pieces 1-2 cm long. Chopped leaves of Neem, Bakaino, Hattibar, and Khirro @ 150 g, along with Bojho and Tobacco leaves @ 75 g, were used for the preparation of extracts in fresh cow urine (3 liters for each extract). In total, five extracts were prepared: one by mixing all the leaves, while the remaining four extracts were prepared by excluding one ingredient from each mixture but keeping Tobacco and Bojho in all five extracts. The prepared extracts were decomposed for a one-month period, with mixing once each week. 3. Field preparation and crop management The field was prepared by ploughing, disking, and leveling, and seeds were sown in each plot with a spacing of 30 cm row-to-row (RR) and 10 cm plant-to-plant (PP). All the crop management practices were followed as recommended by MOAD. 4. Preparation of spray The well-decomposed plant extract was filtered through muslin cloth, and the filtrate was then mixed with water at a 1:4 ratio, while Cypermethrin 10% EC was used @ 1.5 ml/liter of water; the sprays were applied with the help of a hand sprayer (2-liter capacity) at 15-day intervals, starting 1 month (30 days) after sowing, three times in total, i.e., at 30 DAS, 45 DAS, and 60 DAS. The population of Mustard aphid was observed and recorded on the 5th, 10th, and 15th days after each spray on the 10 cm apical twig (center branch) of 5 randomly selected plants in each plot. Data on weather variables such as temperature, RH%, and rainfall from sowing to harvesting were also collected by installing the respective devices at the research site. All the observed data were subjected to ANOVA using IBM SPSS (Windows version 20) and GenStat 15ED; a sketch of an equivalent analysis is given below.

During the first spray, the chemical-treated plot was found to be almost free from attack of mustard aphid, while the control plot showed the highest incidence of aphids in all readings. Among the plant extracts, the complete mixture reduced aphids statistically on par with the chemical. Biswas (2013) also found that chemical-treated plots had fewer aphids than locally prepared plant extracts and control treatments. During the second spray, the overall aphid count in each treatment increased compared with the previous spray readings. The trend of aphid incidence was the same, which reflects that the complete mixture can control mustard aphid better than the other selected extracts. Along with this, it was also observed that the effectiveness of the plant extracts decreased with time after spraying, which is shown by the increase in aphid counts from 5 DASp to 15 DASp.
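The sketch below is a Python analogue (using statsmodels, as an alternative to the SPSS/GenStat analysis described above) of the two-way RCBD ANOVA: aphid count modeled by treatment and block (replication). The column names are illustrative assumptions.

```python
# RCBD ANOVA: test treatment effects while accounting for block (replication).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def rcbd_anova(df: pd.DataFrame) -> pd.DataFrame:
    """df columns: 'count' (aphids per 10 cm twig), 'treatment', 'block'."""
    model = ols("count ~ C(treatment) + C(block)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)  # F-tests for treatment and block
```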
Bhatt and GC (2005) also reported that the effectiveness of botanicals in reducing the aphid population was significantly higher up to 5 days after spraying and decreased gradually thereafter. Kafle (2015) likewise found that the effectiveness of insecticides decreased with increasing time after spraying. During the third spray as well, the chemical and the complete mixture produced the greatest aphid reduction, with statistically similar effects, while the without-Neem (W/O Neem) and control plots showed the least effectiveness; the remaining extracts had statistically similar effects. The results revealed that when Neem was excluded from the extract mixture (i.e. W/O Neem), the effect was the same as the control, which signifies that Neem must be present in the mixture for an effective result. Saikia et al. (2000) also reported that Neem leaf extract in plant-extract mixtures caused significant aphid mortality, with almost the same effect as the chemical. Sable et al. (2014) found that the chemical was highly effective, with a knockdown effect, in controlling aphids, followed by Neem and its mixtures. Earlier work on the use of plant extracts by Pandey et al. (1987) reached the same conclusion.
4. Damage level of aphid infestation The Economic Threshold Level (ETL) was fixed at 15 aphids/plant on a 10 cm twig, as referenced from the literature (Saunakiya and Tiwari, 2014). The observations taken during each spray showed that the trend line of the control crossed the ETL at 15 DASp of the first spray, 10 DASp of the second spray and 15 DASp of the third spray, while in the other treatments the line remained far below the ETL. This indicates that, except in the control, the number of aphids never reached the ETL when the selected plant extracts were applied (Figure 4); a small worked example of this decision rule follows below.
5. Grain yield of rapeseed The highest grain yield was found in the complete mixture (1436.75 kg/ha) and the lowest in the control (1126.90 kg/ha), which is also supported by Biswas (2013). The complete mixture yielded more (though not statistically different) even than the chemical, which was highly effective in reducing the aphid population; this may be attributed to a greater number of pollinators and the nutrients made available through the complete mixture. The plots treated with plant extracts and the chemical gave statistically similar yields, because the aphid numbers in those plots stayed below the ETL (15 aphids/plant on a 10 cm twig) throughout all the sprays, leading to almost similar yields (Figure 5).
CONCLUSION In all three sprays, the complete mixture gave the best results after the chemical, as both aphid reduction and yield were greatest among all the tested plant extracts in the plants receiving this treatment. Although the complete-mixture extract did not reduce the aphid population quite as much as the chemical (Cypermethrin 10% EC), the complete mixture is an eco-friendly aphid management tactic. It is locally available, safe for pollinators and natural enemies, and also safe for the environment. Moreover, the abundance of plants with insecticidal properties in Nepal makes this an emerging solution against the mustard aphid.
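As a concrete illustration of the ETL decision rule used above, the short sketch below flags treatments whose mean counts exceed the threshold. The threshold and counting unit come from the text; the count values are invented.

# ETL decision rule from the text: intervene when the mean count exceeds
# 15 aphids per plant on a 10 cm apical twig. Counts below are hypothetical.
ETL = 15  # aphids per plant (10 cm twig)

mean_counts = {"control": 22.4, "complete_mix": 6.1, "chemical": 1.3}

for treatment, count in mean_counts.items():
    action = "above ETL -> intervention warranted" if count > ETL else "below ETL"
    print(f"{treatment:>12}: {count:5.1f} aphids/plant ({action})")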
Role of magnetic interactions in neutron stars
In this work, we present a calculation of the non-Fermi liquid (NFL) correction to the specific heat of the magnetized degenerate quark matter present at the core of a neutron star. The non-Fermi liquid corrections to the neutrino emissivity have been calculated beyond leading order. We extend our results to the evaluation of the pulsar kick velocity and the cooling of the star due to such anomalous corrections, and present a comparison with the simple Fermi liquid case.
Advantages of quark matter • The DURCA process cannot occur because it is not kinematically possible at the temperatures of interest. • The MURCA process requires a bystander particle; the resulting neutrino emission rate is found to be insignificant.
Quark dispersion relation • Interactions within the medium severely modify the self-energy of the quarks. For quasiparticles with momenta close to the Fermi momentum p_F, the one-loop self-energy is dominated by soft gluon exchanges.
Quark self-energy • The analytical expression for the one-loop quark self-energy (for T ∼ |E − μ| ≪ gμ ≪ μ) exhibits a logarithmic singularity close to the Fermi surface. • A low-temperature expansion of the on-shell fermion self-energy is obtained for the ultrarelativistic case.
Not over yet.. • HDL (Hard Dense Loop) resummation of the gluon propagator is required because higher-order diagrams can contribute at lower order in the coupling constant, a contribution missing in bare pQCD; the resummation is carried out by means of the dressed (HDL) gluon propagator.
Mean free path of the degenerate neutrinos • Both the absorption process and the scattering process are evaluated.
Mean free path of the non-degenerate neutrinos • The absorption and scattering processes are evaluated in the same way. • A relation between the neutrino emissivity and the neutrino mean free path is then obtained.
Specific heat capacity of degenerate quark matter • The specific heat of normal (non-color-superconducting) degenerate quark matter shows NFL behavior at low temperature. • Thus, at low temperatures, the resulting deviation of the specific heat from its Fermi liquid (FL) behavior is significant for normal quark matter and hence of potential relevance for the cooling rates of neutron stars with a quark matter component.
Think.. • In this work, we have calculated the mean free path (MFP) of degenerate and non-degenerate neutrinos for both the scattering and absorption processes. • We then find the expression for the neutrino emissivity of non-degenerate neutrinos with NLO corrections. Both the MFP and the emissivity contain higher-order terms that involve fractional powers of (T/μ). • We have found that the MFP decreases due to the NLO corrections. • We reconfirm that the leading-order corrections to quantities like the MFP or the emissivity are significant compared to the Fermi liquid results. • The NLO corrections, which we derive here, have however been found to be numerically close to the LO results.
Conclusions • In this work, we have derived expressions for the pulsar kick velocity including the NFL corrections to the specific heat of the degenerate quark matter core. • The contributions from the electron polarization (χ) for the different cases have also been taken into account in calculating the velocities. • We have included the effect of the external magnetic field in the specific heat of the degenerate quark matter for the calculation of the pulsar kick velocity. The calculations of the specific heat of degenerate quark matter in a magnetic field at NFL leading order (LO) and next-to-leading order (NLO) are new.
• We have found that the NFL LO contributions are significant in the radius-temperature relationship, as seen from the graphs presented for neutron stars with moderate and high magnetic fields. The anomalous corrections introduced to the pulsar kick velocity by the NFL (LO) behavior appreciably increase the kick velocity for a given radius and temperature. However, in all cases, no appreciable change in the R-T relationship has been observed for the NLO correction with respect to the LO case.
Dynamical screening • The longitudinal and transverse HDL propagators take the standard HDL form (see the sketch below). • For q_0 → 0, longitudinal photons acquire an effective mass m_D^2 = 2m^2, which screens IR singularities. • For q_0 → 0, transverse (or magnetic) interactions are NOT statically screened; there is only dynamical screening. • Retaining the leading term for q_0/q → 0, we obtain frequency-dependent screening with a frequency-dependent cutoff. • This cutoff is able to screen the IR singularities so that finite results are obtained.
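For reference, the standard HDL self-energies in the textbook convention (e.g. Le Bellac, Thermal Field Theory) are written below; this is a generic reconstruction, and the normalization may differ from the authors' own. With x = q_0/q,

    Π_L(q_0, q) = m_D^2 [ 1 − (x/2) ln((x + 1)/(x − 1)) ],
    Π_T(q_0, q) = m_D^2 [ x^2/2 + (x(1 − x^2)/4) ln((x + 1)/(x − 1)) ].

In the static limit x → 0 (with the retarded prescription), Π_L → m_D^2, which is the Debye screening of the electric sector, while Π_T → −iπ m_D^2 x/4: the magnetic sector acquires no static mass and is only dynamically screened by Landau damping, exactly as stated in the bullets above. This unscreened magnetic interaction is also the origin of the anomalous non-Fermi-liquid behavior discussed earlier; it produces a leading-log correction to the specific heat of the schematic form

    c_v = c_v^(FL) + δc_v,    δc_v ∝ g^2 μ^2 T ln(Λ/T),

where only the proportional form is asserted here; the prefactor and the scale Λ are among the quantities the work itself computes.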
Microbial communities associated with ferromanganese nodules and the surrounding sediments
The formation and maintenance of deep-sea ferromanganese/polymetallic nodules still remains a mystery 140 years after their discovery. The wealth of rare metals concentrated in these nodules has spurred global interest in exploring the mining potential of these resources. The prevailing theory of abiotic formation has been called into question, and the role of microbial metabolisms in nodule development is now an area of active research. To understand the community structure of microbes associated with nodules and their surrounding sediment, we performed targeted sequencing of the V4 hypervariable region of the 16S rRNA gene from three nodules collected from the central South Pacific. The results show that the microbial communities of the nodules are significantly distinct from the communities in the surrounding sediments, and that the interiors of the nodules harbor communities different from the exterior. This suggests not only differences in potential metabolisms between the nodule and sediment communities, but also differences in the dominant metabolisms of the interior and exterior communities. We identified several operational taxonomic units (OTUs) unique to the nodule and sediment environments. The identified OTUs were assigned putative taxonomic identifications, including two OTUs found only in association with the nodules, which were assigned to the α-Proteobacteria. Finally, we explored the diversity of the most commonly assigned taxonomic group, the Thaumarchaea MG-1, which revealed novel OTUs compared to previous research from the region and suggests a potential role for ammonia-oxidizing archaea as a source of fixed carbon in this environment.
INTRODUCTION Ferromanganese/polymetallic nodules form at the sediment-water column interface in deep-sea environments (4,000-6,000 m). Generally small in size (1-5 cm) and formed as concentric laminated structures, they are primarily composed of manganese (Mn), iron (Fe), and a large number of other metals, including copper, nickel, zinc, and titanium; however, composition varies by nodule and oceanic province. Despite their small size, the global estimate for metal content in ferromanganese (FeMn) nodules is 2 × 10^14 kg each of Fe and Mn (Somayajulu, 2000). Recently, an increase in the value of rare earth metals has stimulated interest in mining these resources. The removal of FeMn nodules from the seafloor could have unknown ramifications for the environment, since the processes governing nodule formation and maintenance, and the role nodules play in supporting the adjacent biosphere, are poorly understood. The formation process of FeMn nodules has been a scientific unknown since their discovery in the 1870s (Murray, 1891). Emphasis had been placed on abiotic processes, with growth rates on the order of a few mm per 10^6 yr based on radiometric data (Kerr, 1984). But new quantification techniques have revised the estimates to a few mm per 10^3 yr (Boltenkov, 2012). As our understanding of the role microorganisms play in geochemical processes has increased, research has started to shift toward determining whether biotic processes play a role in nodule formation. Much of the recent evidence for a microbial component to nodule geochemistry revolves around visual inspection of nodules using scanning electron microscopy.
These studies have identified different exolithic and endolithic morphotypes of microorganisms (Zhang et al., 2002; Lysyuk, 2008; Wang et al., 2009a,b). As in most deep-sea environments, little is known about the physiologies and metabolisms of the microorganisms associated with FeMn nodules or the impact these microbial processes may have on global ocean metal chemistry. Previous work on FeMn nodules comes from samples collected in the Clarion-Clipperton Zone in the eastern North Pacific (Wang et al., 2012). An equally large FeMn nodule province exists within the central South Pacific Gyre (SPG), where FeMn nodules can occupy as much as 70% of the exposed surface sediment. The SPG has the lowest primary productivity and sedimentation rates of the major ocean gyres and, as a direct result, has extremely oligotrophic, recalcitrant underlying sediment (D'Hondt et al., 2009). FeMn nodules are responsible for a number of abiotic processes, including the degradation of refractory organic compounds into labile, low-molecular-weight organic compounds (Sunda and Kieber, 1994). These metabolically available compounds may then stimulate microbial respiration. Nodules also act as concentrators of metallic dications (Ni2+, Cu2+, Zn2+) (Dick et al., 2008) and of anionic forms of phosphorus, vanadium, molybdenum, and tungsten (Koschinsky and Halbach, 1995), many of which are important co-factors for key biochemical processes. Thus, FeMn nodules may play an important role in the degradation of buried organic compounds and in the global carbon cycle. Previous attempts to isolate genetic material from marine FeMn nodules have been unsuccessful, with the exception of a single published 16S rRNA gene sequence (Wang et al., 2009c). We sampled nodules and sediments from three different sites in the SPG to gain a greater understanding of the biogenic controls associated with nodule formation and cycling. The V4 hypervariable region of the 16S rRNA gene was amplified from DNA extracted from different layers within each nodule and from the surrounding sediment, and sequenced using 454 sequencing. Deep sequencing of the microbial community allowed us to determine the dominant 16S rRNA gene sequences and to compare how the community structure varied between nodules from different sites and by source layer within the nodules. We found that the most abundant group of organisms could be assigned to the Thaumarchaea, but that this group was extremely diverse in the nodule/sediment ecosystem. We found that the microbial communities associated with the nodules were distinct from the communities present in the sediment and that the nodule communities varied by sampling site. Further, the microbial communities were significantly different between nodule layers.
SAMPLE COLLECTION Sediment and FeMn nodules were collected as part of Expedition Knox-02RR (December 2006-January 2007, aboard the R/V Roger Revelle). Nodules and surface sediments were collected from SPG2, 9, and 10, with an additional sediment sample collected from SPG3 (Table 1). The largest nodule collected measured 6.5 cm in diameter. Nodules and corresponding sediments were aseptically sampled from a multicore on the catwalk as samples were brought onboard. Sediments from 0 to 5 cm were sampled from the same cores from which the nodules were obtained and stored at -80 °C. Nodules were rinsed gently with 0.2 μm-filtered and autoclaved ambient bottom water to remove sediment adhering to the surface.
A flame-sterilized hammer and chisel were used to aseptically section the nodules based on visual changes in strata (delimited as outer layer, inner layer, or core, where applicable). Subsamples were then generated, placed in 1.5 mL cryovials and stored at -80 °C.
DNA EXTRACTION Extraction of DNA from nodules proceeded using a modified phenol-chloroform extraction method (Zhou et al., 1996; Juretschko et al., 1998). Approximately 0.5 mL of sample was resuspended in 675 μL of 2% CTAB (cetyltrimethylammonium bromide) lysis buffer [100 mM Tris, 100 mM EDTA, 250 mM Na2PO4, 1.5 M NaCl, brought to 40 mL and adjusted to pH 8.0; addition of 2% CTAB; diluted to 50 mL total volume and autoclaved] and vortexed thoroughly for 30 s. To each slurry, 20 μL of proteinase K (800 units·mL-1) was added, followed by horizontal incubation for 30 min at 50 °C. To each sample, 150 μL of 10% SDS was added and incubated for a further 120 min at 65 °C, followed by the addition of 600 μL of PCI (phenol:chloroform:isoamyl alcohol, 25:24:1) and incubation for 20 min at 65 °C. Samples were then centrifuged for 10 min at 10,000 × g. The upper layer was transferred to a new tube (care was taken to avoid transferring material from the interface or below), 0.7 volumes of isopropanol were added, and the mixture was incubated for 60 min at room temperature. Samples were again centrifuged at 10,000 × g for 15 min, 0.5 mL of cold 70% ethanol was added, and samples were centrifuged for an additional 5 min. Following removal of the supernatant, the pellet was left to air dry in a fume hood for 15-30 min (as necessary) and resuspended in 30 μL of sterile, DNase-free H2O. Samples (3 μL) were then quantified using the Qubit 1.0 fluorometer and the Qubit dsDNA HS Assay Kit (Life Technologies). Due to low yield with the described phenol-chloroform method, extraction of DNA from sediment samples was performed using the MoBio PowerLyzer PowerSoil DNA kit following the manufacturer's protocol, and quantified as above. All samples with >0.1 ng/μL final DNA concentration were cleaned and concentrated using the ZYMO Clean + Concentrator 5 (6:1 DNA Binding Buffer, as per the suggested protocol) and resuspended in 20 μL of sterile, DNase-free H2O. Initial PCR products were pooled and the PCR product (~550 bp) was gel-excised using the Qiagen Gel Extraction Kit (Qiagen) following the manufacturer's protocol. Excised DNA products were amplified in duplicate to generate sufficient material for pyrosequencing. The same forward primer was used, but the reverse primer (U1048R) had the 454 Roche Titanium Adapter B sequence (5'-CTATGCGCCTTGCCAGCCCGCTCAG-3') added to the 5' end. The second round of PCR amplification proceeded as above, with the following exceptions: [1] each primer was added at 0.6 μM final concentration; [2] no BSA or PVP was used; and [3] 5 ng of template DNA was added. The same thermocycler settings were used, except that amplification was performed for only 30 cycles. PCR products were pooled and cleaned using the AMPure Bead XP (Agencourt) kit, following the manufacturer's protocol. Samples were quantified using PicoGreen and visualized on an Agilent Bioanalyzer with the High Sensitivity (Agilent) chip. The various 16S rRNA gene amplicons were pooled following the recommended procedure in the Amplicon Library Preparation Method Manual (Roche, GS FLX Titanium Series, October 2009). Pyrosequencing was performed by EnGenCore (University of South Carolina, Columbia, SC) utilizing Titanium FLX chemistry.
The raw data files have been deposited in the NCBI Sequence Read Archive under accession number SRA082599. Four of the samples with the same starting DNA were processed separately during the Titanium amplification and sequencing steps to provide technical replicates, showing how well the procedure reproduced results for identical samples. While the absolute number of sequences generated differed between replicates, the relative abundance of the OTUs remained virtually the same (Supplemental Figure 1).
DATA ANALYSIS Trimming, cleaning, and clustering of the 16S rRNA gene amplicon sequences generated via pyrosequencing were performed using mothur (V1.28) following the Schloss laboratory's standard operating procedure (SOP, available at www.mothur.org) (Schloss et al., 2009). In brief, a combination of the programs trim.flows (set to 350 flows for FLX data), shhh.flows, and trim.seqs was used to identify high-quality sequences, which were trimmed of any remaining adapter and primer sequences using the recommended settings (allowing for 1 difference in the barcode, 2 differences in the primer sequence, a maximum homopolymer length of 8 nucleotides, and a minimum length of 200 bp). Sequences remaining after these initial steps were aligned to a reference file generated using previously aligned SILVA 16S and 18S rRNA gene sequences (V111) from Bacteria, Archaea and Eukarya. The program screen.seqs was used to restrict the analysis to the region immediately surrounding and including the V4 hypervariable region (positions 13,861-23,959 of the aligned sequence). Following a series of steps to collapse related sequences into more manageable numbers, groups that did not have at least 1000 sequences remaining were removed. The groups with >1000 sequences were processed using UCHIME to detect putative chimeric sequences by comparing all sequences to the most abundant sequences in the dataset (Edgar et al., 2011). Putative taxonomic assignments were derived using the classify.seqs program and filtered to remove sequences with any taxonomic assignment to Chloroplast or Mitochondria. The programs dist.seqs and cluster (set to average neighbor) were used as described in the SOP. The mothur tool was also used to analyze the processed sequences to determine α- and β-diversity measures and putative taxonomic assignments. For this downstream analysis, operational taxonomic unit (OTU) calls were made at the 99% identity level ("label = 0.01"); where applicable, all groups were randomly subsampled to 900 sequences ("size = 900"); and for any process where multiple iterations were required, 1000 iterations were used ("iters = 1000"). α-diversity was determined using the summary.single command, which estimated Good's coverage (Good, 1953) for the samples. Several calculators were used to determine β-diversity, including ThetaYC (θYC), Jaccard dissimilarity, and Bray-Curtis dissimilarity. The statistical significance of these β-diversity measures was computed using parsimony, weighted UniFrac (Lozupone and Knight, 2005), and unweighted UniFrac (Lozupone and Knight, 2005) tests. Non-metric multidimensional scaling (NMDS) ordination plots were constructed and statistical significance was determined using an analysis of molecular variance (AMOVA) test (Excoffier et al., 1992). OTUs were putatively classified using the classify.otu program.
PHYLOGENETIC TREE CONSTRUCTION Representative sequences determined to putatively belong to the Thaumarchaea MG-1 group were generated in mothur.
Representatives were aligned with CLUSTAL W (Thompson et al., 1994) to environmental sequences (Durbin and Teske, 2010) and to sequenced members of the Phylum Thaumarchaea (Hallam and Konstantinidis, 2006; Hatzenpichler et al., 2008; Walker et al., 2010), and trimmed to a region (289 bp) that included the V4 hypervariable region of the 16S rRNA gene. Maximum likelihood trees were constructed with 1000 bootstraps using the Kimura two-parameter (K80) substitution model (Kimura, 1980).
RESULTS AND DISCUSSION Analysis of the 16S rRNA gene sequences generated allowed us to examine the community diversity and OTU abundances of the FeMn nodule-associated and sediment microorganisms from four sites in the SPG. Analyzing the total diversity of the samples (α-diversity), the way community diversity compared between samples (β-diversity), and the overall community composition served as a basis for establishing putative roles and functions of the microbes associated with FeMn nodules.
ALPHA-DIVERSITY α-diversity is a general measure of species diversity (OTU richness) used to contrast different samples/sites in ecological studies. This study returned a total of 1270 OTUs across the 20 samples. Based on the observed OTU richness, it is apparent that the sediment samples collected from 0 to 5 cm near the collected nodules tended, though not exclusively, to have a higher number of OTUs (Table 2). Samples collected from different layers within each nodule showed a wide range in the number of observed OTUs, but these numbers do not coincide with a specific source layer. The OTU richness of the inward layers (denoted inner and core) of the nodules collected at sites SPG2 and SPG10 is higher, while SPG9 is more even in richness across the outer and inner layers (Table 2). In general, Good's Coverage Estimate suggests that our depth of sequencing, and subsequent subsampling, covers approximately 84-98% of the OTU diversity within the samples, with the lowest coverage obtained for the sediment samples (Table 2). The data suggest that the sediments are more diverse than the nodules. It is generally assumed that increases in diversity are linked to an increased range of potential energy sources [e.g., increased microbial diversity in estuaries and coastal environments compared to the open ocean (Zinger et al., 2012)]. The difference in diversity between the nodule and sediment environments may suggest that the surface sediment, despite having low total organic carbon (D'Hondt et al., 2009), is capable of sustaining more microbial diversity than the nodules due to the availability of potential energy sources. Alternatively, the nodule environment introduces a number of cellular stressors due to increased metal concentrations that may hinder the growth of a more diverse microbial population. The data also suggest that the outer layers of the nodules are less diverse than the inner layers. The implication may be that the inner layers are more capable of promoting microbial diversity and growth. This may be counterintuitive if it is assumed that the most influential metabolic processes are linked to metal oxidation. The outer layers are the site of active nodule growth and the only region directly in contact with the surrounding organic material (OM) of the sediment, and thus the major site of metal oxidation and of access to extant OM.
The increased diversity in the inner layers may be the result of metabolic activity linked to metal oxide reduction (or some type of cycling between reduction and oxidation of the metal species). Alternatively, microorganisms in the inner-layer communities may be entombed, such that the increased diversity is an artifact of multiple instances of organisms colonizing the active outer layers and becoming trapped.
BETA-DIVERSITY β-diversity is a measure used to compare the communities of different samples/sites. Community variation was calculated using three different methods, computed from: [1] only the differences in the OTUs present (Jaccard dissimilarity); [2] the absolute abundance of the shared OTUs (Bray-Curtis dissimilarity); and [3] the relative abundance of OTUs (θYC). In general, the calculations of statistical significance were in agreement for all possible sample groupings (see Supplementary Material, Appendix Table 1), but only the results of the θYC calculations will be discussed, as this analysis is more robust in demarcating differences between communities (Yue and Clayton, 2005). Using hierarchical clustering to visualize the differences in communities between the samples and sites, it becomes apparent that SPG9 and SPG10 cluster together and away from SPG2, and that the sediment communities cluster away from the nodule communities, except for an inner-layer sample from SPG2 (Figure 1). To increase the resolution of these variations and test their statistical significance, the samples were plotted using NMDS in three dimensions and tested for significance using the AMOVA test (Figure 2). Multiple designations of the samples were used to tease apart which factors contributed most to community variation. Samples were classified by nodule site or sample source (layer or sediment), individually (e.g., all sediment samples labeled "sediment" with sample site ignored) or combined, with various iterations to test significance (see Supplementary Material, Appendix Table 2). Assigning samples by both sample site and source allowed for the most accurate interpretation of the data, and many of the broad interpretations made from labeling by sample site or source alone were supported across the different iterations of the data. Each sediment community was significantly different from each of the other sediment communities (Table 3). The sediment communities were also significantly distinct from the nodule communities of SPG2, SPG9, and the SPG10 inner layers, though the SPG10 outer layers were not significantly distinct. This overlap between the SPG10 outer- and inner-layer communities may be the result of the imprecise nature of the subsampling process and the difficulty of parsing subsamples that may span layers. The SPG9 and SPG10 nodule communities were not significantly different from each other, but the SPG2 nodule communities were significantly distinct from both SPG9 and SPG10. For both SPG2 and SPG9, the communities on the exterior of the nodule ("outer") were significantly different from the communities of the inward portions of the respective nodules (for SPG9 this includes an inner-layer and a core sample, both distinct from each other and from the outer layer). The different layers within SPG10 were not distinct from each other. The nodules from SPG9 and SPG10 do not have significantly distinct communities, despite a distance of ~600 km between the sites.
This is in contrast to the nodule from SPG2, which is significantly different and located at least ~2100 km from SPG9 and SPG10. This difference appears to be related to specific OTUs that are part of the SPG2 nodule community and not part of the SPG9 and SPG10 nodule communities (Figure 3). Sites SPG9 and SPG10 lie in a slightly different regime within the SPG, with higher average surface chlorophyll concentrations than SPG2 (D'Hondt et al., 2009). However, if the apparent similarity between the nodules of SPG9 and SPG10 (and the difference from the SPG2 nodule) were a result of the physical/biological parameters of the overlying water column and OM inputs, it might be expected that the sediments from SPG9 and SPG10 would have similar communities, whereas the data indicate these communities are distinct. Possibly the distinct communities are the result of differences in the age of the nodules, the surrounding sediment environment, or the seeding populations. There are a number of OTUs present in all three nodule communities, and it may be that these members play a role in nodule formation/maintenance, while the differences represent possible flexibility in the recruitment of microbial populations. For the nodules from SPG2 and SPG9 it is possible to differentiate between the inner layers of the nodule and the outer layers. These results suggest that the interior conditions of the nodule may select for a particular community composition that is different from the community composition of the exterior samples. Many of the community members are present in both the interior and exterior samples, and it is the level of abundance that changes. An increase in abundance may be linked to changes in the activity or role played by these OTUs, such that their metabolisms are favored in the interior conditions compared to other OTUs that decrease in abundance.
COMMUNITY COMPOSITION The most abundant OTU signatures were examined to determine whether the presence/absence of putative taxonomic assignments revealed information about nodule-microbe interactions. In 16S rRNA gene surveys, much of the emphasis is on the most abundant OTUs as a proxy for the most abundant organisms present in the samples. In general, this type of abundance data agrees with the more active members of the community, but there have been examples where the opposite is true (Campbell et al., 2011). The top 30 most abundant OTUs (in total sequences assigned to the OTU) were assessed for the role they play in differentiating between samples and were assigned putative taxonomic groups. Taxonomic assignments were used to predict how the microbial communities might be functioning based on the metabolisms of related organisms. The top 30 OTUs cover 43-80% (mean: 61%) of the total number of sequences assigned to OTUs for each sample (Figure 3). The assignments were as follows: OTU2, 3, 4, 5, 8, 9, 13, 15, 19, 20, 22, 25, 27, 28 and 30, Thaumarchaea: MG-1; OTU7, 10, 14, 16, 18 and 29, γ-Proteobacteria: Sinobacteraceae; OTU6 and 26, Bacteroidetes: Flavobacteriaceae; OTU17 and 23, α-Proteobacteria; OTU1, γ-Proteobacteria: Colwellia; OTU11, α-Proteobacteria: Rhodospirillaceae; OTU12, γ-Proteobacteria: "endosymbiont"; OTU24, γ-Proteobacteria: Alteromonadales: NB-1d. Many of the top 31-50 OTUs contain <5% of the total abundance, with the rest of the OTUs containing <1% of the total abundance for each sample. Twenty-two of the top 30 most abundant OTUs were present in both the sediment and nodule samples (Figure 3).
The sediment samples have three OTUs found only in these samples (OTU13, 28, and 30). Based on the SILVA taxonomy and assignment by mothur, these OTUs were putatively assigned to the MG-1 group within the Phylum Thaumarchaea (Figure 3). There were five OTUs not found in the sediment. Three of these OTUs were associated only with SPG2 (OTU1, 6 and 26), while the other two (OTU17 and 23) were present in all of the nodule samples. Of the OTUs found only in association with SPG2, OTU1 could be assigned to the Genus Colwellia, and OTU6 and 26 were assigned to the Family Flavobacteriaceae. For the other nodule-associated OTUs, both OTU17 and 23 were assigned to the Class α-Proteobacteria. Interestingly, 15 of the 30 most abundant OTUs were assigned to the MG-1 Thaumarchaea and 6 to the Family Sinobacteraceae in the Class γ-Proteobacteria. The OTUs associated only with the SPG2 nodule were assigned to phylogenetic groups known to specialize in the degradation of high-molecular-weight compounds (Cottrell and Kirchman, 2000; Methé et al., 2005). While such a function would be common in sediments with higher loads of OM, the SPG sediments are the most carbon-poor marine sediments sampled to date (D'Hondt et al., 2009). Organisms specialized in OM degradation would need to process recalcitrant material effectively. Of all the OTUs, OTU1 was assigned at the most specific taxonomic rank, putatively a member of the Genus Colwellia, and was the most abundant OTU in its respective sample (>50% abundance) (Figure 3). The members of the Genus Colwellia are all psychrophiles, and certain members have been shown to form biofilms, to possess co-enzyme F420, which may play a role in aromatic compound degradation, and to have the potential for denitrification (Methé et al., 2005). Aromatic compounds are generally more recalcitrant than other compounds, so if OTU1 also possesses such genetic potential, this may be a viable metabolism for effective growth in the SPG. Interestingly, denitrification is an anaerobic metabolism, yet the SPG has been shown to have measurable O2 throughout the sediment column (D'Hondt et al., 2009). Potentially, what makes the SPG2 nodule communities distinct are members with a role in OM degradation, not members with putative roles in metal chemistry. This may indicate that the SPG2 community is undergoing different processes, biologically and potentially chemically, than the nodules from the other sites. Less clearly defined were the OTUs exclusive to all of the nodule communities (Figure 3). Both OTU17 and 23 could only be assigned to the level of Class within the α-Proteobacteria. The α-Proteobacteria is a very large and diverse phylogenetic group. A number of organisms within the α-Proteobacteria partake in metal biogeochemistry, including organisms in the Genus Magnetospirillum, which can form magnetosomes composed of magnetite [Fe(II,III) oxide] (Matsunaga et al., 1991). OTU11 was assigned to the Family Rhodospirillaceae, of which the Genus Magnetospirillum is a member. Most surprising of the putative taxonomic assignments was the breadth of diversity recovered from the Thaumarchaea MG-1 group.
Half of the top 30 most abundant OTUs were assigned to this group, including the three OTUs exclusive to the sediment communities. The Thaumarchaea are one of the most abundant groups of organisms on the planet and, currently, all members are believed to be capable of the first step of nitrification (ammonia oxidation) and of carbon fixation (Hatzenpichler, 2012; Tully et al., 2012). While there is no known direct link between this group of organisms and metal chemistry, their presence in the SPG may have a consequential impact. The SPG sediment environment is believed to be extremely carbon depleted, but has relatively high concentrations of reduced nitrogen compounds (D'Hondt et al., 2009). If the Thaumarchaea from this study are active and autotrophic, they may function as a source of reduced carbon, acting as primary producers for the microbial communities. Using representative sequences for each of the 15 OTUs among the top 30 assigned to the Thaumarchaea, a phylogenetic tree was constructed that included sequences from Thaumarchaea genomes and from SPG site 12 sediment, where a targeted 16S rRNA gene survey analyzing Thaumarchaea MG-1 diversity had been done (Durbin and Teske, 2010) (Figure 4). Based on the phylogenetic groups assigned in Durbin and Teske (2010), OTU13, 30 and 28 fall within the MG-1 υ group. The MG-1 υ group was seen to increase with sediment depth and could not be found in the bottom water samples. Most of the sequences fell within the MG-1 α group, which was found in the Durbin and Teske (2010) samples at the sediment-water column interface. The phylogenetic data for the Thaumarchaea from the nodules and sediment of SPG2, 3, 9, and 10 do not reveal any new distinct groups and cover many of the surface-sediment-related groups previously identified.
CONCLUSION The results of the study reveal novel information regarding the types of microorganisms associated with FeMn nodules and present a starting point for further research into the role biology plays in their formation and maintenance. The study clearly shows that the FeMn nodule-associated microbial community is significantly different from the surrounding sediment communities, suggesting the microbes in the nodules may play different metabolic roles than those of the sediment (and are not just "hitchhikers" from the surrounding environment). This idea is further supported by the underlying similarity between FeMn nodule communities (with some exceptions), especially for the nodules from SPG9 and SPG10, despite the distance between the sites. Furthermore, the communities associated with the inner portions of a nodule are distinct from its outer portions and from the surrounding sediment, implicating a possible selective pressure, such that the dominant physiologies of the inner nodule are different from those of the outer nodule. This may be the result of a shift from metal oxidation and OM degradation on the exterior to metal oxide reduction in the interior, or potentially some form of complex cycling of metal species and OM. The presence of sequences related to predominantly OM-degrading organisms in the SPG2 nodule suggests an increased role for heterotrophic metabolism in these samples. The presence of Thaumarchaea in all samples may highlight a possible basis for a food web supported by ammonia oxidation and carbon fixation in the energy-limited SPG environment.
The lack of an abundant 16S rRNA gene sequence strongly linked to a phylogenetic group with known metal metabolisms may have implications for the role biology plays in nodule chemistry. It is possible that the microorganisms associated with FeMn nodules do not play an active role in nodule formation through key metabolic functions (e.g., metal oxide reduction as a terminal electron acceptor in an electron transport chain), but the biochemical reactions associated with microorganisms may still be important. Oxidation/reduction of the metal species may be an ancillary biochemical mechanism related to biofilm formation/maintenance, cellular detoxification of reduced metals, or metabolic waste sequestration. Alternatively, the nodule-associated organisms may be utilizing the OM degraded by the FeMn nodule to sustain growth, while nodule formation is truly abiotic. Further study of the genomic potential of the microbial community may reveal metal metabolisms in unexpected lineages or biological mechanisms linked to nodule chemistry.
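For readers who want to reproduce the style of diversity comparison used above, the following is a minimal sketch of Good's coverage and of the Bray-Curtis and θYC dissimilarities, coded directly from their published formulas (Good, 1953; Yue and Clayton, 2005). It is not the mothur implementation, and the OTU count vectors are invented.

# Diversity measures used in this study, sketched from their formulas.
import numpy as np

def goods_coverage(counts):
    # C = 1 - f1/N: f1 = number of singleton OTUs, N = total sequences.
    counts = np.asarray(counts)
    return 1.0 - np.sum(counts == 1) / counts.sum()

def bray_curtis(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.abs(a - b).sum() / (a + b).sum()

def theta_yc_dissimilarity(a, b):
    # Yue-Clayton theta on relative abundances p, q:
    # theta = sum(p*q) / (sum((p - q)^2) + sum(p*q)); dissimilarity = 1 - theta.
    p = np.asarray(a, float); p /= p.sum()
    q = np.asarray(b, float); q /= q.sum()
    shared = np.sum(p * q)
    return 1.0 - shared / (np.sum((p - q) ** 2) + shared)

# Hypothetical OTU count vectors for a nodule layer and a sediment sample.
nodule   = [120, 30, 5, 1, 0, 1, 8]
sediment = [40, 60, 25, 3, 9, 1, 2]
print(goods_coverage(nodule), bray_curtis(nodule, sediment),
      theta_yc_dissimilarity(nodule, sediment))

Note that θYC weights shared relative abundance against squared abundance differences, which is why it is less sensitive than Jaccard to rare OTUs that appear in only one sample.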
An accurate and efficient identification of children with psychosocial problems by means of computerized adaptive testing
Background Questionnaires used by health services to identify children with psychosocial problems are often rather short. The psychometric properties of such short questionnaires mostly fall short of what is needed for an accurate distinction between children with and without problems. We aimed to assess whether a short Computerized Adaptive Test (CAT) can overcome the weaknesses of short written questionnaires when identifying children with psychosocial problems.
Method We used a Dutch national data set obtained from parents of children invited for a routine health examination by Preventive Child Healthcare, with 205 items on behavioral and emotional problems (n = 2,041, response 84%). In a random subsample we determined which items met the requirements of an Item Response Theory (IRT) model to a sufficient degree. Using those items, the item parameters necessary for a CAT were calculated and a cut-off point was defined. In the remaining subsample we determined the validity and efficiency of a Computerized Adaptive Test using simulation techniques, with current treatment status and a clinical score on the Total Problem Scale (TPS) of the Child Behavior Checklist as criteria.
Results Of the 205 items available, 190 sufficiently met the criteria of the underlying IRT model. For 90% of the children a score above or below the cut-off point could be determined with 95% accuracy. The mean number of items needed to achieve this was 12. Sensitivity and specificity with the TPS as criterion were 0.89 and 0.91, respectively.
Conclusion An IRT-based CAT is a very promising option for the identification of psychosocial problems in children, as it can lead to an efficient yet high-quality identification. The results of our simulation study need to be replicated in a real-life administration of this CAT.
Background Many children suffer from behavioural and emotional problems [1][2][3] and these problems may seriously interfere with their daily functioning, now and later in life [4,5]. Yet many of these children remain untreated [5]. Early identification and treatment considerably improves the prognosis of the children involved [2,6]. Community-based preventive child healthcare (PCH) services, especially outreaching services, are in a unique position to identify such problems as early as possible. In the Netherlands, PCH professionals offer routine well-child care to the entire Dutch population up to the age of about 14, free of charge. The early detection of children with psychosocial problems is an explicit part of their remit. In contrast to systems existing e.g. in the US, Dutch PCH does not offer treatment services. When (physical or psychosocial) problems are detected, children are referred to other parts of the healthcare system, especially to primary healthcare. Research has shown, however, that early identification in PCH is often far from perfect. For example, Brugman et al. showed that in Dutch PCH, about half of the children with a clinical CBCL Total Problem Score remained unnoticed when they were examined by a physician or nurse [1]. Other studies came to similar conclusions [7][8][9][10][11]. There are several possibilities to improve the identification of children with emotional and behavioural problems. Wiefferink et al.
showed that using clear protocols and extensive staff training can lead to a significant increase in the number of children with problems identified and a decrease in the number of children incorrectly identified as having problems [12]. Other studies showed that using good questionnaires, filled in by parents, teachers or the children themselves, can also help to improve the quality of early identification [2,[13][14][15]. However, in community-based PCH the time available for each individual child is limited. This means that questionnaires that are practicable in such settings have to be easy to score and therefore short. Also, they must be easy for all parents to answer. Short questionnaires, unless they have a very narrow scope, tend to be less reliable and less valid than desirable [16]. Identification of problems based on such questionnaires is therefore error-prone, resulting in too many false classifications. Since the 1950s, new statistical models called Rasch or IRT (Item Response Theory) models have been developed which allow for Computerized Adaptive Testing (CAT), a short and efficient test procedure that does not compromise the accuracy of the test results. Originally, these models could only be applied to items with only two categories. This limited their application mainly to the field of intelligence testing and the assessment of school achievement [17]. In the last decades more widely applicable models have become available. This led to IRT-based test procedures in the field of quality-of-life measurement [18]. Several publications have described the application of these models to the assessment of mental health problems [19][20][21][22]. Just like test procedures based on more traditional psychometric theories, IRT-based procedures help to determine the position of a person on some measurement scale, for instance of intelligence, school achievement or the level of psychosocial problems. In IRT that position is called the person location. IRT differs from traditional psychometrics in that it provides information about which items are relevant to use in an individual assessment and which are less useful. A simple example may illustrate this principle. Suppose that in a particular arithmetic test, a child failed to give the correct answer to the question "How much is 2*3?" In that case it is probably not very useful to ask "How much is 34*17?" The latter question can help to distinguish between children at a higher position on the arithmetic ability scale, but will add little information for a child who failed to answer the first question correctly. Translating this to scales assessing emotional and behavioural problems, items indicating severe problems are not informative for children with no or few problems, and items indicating less severe problems are not informative for children with severe problems. With IRT it is possible to determine the severity of individual items, i.e. the position on the scale where an item is informative. That position is called the item location [17]. This information can be used to shorten the test length in the following way. After each answer to a single question an estimate is made of the person's probable score, or person location. Then the available items are scanned to determine which item could most improve the estimated person location. This continues until a previously defined accuracy has been reached. In practice this process is only possible with the aid of computers: Computerized Adaptive Testing (CAT) [23].
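To make the loop just described concrete, here is a minimal CAT sketch in Python. It uses the dichotomous Rasch model and a crude grid-search maximum-likelihood estimator purely for illustration; the study itself used the polytomous Partial Credit Model, Fisher information for item selection and a Bayesian estimator, so everything below (function names, stopping value, item bank) is an assumption of the sketch rather than the authors' procedure.

# Minimal CAT loop for dichotomous Rasch items (illustrative sketch only).
import numpy as np

def p_correct(theta, b):
    # Rasch probability of a positive response at person location theta,
    # item location b.
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def item_information(theta, b):
    p = p_correct(theta, b)
    return p * (1.0 - p)

def estimate_theta(responses, bs, grid=np.linspace(-4, 4, 401)):
    # Crude maximum-likelihood estimate of the person location over a grid.
    loglik = np.zeros_like(grid)
    for x, b in zip(responses, bs):
        p = p_correct(grid, b)
        loglik += x * np.log(p) + (1 - x) * np.log(1 - p)
    return grid[np.argmax(loglik)]

def run_cat(item_bank, answer_fn, se_target=0.4, max_items=30):
    theta, asked, responses, bs = 0.0, [], [], []
    while len(asked) < max_items:
        # Pick the unasked item most informative at the current estimate.
        candidates = [i for i in range(len(item_bank)) if i not in asked]
        best = max(candidates, key=lambda i: item_information(theta, item_bank[i]))
        asked.append(best)
        responses.append(answer_fn(best))
        bs.append(item_bank[best])
        theta = estimate_theta(responses, bs)
        info = sum(item_information(theta, b) for b in bs)
        if 1.0 / np.sqrt(info) < se_target:  # stop when the SE is small enough
            break
    return theta, asked

# Toy run: 50-item bank, simulated respondent with true location 1.0.
rng = np.random.default_rng(0)
bank = np.sort(rng.uniform(-3, 3, 50))
theta_hat, used = run_cat(bank, lambda i: int(rng.random() < p_correct(1.0, bank[i])))
print(f"estimated location {theta_hat:.2f} after {len(used)} items")

The key property illustrated is that each item is chosen where it is most informative for the current person-location estimate, which is exactly why a CAT needs far fewer items than a fixed questionnaire.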
For CAT to be possible, the locations of the items must be known in advance, before actual testing of an individual starts. In this study we assessed whether CAT can also be used for a fast, short, yet high-quality identification of children with emotional and behavioural problems in community-based PCH. To that end, the following three questions will be answered: 1 Are the items of four questionnaires on emotional and behavioural problems suitable for an IRT-based CAT and, if so, what are the parameters (item locations) of the individual items to be used in a CAT? 2 Which cut-off point results in a sensitive and specific distinction? 3 What are the validity and specificity of such a CAT, and how efficient is the procedure?
Data collection, population and measures We used a data set collected in an earlier study [24] containing information about parent-reported problems of children aged seven to twelve. Data were collected in a two-step procedure. In the first step, nine randomly selected regional PCH organizations were found willing to participate in our study. In the second, parents who were invited for a routine health examination of their child were asked to participate in the study and to fill in some questionnaires about the emotional and behavioural problems of their child. The study was approved by the Medical Committee of the Leiden University Medical Center. Data from 2041 parents were available, that is, 84% of all invited parents. Table 1 presents some demographic characteristics of the respondents and non-respondents. The sample may be considered representative of the population under care in Dutch PCH in this age group, with Cohen's W (a measure of effect size) varying from .002 (for gender) to .109 (for ethnic origin). The SDQ was developed by Goodman as a screener for psychiatric problems in children, especially in community samples. Its validity and usability have been demonstrated in a large number of studies and in many countries, including the Netherlands [24,33,34]. The SDQ contains 25 items and allows for the calculation of 5 subscales (Emotional Problems, Conduct Problems, Problems with Peers, Hyperactivity and Prosocial Behavior). The first four subscales can be summed into a Total Problem scale. The PSC was developed by Jellinek and Murphy as a screener for psychosocial dysfunction. Its validity has also been well established. It allows for the calculation of a single Total Problem scale. At the time of the study no Dutch version was available. Therefore a Dutch version was developed in co-operation with the authors, based on three independent translations and back-translations [35]. This Dutch version was proven to be valid and reliable [24,36]. The PSYBOBA was developed in the Netherlands as a screener for psychosocial problems among primary school children, specifically for Dutch PCH. Its 26 items also allow for the calculation of a single Total Problem scale. The validity of the PSYBOBA was shown to be perfectly comparable to that of the SDQ and PSC [24]. Which parent answered which of these three questionnaires was determined at random. The three sub-samples were similar with regard to the background characteristics mentioned above and with regard to the number of children being treated for psychosocial problems and the number of children with a clinical score on the CBCL Total Problem scale [24].
A clinical score was defined as a score above the 90th percentile for specific age/gender groups in the Dutch normative sample, following the Dutch CBCL manual [37]. The PSC, SDQ and PSYBOBA were chosen for this study because there was evidence for their conceptual validity in relation to the kind of problems Dutch PCH aims to identify and because they met the requirements for use in the context of PCH: short, easy to administer and easy to score. Their validity in relation to a clinical CBCL Total Problem scale score was shown to be similar, with sensitivity indices varying from 0.78 (PSC) to 0.86 (SDQ and PSYBOBA) and specificity indices from 0.90 to 0.91 [24]. The way in which we collected data led to an incomplete data matrix: the data for the PSC, the SDQ and the PSYBOBA are each available in about one third of the sample. Finally, PCH professionals answered questions on the current treatment status and the emotional and behavioural problems of each child, based on medical records and on the routine health examination of the child, during which a short structured interview was done for the purpose of this study.
Data analysis We randomly divided the total sample into two subgroups. The first one, the calibration group (n = 1,650), was used to answer the first two questions (suitability of the items and determination of the cut-off point). The second, the validation group (n = 391), was used for the evaluation of the validity and efficiency. This evaluation in a separate group was done in order to prevent overestimation of the validity and efficiency coefficients. To assess the suitability of the items for an IRT-based CAT, we assessed whether the items fitted the assumption of one-dimensionality. For this aim, we determined whether the items showed enough fit with the Partial Credit Model (PCM), one of the one-dimensional IRT models. Using this model for a CAT has the advantage that it results in scores on an interval measurement level [38]. We performed this assessment using the RUMM 2020 software (http://www.rummlab.com.au/) [39], as this can handle incomplete data matrices like ours. RUMM 2020 provides so-called outfit statistics for each item, which indicate to what extent each item fits the model. Items were considered suitable for CAT measurement if they had an outfit statistic smaller than 1.7. Next, we calculated the item locations of the remaining items, using the same software. Additional file 1 presents an overview of the items, their means and standard deviations and, when not removed, their item locations. In order to determine whether the estimated item locations would be valid independently of gender and ethnicity, we performed Differential Item Functioning (DIF) analyses for each item. We did this by multinomial logistic regressions, with the raw score on the item as the dependent variable. First, the estimated person location was the only predictor in the logistic regression model. Second, gender, ethnicity and their interaction were added as predictors. Items were considered as showing DIF when these additional predictors had a significant effect and led to an increase in explained variance of more than 3.5% [40]. Third, we determined an optimal cut-off point for the CAT scores, i.e. one enabling a good distinction between a non-clinical and a clinical CBCL TPS.
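The cut-off search described next can be sketched in a few lines: scan the ROC curve of simulated CAT scores and keep the threshold whose specificity is closest to 0.90. This is a hypothetical illustration with simulated scores and scikit-learn rather than the authors' tooling.

# Sketch of a specificity-anchored cut-off search on simulated CAT scores.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# Hypothetical CAT person locations; clinical cases shifted upward.
y_clinical = rng.random(1000) < 0.10
scores = np.where(y_clinical,
                  rng.normal(-0.5, 0.8, 1000),
                  rng.normal(-2.5, 0.8, 1000))

fpr, tpr, thresholds = roc_curve(y_clinical, scores)
specificity = 1.0 - fpr
idx = np.argmin(np.abs(specificity - 0.90))  # threshold closest to spec = 0.90
print(f"AUC = {roc_auc_score(y_clinical, scores):.2f}")
print(f"cut-off = {thresholds[idx]:.2f}, "
      f"sensitivity = {tpr[idx]:.2f}, specificity = {specificity[idx]:.2f}")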
The CBCL TPS was used as the criterion measure because it measures exactly the emotional and behavioural problems that Dutch PCH aims to identify, and because both its concurrent and predictive validity have been widely established [41][42][43][44]. We simulated a CAT in the calibration group, using the answers to the paper-and-pencil questionnaires as if they were given in a CAT, and calculated the resulting person locations (CAT scores). We assumed that in community-based PCH about 30 items is the maximum feasible number, and limited the number of items used in this CAT to 30. We used Fisher's Information Index for the selection of the next item in the CAT [45]. A Bayesian approach with a right-skewed lognormal prior was used to estimate the person locations. Using the scores from this simulation we performed a Receiver Operating Characteristics (ROC) analysis with a clinical CBCL TPS as criterion, and chose the point that resulted in a specificity of 0.90 as the cut-off point. The exact estimates of the person locations, however, will vary somewhat with the number of items used in the CAT. In order to assess the effect of this variation, we repeated the analyses with a fixed number of 5, 10 and 20 items, and also with no limit on the number of items, continuing until the person locations had been estimated with 95% accuracy. In all these CATs the first item was chosen at random. We calculated the sensitivities and specificities for all these analyses and inspected the differences, in order to verify that the maximum of 30 items we used was a sensible one. Finally, we evaluated the validity and efficiency of the CAT. The validity was assessed by means of a simulated CAT in the independent validation group. In this simulation we aimed to assess, with an accuracy of 95%, whether a person scored above or below the chosen cut-off point. In other words, the CAT was stopped when the 95% Confidence Interval of the estimated person location no longer overlapped with the chosen cut-off point. This procedure is known as clinical decision adaptive testing [46]. Again, the starting item was chosen at random, Fisher's Information Index was used to select the next best item, and a Bayesian approach was used to estimate the person locations. We assessed the validity of the estimated person locations by calculating the Area Under the Curve (AUC), sensitivity and specificity with a clinical CBCL TPS and current treatment status as criteria. In order to enable some comparison with results from other studies, we also calculated kappas between the dichotomized CAT scores and both dichotomized CBCL Total Problem Scale scores and being under treatment for psychosocial problems. The efficiency of the procedure was evaluated by calculating the number of items needed in this simulated CAT and the number of respondents for whom the CAT resulted in 95% certainty of a score below or above the chosen cut-off point.
Suitability of the items for an IRT-based CAT Of the 205 non-open-ended items in the four questionnaires, 190 met the criteria for a CAT: they had an outfit of less than 1.7; 15 items were removed because of an outfit larger than 1.7 (Table 2). Most items that had to be removed came from the CBCL (13 of 15). The Person Separation Index was 0.93, indicating high reliability. The DIF analyses showed that almost all estimates were not modified by gender and ethnicity. Only 8 of the 190 items showed some DIF: 5 items from the CBCL, 2 from the PSC and 1 from both the SDQ and the PSYBOBA.
Results

Suitability of the items for an IRT-based CAT

Of the 205 non-open-ended items in the four questionnaires, 190 met the criteria for a CAT: they had an outfit statistic of less than 1.7. The remaining 15 items were removed because of an outfit statistic larger than 1.7 (Table 2). Most items that had to be removed came from the CBCL (13 out of 15). The Person Separation Index was 0.93, indicating a high reliability. The DIF analyses showed that almost all estimates were not modified by gender and ethnicity. Only 8 of the 190 items showed some DIF: 5 items from the CBCL, 2 from the PSC and 1 each from the SDQ and the PSYBOBA. Five of these items showed some DIF in relation to gender (sexual problems, running away, attacking others, being ill without physical cause and problems with teachers) and three in relation to ethnicity (tantrums, not being assertive, talking about suicide). Most of these problems have a very low prevalence; the percentages of parents reporting such problems as clearly or often present ranged from 0 to 5.8%. These items may therefore be expected to have a small overall impact on the final estimations, and we decided not to remove them. Figure 1 presents the estimated item locations calculated for the remaining items, split by questionnaire. As mentioned before, these item locations are indications of the level of severity. The most severe items on the right (concerning very serious problems) were items from the CBCL, which in general appeared to have more severe items than the other three questionnaires.

Determining the cut-off point

After the item locations had been estimated, we performed a CAT simulation in the calibration group with a fixed number of 30 items. Figure 2 presents the number of respondents by the calculated person location on the latent scale, by CBCL TPS category (normal, borderline or clinical). The ROC analysis showed that the specified specificity of 0.90 was reached with a cut-off point of -1.9. The sensitivity for a clinical CBCL TPS at that point was 93%. Table 3 presents the effects, in terms of AUC, sensitivity and specificity, of using different numbers of items in the CAT. The specificity shows little variation; using a fixed number of 5 or 10 items results in a decreased sensitivity. The results for a CAT with 20 or 30 items and for a CAT that continues until the 95% confidence interval no longer overlaps with the cut-off point are very similar.

Validity and efficiency

In the validation group, the ROC analyses showed that the CAT did very well in the identification of children with a clinical TPS: the AUC was 0.92 (CI: 0.85-0.99). With the chosen cut-off point, sensitivity was 0.89 (CI: 0.71-0.97), with a specificity of 0.91 (CI: 0.87-0.93). Kappa was 0.53. Using treatment status as criterion the

Overall, in relation to the CBCL TPS, the CAT selection procedure resulted in a correct classification of 91% of all children involved. The CAT resulted in a correct classification for the large majority of cases with normal (96%) or clinical scores (89%). However, 20 (77%) of the 26 cases with a score in the CBCL borderline range had an elevated CAT score. Figure 3 presents the number of items needed to reach convergence, i.e. to assess with 95% certainty whether the respondents had a true score below or above the chosen cut-off point of -1.9.

(Figure 3. Efficiency of the CAT procedure: percentage of persons for whom a score above or below the cut-off point could be estimated with 95% accuracy, by number of items used to achieve convergence.)

In 40 cases (10%), convergence was not possible with fewer than 100 items. These cases had a mean person location of -1.88 (standard deviation, sd = 0.18), i.e. very near the chosen cut-off point. Their mean CBCL TPS was 28.4 (sd = 7.1); 25% of them had a CBCL TPS in the borderline range and 5% in the clinical range. For the other 351 cases, the mean number of items used was 11.5 (sd = 13.0). For 37% of the respondents the procedure converged with fewer than 5 items; for 57%, up to 9 items were needed. For 74%, up to 20 items were used, and for 82%, up to 30 items. The mean CBCL TPS for respondents for whom fewer than 5 items were used in the CAT was 10.8 (sd = 10.5). We checked the agreement between the CBCL TPS-based classification and the CAT classification for these respondents. In 98% of the cases the classification was identical. The CAT resulted in a score below the cut-off point for 2 respondents with a clinical CBCL TPS, and one respondent got a CAT score above the cut-off point with a CBCL score in the normal range.
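For reference, the screening indices reported in this section derive from a 2x2 table of CAT classification against criterion in the usual way; a small sketch follows, with symbolic cell counts, since the study does not report all raw cells.

```python
def screening_indices(tp, fp, fn, tn):
    """Sensitivity, specificity and Cohen's kappa from a 2x2 table of CAT
    classification (test) against a clinical CBCL TPS (criterion).
    tp/fp/fn/tn are symbolic counts, not the study's actual cells."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    observed = (tp + tn) / n
    # chance agreement, computed from the marginal totals
    chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (observed - chance) / (1 - chance)
    return sensitivity, specificity, kappa
```

Note that when clinical scores are relatively rare, kappa can be modest even with high sensitivity and specificity, which is consistent with the kappa of 0.53 alongside the high sensitivity and specificity reported above.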
Discussion

This study showed that IRT-based computerized adaptive testing can indeed result in accurate, yet very efficient, identification of children with psychosocial problems. Most of the items of the four questionnaires under investigation met the requirements of the IRT model needed to incorporate them into a CAT. A simulation study showed that the procedure identified children with a clinical CBCL TPS with high sensitivity and specificity. For 90% of all cases we could determine with 95% certainty whether they had an elevated score. To achieve these results, on average only 11.5 items were needed, and for more than half of the children fewer than ten items were needed. There are, of course, other, more traditional techniques for reducing test length. In contrast to these approaches, however, an IRT-based CAT provides high measurement quality by adjusting the items used in the assessment to the individual being tested. This has the additional advantage that the individual is not confronted with items that are not relevant to his or her situation and that might be upsetting. The inclusion of items from the SDQ, PSC and PSYBOBA in the item pool used for this CAT therefore offers the advantage that more items are available which are suitable for parents of children with no or few problems.

Fit with the literature

Our finding that an IRT-based CAT can result in accurate assessments with far fewer items than tests based on traditional psychometrics is fully in line with findings in other studies applying IRT CAT techniques in the fields of intelligence and school achievement assessment [16,17] and in the field of quality of life [18,22,47]. The first studies on the application of IRT models in the field of the identification of behavioural and emotional problems in paediatric care have now been published [19-22], and these studies came to similar conclusions. Hill et al. [22] present a detailed analysis to assess the suitability of items from the Pediatric Quality of Life Inventory for a CAT on distress, but do not provide data on criterion validity. Compared to other validation studies regarding CAT and mental health, our study and the study by Gardner et al. [20] are the only ones that focus on a rather broad concept, rather than on more specific problems, like Gardner [21] and Fliege et al. [19]. Gardner et al. [20] used the PSC as criterion. As we used the more widely validated CBCL as one criterion, our study provides a stronger argument for the usefulness and validity of CAT-based procedures in the field of mental health. Gardner et al. [20] evaluated the extent to which a multidimensional adaptive test could be used to replicate screening decisions based on the Pediatric Symptom Checklist. They found a very high correspondence between the adaptive PSC and the original 35-item PSC (kappa = 0.84), higher than the corresponding figure we found. The mean number of items they needed to achieve this was 12, out of 35, whereas we needed a mean of 12 items to replicate the screening decision based on the 120-item CBCL.
It is not exactly clear why they found a higher correspondence than we did. Our cut-off point was not chosen in order to maximize kappa, but had we done so, our kappa would still have been lower than Gardner's. An explanation might be that Gardner et al. limited themselves to PSC items, whereas we used items from four questionnaires; thus, in our study there is less overlap between the items in the CAT and the criterion measure. This is probably the main reason why Gardner's study resulted in a higher kappa.

Strengths and limitations

This study has several strengths but also limitations. A major strength is that it concerns a community-based sample of children with high response rates that is representative of the population under care. Furthermore, we used separate groups for the construction of the CAT and for its validation. A limitation of our study is that some of the items predicting the criterion are part of the criterion itself, i.e. the CBCL items. In our view this does not harm the validity of our conclusion regarding the quality of a short alternative to a longer questionnaire. Moreover, we simulated a CAT based on answers given to a full questionnaire, which is a deviation from the real practice set-up. A next stage will certainly be to evaluate the CAT in a set-up in which items are really presented adaptively. Finally, although we had a rather large total sample, our validation group was relatively small, implying the need for a large-scale replication. Nevertheless, our study provides a valid assessment of the potential of an IRT-based CAT for PCH practice.

Conclusion

The most important conclusion of our study is that IRT-based CAT appears to be a very feasible and promising tool to improve the identification of psychosocial problems in PCH. As such, it deserves a quick passage into the daily practice of well-child care, and perhaps even of paediatric care in general, where there is a clear need for easy-to-use and sustainable high-quality screening tools to increase the paediatrician's ability to identify children with mental health needs [48]. Before final adoption in clinical practice, however, several aspects have to be studied more thoroughly. This in particular concerns the use of our simulated version in a real-life situation, with parents filling out the CAT on real computers. Currently, a beta version for this aim is available on the internet, but it is in Dutch only and is protected by passwords and firewalls to preserve patient confidentiality. A formal assessment of this implementation in daily practice is the next step for research, which will focus among other things on acceptability and usability for parents and PCH professionals and on privacy issues. Similarly, our findings have to be replicated in other settings, possibly using other item pools as well. In any case, this new technology may provide a push to improve the quality of the identification of psychosocial problems in PCH.

Additional material

Additional file 1: Items evaluated in the IRT analyses: content, mean, standard deviation and item location. Data on items evaluated in the IRT analyses, calibration sample.
Primary and Comprehensive Stroke Centers: History, Value and Certification Criteria

In the United States (US), stroke care has undergone a remarkable transformation in the past decades at several levels. At the clinical level, randomized trials have paved the way for many new stroke preventives, and recently several new mechanical clot retrieval devices for acute stroke treatment have been cleared for use in practice by the US Food and Drug Administration. Furthermore, in the mid-1990s we witnessed regulatory approval of intravenous recombinant tissue plasminogen activator for administration in acute ischemic stroke. In the domain of organization of medical care and delivery of health services, stroke has transitioned from a disease dominated by neurologic consultation services only to one managed by vascular neurologists in geographical stroke units, stroke teams and care pathways, primary stroke center certification according to The Joint Commission, and most recently comprehensive stroke center designation under the aegis of The Joint Commission. Many organizations in the US have been involved in enhancing stroke care. To name a few, the American Heart Association/American Stroke Association, Brain Attack Coalition, and National Stroke Association have been at the forefront of this movement. Additionally, governmental initiatives by the US Centers for Disease Control and Prevention and legislative initiatives such as the Paul Coverdell National Acute Stroke Registry program have paved the way to focus on stroke prevention, acute treatment and quality improvement. In this invited review, we discuss a brief history of organized stroke care in the United States, evidence to support the value of primary and comprehensive stroke centers, and the certification criteria and process to become a primary or comprehensive stroke center.

Introduction

Evidence that organization of cardiovascular care may be effective in reducing morbidity and mortality has existed since at least the 1950s, when cardiologists implemented specialized care in coronary units for patients with acute heart disease. 1 We learned in chemistry class, from the laws governing free energy, that a 'disordered', high-entropy state can be righted by pumping energy into the system. 2 Similarly, it stands to reason that in stroke care an organized structure can be developed by placing appropriate resources, management and vision into the system and by supplying the 'energy' (i.e., enthusiastic stroke team members) to make the system function well. 2 Furthermore, it has been shown multiple times that organized stroke care in the form of stroke care units reduces the morbidity and mortality associated with stroke. 1 Components and processes associated with stroke units have included, but are not limited to, stroke care maps, stroke teams, and quality improvement efforts. 2
In some regions, such as North America, primary stroke centers have become the key unit of organization for the delivery of stroke care, and more recently comprehensive stroke center certification has become a reality under the regulatory guidance of The Joint Commission. The Joint Commission sets quality standards for hospitals in the United States and serves as a certifying body for hospitals and for certain hospital-based programs such as those in stroke. Comprehensive stroke centers provide a structure to take stroke care to a new level of excellence, the potential for handling more complicated stroke cases, and a venue to provide better outcomes. In this review we discuss a brief history of organized stroke care in the United States, evidence to support the value that primary and comprehensive stroke centers may bring, and the criteria and certification process to become a primary or comprehensive stroke center. We have entered a new era of stroke care that is being ushered in by comprehensive stroke centers and new advances in stroke prevention, diagnosis and treatment. 3

Brief History of Organized Stroke Care in the United States

Healthy People 2010 and Paul Coverdell National Acute Stroke Registry. We now turn to the history of organized stroke care in the United States (US) in the modern era. In the US, the organization of stroke care was heightened by two key national prevention initiatives, brought about by the need to address almost 900,000 stroke-related hospitalizations annually, over $50 billion in lost productivity and health care costs, and the personal ravages of the physical and psychosocial devastation associated with stroke. 4 The initiatives were sponsored by the United States Centers for Disease Control and Prevention and were designed to improve stroke health. They included Healthy People 2010 and the Paul Coverdell Stroke Registry. The former initiative was developed to identify substantial preventable health threats to US citizens, increase the quality and years of life, and eliminate health disparities. 4 Sixteen of a total of 467 objectives were established to address heart disease and stroke.
The Paul Coverdell National Acute Stroke Registry program was implemented in 2001 as a set of state-based stroke quality registries to measure and follow acute stroke care outcomes and to promote high-quality acute stroke care and the prevention of stroke mortality and recurrence. Wave I (2001) of the program included 4 states and Wave II (2002) included 5 states. Then, in June 2004, funds were provided at the state level for Coverdell registries in Georgia, Illinois, Massachusetts and North Carolina, earmarked to develop and implement programs for data collection and analysis for quality improvement interventions at the hospital level, catalyzed by partnerships with physicians, stroke care teams, and hospital administrators. 4 Finally, in 2007 funding was provided to 6 state health departments for continued quality improvement work in acute stroke, and the registry remains active. In the US in the late 1990s and early 2000s there was an obvious need for quality improvement in acute and recurrent stroke preventive care, and it was suspected that quality stroke care was unevenly distributed. For example, we carried out a statewide assessment of acute stroke diagnostic and treatment capabilities in the Midwestern state of Illinois among 183 of 202 (91%) adult acute care inpatient medical hospital facilities between December 1999 and June 2000. 5 Key findings from our survey are listed in Table 1. As can be observed in Table 1, significant gaps in acute stroke care existed primarily in the non-Greater Chicago Metropolitan Area, though both the non-Greater Chicago Metropolitan Area and the Greater Chicago Metropolitan Area had key gaps in stroke community awareness programs and the availability of acute stroke care teams. 5 In a survey in North Carolina conducted before ours, and on which our survey was patterned, gaps were found in the availability of acute stroke care teams, stroke care maps, stroke units, and rapid patient identification programs. 6 Around the year 2000 these surveys demonstrated the need for improvement in acute stroke care, and therefore the need for stroke quality initiatives such as the Coverdell program to catalyze quality stroke care and raise awareness of that need. In a follow-up study by Goldstein in North Carolina, the availability of certain diagnostic tests, but not of specialty staff or stroke units, increased between 1998 and 2008, with possible improvements between 2003 and 2008 suggesting the possible establishment of programs to develop stroke care systems. 7

Emergency Medical Services, Stroke Center Network, and Brain Attack Coalition. After the approval of recombinant tissue plasminogen activator (rtPA) therapy for the treatment of acute ischemic stroke in the mid-1990s, there was a need for the integration and organization of Emergency Medical Services (EMS) as part of an effective stroke system. Organizations in the United States such as the American Heart Association/American Stroke Association (AHA/ASA) and the National Stroke Association (NSA) supported an integrated and organized approach to EMS involvement and the use of "911" telephone triggers to prompt EMS to respond to stroke as a high-level emergency 8 and to follow the principles of rapid identification and treatment of acute stroke.
9 Furthermore, in the mid-1990s the NSA Stroke Center Network program was developed, and the NSA established the NSA Stroke Center Recommendation Guidelines, which were used to develop a foundation for stroke center infrastructure. 4,10 The latter were incorporated by the Brain Attack Coalition (BAC), formed in 1996 to improve medical services and the detection of stroke. 4,11 The BAC has been instrumental in helping to craft guidelines for primary and comprehensive stroke center programs, as we discuss in a section below. The three aforementioned organizations are independent of one another. The AHA/ASA is dedicated to advocacy and to the education of the public and health care providers regarding the prevention, treatment, diagnosis and rehabilitation of heart attack and stroke, and to funding scientific endeavors in these disease-specific areas. The NSA has patient advocacy in stroke as a major goal and is also dedicated to educating patients and health care providers on stroke prevention, treatment, diagnosis and rehabilitation. The BAC has been involved in making recommendations for stroke prevention and treatment and in holding educational meetings for physician providers and clinical researchers in the stroke field.

Primary Stroke Center Certification. In 2003 the AHA/ASA and The Joint Commission agreed on a certification process for stroke through a Disease-Specific Certification program, a voluntary evaluation process driven by the demonstration of a consistent approach to clinical outcome measurement and minimum standards for stroke care built around acute ischemic stroke treatment with rtPA. 4 Primary Stroke Center Certification began in 2004, and by April 2005 about 15 hospitals per month were being reviewed. 4 By 2011, there were over 800 Joint Commission-certified primary stroke centers in the US out of some 4,000-5,000 total hospital facilities. Some states have established a state designation for stroke centers through a local health department certification mechanism, and in some regions legislation has been passed to have acute stroke patients bypass hospitals without primary stroke center designation, so that diagnosis and treatment occur at primary stroke center-designated acute receiving hospitals. It is acknowledged that quality initiatives for stroke care have evolved throughout the world and that Joint Commission International has developed a process for certifying hospitals outside of the US. 4 Whereas in the US a major focus has been the primary stroke center as the unit for acute stroke treatment, European hospitals treating stroke patients have focused on the stroke unit as the primary organizational component of acute stroke care. In 2005, a survey of 886 randomly selected hospitals in 25 countries showed that fewer than 10% of European hospitals treating acute stroke patients had optimal facilities, and in about 40% the minimal standard was not met. 12

Mississippi Stroke Education Consortium and Regional Care. Finally, one of the first larger-scale systematic approaches to regional stroke care in the US was the Mississippi Stroke Education Consortium, guided by a state-based volunteer advocacy group founded in 1994 to set policy to decrease the impact of stroke through an ongoing educational process geared to laypersons and healthcare professionals.
13 Among many advances, the Mississippi Stroke Education Consortium provided a proposal for the development of a statewide emergency stroke network that included the following 5 components: 1) a substantial initial education template for health care professionals and the public; 2) acute and subacute stroke care criteria; 3) level I-III medical center designation according to the ability to meet acute and subacute care criteria; 4) EMS transport criteria; and 5) a continuing education plan for the public and health care providers. Growth of regional stroke care in the first decade of the 21st century has been substantial. In 2000, the first counties adopted regional regulations to route acute stroke patients to primary stroke centers, and this was followed by the adoption of such policies in 2 states in 2004. 14 By 2010, 16 states and counties in 3 additional states had such legislation for EMS to route acute stroke patients to primary stroke centers and bypass non-certified facilities. By the end of 2010, it was estimated that 53% of the US population was covered by such routing protocols. 14

A key component of organized stroke care is a database solution to track outcomes and make improvements to care based on ongoing data analysis and checks. 15 The AHA/ASA Get With The Guidelines-Stroke (GWTG-S) program provides such a data solution; it has been utilized by over 1,000 hospitals in the US and holds over 1 million patient records. GWTG-S serves as a national stroke registry and quality improvement program and is believed to be representative of the national fee-for-service Medicare ischemic stroke population. 16 In a recent review of metric compliance and improved patient-centered outcomes in stroke, it was concluded that there are limited high-quality studies and that methodologic flaws make it difficult to interpret the reported associations. 17 Furthermore, the possible importance of residual confounding in studies of hospitalized stroke patients, in relation to the influence of compliance with guideline-based processes on risk-adjusted mortality, and of adjustment for stroke severity, has been emphasized. 18,19

Examples of the Need for Better Organization of Stroke Care. A number of examples demonstrate the need for better organization of stroke care anchored by primary stroke centers. For example, in 2001 Burgin et al. reported that whereas a high percentage of acute stroke patients in non-urban East Texas communities received computed tomography brain studies, aggressive treatment of blood pressure commonly occurred, and at blood pressures below treatment recommendations. 20 Rural areas and small communities in the US and elsewhere have been subject to health disparities in stroke care and need an appropriate nexus to implement best-practice recommendations. 21,22 Furthermore, despite national stroke public awareness campaigns, public knowledge of stroke risk factors and warning signs has not improved substantially over time, and care-seeking after stroke symptoms remains suboptimal (~50% of cases). 23,24 In addition, there is the potential for a large financial cost associated with inadequate primary and recurrent stroke prevention measures, and rehospitalizations among Medicare beneficiaries have become a target for financial penalties leveled on hospitals by the US Centers for Medicare and Medicaid Services.
25,26 The provision of recurrent stroke prevention services has been shown to be suboptimal in a high percentage of stroke patients, providing further justification for the need to improve the delivery of such services. 27 Also, there is a need for better transitioning of stroke patients from the inpatient hospital or rehabilitation setting to home or institutional care. 28 In-hospital initiation of stroke prevention and allied therapies may serve as a means to improve such transitioning. 29 Better organizational systems, such as those provided by well-constructed primary stroke center models, are needed to accomplish this.

Added Value and Feasibility. Finally, organized stroke care adds value in that it reduces the following risks associated with stroke: death by 14%, death or institutionalized care by 18%, and death or dependency by 18%. [30-33] Importantly, in the US it has been shown that establishing a primary stroke center is both desirable and reachable, as the resources needed to achieve primary stroke center status are present in an estimated 40% or more of US hospitals.

Additional Rationale. There is additional rationale to justify organizing stroke care according to primary stroke center or stroke unit processes. The safety of hospital care has become a major target for improvement, and adverse events and errors may be common in stroke patients. In a 750-bed academic hospital, during a 3.5-year period ending in 2004, 173 (12.0%) stroke patients had an adverse event. Common events, as might be expected in stroke patients, included falls, medication errors, and other adverse events. 34 According to the study findings, almost 50% of the adverse events were estimated to be preventable; they involved medications and other situations commonly occurring in care delivery for stroke patients, ranging from acute thrombolytic administration to end-of-life care. Of the preventable adverse events, 37% were transcription/documentation errors, 23% a failure to perform a clinical task, 10% communication or handoff errors, and 10% a failure to perform independent checks or proper calculations. Organizational means to close such gaps in care, such as primary stroke center or stroke unit systems, are available. Implementation of stroke unit care, for example, has been available for decades and has been shown to reduce the risk of death via the prevention of treatment complications. 35,36 In addition, database solutions such as GWTG-S are now used to help identify process gaps in stroke care in need of resolution.

Stroke Center Designation and Quality Improvement. Stroke center designation has been associated with a number of quality improvements, including but not limited to access to timely thrombolytic therapy and utilization of stroke unit care. 37 Primary stroke centers may be established successfully as a metropolitan-wide matrix in large population areas to facilitate the diagnosis and treatment of acute stroke patients. 38 Organizing acute stroke care in this way may be especially advantageous when there is a high annual hospital volume or high physician patient volume for stroke care, which is associated with better outcomes and cost savings.
39,40 An organized stroke care system such as an inpatient stroke unit has been associated with reduced length of care and case fatality, with cost-effectiveness when followed by early supported discharge, and, as a model for stroke care, with generalizability if implemented in non-principal referral hospitals. [41-43] It should be noted that there is evidence to suggest that hospitals with primary stroke center designation had better outcomes than non-certified hospitals even before The Joint Commission (TJC) program for primary stroke center designation was implemented; possibly, the certified hospitals had organizational programs in place prior to achieving certification status.

Who Should Be Leading Stroke Team Management? There has been debate as to whether a specifically trained vascular neurologist or another physician should manage stroke patients from the time of stroke onset and beyond. [44-46] It has been argued that vascular neurologists and others trained specifically to treat stroke patients achieve better stroke outcomes but may be associated with higher costs of stroke care. Caplan has opined that among all of the physicians who could be involved in stroke care, neurologists with interest, training and experience in caring for stroke patients are most likely to have the proper attributes to manage stroke patients. 44 Lees acknowledges the need for more stroke specialists. 45 The consensus, therefore, is that those trained to take care of stroke patients are best suited to provide stroke care. 46 It has been shown that: 1) the higher the level of stroke care organization and the complexity of the stroke case mix, in general the better the patient outcomes; 47 2) the higher the level of organized stroke care, the lower the 30-day mortality for each ischemic stroke subtype; 48 and 3) with organized acute stroke care, all age groups may benefit, as there is reduced institutionalization or death. 49 Therefore, the role of the stroke team lead physician and the overall system of organization for stroke care are highly inter-related and important aspects of the care delivery system.

Role and Value of Stroke Performance Measures. The implementation of stroke performance measures has been associated with large-scale improvement in stroke care. 50 Stroke performance measures have primarily emphasized acute and subacute aspects of stroke care, and thus there is a need to expand the measures to be more inclusive of outpatient stroke care and functional recovery. When the influence of patient and hospital factors is taken into account, in the Paul Coverdell National Acute Stroke Registry hospital-level factors explained about 18% of the total variation in quality of care, whereas the majority of the variability in quality stroke care was accounted for by patient-level factors (82%). 51 Major criteria for primary stroke center membership have been associated with benefits in acute stroke treatment processes. For example, among 34 academic medical centers, institutions that followed a greater number of BAC features were more likely to administer rtPA. 52 In addition, in the California Acute Stroke Pilot Registry, a Coverdell pilot registry, the implementation of standardized stroke orders and monitoring was associated with improvement in the use of proven acute stroke treatment or prevention interventions.
53 Furthermore, the impact of standardized stroke orders at discharge was studied in a cluster-randomized trial by the Quality Improvement in Stroke Prevention investigators in 12 hospitals. 54 The primary outcome was optimal treatment at 6 months, defined as taking a statin agent, blood pressure < 140/90 mm Hg, and receipt of anticoagulation if atrial fibrillation was present. With the hospital as the unit of analysis, the endpoint of optimal treatment was not significant, whereas at the individual patient level rates of optimal treatment did improve in the intervention hospitals compared to the non-intervention hospitals. Two other randomized trials, however, failed to show a benefit of performance feedback on ischemic stroke care quality markers after discharge. 55,56

Contributions of GWTG-S. As previously mentioned, GWTG-S has provided a substantial amount of guidance in relation to quality of care and outcomes in acute stroke in the US and elsewhere. In Asia, for example, it has been shown that GWTG-S performance measures are applicable with appropriate modification for ethnic factors. 57 In the US there has been a series of important publications from GWTG-S. We now review select papers from GWTG-S; Table 2 provides a summary of key findings. Based on 905 hospitals and 479,284 consecutive stroke or TIA admissions, the influence of stroke subtype on quality of care was reported. 58 There were 61.7% ischemic strokes, 23.8% TIAs, 11.1% intracerebral hemorrhages, and 3.5% subarachnoid hemorrhages. Overall, many hospital-based acute stroke care and prevention measures were underutilized in intracerebral hemorrhage and subarachnoid hemorrhage when compared to ischemic stroke/TIA. 58 The study period spanned from April 1, 2003 to December 30, 2007. Based on 322,847 hospitalized stroke patient discharges from a volunteer sample of 790 US academic and community hospitals during the period 2003-2007, participation in GWTG-S was analyzed to determine whether there were improvements in performance adherence. 59 Compared to baseline, by the 5th year the following improvements were noted in 7 performance measures: administration of intravenous thrombolytics (42% vs. 73%), early antithrombotics (91% vs. 97%), deep venous thrombosis prophylaxis (74% vs. 90%), discharge antithrombotics (96% vs. 99%), anticoagulation for atrial fibrillation (95% vs. 98%), lipid treatment (74% vs. 88%), and smoking cessation (65% vs. 94%), as well as in the composite of performance measures (84% vs. 94%), with P < 0.0001 for all comparisons. 59 Furthermore, there was a 1.18-fold yearly increase in the odds that care opportunities were met, independent of secular trends, and improved stroke care was observed in all hospitals regardless of size, geography and teaching status. Based on 383,318 acute ischemic stroke admissions from 1,139 hospitals between 2003 and 2008, the 7 performance measures listed above were assessed for defect-free care between women and men. 60 Overall, women received less defect-free care than men (66% vs. 71%) and were less likely to be discharged home (41% vs. 50%). The authors suggested that the differences might be due to residual confounding or other unmeasured factors, but that additional research was needed to determine the reasons for the health care disparities. 60 Based on 397,257 patients with ischemic stroke from 1,181 hospitals during the period between 2003 and 2008, 7 performance measures were studied to determine differences in care according to race/ethnicity.
61 Overall, when compared to white patients, black patients were significantly less likely to receive intravenous thrombolysis, deep venous thrombosis prophylaxis, discharge antithrombotics, anticoagulants for atrial fibrillation, and lipid therapy, and were less likely to die in the hospital. Hispanic patients received similar care and had similar mortality to white patients. Length of hospital stay for black and Hispanic patients was higher than for whites, but quality of care improved for each race/ethnic group over time. 61 Based on 2,598 patients with ischemic stroke or TIA in 106 hospitals followed from discharge to 3 months, 76% at 3 months were taking all of the recurrent stroke prevention medications (antiplatelet therapies, warfarin, antihypertensive therapies, lipid-lowering therapies, or diabetes medications as appropriate) administered at discharge. 62 Persistence was associated with a decreasing number of medication classes prescribed, increasing age, medical history, less severe stroke disability, having insurance, working status, health literacy, increasing quality of life, financial hardship, geographic region and hospital size. Based on 479,284 consecutive ischemic stroke or TIA admissions from 981 hospitals during the period 2003-2008, the frequency of low-density lipoprotein cholesterol testing was determined. Over time the frequency of testing increased from 54% to 82%; however, measurement frequency was lower in women, non-smokers, those with atrial fibrillation or a history of stroke or TIA, and those with TIA (vs. ischemic stroke). 63 Furthermore, low-density lipoprotein cholesterol testing was higher the longer a program had participated in GWTG. Based on 991,995 admissions from 4 US regions during the period 2003-2010, 8 guideline-recommended treatments were studied: intravenous rtPA, antihypertensives at discharge, smoking cessation counseling, weight loss education, antithrombotics, anticoagulants for atrial fibrillation, deep venous thrombosis prophylaxis, and lipid-lowering medications at discharge. 64 Overall, use of the therapies varied as follows: 58-68% for intravenous rtPA, 73-76% for lipid-lowering therapy, 80-84% for antihypertensives, 96-97% for antithrombotics, 88-91% for deep venous thrombosis prophylaxis, 49-55% for weight loss education, and 72-77% for defect-free care. By region, patients in the South had the lowest odds of use of rtPA, antihypertensives, and defect-free care, but were more likely to receive lipid-lowering agents than those in the Northeast. Patients in the Midwest had lower odds of administration of intravenous rtPA and of defect-free care. Those in the West had lower odds of administration of antihypertensives but greater odds of being treated with lipid-lowering therapy.

Role and Value of Telestroke. It is estimated that approximately 50% of the US population has reasonable access to a primary stroke center. 65 For those who do not have timely access to a primary stroke center, some may benefit from access to a center
with telemedicine through a hub-and-spoke relationship between a hospital without stroke expertise and one with telestroke expertise. In a recent survey to determine active telemedicine programs for stroke in the US, 56 such programs had confirmed telestroke activity, including 38 programs from 27 states. 66 Whereas these programs are thriving in certain regions, some such activities may be challenged by a lack of reimbursement for services, a lack of program funds, the inability to obtain physician licensure, and other hurdles. 66 Evidence and policy statements by the AHA/ASA have been published previously to help position telestroke activities, 67,68 as has a guide to the practical aspects of telestroke systems. 69 Telestroke is a means to extend stroke expertise to underserved areas; when applied by competent individuals, it is a viable remote-presence alternative to in-person availability, increases the delivery of rtPA in acute ischemic stroke, and can do so within acceptable standard rates of efficacy and safety. [70-74]

Table 2. Select GWTG-S findings, by metric of interest:
- Stroke subtype: hospital-based acute stroke care and prevention measures were underutilized for intracerebral hemorrhage and subarachnoid hemorrhage vs. ischemic stroke/TIA.
- Adherence: significant improvements from baseline at 5 years in the administration of intravenous thrombolytics, early antithrombotics, deep vein thrombosis prophylaxis, discharge antithrombotics, anticoagulation for atrial fibrillation, lipid treatment, and smoking cessation, and in the composite of performance measures.
- Defect-free care: women had less defect-free care on 7 performance measures than men and were less likely to be discharged home.
- Race/ethnicity: compared to white patients, black patients were significantly less likely to receive intravenous thrombolysis, deep venous thrombosis prophylaxis, discharge antithrombotics, anticoagulants for atrial fibrillation, and lipid therapy, and were less likely to die in the hospital. Hispanic patients received similar care and had similar mortality to white patients. Length of stay for black and Hispanic patients was higher than for whites, but quality of care improved for all race/ethnic groups over time.
- Medication persistence: 76% persistence at 3 months after discharge with all recurrent stroke prevention medications (antiplatelet therapies, warfarin, antihypertensive therapies, lipid-lowering therapies, or diabetes medications as appropriate).
- Low-density lipoprotein cholesterol testing: over time the frequency of testing increased from 54% to 82%; however, measurement frequency was lower in women, non-smokers, those with atrial fibrillation or a history of stroke or TIA, and those with TIA (vs. ischemic stroke).
- Regional treatment: patients in the South had the lowest odds of use of rtPA, antihypertensives, and defect-free care, and were more likely to receive lipid-lowering agents vs. those in the Northeast. Patients in the Midwest had lower odds of administration of intravenous rtPA and defect-free care. Those in the West had lower odds of administration of antihypertensives but greater odds of being treated with lipid-lowering therapy.

Cost and Cost-Effectiveness. The cost of stroke in the US and cost analyses of stroke centers, telestroke and rtPA administration have been reviewed previously by Demaerschalk and colleagues. 75,76 It has been argued that because stroke centers can reduce the length of hospital stay, and both stroke centers and telemedicine programs can increase the use of rtPA, it is very possible that these care processes are cost-effective.
[75-78] In the original BAC publication on recommendations for the establishment of primary stroke centers, a similar argument was made that costs might be recouped by shortening the length of stay for hospitalized stroke patients and by reducing the complications associated with stroke. 79 Length of hospital stay is considered one of the major drivers of the costs associated with stroke and other inpatient medical care. 80 Reducing length of hospital stay and rehospitalizations has heightened importance under the new US healthcare system plan for reimbursement and cost savings. Additional high-quality cost-effectiveness research on stroke centers and telemedicine is needed to guide the judicious future use of health care resources. 76

Evidence to Support the Value of Comprehensive Stroke Centers

Thus far, we have shown that organized stroke care in the form of enhanced medical delivery processes, such as stroke units and primary stroke centers, is associated with improvements in a number of performance measures and may be associated with reduced mortality and dependency, among other benefits. Emphasis on reducing medical errors and preventing early rehospitalizations has become a major focus in the US healthcare system, highlighting the need for systems of care that will reduce medical errors and complications. Given the potential for a highly complex stroke case mix and the need to deliver cutting-edge interventions, a movement to establish comprehensive stroke centers has evolved. Comprehensive stroke centers are those capable of providing a full spectrum of care to seriously ill patients with stroke and cerebrovascular disease. 81 Since there has been a relative paucity of study, there is limited scientific evidence regarding the value of comprehensive stroke centers. The argument strongly in favor of comprehensive stroke centers is based on the need for a higher level of specialized care, given the spectrum of available diagnostic, treatment, preventive and rehabilitation resources and the new technical advances in these areas. Several studies have emerged that support the value of comprehensive stroke centers. For example, there may be a disparity between outcomes for stroke patients admitted to hospitals on weekends vs. weekdays. In one study, comprehensive stroke centers showed no difference in 90-day mortality for stroke patients admitted on weekends vs. weekdays, whereas at other care facilities the risk of death was higher for weekend admissions. 82 Furthermore, in a registry-linkage study from Finland, the number-needed-to-treat to prevent 1 death or institutional care at 1 year was 29 for comprehensive stroke centers vs. 40 for primary stroke centers, when compared to general hospitals. 83
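As a worked illustration of these number-needed-to-treat (NNT) figures, using only the standard definition and the numbers quoted above:

$$\mathrm{NNT}=\frac{1}{\mathrm{ARR}}\quad\Rightarrow\quad \mathrm{ARR}_{\mathrm{CSC}}=\frac{1}{29}\approx 0.034,\qquad \mathrm{ARR}_{\mathrm{PSC}}=\frac{1}{40}=0.025$$

In other words, relative to general hospitals, comprehensive stroke centers were associated with roughly 3.4 fewer deaths or institutionalizations per 100 patients at 1 year, versus 2.5 per 100 for primary stroke centers.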
A British study showed that interventional endovascular stroke services were available in only a small number of hospitals, and only about 50% of the hospitals without an available endovascular service for stroke had transfer plans with a center that did provide the services. 84

Certification Process for Primary and Comprehensive Stroke Centers

Primary Stroke Centers. An important step toward the establishment of primary stroke center certification was the BAC recommendations for primary stroke centers. 79 Major elements of a primary stroke center according to the BAC included: 1) patient care areas (e.g., acute stroke teams, written care protocols, emergency medical services, emergency department services, a stroke unit for those centers providing ongoing inpatient care for stroke patients, and neurosurgical services); and 2) support services (commitment from the parent medical organization, a stroke center director, neuroimaging services, laboratory services, outcome and quality improvement activities, and continuing medical education). 79 A key message of the BAC recommendations was the timely provision of acute stroke services: general laboratory services, electrocardiography and chest X-ray needed to be available on a 24-hour/day, 7-day/week basis; computed tomography brain scanning on a 24-hour/day, 7-day/week basis; and neurosurgical services within 2 hours. 4,15 With the establishment of primary stroke center recommendations, the next step was the development of a process for certification. As previously mentioned in this review, TJC and the AHA/ASA agreed on a certification process for stroke classified as a Disease-Specific Certification. 4 Three major elements of TJC Primary Stroke Center Certification were established: 1) compliance with and use of evidence-based stroke guidelines; 2) implementation of TJC standards (e.g., accuracy of patient identification, effectiveness of communication among caregivers, reconciliation of medications, reduction of the risk of harm from falls, and TJC disease-specific standards such as performance measurement, clinical information management, and program management); and 3) measurement of clinical outcomes. 4 In relation to stroke performance measures, a set of harmonized ischemic stroke measures was developed, including but not limited to deep venous thrombosis prophylaxis, antithrombotic therapy at discharge, anticoagulation therapy at discharge if the patient had atrial fibrillation, dysphagia screening, stroke education, smoking cessation advice/counseling, and assessment for rehabilitation. 4 Furthermore, a subset of these stroke performance measures was included for hemorrhagic stroke patients (e.g., deep venous thrombosis prophylaxis, dysphagia screening, and stroke education). It was at the discretion of the local primary stroke center to determine quality improvement plans and the means to measure clinical outcomes. GWTG-S became a popular database tool to record and track performance measures. In 2011, the BAC revised and updated its recommendations for the establishment of primary stroke centers. 85 Based on a literature review and local experience, the following areas were highlighted in the revised, updated statement: 1) the importance of acute stroke teams; 2) the importance of stroke units with telemetry monitoring; 3) the utilization of magnetic resonance imaging, including diffusion-weighted sequences; 4) assessment of the cerebral vasculature by magnetic resonance angiography or computed tomographic angiography; 5) cardiac imaging to assess stroke etiology; 6) early deployment of rehabilitation therapy; and 7) independent local site certification that includes a site visit and disease performance measures.

Comprehensive Stroke Centers. In 2005, the BAC published a consensus statement with recommendations for comprehensive stroke centers.
81 The recommendations emphasized the services needed to deliver specialized care and included the following key components: 1) specialized personnel (e.g., vascular neurology, vascular neurosurgery, critical care medicine, rehabilitative medicine, staff stroke nurses, and diagnostic radiology/neuroradiology); 2) diagnostic techniques (e.g., magnetic resonance imaging with diffusion, computed tomographic angiography, conventional cerebral angiography, transesophageal echocardiography, and transcranial Doppler); 3) availability of surgical and interventional therapies (e.g., carotid endarterectomy, endovascular ablation, ventriculostomy, intra-arterial reperfusion, and brain hematoma evacuation); 4) infrastructure (e.g., stroke unit, intensive care unit, and operating suite and interventional services coverage 24 hours/day, 7 days/week); and 5) educational/research programs (e.g., community and professional education, and patient education). 81 In September 2012, TJC launched an advanced certification program for Comprehensive Stroke Centers. 86 This new level of certification recognizes the substantial resources needed to establish and manage complex stroke and cerebrovascular cases. The certification requires centers to meet Disease-Specific Care requirements, including but not limited to the following criteria: the program is in the US and has TJC accreditation; it uses standard methods to deliver clinical care and tracks performance measures over time; and it cares for a minimum number of patients. A summary of the eligibility requirements, in line with the BAC recommendations, 81 is listed in Table 3. 86 The first comprehensive stroke center applicants were reviewed in 2012, and approval certifications are being issued. For more information about the application process for a comprehensive stroke center, the reader is referred to http://www.jointcommission.org/certification/advanced_certification_comprehensive_stroke_centers.aspx and dscinfo@jointcommission.org.

Conclusion

Stroke care has evolved substantially during the past several decades. The results of clinical trials of acute stroke care, prevention and rehabilitation have led to new evidence-based options for the care of stroke patients. One of the major advancements in stroke is the organization of care, which has been and is being transformed by primary and comprehensive stroke centers. These approaches promise to provide better outcomes and more cost-effective care.
A questionnaire measuring staff perceptions of Lean adoption in healthcare: development and psychometric testing

Background: During the past decade, the concept of Lean has spread rapidly within the healthcare sector, but there is a lack of instruments that can measure staff's perceptions of Lean adoption. Thus, the aim of the present study was to develop a questionnaire measuring Lean in healthcare, based on Liker's description of Lean, by adapting an existing instrument developed for the service sector.

Methods: A mixed-method design was used. Initially, items from the service sector instrument were categorized according to Liker's 14 principles, describing Lean within four domains: philosophy, processes, people and partners, and problem-solving. Items were lacking for three of Liker's principles and were therefore developed de novo. Think-aloud interviews were conducted with 12 healthcare staff from different professions to contextualize and examine the face validity of the questionnaire prototype. Thereafter, the adjusted questionnaire's psychometric properties were assessed on the basis of a cross-sectional survey among 386 staff working in primary care.

Results: The think-aloud interviews led to adjustments in the questionnaire to better suit a healthcare context, and the number of items was reduced. Confirmatory factor analysis of the adjusted questionnaire showed a generally acceptable correspondence with Liker's description of Lean. Internal consistency, measured using Cronbach's alpha, was 0.60 for the factor people and partners and over 0.70 for the three other factors. Test-retest reliability, measured by the intra-class correlation coefficient, ranged from 0.77 to 0.88 for the four factors.

Conclusions: We designed a questionnaire capturing staff's perceptions of Lean adoption in healthcare on the basis of Liker's description. This Lean in Healthcare Questionnaire (LiHcQ) showed generally acceptable psychometric properties, which supports its usability for measuring Lean adoption in healthcare. We suggest that further research focus on verifying the usability of LiHcQ in other healthcare settings, and on adjusting the instrument if needed.

Electronic supplementary material: The online version of this article (doi:10.1186/s12913-017-2163-x) contains supplementary material, which is available to authorized users.

Background

During the past decade, interest in adopting Lean in the healthcare sector has increased [1], the primary aims of implementation being to improve the quality of care [2] and to increase efficiency [3]. Adopting Lean and letting it become a natural part of daily work routines is challenging [1,4]. Most commonly, Lean is adopted only to some extent and limited to certain parts of the organization [1,5-8]. In such cases, system-wide improvements cannot be expected [9]. A recent review [10] of Lean in healthcare concluded that research is needed on how to evaluate the extent of Lean adoption and on how Lean is perceived by healthcare staff. Thus, the aim of the present study was to develop a questionnaire measuring staff perceptions of Lean adoption in healthcare, including an analysis of its psychometric properties.

Liker's description of Lean and Lean in healthcare

One challenge when describing Lean adoption is that there is no consensus on how to define Lean, and the principles of Lean can be expressed and understood in several different ways [1,11-13].
In the present study, we have chosen Liker's [14] description of Lean. Other descriptions have been proposed by, for instance, Womack, Jones and Roos [15], whose description of Lean is similar to Liker's, is cited frequently and is described extensively. However, their description has been criticized for not paying attention to the human resources in a Lean organization [16]. Another framing of Lean was suggested by Shah and Ward [17]. Their description of Lean, however, lacks a long-term perspective and does not address decentralized decision-making, which is important in healthcare. Spear and Bowen [18] also described Lean as adapted in the industry sector, using four core aspects. Liker's [14] description of Lean was considered best suited for this study, as the principles included are quite generic, cover both an operative and a philosophical side of Lean, and stress human resources [14]. Liker identifies 14 central principles in four domains: philosophy, processes, people and partners, and problem-solving (the 4P) (Fig. 1).

[Fig. 1. Lean as described by Liker [14] in terms of 4 domains and 14 principles.]

According to Liker [14], the domain philosophy means basing decisions on long-term thinking that aims to create value both for the individual patient and for society as a whole, with the customer in focus, which is something the entire organization should strive for. Similarities with healthcare are the focus on the customer and on creating value for the patient [19]. Further, Liker [14] described the domain processes, which addresses initiatives to increase quality and efficiency, mainly by using the allocated resources optimally and reducing waste. This can be achieved by mapping processes and improving flow. When flow is optimal, there is no or minimal waste; the staff know what is expected of them and when to do what, and they also know what their colleagues are doing and can see the importance of each part of the whole process. Reducing waste means reducing what does not add value to the product or service from the customer/patient perspective. Waste includes waiting time, unnecessary movements, product defects and not using employees' creativity. The domain people and partners involves respecting and challenging people and enabling them to grow, within and in connection with the organization [14]. Respecting people and enabling their growth are also central in healthcare: the care provided should be person-centered, and respect and enabling should also apply to staff, partners and suppliers [19-21]. This approach also includes the organization's responsibility for enabling staff and giving them the prerequisites to provide high-quality patient care [19]. The domain problem-solving aims at achieving the right quality and flow in the organization by finding the root causes of problems. Staff members continuously solve problems and are, in this way, involved in evaluations, decisions and development of their workplace. Thus, we found Liker's [14] description to be the most useful when developing a questionnaire measuring Lean in healthcare.

Instruments measuring Lean

Different instruments have been developed to measure Lean in different occupational sectors (e.g., [17,22-29]); they are based on different conceptual foundations, entail different data collection methods, use different respondents and are mostly developed for the industry sector. According to Guillemin et al. [30], when selecting an instrument it is important that it suits the context in which it is to be used.
Hence, we did not consider instruments developed for industry as a basis for further development into an instrument suitable in a healthcare context. It is reasonable that an instrument intended to measure Lean in healthcare should include the core values of the healthcare professions, i.e. to enable people and show respect for them, as is done in person-centered care [19]. Another important aspect is to adapt the instrument to those who can provide the requested information [31], in this case the staff. We found two instruments from sectors other than industry that we regarded as interesting candidates for further development in the present study: Roszell's [29] and Malmbrandt and Åhlström's [28] instruments. Roszell's [29] instrument was specifically developed for healthcare. The questionnaire is based on expert opinions and literature describing Lean, and the intended respondents are nurses. However, it consists of 110 items, which we consider an unfeasible size for regular use by practitioners. Malmbrandt and Åhlström [28] developed their instrument in European service sector companies, which share properties with healthcare in focusing on direct contact with customers/patients. Malmbrandt and Åhlström's development and validation process was both theoretically and empirically driven, using a structured literature search, interviews with expert practitioners and workshops with researchers, academics and Lean expert practitioners. The instrument consists of 28 items measuring Lean adoption, each item with five response alternatives ranging from low Lean maturity to high Lean maturity. On the basis of reactions by their informants, Malmbrandt and Åhlström [28] deemed the content validity of the instrument to be satisfactory, and they state that the instrument is sufficiently sensitive to detect changes over time. The aim of the present study was, based on Liker's description of Lean, to further develop Malmbrandt and Åhlström's instrument into an instrument measuring staff perceptions of Lean maturity in a healthcare context. An additional aim was to describe and test the resulting instrument's face validity, construct validity, internal consistency and stability. Permission to further develop Malmbrandt and Åhlström's instrument for the healthcare sector was obtained from the authors.

Method

The development and evaluation process was based on a cross-sectional design with a mixed-method approach [32], comprising one theoretical step, followed by two steps based on the empirical data (Fig. 2). The study was approved by the Regional Ethical Review Board in Uppsala (Reg. no. 2014/525).

Theoretical development of the questionnaire

Given our decision to base our questionnaire on Liker's description of Lean, we first used a deductive approach to examine whether Malmbrandt and Åhlström's instrument addressed all principles of Lean as described by Liker [14]; see Additional file 1. We found that three principles were not addressed, i.e. principles 8, 11 and 13 (cf. Fig. 1 for Liker's description of Lean). Therefore, new items were developed to cover these principles. The next step was to translate the questionnaire from English to Swedish, which was done by the first author; a back translation was subsequently carried out by a bilingual professional translator. Discrepancies between the versions were discussed and accounted for by our research group in collaboration with the translator [33,34]. The resulting questionnaire prototype was called the Lean in Healthcare Questionnaire (LiHcQ).
[Fig. 2. The stepwise process used in the study. The qualitative process is described in Steps 1 and 2, and the quantitative process in Step 3.]

Contextualization and assessment of the questionnaire's face validity

To contextualize and validate the prototype of the LiHcQ, the cognitive method Think Aloud (TA) was used to explore how healthcare staff perceived and interpreted the LiHcQ [35]. A convenience sample of seven units from different regions and different healthcare settings, hospital and primary care, was obtained. First-line managers at the recruited units were instructed by their manager to ask staff of different professions, sexes and ages about their interest in participating. All participants in the TA had experience of Lean. A purposive sample of 12 staff with different professions (nurses, managers, physicians, physiotherapists, administrators/secretaries), sexes and ages participated in this step; the number of participants selected was based on suggestions made by Beatty and Willis [35]. Three participants worked in hospital and nine in primary care; both public non-profit and private for-profit providers were represented. Eleven were women, mean age 46 years (SD 10), and the most common profession was registered nurse; the mean number of years worked at the present unit was 10 (SD 9) and the mean number of years worked in the profession was 16 (SD 13). The TA interviews were held by the first author in a private room at the participants' respective workplaces during January and February 2015. Prior to the TA interviews, and again in connection with them, the participants received both written and verbal information about the study. At the beginning of the TA interview, participants were instructed to "think aloud" while they read the items in the LiHcQ prototype [36]. An initial sample of seven staff participated in the first round of TA sessions; based on their comments, the text in the LiHcQ was adjusted, and a new TA session with five other participants was conducted. Whenever a participant hesitated or reacted in any way while reading the LiHcQ, the researcher intervened, asking questions such as "I can see that you reacted to the statement, what are your thoughts about it?" [35]. The TA interview was completed by asking the participant to give his/her overall opinion about the questionnaire. The TA interview was terminated when no additional new information was obtained [36]. The interviews were audiotaped and transcribed verbatim [35]. The data were analyzed deductively, following Tourangeau's [37] approach to TA data analysis. Thus, responses and comments were organized into four categories: comprehension, retrieval, judgment and response. According to Tourangeau, the category comprehension concerns whether words and phrases are difficult or impossible to understand; retrieval concerns whether responding is difficult because the needed information is not available; judgment concerns whether it is difficult to put information together to make a judgment and thereafter respond; response concerns difficulties in selecting a response option, e.g. if a participant hesitates between two response alternatives and would like to give an intermediate answer. This deductive analysis was conducted after both TA rounds. Adjustments to the questionnaire based on the analyses were made by the first author and discussed among all authors until consensus was reached. The adjusted version of the LiHcQ was thereafter tested for construct validity, internal consistency and stability.
Construct validity, internal consistency and stability of the LiHcQ questionnaire

In this step, we recruited a convenience sample of staff working in public non-profit or private for-profit primary care; the primary care sector was selected due to the lack of research on Lean in this sector [10,38]. All 52 primary care units, both public non-profit and private for-profit, in one region in central Sweden were asked to participate; 42 of the units wished to participate. Additionally, to increase the participation of private for-profit units, all 85 primary care units in one of the largest private for-profit healthcare providers in Sweden were asked to participate; six units agreed to participate. Included were units in primary care, with the exception of specialized units; those excluded focused on dermatology, nutrition or administration, or were units with inpatients or call centers with telenurses. To be included, the units should have implemented Lean to some degree. Concerning the participants' inclusion criteria, staff should have worked at least three months at their unit prior to data collection. The first-line manager at each unit provided information about the study at their regular meetings, and all staff received written information from the researchers together with the questionnaire. The staff were also informed in writing that their consent to participate in the study would be given by responding to the questionnaire. The adjusted and contextualized LiHcQ developed through the TA interview process was sent out in spring 2015, and 1040 staff members were eligible for inclusion. It was embedded in a larger questionnaire that also included items on, for instance, job satisfaction, general health and satisfaction with the care provided (data not presented here). During this phase, the LiHcQ was web-based, but those not responding on the web were sent a paper version. Two reminders were sent out. The response rate was 46% (481 of 1040). Of the 481 respondents, 386 had answered at least 50% of the LiHcQ items; further analyses used the data from these 386 respondents. An analysis of the non-respondents showed no significant differences between them and the participants in the sample concerning age, sex, years worked at the present unit and years worked in the profession, which indicates that the answers are representative. Most participants were female (n = 333), with a mean age of 50 years (SD 10); the most common profession was registered nurse (n = 150), and the mean number of years worked at the present healthcare unit was 9 (SD 9) (see Table 1 for sample characteristics). When testing construct validity, a confirmatory factor analysis (CFA) was conducted in AMOS on data from participants with complete data on all LiHcQ items (n = 243); using only complete data is common when conducting a CFA [39]. The other data were analyzed using IBM SPSS Statistics, version 22. As a rule of thumb, CFA requires ten participants per variable [40]. The LiHcQ comprised 16 variables and, thus, the number of participants was sufficient. Among the large array of parameters describing goodness of fit, we selected the Chi-square test, the Root Mean Square Error of Approximation (RMSEA), the Comparative Fit Index (CFI) and the Standardized Root Mean square Residual (SRMR), as recommended by Kline [40]. Kääriäinen [39] organizes goodness-of-fit metrics in two groups: absolute parameters and relative parameters.
Chi-square and RMSEA are absolute parameters, which indicate how well the hypothesized relationships between the variables match the observed relationships, i.e., how the model fits compared with no model at all. The Chi-square goodness-of-fit test indicates that the model is acceptable when the relative Chi-square (Chi²/d.f.) is less than 3 and the p-value is larger than 0.05. However, the test has been criticized, and other tests have been developed. RMSEA is one of them, and values for RMSEA below 0.08 may be considered acceptable [39]. In addition, Kline [40] recommends SRMR, i.e. the difference between the residuals in the covariance matrix of the employed sample and a hypothesized model. A good model has values less than 0.05 (theoretical range 0 to 1) [40]. Relative parameters test the adequacy of a theoretical model by comparing the sample covariance matrix to a null model where all variables are uncorrelated. One of the most common relative parameters is CFI, which we included in our study. A good fit is suggested if the value is greater than 0.90 [39]. We assessed internal consistency using Cronbach's alpha coefficient, where values larger than 0.70 indicate acceptable performance. Stability in terms of test-retest reliability was evaluated through intra-class correlation coefficients (ICC) with 95% confidence intervals (CI). According to Cicchetti [41], ICC values can be considered poor if < 0.40, fair between 0.40 and 0.59, good between 0.60 and 0.74 and excellent if ≥ 0.75. P-values less than 0.05 (two-tailed) were considered to indicate statistically significant results.

Results

Contextual adjustments and face validity of the preliminary questionnaire

The qualitative analyses of data from the first round of TA revealed that comments mostly concerned the category comprehension. The TA participants commented that some of the words employed did not fit into a healthcare context or that they had difficulties understanding certain words and phrases. Words and phrases that needed to be contextualized included, e.g.: enabler, innovative, expert practitioner, standardized, infrastructural factors, to create flow in the processes, to level out the workload, and proactive planning.

[Table 1. Participants in the validity and reliability analysis of the Lean in Healthcare Questionnaire (LiHcQ), and non-responders, i.e. responders with missing answers to more than 50% of the LiHcQ items. Md: median, Q: quartiles, SD: standard deviation. Where the numbers for professions do not add up to 386, 95 and 43, respectively, this is because some participants have multiple functions.]

In the category retrieval there were no comments; in the categories judgment and response there were comments regarding a few items. Comments on the questionnaire as a whole concerned the opinion that it was too comprehensive and time-consuming, and some participants mentioned that duplicate items seemed to occur. One participant in the first TA round expressed the need for contextualizing the questionnaire: "It feels like difficult language that I don't really understand. And also it feels like a literal translation from English, a little stilted and strange, …" Another participant in the first round expressed the need for a shorter and contextualized questionnaire, while stating that the questionnaire was relevant: "It's comprehensive and sort of difficult to respond to sometimes, to think about care and not factory production on some of them.
I thought others were very good." Adjustments after the first TA round mainly focused on changing the identified problematic words and phrases to everyday language in order to contextualize the questionnaire to the healthcare sector. The adjusted 31-item questionnaire was thereafter used in a second round of TA interviews. Comments concerning comprehension were now much less frequent, but a few words and phrases still needed attention. Regarding judgment, the participants expressed the need for additional information or clarification for some items. Comments concerning retrieval and response were few. Both TA rounds revealed that it was common for participants to fail to read or notice the information given on how to respond. Thus, the participants requested information that was, in fact, available in the written instructions, or they needed to read the information text repeatedly. Participants also expressed their lack of familiarity with maturity levels and statements. Another opinion expressed by most of the participants was, as in the first TA round, the need to reduce the number of items. One participant in the second round expressed an overall feeling about the 31-item questionnaire: "It feels a bit long. It can be hard to maintain your focus on each question all the way through. But otherwise there's a lot that makes you think, we should deal with this or I'd like to do that, or be there. Lots of feelings like that, a lot, we have a long way to go." As after the first round, adjustments after the second TA round focused on re-phrasing some sentences using everyday words, to contextualize the questionnaire to the healthcare sector, and on writing clearer instructions. Mostly we chose words the participants themselves had used in their own context during the TA interviews. After the second TA round, the number of items in the LiHcQ was reduced based on both the theoretical framework by Liker and information given by several respondents in both TA rounds. A common statement from the participants was that the instrument was too comprehensive; they wondered who would have time to complete it. In this reduction process, we decided to retain at least one item for each of the 14 Liker principles. The philosophy domain is represented by only one principle in Liker's description (see Fig. 1). However, to allow for better statistical assessment, three items were retained to represent this domain. In this process, 15 items were removed, and the resulting LiHcQ, shown in Additional file 2 (in English) and Additional file 3 (in Swedish), contains 16 items with five statements constructed as a maturity scale for each item.

Testing the construct validity, internal consistency and stability of the questionnaire

Table 2 presents descriptive data for the items and the factors in the LiHcQ, including results on internal consistency and test-retest reliability. Internal missing values for the items varied from 0.7 to 17%, with two items having 10% or more missing answers (Table 2). Mean values for each item ranged from 1.6 to 3.5. To test the construct validity of the LiHcQ and its correspondence with a model based on Liker's 4P, a CFA was conducted on data from 243 respondents. The Chi-square test was significant (χ² = 221.625, d.f. = 95, p < 0.001), which is not desirable in this case; however, the other fit indices showed an acceptable model fit: the relative Chi-square was 2.33, RMSEA 0.07 (90% CI 0.06 to 0.09), SRMR 0.048 and CFI 0.93.
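As a quick arithmetic check on the fit statistics above, the relative Chi-square and the RMSEA point estimate can be recomputed from the reported χ², degrees of freedom and sample size. The sketch below is purely illustrative, using one common RMSEA formula; it is not the AMOS computation itself.

```python
import math
from scipy import stats

# Values reported in the Results (CFA on n = 243 complete cases)
chi2_stat, df, n = 221.625, 95, 243

rel_chi2 = chi2_stat / df                # relative chi-square, acceptable if < 3
p_value = stats.chi2.sf(chi2_stat, df)   # significant here (p < 0.001), as reported

# RMSEA point estimate from chi-square, df and sample size
rmsea = math.sqrt(max(chi2_stat - df, 0.0) / (df * (n - 1)))

print(f"relative chi-square = {rel_chi2:.2f} (acceptable if < 3)")
print(f"p = {p_value:.2g} (the Chi-square test itself rejects exact fit)")
print(f"RMSEA = {rmsea:.3f} (acceptable if < 0.08)")
# Reported alongside: SRMR = 0.048 (< 0.05) and CFI = 0.93 (> 0.90)
```

Running this reproduces the reported 2.33 and 0.07 to rounding, which illustrates why the model was judged acceptable despite the significant Chi-square test.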
The modification indices suggested correlations between Items 3 and 4, as well as between Items 4 and 5. Item 3 belongs to the factor philosophy, Items 4 and 5 to people and partners. The model also revealed a correlation between Items 15 and 16, Item 15 belonging to processes and Item 16 to people and partners. Correlations between the latent variables and the error terms for the above-mentioned items were allowed in our model (see Fig. 3). The internal consistency, measured using the Cronbach's alpha coefficient, was 0.93 for the total questionnaire (philosophy 0.75, processes 0.86, people and partners 0.60, problem-solving 0.81) (Table 2). Stability, measured using ICC, showed acceptable values for all four factors: philosophy 0.80, processes 0.77, people and partners 0.88 and problem-solving 0.79.

[Table 2. Descriptive data for the LiHcQ, internal consistency and test-retest reliability.]

Discussion

Using a stepwise procedure, we developed a questionnaire, the LiHcQ, that measures staff perceptions of Lean adoption in the healthcare sector, based on Liker's description of Lean. Validity and reliability, measured as face validity, construct validity, internal consistency and test-retest reliability, were acceptable on the whole.

The theoretical development of the questionnaire

Recent reviews [3,10,38,42] of Lean approaches in the healthcare sector have provided no clear candidate to use as a theoretical foundation when developing a questionnaire; they have largely focused on describing applied tools and techniques. Descriptions of Lean have been offered by Liker [14], Womack, Jones and Roos [15] and Shah and Ward [17]. Womack, Jones and Roos have received criticism for the lack of focus on people and partners in their description [16], and their framework was therefore excluded. Shah and Ward's [17] view of Lean lacks decentralized decision-making and a long-term perspective, both of which are relevant to healthcare. Having respect for people and focusing on enabling their development are central aspects of the theory of person-centered care [19,43], which is emphasized in healthcare [21,44]. These aspects, respecting and enabling people, are also included in Liker's description of Lean and, therefore, constitute essential reasons for selecting his description as a basis for our instrument, despite the fact that Liker's [14] description originated in the automobile industry. The results show that Liker's framework is generic enough to be used when adapting Malmbrandt and Åhlström's [28] instrument to the context of healthcare. The participants' responses show that Lean, as described by Liker, can be understood by staff in healthcare and that it is already in use.

The contextual adjustments and face validity of the questionnaire

The qualitative method of TA interviews gave useful results in terms of contextualizing and validating the questionnaire for use in healthcare. When adjusting the questionnaire, words and phrases suggested by the participants were used. The strength of this procedure was that the participants came from different regions and different healthcare settings and had different professions. These variations reduce the risk of employing words and phrases in the LiHcQ that will only be understood by a limited group of healthcare staff. When reducing the size of the questionnaire, theoretical reasoning and empirical data from the TA were used to determine which items to discard, as recommended by Hox [45].
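For readers who want to reproduce internal-consistency figures like those above from raw item scores, Cronbach's alpha reduces to a short function over the item variances. The sketch below is generic (not the SPSS routine used in the study), and the score matrix is invented purely for illustration of the three-item people and partners factor (Items 4, 5 and 16) on the 1-5 maturity scale.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_vars / total_var)

# Hypothetical scores; real data would come from the 243 complete cases.
scores = np.array([
    [3, 2, 4],
    [2, 2, 3],
    [4, 3, 3],
    [1, 2, 2],
    [3, 4, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```

Because alpha grows with the number of items as well as with inter-item correlation, a three-item factor such as people and partners can come out lower than the longer factors even at similar inter-item correlations.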
After finalizing the shorter version, the LiHcQ still represented all of Liker's 14 principles [14], in the form of a 16-item questionnaire with response alternatives as statements. The statements are constructed as a maturity scale influenced by the capability maturity model used in earlier studies of Lean [24,27] and of other areas [46]. One advantage of the LiHcQ is that it consists of only 16 items and takes approximately 15 min to complete, compared with Roszell's [29] 110-item questionnaire. A common factor that affects response rate is the size of the questionnaire [47]. Conducting two rounds of TA interviews with different participants was another strength, as this procedure gave information on whether or not the initial adjustments were satisfactory. Previous studies have often failed to report the number of rounds performed [48-50]. One difficult part of this process is to know when to terminate the TA. We conducted, as suggested, a total of twelve interviews and terminated when new insights ceased to emerge from the interviews [35].

Construct validity, internal consistency and stability of the questionnaire

The construct validity of the LiHcQ, based on goodness-of-fit indices, was generally acceptable and similar to values observed by Shah and Ward [17], who developed an instrument to measure Lean in industry. When conducting the CFA, we allowed the latent variables and error terms for some items in the model to correlate (Fig. 3). Correlations were allowed between the error terms for Items 3 and 4. Item 3 belongs to the factor philosophy and concerns whether time for continuous improvement is approved; Item 4 belongs to people and partners and concerns whether a specific person is designated to encourage and support staff adopting Lean. The similarity between Items 3 and 4, which theoretically justifies the association, is that both items focus on the extent to which the organization allocates time and resources to Lean. The correlation between Items 4 and 5, both in the factor people and partners, can be explained by their mutual focus on showing respect to the staff by involving them in Lean adoption and letting them grow through challenges. The model also showed a correlation between Item 15, belonging to processes, and Item 16, belonging to people and partners, the similarity being that both items concern whether the staff are trusted and able to participate in or make decisions. The difference between them is that the focus of Item 15 is on improving the processes, while Item 16 primarily concerns staff having relations based on showing respect for partners and suppliers, the aim being to enable all involved to grow. The internal consistency assessed by Cronbach's alpha showed acceptable values for three factors (philosophy, processes and problem-solving), while the α-value for people and partners was 0.60. The items in the factor people and partners are 4, 5 and 16. The low α-value can plausibly be explained by the small number of items in this factor (see Table 2). ICCs showed acceptable stability for all factors. Some participants did not respond to some items, which can be explained by lexical problems; certain words are familiar or have a meaning for one group, but not for others [51]. However, results from the TA interviews show that the LiHcQ was not generally difficult to understand. Another reason for missing values may be that the LiHcQ was placed at the end of a longer questionnaire with a total of 77 items, which could have lessened participants' enthusiasm for completing the LiHcQ.
The items with the most missing values were Items 2 (17%) and 9 (10%) (see Table 2). Item 2 concerned the first-line manager's commitment to Lean. One reason for not responding to this item could be that participants felt they did not have first-hand information about their first-line manager's opinions about Lean. Item 9 concerned the extent to which the healthcare unit had automatic quality controls. When conducting the TA interviews, some participants expressed that they or their colleagues, e.g. secretaries, worked in greater isolation from the rest. This could also explain some of the missing data. However, according to Liker [14], the whole unit should have knowledge about what aspects of Lean are being adopted. In the present study, we used a convenience sample for testing the construct validity, internal consistency and stability of the LiHcQ questionnaire, which limits the generalizability of the findings. When recruiting primary care units, only 6 of the 85 units from one of the largest private for-profit healthcare providers in Sweden wished to participate. The reason for this has not been analyzed. However, non-participation could stem from units not considering themselves to have adopted Lean, or feeling that they had only adopted parts of Lean mixed with other improvement strategies; we did ask for units that had implemented Lean to some degree. Another reason could be that the healthcare staff are strained and need to reduce the number of extra commitments. Other important factors affecting the results are the low response rate and the missing data in the LiHcQ, which may indicate possible non-response bias [33]. When conducting a CFA, it is recommended to use cases with complete data on all items [39]; consequently, the number of cases in this study decreased. However, analyses of the responders and non-responders regarding age, sex, years worked at the present unit and years worked in the profession showed no significant differences between the groups, indicating that the results are not biased as regards these factors. The fact that nursing was the profession most represented in the study is also a factor that limits generalizability. On the other hand, nurses are the largest licensed group in the healthcare sector [52,53]. A strength of the study is that the staff varied in terms of profession (nurses, managers, physicians, physiotherapists, administrators/secretaries, Licensed Practical Nurses (LPNs), dieticians, social welfare officers, psychologists and occupational therapists), age, geographic location, unit size and public non-profit vs. private for-profit providers [54].
Moderate intensity continuous versus high intensity interval training: Metabolic responses of slow and fast skeletal muscles in rat

The health benefits of regular physical exercise are mainly mediated by the stimulation of oxidative and antioxidant capacities in skeletal muscle. Our understanding of the cellular and molecular responses involved in these processes often remains incomplete, particularly regarding muscle typology. The main aim of the present study was to compare the effects of two types of exercise training protocol, a moderate-intensity continuous training (MICT) and a high-intensity interval training (HIIT), on metabolic processes in two muscles with different typologies: soleus and extensor digitorum longus (EDL). Training effects in male Wistar rats were studied from the whole-organism level (maximal aerobic speed, morphometric and systemic parameters) down to the muscle level (transcripts, protein contents and enzymatic activities involved in antioxidant defences and in aerobic and anaerobic metabolism). Wistar rats were randomly divided into three groups: untrained (UNTR), n = 7; MICT, n = 8; and HIIT, n = 8. Rats of the MICT and HIIT groups ran five times a week for six weeks at moderate and high intensity, respectively. HIIT improved endurance performance (a trend toward increased maximal aerobic speed, p = 0.07) and oxidative capacities in both muscles more than MICT did, as determined through protein and transcript assays (AMPK-PGC-1α signalling pathway, antioxidant defences, mitochondrial functioning and dynamics). Whatever the training protocol, the genes involved in these processes were upregulated far more significantly in soleus (slow-twitch fibres) than in EDL (fast-twitch fibres). Solely on the basis of the transcript changes, we conclude that the training protocols tested here lead to muscle-specific responses.

Introduction

Regular practice of a physical activity is recognized to induce beneficial health effects by decreasing the risk factors associated with metabolic diseases (such as cardiovascular diseases, type 2 diabetes, metabolic syndrome or cancers) or their progression [1,2]. In these diseases, metabolic impairments such as decreased oxidative capacity and mitochondrial dysfunction often occur in skeletal muscle [3,4].
Although the health benefits of regular exercise are now well documented, a proportion of the population remains inactive due to lack of time and/or motivation. High-intensity interval training (HIIT) provides a way of obtaining some of the benefits of moderate-intensity continuous training (MICT) (improvement of maximal oxygen consumption (V̇O2max), muscle oxidative capacity and insulin sensitivity) while spending less time doing physical exercise [2,5,6]. HIIT, alternating periods of high- and low-intensity exercise, also offers the advantage of improving anaerobic capacity [7]. In humans and rodents, HIIT could be more effective than MICT at improving oxidative capacities by stimulating mitochondrial biogenesis and/or oxidative phosphorylation (OXPHOS) in skeletal muscle [8,9]. HIIT and MICT can also stimulate antioxidant defences, but sometimes do so differently according to the training intensity or the tissue considered [10,11]. Our understanding of the cellular and molecular mechanisms underlying these beneficial training effects is often still incomplete, particularly regarding muscle typology. Indeed, skeletal muscles differ in their fibre composition (slow-twitch or fast-twitch fibres, which are respectively rich and poor in mitochondria). Fibre-type composition should also be correlated with muscle performance and with muscle-associated metabolic diseases [12,13]. Another recent study has shown that the expression of numerous proteins involved in mitochondrial metabolism also adapts to training in a fibre type-specific manner [14].

In humans and rodents, the AMPK-PGC-1α signalling pathway is one of the main pathways that increases oxidative capacities in response to regular physical exercise [15,16]. AMPK upregulates PGC-1α, which in turn stimulates the expression of mitochondrial and antioxidant genes [17]. Among its numerous functions, PGC-1α could regulate the dynamics of mitochondrial fission and fusion [18]. PGC-1α could also drive a change in fibre type from fast- to slow-twitch [19]. The molecular network underlying these processes, involved in the improvement of oxidative capacity with MICT and HIIT, is still only partially elucidated.
To our knowledge, the training effects on mitochondrial biogenesis and antioxidant processes have usually been studied separately in relation to the type of muscle and/or exercise [20,21]. Moreover, results often vary from one study to another [22,23]. Here, the main purpose was to determine, within a single study, the effects of two common training protocols (MICT and HIIT), applied over six weeks, on aerobic and anaerobic responses in two muscles with different typologies in healthy Wistar rats. A murine model was chosen to explore these molecular mechanisms because comparative future investigations on skeletal muscles and the cardiovascular system will be made using a rat model of metabolic syndrome. Different measurements were made at the whole-animal level: maximal aerobic speed (MAS) and classical morphometric and cardiovascular parameters (heart rate, arterial blood pressure and cutaneous microvascular endothelial function). Mitochondrial and antioxidant enzyme activities, as well as gene expression and transcription (AMPK-PGC-1α, mitochondrial functioning and dynamics, antioxidant enzymes, lactate dehydrogenase and myosin heavy chain), were measured in soleus and extensor digitorum longus (EDL), which are mainly oxidative and glycolytic, respectively. It is known that the recruitment of fibres in each muscle depends on the intensity and the duration of exercise [24,25]. Submaximal work is performed by the more aerobically efficient slow-twitch fibres, while progressively increasing numbers of fast-twitch fibres are recruited to assist them as the effort increases toward the maximum [7]. Because of the different typologies of soleus and EDL, we hypothesized that the two training protocols, MICT and HIIT, could induce different metabolic adaptations in each of them.

Materials and methods

Animals

Twenty-three male Wistar rats (21 days, 52.1 ± 0.6 g, Janvier Labs, Le Genest-Saint-Isle, France), all born on the same day, were housed at least two per cage in a light- (12h:12h light/dark cycle) and temperature- (21 ± 1˚C) controlled animal facility until the age of 15 weeks. The rats had access to a standard chow diet (KLIBA NAFAG®, Kaiseraugst, Germany, Mouse and Rat Maintenance, 3.152 kcal/g) and drinking water ad libitum. Body weight and food, drink and total calorie intakes per rat were measured individually once a week. Weight gain and total calorie intake were calculated for the whole training period. All experiments were approved by the Comité d'Éthique Finistérien en Expérimentation Animale n˚74 and authorized in writing by the French Ministère de l'Éducation Nationale, de l'Enseignement Supérieur et de la Recherche (APAFIS#17956-2018120517015356v4).
Familiarization with the treadmill and test of maximal aerobic speed

At 8 weeks of age, all rats followed a treadmill familiarization protocol over four consecutive days. Daily session duration was gradually increased over this period from 30 min to 45 min of running at speeds of 8.3 to 20.0 m/min. At the end of this week, the maximal aerobic speed (MAS) of each rat was determined. The MAS test protocol consisted of an exercise session in which the starting speed of 10 m/min was incremented every 60 s, first by 3.33 m/min until reaching 26.7 m/min, and then by 1.7 m/min until the rats were unable to run anymore [1]. The last speed fully completed was taken as their MAS. At 9 weeks of age, the rats were randomly assigned to one of three groups: untrained (UNTR, n = 7), moderate-intensity continuous training (MICT, n = 8) or high-intensity interval training (HIIT, n = 8). For the MICT and HIIT groups only, MAS was re-evaluated at the end of the third and last training weeks, to adapt training intensity and to evaluate exercise efficiency, respectively.

MICT and HIIT protocols

MICT consisted of a 10-min warm-up at 33-49% of the rat's MAS, followed by 50 min of running at 65% of their MAS. The training ended with an active recovery of 3 min at 20-30% of their MAS, giving a total of 63 min of exercise. HIIT began with a 10-min warm-up progressively reaching approximately 70% of their MAS, followed by 5 cycles of 5 min consisting of 2 min at 85-90% of their MAS followed by 3 min of active recovery at 30% of their MAS, totalling 35 min of exercise. Both trainings lasted six consecutive weeks (five times per week in the morning, with two consecutive days of rest during the weekend). The UNTR group was also brought to the running room each day so that these rats underwent the same transport conditions (from the animal facility to the laboratory running room), and they were also placed on the switched-off treadmill.
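To read the two prescriptions above concretely, the segments can be tabulated and the per-session workload computed. The sketch below is illustrative only: the MAS value is hypothetical, ramped warm-ups are treated as flat at their target fraction, and midpoints are used wherever the protocol gives a range.

```python
# Each session is a list of (duration_min, fraction_of_MAS) segments.
# Midpoints are used where the protocol gives a range (e.g. 33-49% -> 0.41).
MICT = [(10, 0.41), (50, 0.65), (3, 0.25)]          # 63 min total
HIIT = [(10, 0.70)] + [(2, 0.875), (3, 0.30)] * 5   # 35 min total

def session_stats(segments, mas_m_per_min):
    time = sum(d for d, _ in segments)
    dist = sum(d * f * mas_m_per_min for d, f in segments)
    volume = sum(d * f for d, f in segments)  # intensity x duration (min x %MAS)
    return time, dist, volume

mas = 30.0  # m/min, hypothetical rat
for name, proto in [("MICT", MICT), ("HIIT", HIIT)]:
    t, d, v = session_stats(proto, mas)
    print(f"{name}: {t} min, {d:.0f} m, volume index {v:.1f}")
```

With these midpoint assumptions, one HIIT session comes out at roughly half of a MICT session in both distance and duration-weighted intensity; the exact 1.6- to 2-fold factors reported in the Results depend on how the ramped warm-up and recovery segments are counted.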
Arterial blood pressure and heart rate measurements

Arterial blood pressure (mean, MBP; systolic, SBP; and diastolic, DBP) and heart rate were determined by a non-invasive method that measures these parameters in the tail of conscious rats using volume-pressure recording sensor technology (CODA® non-invasive blood pressure system, Kent Scientific, USA). All rats were conditioned to the procedure over one week before data collection. Before the measurements, the rats were placed in a restraining box and warmed to more than 32˚C on a dedicated platform to dilate the tail arteries. At least ten consecutive pressure measurements were needed to obtain representative values of SBP and DBP for each rat.

Laser doppler flowmetry

To assess the cutaneous microvascular endothelial function, we performed iontophoresis with pharmacological agents (acetylcholine, ACh, and sodium nitroprusside, SNP) coupled with laser doppler flowmetry (LDF). This method was previously described in Lambrechts et al. (2013) [26]. During the experimental procedure, the rats were continuously anesthetized with 2% isoflurane through a nose cone (TEM Sega, Pessac, France) and their body temperature was maintained at 37˚C. Briefly, the cutaneous blood flow response to iontophoresis was assessed using a LD probe (Periflux PF 384; Perimed, Järfälla, Sweden) on the previously shaved thigh. Cutaneous LDF is measured through a multifibre laser probe (780 nm), around which is placed an iontophoretic sponge connected to a distribution electrode (Periflux PF 383; Perimed), with a dispersive electrode (Periflux PF 384; Perimed) placed on the rat's paw. For analysis of endothelium-dependent and -independent vasodilation, we measured blood flow changes in response to a 1% ACh chloride solution (right thigh) and then to a 1% SNP solution (left thigh), respectively, delivered through the skin using a low electrical current. Cutaneous blood flow was indexed as cutaneous vascular conductance (CVC), calculated from the LD flux. Responses to ACh and SNP were expressed as the percentage of CVC variation between baseline and the iontophoretic response. The LDF signal intensity depends on the velocity and concentration of moving blood cells at the site under examination.

Sampling

Between 48 and 72 hours after the end of the training period, the rats were anesthetized with ketamine (Ketamine 100, Virbac, 80 mg/kg) and xylazine (Rompun 2%, Bayer, 12 mg/kg) injected intraperitoneally. Morphometric measurements were then performed: body weight, naso-anal body length, and abdominal and thoracic circumferences. Blood was collected intraventricularly into 2 mL sampling tubes (pre-coated with EDTA 5%) and hematocrit was evaluated. Plasma was obtained after centrifugation for 5 min at 3000 g at room temperature, frozen in liquid nitrogen and then stored at −80˚C. The rats were sacrificed by cervical dislocation. Adipose tissue (epididymal, omental-retroperitoneal-peritoneal, subcutaneous and total adipose tissue) was weighed before freezing. Right and left soleus and extensor digitorum longus (EDL) muscles were immediately frozen in liquid nitrogen and then stored at −80˚C for later analysis.

RNA extraction, reverse transcription and real-time reverse transcriptase PCR (RT-PCR)

For each muscle type (soleus and EDL), right and left muscles were ground together in liquid nitrogen to obtain a homogeneous tissue powder. Total RNA was isolated from 30 mg of frozen muscle using the NucleoSpin® RNA Set for NucleoZOL (Macherey-Nagel, Hoerdt, France) and stored at −80˚C as previously described in [29]. RNA concentrations were measured with a SimpliNano™ spectrophotometer (Biochrom Spectrophotometers, Fisher Scientific, Illkirch, France). RNA purity and integrity were also checked. Each RNA sample was reverse transcribed with the qScript™ cDNA synthesis kit (Quanta BioSciences, VWR, Fontenay-sous-Bois, France) containing a reaction mix and reverse transcriptase. The obtained cDNA was diluted 10-fold for PCR experiments and stored at −20˚C. Real-time RT-PCR was performed with a 7500 Fast Real-Time PCR system (Applied Biosystems, Thermo Fisher Scientific, Illkirch, France) as previously described in Pengam et al. [29].
Briefly, target genes were amplified and quantified by SYBR® Green incorporation (Eurobio-Green® Mix qPCR 2x Lo-Rox; Eurobio Ingen, Courtaboeuf, France) with the specific primers presented in Table 1. The cycling conditions consisted of a denaturing step at 95˚C for 2 min, followed by 40 to 50 cycles of amplification (denaturation: 95˚C for 5 s; annealing/extension: 60˚C for 30 s). A seven-point standard curve was used to determine the PCR efficiency of each primer pair (between 80% and 100%) and the transcript level of the different genes in all samples. Each gene was amplified in a single run, from triplicates for standard points and duplicates for sample points. Quantification was normalized to actin β mRNA, considered as a reference gene. This choice was validated by the absence of significant differences in actin β mRNA levels between experimental groups for each muscle (soleus and EDL; p > 0.05). All mRNA levels were first calculated as the ratio (target gene mRNA)/(actin β mRNA) and expressed as fold change compared with the UNTR group, which was set at 1.

[Table 1 note: the hybridization temperature was 62˚C for all primers; we designed all primers except those marked with a superscript number; (F): forward, (R): reverse; reference 1: Hashimoto et al. (2016) [30].]

The total protein concentration was measured in each muscle sample in a 96-well plate using the BCA Protein Assay (Thermo Fisher Scientific). Briefly, samples were diluted 10 times and 200 μL of BC Assay reagent were added to 25 μL of diluted sample. Absorbance was read at 562 nm and the total protein concentration calculated using a bovine serum albumin (BSA) standard range. Samples were diluted in Laemmli Buffer 5X (10 mM Tris-HCl pH 6.8, 1% v/v SDS 20%, 25 mM EDTA pH 7.5, 8% v/v glycerol and 0.001% v/v bromophenol blue). From each sample, 15 μg of protein were run on 8-16% SDS-polyacrylamide gels, and proteins were then semi-dry transferred to a 0.2 μm PVDF membrane (BioRad). The membrane was washed 3 times in TBST (25 mM Tris pH 7.5, 150 mM NaCl and 1% v/v Tween 20) for 10 min. The membrane was then cut into three parts: (1) 10 to 50 kDa (containing GAPDH), (2) 50 to 75 kDa (containing AMPKα and phospho-AMPKα Thr172) and (3) 75 to 250 kDa (containing PGC-1α). Each part of the membrane was incubated in 20 mL of blocking buffer (TBST containing 5% w/v semi-skimmed milk powder) for 1 h at room temperature. The membranes were then incubated with rabbit anti-PGC-1α (1:1,000) (AbClonal) or anti-AMPKα (1:1,000) (Cell Signaling Technology) in TBST containing 0.5% w/v semi-skimmed milk powder overnight, or with anti-GAPDH (1:10,000) (Cell Signaling Technology) for 1 h, at 4˚C under stirring. Membranes were washed 3 times for 10 min in TBST containing 0.5% w/v semi-skimmed milk powder and incubated with a goat anti-rabbit IgG, horseradish peroxidase (HRP)-linked antibody (1:2,000 dilution) (Cell Signaling Technology) for 1 h at 4˚C. Finally, blots were washed 4 times for 10 min in TBST and rinsed twice for 10 min in TBS. They were exposed to enhanced chemiluminescence (ECL) reagents (Clarity™ Western ECL Substrate, BioRad) according to the manufacturer's protocol. The ECL signal was acquired from 20 s to 5 min. Proteins were quantified using a Vilber-Lourmat Fusion SL image acquisition system. The anti-AMPKα antibodies on membrane (2) were stripped using a Re-blot Plus kit (Millipore), and the membrane was then incubated in 20 mL of blocking buffer for 1 h at room temperature. The membrane was next incubated with rabbit anti-phospho-AMPKα Thr172 (1:1,000) (Cell Signaling Technology) in TBST containing 0.5% w/v semi-skimmed milk powder overnight at 4˚C. The rest of the Western blot protocol was the same as described above. Finally, the p-AMPKα/AMPKα ratio was calculated, and PGC-1α protein quantification was normalized to GAPDH, considered as a reference protein.
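Returning to the transcript quantification described above: once the standard curve has yielded a transcript level per sample, the normalization amounts to two divisions, first by the actin β reference and then by the UNTR group mean. A minimal sketch follows, with invented levels purely for illustration (the real analysis derived per-sample levels from the seven-point standard curve for each primer pair).

```python
import numpy as np

# Hypothetical transcript levels (arbitrary units from the standard curve)
target = {"UNTR": np.array([1.1, 0.9, 1.0]), "HIIT": np.array([2.0, 2.4, 2.2])}
actb   = {"UNTR": np.array([1.0, 1.0, 1.0]), "HIIT": np.array([1.1, 1.0, 1.0])}

# Step 1: normalize each sample to the actin beta reference gene
ratios = {g: target[g] / actb[g] for g in target}

# Step 2: express as fold change relative to the UNTR group mean (set to 1)
untr_mean = ratios["UNTR"].mean()
fold = {g: ratios[g] / untr_mean for g in ratios}

for g, v in fold.items():
    sem = v.std(ddof=1) / np.sqrt(v.size)
    print(f"{g}: fold change = {v.mean():.2f} +/- {sem:.2f} (mean +/- SEM)")
```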
Enzymatic activities

All measurements were performed at 37˚C using a plate reader (SAFAS Xenius, Monaco). All samples were measured in duplicate. For each muscle (soleus and EDL), right and left muscles were ground together in liquid nitrogen to obtain a homogeneous tissue powder.

Aerobic metabolism enzyme activities. Citrate synthase (CS) activity: samples of 25 mg of frozen muscle were homogenized in 1.5 mL Tris-HCl buffer (0.1 M, pH 8.1, 4˚C) with a Polytron. The homogenate was then collected and used immediately for analysis. Measurements of CS activity were performed on 6 μL of tissue extract by an indirect method [31] using 5,5-dithio-bis-2-nitrobenzoic acid (DTNB). CS activity was measured at 412 nm and expressed in nmol DTNB reduced/min/mg wet tissue. Cytochrome c oxidase (COX) activity: samples of 70 mg of frozen muscle were homogenized with a Polytron homogenizer in 1 mL of extraction buffer (100 mM Tris, 2 mM EDTA and 2 mM DTE, pH 7.4, 4˚C). The homogenate was centrifuged at 12,000 g for 20 min at 4˚C. COX activity was determined on 50 μL of supernatant at 550 nm using 2 mM reduced cytochrome c and 330 mM sodium phosphate buffer [32]. COX activity was expressed in nmol cytochrome c oxidized/min/g wet tissue.

Anaerobic metabolism enzyme activity. Lactate dehydrogenase (LDH) activity: samples of 70 mg of frozen muscle were homogenized with a Polytron in 1 mL of extraction buffer (100 mM Tris, 2 mM EDTA and 2 mM DTE, pH 7.4, 4˚C). The homogenate was centrifuged at 12,000 g for 20 min at 4˚C. LDH activity was determined on 2 μL of the resulting supernatant at 340 nm using 40 mM sodium pyruvate and 40 mM nicotinamide adenine dinucleotide (NADH) [33]. Activity was calculated from the oxidation of NADH and expressed in μmol NADH oxidized/min/g wet tissue.

Antioxidant enzyme activities. Samples of 70 mg of frozen muscle were homogenized with a Polytron homogenizer in 1 mL of extraction buffer (75 mM Tris and 5 mM EDTA, pH 7.4, 4˚C). After centrifugation at 12,000 g for 10 min at 4˚C, superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase (GPx) activities were determined on the resulting supernatant. SOD activity was assessed at 480 nm on 16 μL of supernatant, using an indirect method in which the xanthine/hypoxanthine reaction produces superoxide anions that inhibit the adrenaline-to-adrenochrome reaction [34]. One unit (U) of SOD activity corresponds to the amount of sample needed to cause 50% inhibition relative to the control without tissue. SOD activity was expressed in U/g wet tissue. GPx activity was measured at 340 nm with an indirect method adapted from Ross et al. (2001) by Farhat et al. (2015) [35,36], using 25 μL or 100 μL of soleus or EDL supernatant, respectively. Briefly, activity was determined from the decrease in NADPH induced by a coupled reaction with glutathione reductase. GPx activity was expressed in μmol NADPH oxidized/min/g wet tissue. CAT activity was determined at 240 nm through its capacity to transform hydrogen peroxide (H2O2) into water and oxygen [37]. The addition of 200 mM H2O2 to the 40 μL of tissue supernatant initiated the reaction. CAT activity was expressed in nmol H2O2/min/g wet tissue.
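All of the kinetic assays above reduce to the same Beer-Lambert arithmetic: an absorbance slope divided by an extinction coefficient and path length, then scaled by the assay, sample and extract volumes and the tissue mass. The sketch below illustrates this for an NADH-linked assay such as LDH; the extinction coefficient is the standard 6.22 mM⁻¹ cm⁻¹ for NADH at 340 nm, while the slope, volumes and path length are placeholders rather than the study's exact assay setup.

```python
def activity_per_gram(dA_per_min: float,
                      epsilon_mM_cm: float,
                      path_cm: float,
                      assay_vol_uL: float,
                      sample_vol_uL: float,
                      extract_vol_mL: float,
                      tissue_mg: float) -> float:
    """Return enzyme activity in umol substrate converted / min / g wet tissue."""
    # Beer-Lambert: rate of concentration change (mM/min) in the assay well
    rate_mM_per_min = abs(dA_per_min) / (epsilon_mM_cm * path_cm)
    # mM/min x assay volume -> umol/min in the well (mM = nmol/uL)
    umol_per_min_in_assay = rate_mM_per_min * assay_vol_uL / 1000.0
    # scale from the aliquot up to the whole extract, then per gram of tissue
    umol_per_min_in_extract = umol_per_min_in_assay * (extract_vol_mL * 1000.0 / sample_vol_uL)
    return umol_per_min_in_extract / (tissue_mg / 1000.0)

# Example: LDH followed at 340 nm via NADH oxidation (epsilon ~ 6.22 mM^-1 cm^-1).
# The slope and volumes below are placeholders, not the study's exact values.
print(activity_per_gram(dA_per_min=-0.15, epsilon_mM_cm=6.22, path_cm=1.0,
                        assay_vol_uL=200, sample_vol_uL=2,
                        extract_vol_mL=1.0, tissue_mg=70))
```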
Oxidative stress marker

Total plasma 8-isoprostane (free and esterified in lipids) was measured in duplicate using an ELISA kit (Cayman Chemical, Ann Arbor, Michigan, USA) according to the manufacturer's protocol. During sampling, 0.005% butylated hydroxytoluene (BHT) was added to all plasma collection tubes intended for this measurement, to prevent oxidative formation of 8-isoprostane after collection. All plasma samples were hydrolysed using 15% KOH, incubated for 60 min at 40˚C and then neutralized with potassium phosphate buffer. A further purification step with ethanol was necessary. Finally, samples were extracted using ethyl acetate containing 1% methanol and SPE cartridges (C-18) (Cayman Chemical). After 18 h of incubation, the plasma 8-isoprostane concentration was measured at 410 nm and expressed in pg/mL plasma.

Statistics

All results are given as means ± standard error of the mean (SEM). All statistics were performed using Statistica v.12 software (StatSoft, Paris, France). Normality of distributions was tested using the Shapiro-Wilk test. Appropriate tests were then performed (Kruskal-Wallis, one-way analysis of variance (ANOVA), two-way ANOVA or ANOVA for repeated measures). Kruskal-Wallis, one-way ANOVA and two-way ANOVA were followed by Mann-Whitney, Tukey and Bonferroni post-hoc tests, respectively. The significance threshold was set at p < 0.05, and differences between groups are indicated on the figures by different letters (a and b) or by symbols. A principal component analysis (PCA) was performed on the levels of all mRNAs using R software and the FactoMineR package.

Results

The last MAS values measured tended to be higher after HIIT than after MICT (p = 0.07).

Rat monitoring, morphometric and systemic measurements

Table 2 summarizes the monitoring during the experiment and the morphometric and systemic measurements of the experimental groups. After six weeks of training, the training volume of HIIT was 2-fold lower than that of MICT, with an approximately 1.6-fold lower cumulative running distance and time. No training effects were observed on the weight gain or total calorie intake of the rats during the six weeks of training. MICT induced a decrease of the adiposity index compared with the UNTR and HIIT regimes. Otherwise, no significant effects of training were observed for the other measurements (body weight, naso-anal body length, BMI, circumference index and Lee index). Neither MICT nor HIIT modified heart rate, mean, systolic or diastolic arterial blood pressure, hematocrit or cutaneous vascular conductance of the rats.

Proportions of myosin heavy chain isoform mRNAs in the untrained group

To verify the proportions of myosin heavy chain (MHC) isoforms in soleus and EDL muscles, Table 3 gives their mRNA percentages determined in the UNTR group. Soleus samples were mainly composed of slow-twitch fibre type I MHC mRNA (86.5 ± 5.0%) and less than 2% of fast-twitch fibre types (IIx and IIb), whereas EDL samples principally consisted of fast-twitch fibre type IIx (46.6 ± 1.3%) and type IIb (47.2 ± 1.7%) MHC mRNAs.

Enzymatic activities

In soleus, training produced no significant effects on the enzymatic activities of CS, COX, LDH, SOD, GPx or CAT (Table 4). In EDL, GPx activity was ~26% higher after HIIT than in UNTR (p = 0.006) (Table 4).
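As an aside before the Discussion, the test-selection logic in the Statistics section can be written as a small decision routine. This is a generic scipy illustration of the described workflow (Shapiro-Wilk, then one-way ANOVA with a Tukey-type post hoc or Kruskal-Wallis with Mann-Whitney follow-up), not the Statistica procedure actually used, and the activity values are invented for illustration.

```python
from scipy import stats

def compare_three_groups(untr, mict, hiit, alpha=0.05):
    """Normality check, then parametric or non-parametric omnibus test."""
    groups = [untr, mict, hiit]
    normal = all(stats.shapiro(g)[1] > alpha for g in groups)
    if normal:
        stat, p = stats.f_oneway(*groups)   # one-way ANOVA (-> Tukey post hoc)
        test = "one-way ANOVA"
    else:
        stat, p = stats.kruskal(*groups)    # Kruskal-Wallis (-> Mann-Whitney post hoc)
        test = "Kruskal-Wallis"
    return test, stat, p

# Hypothetical CS activities (nmol/min/mg wet tissue), purely for illustration
untr = [28, 31, 30, 27, 29, 30, 28]
mict = [32, 35, 33, 31, 34, 33, 32, 34]
hiit = [36, 38, 35, 37, 39, 36, 38, 37]
print(compare_three_groups(untr, mict, hiit))
```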
Discussion

The main aim of the present study was to compare the effects of the MICT and HIIT protocols on aerobic (AMPK-PGC-1α signalling pathway, mitochondrial biogenesis, antioxidant defences) and anaerobic (lactate dehydrogenase) metabolic processes in two muscles with different typologies: soleus and extensor digitorum longus (EDL). To make the approach integrative, training effects were also examined at the whole-organism level through measurements of maximal aerobic speed (MAS) and morphometric and systemic parameters. The three main findings were: 1) endurance (MAS) and oxidative capacities (muscle transcripts and proteins) were more greatly stimulated by HIIT than by MICT; 2) transcription was generally activated more in soleus muscle than in EDL in response to both MICT and HIIT; 3) solely on the basis of the mRNA results determined after training, two distinct profiles related to muscle typology (soleus and EDL) were revealed.

One of the present challenges in human health is to define a training protocol that confers overall health benefits, in part through the stimulation of oxidative and antioxidant capacities in skeletal muscle, and that fits into today's lifestyle. In human skeletal muscle, studies have shown that HIIT may be more efficient than MICT for stimulating mitochondrial biogenesis [8] and functioning [38]. Exercise intensities between 50 and 70% of maximal oxygen uptake (V̇O2max) for MICT and between 85 and 95% of V̇O2max for HIIT are commonly applied to improve aerobic capacity in humans [39]. The exercise intensities applied in the present study are in accordance with those commonly used. After 3 weeks of training, the MAS was significantly improved with both types of exercise, showing the efficiency of our protocols. During the following three weeks of training, while the MAS remained stable with MICT, it tended to continue to increase with HIIT (p = 0.07, MICT vs HIIT). Our results are consistent with two recent studies using work-matched MICT and HIIT in healthy rats [9,40]. Because the MAS is related to V̇O2max, we suggest that HIIT is efficient for improving V̇O2max. However, we cannot exclude an improvement in MAS also related to increased anaerobic capacities, because HIIT is characterized by repeated bouts at high intensity (85 to 90% of MAS) [9,40]. In humans, a greater gain in V̇O2max has also been reported with HIIT compared with work-matched MICT training programs [41]. So, in the present study, the rats' physical performance was improved despite a training volume (the product of exercise intensity and total training duration) exactly 2-fold lower for HIIT than for MICT. This suggests the importance of the intensity level and/or of the periods of recovery between the bouts of exercise [42].
Among the parameters examined, the adiposity index, an indicator of obesity-related disorders [43], decreased significantly with MICT but not with HIIT. It is known that prolonged moderate-intensity exercise or a high training volume (as with MICT) enhances fatty acid utilization; in particular, when exercise intensity rises above 50-65% VO2max, fatty acid oxidation shifts to glucose oxidation [44]. Thus, we supposed that six weeks of MICT increased lipolysis more than HIIT did by mobilizing fatty acids for ATP production. Otherwise, no effect of training was observed on cardiovascular and systemic parameters such as heart rate, blood pressures, microvascular endothelial function in the peripheral circulation and hematocrit. Recent studies demonstrated that four or six weeks of MICT or HIIT had no effects on healthy Wistar rats' systolic and diastolic blood pressures or on heart rate [45,46]. Few studies have explored cutaneous microvascular endothelial function after HIIT training in murine models. In humans, Lanting et al. (2017) suggested that exercise training could improve this function in people with microvascular disease and in healthy physically inactive adults, but not in healthy adults who are physically active [47]. Therefore, the absence of change in the cutaneous microcirculation of the healthy and active rats in our study seems relevant.

At the skeletal muscle level, the AMPK-PGC-1α signalling pathway is recognized as one of the most potent stimulators of mitochondrial biogenesis in skeletal muscle. PGC-1α is involved in many processes by stimulating the two nuclear respiratory factors (NRF) 1 and 2, which are transcription factors involved in the regulation of mitochondrial biogenesis and antioxidant systems, respectively [48]. PGC-1α is also known to be a regulator of mitochondrial dynamics, including the fusion and fission processes. Finally, skeletal muscle fibre type determination is also under the influence of PGC-1α function [49].

Two muscles were explored for their distinct fibre type compositions: the slow-twitch soleus, composed of 80% oxidative type I fibres, and the fast-twitch EDL, composed exclusively of type II fibres using mainly anaerobic metabolism [50]. These two muscles also differ in mitochondrial content, because type I fibres contain a much larger volume of mitochondria than type II fibres [25]. In the present study, the muscle metabolic specificities are reflected by the effects of the training protocols on the levels of transcripts involved in metabolic processes (AMPK-PGC-1α signalling pathway and antioxidant systems). As a whole, the mRNA responses clearly differentiate soleus and EDL, as shown by the two clusters of the PCA. It is important to remember that these responses are related to training effects and not to acute exercise effects, because the post-training muscle samples were taken at least 48 hours after the last training session.

One of the important findings of our study is that soleus muscle had far more numerous responses to training than EDL. Because soleus muscle is mainly oxidative, we can suggest that both the MICT and HIIT training protocols used here recruited slow-twitch fibres more than fast-twitch fibres. Kryściak et al.
(2018) also showed that four and eight weeks of endurance training stimulated transcripts and proteins related to mitochondrial biogenesis more in slow gastrocnemius fibres than in fast gastrocnemius fibres in rats [51].

In soleus, HIIT upregulated most of the studied mRNAs involved in the AMPK-PGC-1α signalling pathway, mitochondrial functioning and dynamics, and antioxidant defences compared with MICT. Surprisingly, despite the numerous modified transcripts in soleus, the PGC-1α transcript was unchanged after exercise training. Some authors have shown that Pgc-1α mRNA levels increase during the first exercise session and decrease with each subsequent session despite maintained exercise intensity [52]. Western blot results showed that neither PGC-1α protein content nor the p-AMPKα/AMPKα ratio changed significantly with training, while the AMPK transcript was increased with HIIT. Miller et al. (2016) showed that exercise-induced mRNAs do not necessarily translate into proteomic changes [53]. Moreover, post-transcriptional or post-translational regulatory processes may already have occurred, as mentioned by Robinson et al. (2017) [54]. One of these processes, or a combination of them, could explain the activation of the transcripts involved in mitochondrial functioning and antioxidant responses without there being a change in proteins.

In soleus, MICT and HIIT differentially stimulated genes involved in mitochondrial fusion (Mfn1, Mfn2 and Opa1), fission (Fis1 and Drp1) and functioning (Cs and OXPHOS subunit complexes encoded by the nuclear (Cox4) or mitochondrial (Nd1, Cox2, Atp synthase 6) genomes). The transcript responses in terms of mitochondrial dynamics conform with those of Perry et al. (2010), who showed that two weeks of high-intensity training increased MFN1, Fis-1, DRP-1 and COX4 protein expression, and Cs and Cox4 transcript levels, in the human vastus lateralis muscle [52]. Taken together, these observations suggest that high-intensity training regulates mitochondrial quantity and quality so as to increase oxidative capacity in skeletal muscle. However, no changes were observed in CS or COX enzymatic activities. In the same way as for PGC-1α, it is possible that regulatory processes occurred after mRNA production, which could explain a training effect on mRNA levels but not on enzymatic activities [54,55]. In the literature, a concomitant increase in Cs mRNA and its activity was shown in rat gastrocnemius after only 5 and 10 days of moderate training [38]. The difference in CS adaptations between the studies could also be explained by differences in training duration, time before sampling and, particularly, muscle type. The gastrocnemius muscle is often studied because it is greatly involved in treadmill running. Its composition of mixed fibres certainly facilitates its metabolic adaptations, which may be greater than in soleus (mainly composed of type I fibres).

During physical exercise, mitochondria are among the main sources of reactive oxygen species (ROS) [56]. These molecules are necessary for the regulation of cellular processes but are harmful at high levels. ROS levels, therefore, need to be regulated by antioxidant mechanisms [57]. Enzymatic antioxidant defence mRNAs (Cat and mitochondrial Sod2) were increased by HIIT in soleus, but without significant changes in antioxidant enzyme activities (SOD, GPx and CAT). Moreover, no oxidative stress occurred, suggesting that the training protocols had no deleterious effects.
HIIT also induced a rise in myosin heavy chain (MHC) aerobic type I and anaerobic type IIx and IIb mRNA levels in soleus. The transcriptional stimulation of MHC isoforms IIx and IIb may be surprising, but it could be related to the relatively higher plasticity of soleus compared with EDL. At the muscle level, these modifications represent few changes, because these two MHC isoform mRNAs represented less than 2% of the total soleus MHC isoforms. However, we should be cautious about these interpretations, which are based only on mRNAs.

Concerning lactate dehydrogenase isoenzymes, LDH is a tetrameric enzyme composed of the subunits M and/or H, encoded by the Ldh-a and Ldh-b genes, respectively. LDH4M reduces pyruvate to lactate in tissues dependent on anaerobic glycolysis, whereas LDH4H permits lactate oxidation in tissues dependent on aerobic metabolism [58]. HIIT increased Ldh-b mRNA content with no changes in LDH activity (conversion of pyruvate to lactate) in soleus, suggesting higher aerobic capacities. In humans, three weeks of training (70-85% VO2max) induced an increase in Ldh-b mRNA levels and tended to decrease Ldh-a transcription in the vastus lateralis muscle [59]. It is important to remember that the PCA analysis, particularly in soleus, also highlights important correlations between the actors involved in the AMPK-PGC-1α signalling pathway, mitochondrial functioning and dynamics, antioxidant defences and LDH subunit mRNAs.

Because of the anaerobic phenotype of the EDL muscle, we could suppose that the anaerobic pathway might be activated during HIIT sessions. A higher LDH activity was observed in EDL compared with soleus. However, neither muscle showed an effect of training protocol on LDH enzymatic activity. Kristensen et al. (2015) also showed a greater increase in glycogen utilisation (the preferential substrate for anaerobic glycolysis) in fast-twitch fibres after HIIT than in slow-twitch fibres, whereas MICT induced no fibre-type-dependent difference [23].

In EDL muscle, contrary to soleus, the Pgc-1α mRNA level was stimulated by HIIT, and PGC-1α protein content tended to increase (p = 0.07) compared with MICT, suggesting an activated aerobic process in this muscle after six weeks of training. Otherwise, HIIT improved antioxidant defences in EDL, as shown by the stimulation of GPx activity. Although the studied transcripts were largely increased with HIIT in soleus, the sole changes at the protein level (PGC-1α content and GPx activity) were observed not in soleus but in EDL. This suggests that these two muscles could have different transcriptomic and/or post-transcriptomic and/or proteomic response kinetics during exercise training.

Few significant responses to HIIT were, however, observed in EDL compared with soleus (only Pgc-1α mRNA and GPx activity). EDL is mostly composed of type IIx and IIb MHC (more than 90%), and these fibre types would not contribute substantially to effort until an intensity of 100% VO2max is reached [7]. We can, therefore, suppose that 85-90% MAS might not be enough to fully recruit these fast-twitch fibre types.
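As an aside on the LDH combinatorics mentioned above, a tetramer built from M and/or H subunits can take exactly five compositions (the classical LDH1-LDH5 isoenzymes); the short enumeration below is illustrative only and makes no claim about which isoforms were assayed here.

```python
# Enumerate the five possible M/H tetramer compositions of LDH.
from itertools import combinations_with_replacement

isoenzymes = ["".join(t) for t in combinations_with_replacement("HM", 4)]
print(isoenzymes)  # ['HHHH', 'HHHM', 'HHMM', 'HMMM', 'MMMM']

# 'MMMM' corresponds to the glycolytic, pyruvate-reducing form (LDH4M in
# the text); 'HHHH' to the aerobic, lactate-oxidising form (LDH4H).
```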
One limitation of our study is the low number of protein measurements (enzymatic activity or Western blot) after the training period. Indeed, to have a complete functional approach at the muscle level, it would have been interesting to make more protein quantifications and activity measurements to complement the transcript-level results. In addition to the transcriptional analysis, we chose to focus on some key elements involved in muscle adaptation to training (PGC-1α and AMPKα protein contents and antioxidant enzymatic activities) but were unfortunately limited to a restricted number of analyses by the quantity of tissue available. Another limitation is related to the choice of the training protocol parameters. The two training volumes were intentionally unmatched, as in the protocol of Brown et al. (2017) [60]. The HIIT training volume was deliberately chosen to be 2-fold lower than the MICT training volume, with running distance and time approximately 1.6-fold lower. Such a protocol more accurately reflects how HIIT is performed in clinical practice, where the main objective is to maintain or improve beneficial effects with a significantly reduced session duration. Nevertheless, it would be interesting in a future study to match volumes between the training protocols to confirm that training intensity is a more important parameter than training volume for increasing mitochondrial oxidative capacity, a question that remains highly debated [61,62]. Finally, for untrained rats, we chose to perform only one MAS test, before the start of the training protocol. Indeed, for untrained rats, each MAS determination would require a treadmill familiarization protocol liable to induce adaptation to exercise, so repeating the MAS determination would not be consistent for so-called untrained animals.

In conclusion, this study provides new insights regarding training-induced oxidative capacities and skeletal muscle fibre-dependent adaptations. Regarding the transcript results, the HIIT protocol clearly induced an important mitochondrial functional plasticity stimulating aerobic metabolism in soleus muscle compared with EDL in Wistar rats. Stimulation at the protein level was only observed in EDL, suggesting muscle-dependent kinetics of transcript and protein regulatory processes. At the organism level, both the HIIT and MICT protocols increased MAS, suggesting an increase in oxidative capacity. The present study could contribute to the improvement of exercise programs adapted to muscle type-dependent responses in order to help prevent metabolic diseases often associated with mitochondrial dysfunction [49].
The morphometric indices were calculated as follows:

- Body mass index (BMI, g/cm²) = body weight (g) / [naso-anal length (cm)]²
- Circumference index = abdominal circumference / thoracic circumference
- Lee index = ∛[body weight (g)] / naso-anal length (cm) × 10
- Adiposity index = (total adiposity mass / body weight) × 100

To normalize individual training and to evaluate training efficiency, the MAS of the trained animals (MICT and HIIT) was measured before training and after three and six weeks of training (Fig 1). Two-way ANOVA revealed a significant time effect (p < 0.001), but no training effect (p = 0.07) and no interaction between time and training (p = 0.59). Before starting the treadmill training, both the MICT and HIIT groups had similar MAS values: 32.3 ± 0.03 m/min and 34.6 ± 1.2 m/min, respectively. For the MICT group, MAS was only significantly increased after three weeks of training (38.8 ± 1.7 m/min; p = 0.009) and then stabilized. The last MAS values measured tended to be higher after HIIT than after MICT (p = 0.07).

Fig 1. Effect of MICT and HIIT on maximal aerobic speed (MAS) as a function of training duration. Values of MAS are means ± SEM. Within a given experimental group (MICT or HIIT), * indicates a significant difference from the MAS before starting the training (p < 0.05) and $ indicates a significant difference from the MAS after three weeks of training (p < 0.05). No significant differences were observed between MICT and HIIT. https://doi.org/10.1371/journal.pone.0292225.g001

AMPK-PGC-1α signalling pathway. In soleus, HIIT increased Ampkα1, Nrf1 and Nrf2 mRNA levels compared with UNTR, but no effects of the training protocols were observed on Pgc-1α mRNA (Fig 3A). In EDL, in contrast, only the Pgc-1α mRNA content was significantly up-regulated by HIIT compared with UNTR (p = 0.0007) and MICT (p = 0.045) (Fig 3B).

Mitochondrial functioning. In soleus, the Cs mRNA content was significantly increased, by almost 2-fold (p = 0.0005), by HIIT. HIIT also increased Nd1 and Cox2 mRNA contents compared with UNTR and MICT, and the transcription of Cox4 and Atp synthase 6 was stimulated by HIIT compared with UNTR (Fig 4A). Neither MICT nor HIIT had significant effects on the mRNA levels related to mitochondrial functioning in EDL (Fig 4B).

Mitochondrial dynamics. In soleus, Mfn2, Opa1 and Drp1 mRNA levels in the HIIT group were at least 50% higher (p < 0.05) than in the UNTR group. HIIT up-regulated Fis1 transcription compared with MICT (Fig 5A). No effects of the training protocols were observed on these mitochondrial dynamics genes in EDL (Fig 5B).

Antioxidant defences. In soleus, the Sod2 mRNA level was higher in the HIIT group than in the UNTR (p = 0.004) and MICT (p = 0.04) groups. HIIT increased the Cat mRNA content compared with UNTR (p = 0.02) (Fig 6A). In EDL, training had no effects on antioxidant defence mRNAs (Fig 6B).

Myosin heavy chain. In soleus, HIIT increased the transcription of type I, IIx and IIb myosin heavy chain (MHC) mRNAs compared with UNTR. MICT stimulated the mRNA levels of MHC types IIx and IIb compared with UNTR (Fig 7A). Neither MICT nor HIIT induced significant changes in MHC I, IIa, IIx or IIb mRNA composition in EDL (Fig 7B).

Lactate dehydrogenase subunits. In soleus, HIIT stimulated Ldh-b transcription compared with UNTR (p = 0.01) but had no effect on Ldh-a mRNA content (Fig 8A). In EDL, neither type of training modified either of these two gene contents (Fig 8B).

Table 2. Monitoring, morphometric and systemic parameters of the UNTR, MICT and HIIT experimental groups.
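The morphometric indices defined at the top of this section reduce to simple arithmetic; a minimal sketch with invented example values for a rat follows (the inputs are illustrative, not measured data from the study).

```python
# Morphometric indices as defined above; example values are placeholders.
def lee_index(body_weight_g: float, naso_anal_length_cm: float) -> float:
    """Cube root of body weight (g) over naso-anal length (cm), times 10."""
    return body_weight_g ** (1 / 3) / naso_anal_length_cm * 10

def bmi(body_weight_g: float, naso_anal_length_cm: float) -> float:
    """Body mass index in g/cm^2."""
    return body_weight_g / naso_anal_length_cm ** 2

def circumference_index(abdominal_cm: float, thoracic_cm: float) -> float:
    return abdominal_cm / thoracic_cm

def adiposity_index(adipose_mass_g: float, body_weight_g: float) -> float:
    """Adipose tissue mass as a percentage of body weight."""
    return adipose_mass_g / body_weight_g * 100

print(lee_index(400, 25))          # ~2.95 for a hypothetical 400 g rat
print(bmi(400, 25))                # 0.64 g/cm^2
print(adiposity_index(12.4, 400))  # 3.1 %
```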
Based on the mRNA data from the two training protocols, the principal component analysis (PCA) indicated two distinct clusters that correspond globally to the soleus and EDL muscles (Fig 2). The first principal component accounted for 35.1% of the transcriptomic variability among the genes and the second principal component for 19.7%.

Total training volume was calculated as the product of exercise intensity (% of MAS) and total training duration over the six weeks. The weight gain and total calorie intake of each rat were measured during the six weeks of the experiment. Morphometric and systemic parameters were measured at the end of the experiment. Values are means ± SEM. Different letters indicate significant differences between groups (p < 0.05). UNTR: untrained; MICT: moderate-intensity continuous training; HIIT: high-intensity interval training; LDF: laser Doppler flowmetry; CVC: cutaneous vascular conductance. https://doi.org/10.1371/journal.pone.0292225.t002

mRNA correlations in soleus and EDL muscles
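As a sketch of the PCA underlying Fig 2, the snippet below uses scikit-learn in place of R's FactoMineR; the matrix shape, group sizes and random expression values are hypothetical placeholders, so the printed variance ratios will not reproduce the reported 35.1% and 19.7%, which come from the real mRNA data.

```python
# PCA on a samples-by-genes mRNA matrix, standardised per gene, with the
# explained variance of the first two components reported.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(36, 24))                 # placeholder expression data
muscle = np.array(["soleus"] * 18 + ["EDL"] * 18)  # hypothetical labels

X_std = StandardScaler().fit_transform(X)     # centre and scale each gene
pca = PCA(n_components=2).fit(X_std)
scores = pca.transform(X_std)

print(pca.explained_variance_ratio_ * 100)    # % variance of PC1 and PC2

# Cluster separation between muscles can then be inspected by plotting
# `scores` coloured by the `muscle` label, as in Fig 2.
```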
Influence of Selected Alkoxysilanes on Dispersive Properties and Surface Chemistry of Titanium Dioxide and TiO2–SiO2 Composite Material

The paper reports on the characterisation of titanium dioxide and coprecipitated TiO2–SiO2 composite material functionalised with selected alkoxysilanes. The synthetic composite material was obtained by an emulsion method in which cyclohexane was applied as the organic phase, titanium sulfate as the titanium precursor, and sodium silicate solution as the precipitating agent. The structures of the titania and composite material samples were studied by the wide-angle X-ray scattering method. The chemical composition of the precipitated TiO2–SiO2 composite material was evaluated by the energy dispersive X-ray spectroscopy technique. The functionalised TiO2 and TiO2–SiO2 composite material were thoroughly characterised to determine the yield of functionalisation with silanes. The characterisation included determination of the dispersion and morphology of the systems (particle size distribution, scanning electron microscope images), adsorption properties (nitrogen adsorption isotherms), and electrokinetic properties (zeta potential).

Introduction

An increased interest in inorganic oxide systems has prompted the dynamic development of methods for their synthesis and functionalisation. This interest stems from their specific physicochemical properties, such as specific surface area and stability, which are vital for the production of composite systems, for example TiO2–SiO2 composite materials [1-5]. The stability of unmodified and modified commercial and synthetic oxide systems depends significantly on the character of their surface (especially the surface groups). Changes in the chemical structure depend mainly on the type of functional group introduced on the surface of the support and are mainly responsible for the nature of chemical interactions [6,7]. Specific applications of such oxide systems or their derivatives require well-defined physicochemical parameters, especially electrokinetic behaviour (zeta potential), specific surface area, a low tendency to form agglomerate structures, and hydrophobic/hydrophilic surface character [8-10]. The physicochemical properties of the functionalised commercial and synthetic oxide systems depend mainly on the effectiveness of the modification process and its implementation [11,12]. The effectiveness of surface functionalisation of inorganic oxide systems is evaluated on the basis of adsorption properties, dispersive and morphological characterisation, hydrophobic/hydrophilic properties, and chemical interactions, as well as electrokinetic measurements [8-14].

Titanium dioxide is mainly used as a pigment, adsorbent, semiconductor, ceramic material, and catalytic support [2]. Titania is regarded as the best photocatalyst for the oxidation of organic pollutants in water and air [15-18]. The photocatalytic properties of titania are affected by several factors, such as crystal structure, morphology, specific surface area, and porosity [19]. It is well known that titania occurs in three different crystal phases: anatase, rutile and brookite. Anatase has the highest photoactivity. However, the photocatalytic properties of TiO2 occurring as a mixture of anatase and rutile, in an appropriate ratio, are higher than those of pure anatase [20-22].
Titanium dioxide can be synthesised by various methods, such as solvothermal [23-26] and hydrothermal techniques [27], precipitation [28,29], reverse micelle or microemulsion systems [30,31], sol-gel [2,32-35], and thermal decomposition of alkoxides [36]. The properties of TiO2 synthesised by different methods vary in terms of crystal structure, chemical composition, surface morphology, crystal defects, and specific surface area [37]. The sol-gel method is widely used to prepare nanosized TiO2; the precipitated powders obtained are amorphous in nature, and further heat treatment is required for their crystallisation. This calcination process inevitably causes grain growth and a reduction in the surface area of the particles, and can even induce a phase transition. Hydrothermal synthesis, in which chemical reactions can occur in aqueous or organic media under self-produced pressure at low temperature (usually lower than 250 °C), can solve the problems encountered during the sol-gel process. The self-produced pressure raises the effective boiling point of the solvent, which decisively helps manage the entire process. This technique is also called solvothermal [24,38,39], while in the special case where the solvent is water it is often called hydrothermal. The solvothermal method is an alternative route for the one-step synthesis of pure nanosized anatase [40]. The particle morphology, grain size, crystalline phase, and surface chemistry of solvothermal-derived TiO2 can be easily controlled by regulating the precursor composition, reaction temperature, pressure, solvent properties, and aging time [40]. Preparation of inorganic composite materials on the laboratory scale allows control of their physicochemical properties and also gives the possibility of their surface functionalisation with selected organic compounds [41,42].

The stability of inorganic particles in the aqueous phase is of significant importance for their applications. The physical properties of particle suspensions depend on the behaviour of aqueous dispersions, which is especially sensitive to the electrical and ionic structure of the particle/liquid interface. Relationships between surface charge or zeta potential and the stability of nanoparticles in water have been studied in a variety of systems. However, the role of ions specifically adsorbed on nanoparticles is not yet well understood. For a suspension, the zeta potential is an important parameter which reflects the intensity of the repulsive force among particles and the stability of the dispersion [43]. The zeta potential is crucial for controlling the stability of TiO2 nanoparticles in suspensions and for the adsorption properties of TiO2 nanoparticle surfaces. Many authors have shown that the zeta potential of particles depends on several factors, such as the chemical composition of the particle surfaces, the composition of the surrounding solvent, the pH value, and the presence of ions in the suspension [44-50]. Titania nanoparticles show a wide range of surface adsorption and optical properties which depend on their shapes and sizes and which correlate with photocatalytic activity [51-55]. Determination of the zeta potential helps establish the effect of the preparation conditions on the electrokinetic behaviour of TiO2 nanoparticles [56].
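In practice, the isoelectric point is read off a measured zeta-potential-versus-pH curve at its zero crossing; a minimal sketch of that reading, by linear interpolation, is shown below. The titration points are invented for illustration and do not correspond to any sample reported later in the paper.

```python
# Estimate the isoelectric point (IEP) as the pH where the zeta potential
# crosses zero, using linear interpolation between titration points.
import numpy as np

pH = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 10.0])
zeta_mV = np.array([39.0, 15.0, -3.0, -18.0, -30.0, -45.0, -56.0])

def isoelectric_point(pH, zeta):
    """Return the pH of the first sign change of the zeta potential."""
    for i in range(len(zeta) - 1):
        if zeta[i] == 0:
            return float(pH[i])
        if zeta[i] * zeta[i + 1] < 0:          # sign change between points
            frac = zeta[i] / (zeta[i] - zeta[i + 1])
            return float(pH[i] + frac * (pH[i + 1] - pH[i]))
    return None  # no zero crossing within the measured range

print(f"IEP ~= pH {isoelectric_point(pH, zeta_mV):.2f}")  # ~3.83 here
```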
Knowledge of the oxide/water interface structure is important for understanding a large number of properties of oxide-rich porous media and colloid suspensions of oxides [43-49,57]. The electrokinetic properties of fine particles in an aqueous solution, such as the isoelectric point (IEP) and the potential-determining ions (PDI), are essential in order to understand the adsorption mechanism of inorganic and organic species at the oxide/solution interface. They also govern the phenomena of flotation, coagulation, and dispersion in suspensions [58].

Electrochemical properties are frequently characterised in terms of the zeta potential and the isoelectric point [48,59]. The zeta potential is the potential at the shear plane (located approximately between the compact and diffuse layers) between a charged surface and a liquid moving with respect to each other. The isoelectric point is the pH at which the zeta potential is zero, that is, the pH value at which the net charge of the surface is globally zero. There are several procedures, including microelectrophoresis, streaming potential measurements, and electroosmosis, that allow determination of the zeta potential [60].

The most important problem studied here was the surface functionalisation of commercial titanium dioxide and TiO2–SiO2 composite material with selected alkoxysilanes. The silane-grafted titanium dioxide and TiO2–SiO2 were thoroughly characterised to determine the yield of functionalisation with silanes. The study was undertaken mainly to evaluate the effectiveness of surface character changes on the basis of measurements of the dispersion, morphology, adsorption capacity, and zeta potential of the functionalised titania and TiO2–SiO2 composite material.

2. Experimental Section

2.1. Materials. The materials studied were commercial titanium dioxide pigments under the name Tytanpol, made by Chemical Works Police SA and produced by the sulfate process: A11, anatase with an untreated surface; R001, rutile with the surface treated with aluminium compounds (3% Al2O3) and hydrophilic organic compounds; and R213, rutile with the surface deeply grafted with alumina and silica (4.7% Al2O3 and 8.3% SiO2) and hydrophilic organic compounds. In the sulfate method, TiO2 is obtained from ilmenite ore treated with a concentrated solution of sulfuric acid. Another material studied was a TiO2–SiO2 composite material. The composite material was coprecipitated, using a method proposed by the authors, in an emulsion system with cyclohexane (made by POCh SA, analytical grade) as the organic phase. The titanium precursor was titanium sulfate (made by Chemical Works Police SA) with the following physicochemical parameters: concentration 80-90 g TiO2/dm³, density 1250-1270 g/dm³.

2.2. Titanium Dioxide and TiO2–SiO2 Composite Material Modification. Functionalisation of TiO2 and TiO2–SiO2 was performed using the so-called dry technique [41,42,61]. Surface modification of the titanium dioxide and TiO2–SiO2 composite material was carried out in a reactor of 500 dm³ capacity. The silane coupling agents were hydrolysed in a methanol/water system (4/1 v/v), and from this solution they were deposited directly onto the surface of the titanium dioxide and TiO2–SiO2. The solution contained a given silane coupling agent in the amount of 0.5, 1, or 3 weight parts by mass of TiO2 or TiO2–SiO2 (100.0 g). The system was then stirred for 1 hour to homogenise the sample with the solution of the modifying agent, and the solvent was distilled off. The silane-grafted samples
were dried at 105 °C for 2 hours. The obtained samples were then subjected to characterisation. On the surface of the TiO2 or TiO2–SiO2 support modified with aminosilane, a condensation reaction takes place between the hydrolysed ≡Si-OH groups of the aminosilane and the silanol, aluminol or ≡Ti-OH groups of the inorganic support; see Figure 2.

2.3. Determination of Physicochemical Properties. Determination of certain physicochemical parameters was undertaken to verify the effectiveness of the TiO2 or TiO2–SiO2 surface modification with selected alkoxysilanes. For the TiO2 and TiO2–SiO2 composite material samples, the particle size distributions were determined using a Zetasizer Nano ZS, made by Malvern Instruments Ltd., permitting measurements of particle diameters in the range of 0.6-6000 nm (non-invasive backscattering technique, NIBS). The measurement involves passing a red laser beam of wavelength 633 nm through the material. During the measurement, the intensity fluctuations of the scattered light, which represent the illuminated particles of the sample, are recorded. The particles within the fluid exhibit Brownian motion, which makes the measurement possible. Each sample was prepared by dispersing 0.01 g of the tested product in 25 cm³ of isopropanol. The system was stabilised in an ultrasonic bath for 15 minutes, and then it was placed in a cuvette and analysed. Cumulant analysis gives a width parameter known as the polydispersity, or the polydispersity index (PdI). The cumulant analysis is in fact the fit of a polynomial to the log of the G1 correlation function [62]:

ln G1(τ) = a + bτ + cτ² + dτ³ + …   (1)

The value of b is known as the second-order cumulant, or the z-average diffusion coefficient. The coefficient of the squared term, c, when scaled as 2c/b², is known as the polydispersity.

The surface morphology and microstructure of the TiO2 or TiO2–SiO2 samples were examined on the basis of SEM images recorded with an EVO40 scanning electron microscope made by Zeiss. Before testing, the samples were coated with Au over a period of 1 minute using a Balzers PV205P coater.

In order to characterise the adsorption properties, nitrogen adsorption/desorption isotherms at 77 K and parameters such as the surface area (A_BET), total pore volume (V_p), and mean pore size (S_p) were determined using an ASAP 2020 instrument (Accelerated Surface Area and Porosimetry, Micromeritics Instrument Co.). All samples were degassed at 120 °C for 4 hours prior to measurement. The surface area was determined by the multipoint Brunauer-Emmett-Teller method using the adsorption data as a function of relative pressure (p/p0). The Barrett-Joyner-Halenda method was applied to determine the pore volume and the average pore size.

The TiO2 and TiO2–SiO2 composite material were also subjected to crystalline structure determination using the wide-angle X-ray scattering (WAXS) method. The results were analysed employing XRAYAN software. The diffraction patterns were taken using a TUR-M62 horizontal diffractometer equipped with an HZG-3 type goniometer. Nickel-filtered Cu Kα radiation (λ = 1.5418 Å) was used in the measurements. The measurement conditions were as follows: anode voltage 30 kV, anode current 15 mA. The samples were scanned at a rate of 0.04° over an angular range of 3-60°.
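A small numerical sketch of the cumulant analysis in (1) follows: a polynomial is fitted to a synthetic log-correlation curve and the polydispersity is recovered as 2c/b². The decay constants are invented for illustration and the fit is truncated at second order, a common simplification rather than the instrument's exact algorithm.

```python
# Fit ln g1(tau) = a + b*tau + c*tau^2 and report the polydispersity 2c/b^2.
import numpy as np

tau = np.linspace(1e-6, 5e-4, 100)      # lag times (s)
b_true, c_true = -2.0e3, 5.0e4          # illustrative cumulants
ln_g1 = b_true * tau + c_true * tau**2  # noiseless synthetic data

coeffs = np.polyfit(tau, ln_g1, 2)      # returns [c, b, a], highest power first
c_fit, b_fit, _ = coeffs

pdi = 2 * c_fit / b_fit**2
print(f"b = {b_fit:.3e} 1/s, PdI = {pdi:.3f}")  # recovers PdI = 0.025 here
```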
Moreover, the surface composition of the TiO2–SiO2 (contents of Ti and Si) was analysed by energy dispersive X-ray spectroscopy (EDS) using a Princeton Gamma-Tech unit equipped with a prism digital spectrometer. Representative areas (500 μm²) were analysed for proper evaluation of the surface composition. The EDS technique is based on an analysis of X-ray energy values using a semiconductor detector. Before the analysis, the samples were mounted on the stub with a carbon paste or tape; the carbon material is needed to create a conductive layer which ensures the removal of electric charge from the sample.

Selected samples of TiO2 and TiO2–SiO2 composite material were subjected to elemental analysis using a Vario EL Cube apparatus made by Elementar Analysensysteme GmbH. A 10 mg portion of the sample was placed in an 80-position autosampler. The sample was then moved into the instrument, in which it was combusted in an oxygen atmosphere. After passing through appropriate catalysts in a helium stream, the resulting gases were separated in an adsorption column and then recorded using a katharometer. The results are given as an average of three measurements, each with an uncertainty of ±0.001%.

Dependences of the zeta potential on pH for the TiO2 and TiO2–SiO2 samples, both unmodified and subjected to surface modification with selected alkoxysilanes, were established to check the effect of the modifier and its quantity on the zeta potential. Using a Zetasizer Nano ZS equipped with an autotitrator (Malvern Instruments Ltd.), it was also possible to measure the electrophoretic mobility and, indirectly, the zeta potential, using laser Doppler velocimetry (LDV). The electrokinetic potential was measured in the presence of a 0.001 M NaCl electrolyte over the whole considered pH range (2-11), which enabled determination of the electrokinetic curves. To perform the measurements, 0.01 g of a sample was dispersed in 25 cm³ of electrolyte. Then 10 cm³ of the so-prepared sample was placed in a titrator enabling automatic titration of the system either with an acid (0.2 M HCl) or with a base (0.2 M NaOH). The measurements gave the dependence of the zeta potential on pH. The accuracies of the measurements were ±0.01 mV (zeta potential) and ±0.01 (pH). To avoid possible measurement errors, every sample was measured three times. The standard deviation of the zeta potential at a given pH was ±1.7 mV or less, and the error in the pH was estimated to be 0.03 pH units or lower.

3. Results and Discussion

3.1. Dispersive and Morphological Properties of Unmodified Titania and TiO2–SiO2 Composite Material. The aim of the first stage of the study was the characterisation of the morphology and dispersive properties of the pigments based on commercial TiO2 and the synthetic TiO2–SiO2 composite material. The particle size distribution according to volume contribution obtained for Tytanpol A11 is presented in Figure 3(a); it shows one band covering particle diameters from 342 to 6440 nm, and the maximum volume contribution of 10.6% comes from particles of 825 nm diameter. The polydispersity index of this pigment is 0.218. The particle size distribution of Tytanpol R001, Figure 3(b), reveals one band corresponding to primary particles and secondary agglomerates with diameters from 255 to 6440 nm.
The maximum volume contribution of 12.3% comes from agglomerates of 5560 nm diameter. As follows from this distribution, primary particles and aggregates account for 32.5%, and secondary agglomerates for 67.5%, of the sample volume. The polydispersity index of this sample is 0.242. The particle size distribution of Tytanpol R213 (Figure 3(c)) shows one broad band, corresponding to primary and secondary agglomerates with diameters ranging from 190 to 6440 nm (the maximum volume contribution of 10.2% corresponds to agglomerates of 5560 nm diameter). The polydispersity index of this pigment is 0.233. The particle size distribution of the TiO2–SiO2 composite material, sample TP10 (see Figure 3(d)), shows one relatively narrow band covering diameters from 342 to 1110 nm, with the maximum volume contribution of 22.8% coming from aggregates 712 nm in diameter. The polydispersity index of TP10 is 0.197, which means that this sample is rather homogeneous.

The results presented prove that all the samples studied have similar homogeneities (almost the same polydispersity index values). It is worth noting that the synthetic composite material sample contains particles of smaller diameter than those in the commercial TiO2. The SEM microphotographs of the samples studied, presented in Figure 4, confirm the presence of particles of small diameter (corresponding to those indicated in the particle size distributions) and high homogeneity, with almost spherical shapes and little tendency to form agglomerate structures.

3.2. Structural Characteristics of Unmodified Titania and TiO2–SiO2 Composite Material. Characterisation of the adsorption properties of the TiO2- and TiO2–SiO2-based pigments included determination of the nitrogen adsorption/desorption isotherms and calculation of the surface area and the pore size and volume. The isotherms measured for the TiO2 pigments and TiO2–SiO2 composite material were classified as type II with hysteresis loops of type H3 (R213) and type H4 (A11, R001, and TP10), indicating nonporous solids with large secondary slit-like pores formed between small particle aggregates [63]; see Figure 5. The greatest surface areas (BET), of 35 and 36 m²/g, were found for Tytanpol R213 and TiO2–SiO2 (sample TP10), respectively. Several papers have reported that the addition of silica or alumina to titanium dioxide not only improves the mechanical properties and abrasion resistance of the system, but also gives products with a highly developed specific surface area [64-74]. For TP10 this observation can be explained by the dominant contribution of SiO2 (70.16%), as proved by chemical composition analysis by the EDS method; see Figure 6. This analysis also confirmed that the content of titanium dioxide in the composite material obtained reached almost 29.84%. For the R213 sample, the large surface area is related to the surface modification with alumina and silica, that is, 4.7% Al2O3 and 8.3% SiO2. The inorganic treatment with aluminium oxide and silica considerably increases the surface area, as both these substances, and silica in particular, have a well-developed surface. The influence of silica on the surface area of titanium dioxide depends on its physicochemical properties. The hysteresis loop of R213 TiO2 covers the relative pressure range p/p0 = 0.6-0.99. The mean pore diameter of this substance is 9.8 nm and the total pore volume is 0.09 cm³/g. The nitrogen volume adsorbed on R213 titania reaches 75 cm³/g at p/p0 = 0.99. For the TiO2–SiO2 composite
material, the nitrogen volume adsorbed at p/p0 = 0.99 is much lower (36 cm³/g), its mean pore diameter is 4.6 nm, and the total pore volume is 0.04 cm³/g, much lower than for sample R213. The samples Tytanpol A11 and R001 show low specific surface areas (BET) of 10 and 14 m²/g, respectively. For these two samples, the amount of nitrogen adsorbed increases slowly over the relative pressure range p/p0 = 0-0.8; above p/p0 = 0.8 it increases rapidly to reach a maximum value of 26 cm³/g at p/p0 = 0.99. For A11 the mean pore diameter is 7.6 nm and the total pore volume is 0.02 cm³/g, while for R001 these parameters are 7.6 nm and 0.03 cm³/g, respectively. In contrast to the results of the dispersive characteristics, determination of the adsorption properties confirmed that the specific surface areas increase with the corresponding increase in the volume contribution of primary particles in the sample. Parameters such as the specific surface area of inorganic oxide systems play an important role in the adsorption of selected organic compounds (functionalisation of the TiO2 surface with inorganic oxides and its effectiveness).

The crystalline structures of selected samples were studied by the WAXS method. The structural character of pigments determines their suitability for particular applications (e.g., photocatalysis, the paints and lacquers industry). Titanium dioxide of a given crystalline structure can be identified by WAXS from characteristic values of 2Θ. Figure 7 presents the WAXS patterns of selected samples, showing that A11 has the anatase structure, while R213 has the rutile structure. WAXS analysis of the synthetic composite material (TP10) confirmed that its titanium dioxide occurs mainly in the rutile form, with a small contribution of anatase.

3.3. Electrokinetic Properties of Unmodified Titania and TiO2–SiO2 Composite Material. Zeta potential measurements provide information on surface properties and the stability of a dispersion. Changes in the zeta potential with pH and the isoelectric point values strongly depend on the type and amounts of inorganic substances used for surface modification. Many papers have noted that the isoelectric point of unmodified titanium dioxide is at pH 4 and that of silica at pH 2 [48,75-77], while for aluminium oxide it is at pH 9 [48,77,78]. The plots of zeta potential versus pH determined for the commercial titanium dioxide pigments and the TiO2–SiO2 composite material are shown in Figure 8. Wilhelm and Stephan [79] mentioned that the isoelectric point of titania particles appears at pH 4.4-7.0, depending on the method of synthesis. The IEP of the A11 sample occurs at a pH of 3.42; its maximum zeta potential is 39.0 mV, while the minimum is −56.0 mV. For R001, whose surface is modified with aluminium oxide, the IEP is shifted towards a higher pH (7.78). The maximum zeta potential is 53.4 mV at pH 1.67, while its minimum value is −51.8 mV at pH 11.8. The electrokinetic curve for R213 has a different character. Its IEP occurs at pH 5.07, its maximum zeta potential is 23.2 mV, while the minimum is −49.7 mV. For R213, with the surface modified with aluminium oxide and silica (4.7% Al2O3 and 8.3% SiO2), the IEP is shifted towards a lower pH than the IEP of the R001 sample. The synthetic TiO2–SiO2 composite material (TP10), composed of 70.16% silica, has its IEP shifted towards 2.16 (in agreement with Urbanus et al. [80], who demonstrated that the IEP of TiO2–SiO2 is approximately 2.5); its maximum zeta potential value is 4.2 mV, while the minimum is −60.5 mV. TiO2 nanoparticles with different surface properties were obtained by Liao et al.
[56] by a method in which surfactants were introduced during synthesis. They confirmed that the zeta potential values of the TiO2 nanoparticles differed depending on the use of different titanium precursors and the introduction of different surfactants.

3.4. Dispersive and Morphological Properties of Modified Titania and TiO2–SiO2 Composite Material. At the next stage of the study, the physicochemical properties of the titanium dioxide and TiO2–SiO2 composite material functionalised with selected alkoxysilanes were characterised. The main aims were to evaluate the efficiency of the functionalisation process of the commercial titanium dioxide as well as of the TiO2–SiO2 composite material and to determine the effect of this process on the fundamental physicochemical properties of the systems obtained. Table 2 gives the dispersive characterisation of the modified TiO2 and TiO2–SiO2 samples.

The substantial differences in the mean diameters of the TiO2 particles modified with the three different modifying agents in different amounts imply that the silanes used have a great effect on the dispersive parameters of the final products. The dispersive characteristics (Table 2) show that noticeable changes in the particle size of the modified TiO2 appear independently of the type and quantity of the modifying agent. According to the results, by far the best dispersive properties are shown by the TiO2–SiO2 composite material functionalised with the selected alkoxysilanes (irrespective of the quantity of silane used for functionalisation). All samples of the silane-grafted composite material had particles of smaller diameter than those determined in the samples based on the commercial titanium dioxide. TiO2 and TiO2–SiO2 surface functionalisation with the selected alkoxysilanes was not observed to have any significant influence on the dispersive characteristics of the composite systems obtained. Surface modification of A11 titanium dioxide with the silanes significantly enhanced the tendency of the sample particles to agglomerate, manifested by an increased volume contribution from secondary agglomerates. In most samples, the functionalisation of the inorganic support with the selected silane coupling agents contributed (in relation to the amount of silane used) to a decrease in the sample's homogeneity (a higher polydispersity index) compared with that of the unmodified support; see Table 2. Figures 9 and 10 present selected particle size distributions and SEM microphotographs of the TiO2 and TiO2–SiO2 composite material functionalised with different silanes, confirming the data presented in Table 2.

3.5. Structural Characteristics of Functionalised Titania and TiO2–SiO2 Composite Material. At the next stage of the study, the adsorption properties of the modified titanium dioxide and TiO2–SiO2 samples were characterised. The fundamental parameters determining the surface activity of the modified samples, the specific surface area (BET) and the pore size distribution, are given in Table 3.
Analysis of the data presented in Table 3 shows that the greater the amount of the modifying agent, the smaller the surface area (BET). Most probably this is a consequence of the fact that the active centres (silanol, aluminol and ≡Ti-OH groups) on the surface of TiO2 and TiO2–SiO2 are blocked by the modifier molecules. A considerable decrease in the surface area relative to that of the unmodified sample was observed for all modified samples. Modification with N-2-(aminoethyl)-3-aminopropyltrimethoxysilane (U-15D) was more effective compared with modification with U-511 and U-611. Addition of any of the modifiers resulted in a decrease in the pore diameters relative to those in unmodified TiO2 and TiO2–SiO2, irrespective of the quantity of modifier.

In contrast to the results for the dispersive characteristics, determination of the adsorption properties confirmed the effectiveness of the modification and revealed that modification induced changes in the character of the samples' surfaces. Direct evidence of the efficiency of the modification process comes from elemental analysis, the results of which permitted estimation of the degree of coverage of the TiO2 and TiO2–SiO2 samples with the selected alkoxysilanes. The number of surface functional groups N_R (nm⁻²), which indicates the density of the modifier grafted on the TiO2 or TiO2–SiO2 surface, was calculated from the results of elemental analysis and the BET measurement. N_R is defined as the number of methacryloxy, vinyl, propyl, aminoethyl, or aminopropyl groups per 1 nm² of the TiO2 or TiO2–SiO2 surface and is expressed using (2):

N_R = (C × N_A) / (1200 × n × S × 10¹⁸)   (2)

where C is the carbon content (wt%) obtained from the elemental analysis of the sample (the factor 1200 combines the percentage normalisation with the molar mass of carbon), n is the number of carbon atoms in the silane coupling agent excluding the methoxy groups, N_A is Avogadro's number, and S is the specific surface area of the analysed sample in m²/g (the factor 10¹⁸ converts m² to nm²) [81]. Table 4 gives the concentration of the modifier and the results obtained from elemental analysis and the BET measurement. The contents of carbon, hydrogen, and nitrogen increased, and the surface area decreased, with increasing concentration of the modifier.

For the samples modified with U-15D silane, the C/N values, defined as the molar ratio of carbon to nitrogen, were close to 3. This indicates that most of the methoxy groups in U-15D have hydrolysed and the aminopropyl groups remain on the TiO2 or TiO2–SiO2 surface. The value of N_R for samples R001 and A11 modified with U-15D increased to 19 and 21, respectively, while that of samples R213 and TP10 increased to 5. The N_R values of the samples modified with U-15D differed from those of the samples modified with U-511 and U-611 at the same concentration of modifier because of the different reaction mechanisms. It is well known that silane coupling agents are first hydrolysed to silanols, and then condensation reactions between the silanols and the surface hydroxyl groups of the substrate take place. However, a special interaction between the aminosilane and the TiO2 or TiO2–SiO2 surface also occurs, causing a higher reactivity than in the case of U-511 and U-611, as observed in the N_R values; see Table 4. Various types of interactions between aminosilane and the TiO2 surface have been proposed in the literature [82,83].
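A minimal sketch of the N_R estimate consistent with the variable definitions above follows; note that the scaling in equation (2), and hence in this function, is our dimensional reconstruction (the original equation was lost in extraction), and the numerical inputs are illustrative rather than measured values from Table 4.

```python
# Surface density of grafted functional groups per nm^2, from the carbon
# content (wt%), the carbon count per grafted residue, and the BET area.
N_A = 6.022e23  # Avogadro's number, 1/mol

def n_r(carbon_wt_percent: float, n_carbons: int, s_bet_m2_per_g: float) -> float:
    """Functional groups per nm^2 of support surface."""
    mol_groups_per_g = carbon_wt_percent / 100 / (12.0 * n_carbons)
    area_nm2_per_g = s_bet_m2_per_g * 1e18   # 1 m^2 = 1e18 nm^2
    return mol_groups_per_g * N_A / area_nm2_per_g

# e.g. 1.2 wt% C, 5 carbons per aminosilane residue, 14 m^2/g support
print(f"N_R ~= {n_r(1.2, 5, 14):.1f} groups/nm^2")  # ~8.6 here
```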
The degrees of coverage of the TiO2 or TiO2–SiO2 with the modifiers were also evaluated on the basis of the Berendsen and de Galan equation [84], using the results of elemental analysis:

P = (10⁶ × C) / [(1200 × N_C − C × (M − 1)) × A]   (3)

where P is the degree of coverage (µmol/m²), C the carbon content of the sample, N_C the number of carbon atoms in the attached molecule, M the molar mass of the attached compound, and A the specific surface area of the support. With an increasing amount of the appropriate silane used for the modification, an increase in the elemental content of the analysed elements was noted, along with a significant increase in the degree of coverage. The greatest degrees of coverage were found for the samples modified with U-15D. The degree of coverage values for the samples modified with U-611 were lower than those of the samples modified with U-15D and U-511 at the same amount of modifier.

3.6. Electrokinetic Properties of Modified TiO2 and TiO2–SiO2 Composite Material. The efficiency of surface modification of inorganic oxides with selected organic compounds can be readily estimated by electrokinetic tests, that is, measurements of the zeta potential versus pH. Thus, in the next step, samples of titanium dioxide and TiO2–SiO2 composite material functionalised with alkoxysilanes were subjected to tests of their electrokinetic properties. Figures 11 and 12 present the zeta potential versus pH dependencies evaluated for the composite systems prepared using titanium dioxide or TiO2–SiO2. Figure 11 presents the electrokinetic curves estimated for the aminosilane-grafted commercial titanium dioxide.

Surface modification of A11 titanium dioxide with different amounts of N-2-(aminoethyl)-3-aminopropyltrimethoxysilane (U-15D) (see Figure 11(a)) resulted in significant changes in the character of the electrokinetic curves. These plots differ considerably from the reference plot obtained for unmodified titanium dioxide A11 (its IEP is 3.42; Roessler et al. [85] reported that IEPs for anatase vary between 3 and 6.6). After modification with 0.5, 1, and 3 wt./wt. of U-15D silane, the isoelectric points were 5.12, 6.72, and 9.40, respectively, so the IEP values increased with an increasing amount of the modifying agent used for surface functionalisation. This significant increase in the IEP values is attributed to the strong ionisation effect of the -NH2 groups. Ionisation of these groups also plays an important role in the changes in the surface charge of TiO2. When the density of H+ ions is high, -NH3+ groups start to form, and hence the positive charge of the modified TiO2 appears. With decreasing concentration of H+ ions, the process of ionisation is restricted and the surface charge decreases. For the A11 sample modified with U-15D silane, the zeta potential takes positive values over almost the entire acidic pH range. Cai et al. [86] confirmed an isoelectric point of pH = 7 for a titania film functionalised with (3-aminopropyl)triethoxysilane.

Modification of the anatase surface with the other two silanes studied did not result in considerable changes in the character of the relevant electrokinetic curves; they were similar to that recorded for the unmodified sample. This observation was confirmed by the isoelectric points of the modified samples. For A11 modified with 0.5, 1, and 3 wt./wt. of U-511 silane, the IEP takes the values of 3.21, 3.30, and 3.63, and for titanium dioxide modified with 0.5, 1, and 3 wt./wt.
of U-611 silane the isoelectric points occur at lower pH values, that is, at 3.42, 3.22, and 4.75, respectively.

Figure 11(b) presents the zeta potential versus pH for the TiO2 samples (R001) modified with U-15D silane. The reference sample was the unmodified TiO2 sample R001, with its IEP at pH = 7.78. Similarly as for A11 titanium dioxide, also for the R001 sample modified with N-2-(aminoethyl)-3-aminopropyltrimethoxysilane the electrokinetic curves were shifted towards higher pH values. The isoelectric point of TiO2 modified with 0.5 wt./wt. of U-15D silane is 7.92, while its values for the samples modified with 1 and 3 wt./wt. are 8.42 and 9.56, respectively. The shift of the IEP towards higher pH is caused by the strong ionisation effect of the -NH2 groups coming from the modifying agent. Again, the other silanes did not cause significant changes in the surface charge of the composite systems obtained. Functionalisation of the R001 titanium dioxide surface with 3-methacryloxypropyltrimethoxysilane caused a small shift of the electrokinetic curves towards more acidic pH. For R001 modified with 0.5 and 1 wt./wt. of U-511, the isoelectric points are at 7.00 and 6.40, respectively, whereas for the R001 sample modified with 3 wt./wt. of U-511 silane the IEP is at 5.15. Such differences were not observed when vinyltrimethoxysilane (U-611) was used for the TiO2 surface modification. For TiO2 whose surface was functionalised with this silane in the amounts of 0.5, 1, and 3 wt./wt., the isoelectric points occur at 7.39, 7.98, and 6.72, respectively. Similar observations were made when analysing the electrokinetic results for R213 titanium dioxide modified with the selected alkoxysilanes. For titanium dioxide R213 modified with N-2-(aminoethyl)-3-aminopropyltrimethoxysilane in different amounts, the electrokinetic curves were observed to be shifted towards higher pH (see Figure 11(c)). The IEP of the unmodified R213 sample occurs at a pH of 5.07. For R213 modified with 0.5, 1 and 3 wt./wt. of U-15D silane, the IEP is observed at 6.53, 7.56, and 9.32, respectively. Surface modification of R213 titanium dioxide with U-511 silane, similarly to that of R001, caused a shift of the electrokinetic curve towards lower pH with respect to that recorded for the unmodified TiO2. The shift was also confirmed by the changes in the IEP, which for the R213 sample modified with 0.5 wt./wt. of U-511 was at pH 4.79, for R213 modified with 1 wt./wt. of U-511 at 4.66, and for R213 modified with 3 wt./wt. of U-511 at 2.68. Again, application of vinyltrimethoxysilane for the TiO2 surface modification did not cause significant changes in the electrokinetic character of the products obtained. When the R213 sample was modified with 0.5, 1, and 3 wt./wt. of U-611 silane, the IEP occurred at pH 4.83, 5.04, and 4.39, respectively.

At the subsequent stage of the study, the zeta potential was measured for the TiO2–SiO2 composite material modified with the three different alkoxysilanes in different amounts. Figure 12 presents the electrokinetic curves of the TiO2–SiO2 composite material modified with U-15D silane. The IEP of unmodified TiO2–SiO2 is at 2.16. Surface modification of the synthetic composite material with this silane caused significant changes in its electrokinetic properties, as also observed for A11 titanium dioxide. The changes were manifested as shifts of the electrokinetic curves and the IEP towards higher pH in comparison with those of the unmodified TiO2–SiO2 sample. For TiO2–SiO2 modified with 0.5, 1, and 3 wt./wt. of U-15D, the IEPs appear at 6.02, 7.78, and 9.81, respectively. This significant shift of the IEP towards higher pH values is caused by the strong ionisation effect of the -NH2 groups originating from the modifying agent (U-15D). It is worth mentioning that for TiO2–SiO2 modified with 0.5 wt./wt.
of U-511 silane the isoelectric point is at 1.88, but for the samples modified with 1 and 3 wt./wt. of U-511 the IEP was not obtained; likewise, the measurements of zeta potential versus pH in 0.001 M NaCl for the TiO2–SiO2 modified with U-611 silane in different amounts did not permit exact determination of the IEP. These silanes do not contain in their structures the functional groups that are able to change the surface charge of the composite systems obtained and hence influence the electrokinetic characteristics.

The probable mechanism of the surface charge changes of the TiO2 or TiO2–SiO2 surface modified with U-15D silane as a function of the pH of the medium is presented in Figure 13.

Conclusions

According to the results presented and discussed above, the character of the surface of inorganic oxide systems such as TiO2 or TiO2–SiO2 can be modified by simple chemical methods. No significant effect of the silanes used on the dispersive characteristics and morphology of the composite materials obtained was noted. However, significant changes were found in the adsorption properties of the modified samples. The specific surface areas of the TiO2 and TiO2–SiO2 composite material modified with the selected silanes decreased with increasing amount of the silane deposited. The smallest specific surface areas were those of TiO2 (Tytanpol A11 and R001) modified with 3 weight parts by mass of the aminosilane U-15D; the specific surface areas of these samples varied from 4.91 m²/g to 5.64 m²/g. The greatest specific surface areas, of 35 and 36 m²/g, were determined for Tytanpol R213 (commercial TiO2) and the unmodified sample TP10 (the TiO2–SiO2 synthetic composite material), respectively.

The efficiency of grafting the selected alkoxysilanes onto the titania or TiO2–SiO2 synthetic composite support was indirectly confirmed by elemental analysis, proving that the degree of surface coverage with a modifying agent increases with increasing concentration of the silanes used for the inorganic surface functionalisation.

The changes in zeta potential as a function of pH, as well as the IEP values, of the commercially available titanium dioxides strongly depend on the type and amount of the inorganic agents used for modification of their surfaces, as well as on the amount and type of alkoxysilane used. For titanium dioxide and TiO2–SiO2 composite material modified with 3-methacryloxypropyltrimethoxysilane (U-511) and vinyltrimethoxysilane (U-611), the IEP values showed a tendency to shift towards lower pH with an increasing amount of the modifying agent used. For the samples modified with N-2-(aminoethyl)-3-aminopropyltrimethoxysilane (U-15D) the reverse tendency was noted. The most pronounced changes in the electrokinetic properties were observed for the titanium dioxide and TiO2–SiO2 composite material modified with N-2-(aminoethyl)-3-aminopropyltrimethoxysilane. These changes were attributed to specific interactions of the -NH2 groups in acidic or alkaline environments, that is, to their ability to attach or release potential-determining ions, such as H+.

Figure 2: Mechanism of the condensation reaction between the hydrolysed aminosilane and the surface of the unmodified support.

Table 2: Dispersive properties of TiO2 and TiO2–SiO2 samples modified with different silane coupling agents.
Table 4: The degree of surface coverage of TiO2 and TiO2-SiO2 modified with the selected modifying agents.
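Since the IEP values reported above are read off zeta potential-pH curves, a minimal sketch of how such a crossing point can be interpolated from measured titration data may be useful. The pH and zeta arrays below are hypothetical illustrative values, not measurements from this study, and the function name is our own choice.

import numpy as np

def isoelectric_point(ph, zeta):
    """Linearly interpolate the pH at which the zeta potential crosses zero.
    Assumes the curve crosses zero exactly once over the measured range."""
    ph, zeta = np.asarray(ph, float), np.asarray(zeta, float)
    crossings = np.where(np.diff(np.sign(zeta)) != 0)[0]
    if crossings.size != 1:
        raise ValueError("Expected exactly one zero crossing in the data.")
    i = crossings[0]
    # Linear interpolation between the two points bracketing zero
    frac = zeta[i] / (zeta[i] - zeta[i + 1])
    return ph[i] + frac * (ph[i + 1] - ph[i])

# Hypothetical titration of an aminosilane-modified sample:
ph   = [2.0, 3.5, 5.0, 6.5, 8.0, 9.5, 11.0]
zeta = [42.0, 35.0, 24.0, 12.0, -3.0, -18.0, -30.0]  # mV
print(f"IEP ≈ pH {isoelectric_point(ph, zeta):.2f}")

For the hypothetical data shown, the interpolated IEP falls at about pH 7.7; the same procedure applied between the two measured points bracketing zero reproduces the kind of IEP values tabulated in this work.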
Acute paracoccidioidomycosis with duodenal and cutaneous involvement and obstructive jaundice

Paracoccidioidomycosis (PCM) is the most widespread endemic mycosis in Latin America. If PCM is not diagnosed and treated early and adequately, this endemic fungal infection can result in serious sequelae. We report a case of PCM with duodenal and cutaneous involvement simulating cholangitis that was initially misdiagnosed as a lymphoproliferative disease. Clinicians should consider acute paracoccidioidomycosis in the differential diagnosis of jaundice and/or signs/symptoms of cholangitis developing in young patients from paracoccidioidomycosis-endemic regions.

Introduction

The genus Paracoccidioides contains a species complex comprising at least two species, Paracoccidioides brasiliensis and P. lutzii, which are the etiological agents of paracoccidioidomycosis (PCM). This complex has a geographically restricted habitat [1] and is thermally dimorphic, as it switches from the non-pathogenic mycelial form at ambient environmental temperatures to the pathogenic multiple-budding yeast form when exposed to temperatures similar to those of the mammalian host [2-4]. Infection of the host is thought to occur via the inhalation of infective airborne propagules such as conidia or, possibly, mycelial fragments from the environment. Inhaled propagules then differentiate in the lungs into the pathogenic yeast form, after which the fungus disseminates to other organs of the host [5].

PCM is a neglected, health-threatening human systemic mycosis endemic to Latin America, where up to ten million people are thought to be infected [2,6]. Disease can progress slowly, with roughly five new cases of disease per million infected individuals per year and a male-to-female ratio of 13 to 1. About 80% of PCM cases occur in Brazil, followed by Colombia and Venezuela [2,6-8].

The genus Paracoccidioides encompasses two distinct species, P. brasiliensis and P. lutzii. P. lutzii is a single monophyletic and recombining population reported to date in central, southwest, and north Brazil and in Ecuador [3,4]. P. brasiliensis is monophyletic and comprised of distinct lineages classified as S1, PS2, PS3, and PS4. The S1 lineage is associated with the majority of PCM cases and is widely distributed in South America. PS2 has been identified to date only in Brazil and Venezuela, whereas PS3 is mainly found in regions of endemicity in Colombia. Recently, a novel lineage, PS4, was described from a region of Venezuela [4]. Isolates from each of these phylogenetic lineages of Paracoccidioides can infect humans; however, they may vary in virulence and culture adaptation and can elicit different immune responses in the host. One feature that is correlated with different rates of infection is variation in the number of infective conidia [4,5]. The successful invasion of host tissues by the fungus is a complex event, usually involving various regulatory mechanisms of cellular homeostasis and the expression of different virulence factors during infection that allow the fungus to spread to different organs in the host [4,5].

This parasite shows tropism towards the monocyte-macrophage system in acute-subacute forms of the infection, and paracoccidioidomycosis may occur in mucosa-associated lymphoid tissues (MALT) in the gastrointestinal tract (Peyer's patches) [7]. Human NK cells have been divided into CD56dim and CD56bright subsets possessing either lytic or IFN-γ secretory function.
A subset of tonsillar NK cells was shown to express the receptor NKp44, which is not present on blood NK cells unless they are activated in vitro with IL-2 or IL-15. NKp44+ NK cells are present in tonsils and in Peyer's patches of the ileum and the appendix [3,8]. NK-22 cells are also found in mouse MALT and appear in the small intestine lamina propria during bacterial infection, suggesting that NK-22 cells provide an innate source of IL-22 that may help constrain inflammation and protect mucosal sites [3,8]. Intra- and extraperitoneal lymph nodes may facilitate spread of infection to the psoas muscles in juvenile paracoccidioidomycosis [7].

We report on a rare case of PCM with duodenal and cutaneous involvement simulating cholangitis that was initially misdiagnosed as a lymphoproliferative disease. The aim of this case report is to strengthen awareness of acute paracoccidioidomycosis in the differential diagnosis of jaundice and/or signs/symptoms of cholangitis developing in young patients from paracoccidioidomycosis-endemic areas.

Case

A 19-year-old male patient, born and raised in the urban area of São Sebastião do Paraíso, Minas Gerais, Brazil, with no history of traveling to the countryside, complained of low back pain for 10 days, which evolved to severe epigastric pain two days later. Upper gastrointestinal endoscopy (UGE) was performed and showed two duodenal ulcers, measuring approximately 1.5 cm each, located in the anterior wall and in the bulb at the transition to the second portion of the duodenum, with regular edges and covered by fibrin (Fig. 1). Culture for Helicobacter pylori was performed and yielded negative results; an ulcer biopsy was not performed at that time. Omeprazole 40 mg/day per os was prescribed, with no improvement of the symptoms. Six days later he developed jaundice, choluria, nausea, abdominal pain, fever, diarrhea, and generalized lymphadenopathy.

The patient was referred to São Francisco Hospital, Ribeirão Preto, São Paulo (day 0), with the presumptive diagnosis of cholangitis. On physical examination, the patient presented with a low-grade fever (38 °C), jaundice, a distended abdomen painful on palpation, and hepatosplenomegaly. There was cervical, subclavian, and inguinal lymphadenopathy. Multiple acneiform lesions on the face, and brownish maculopapular lesions with slight flaking on the face and abdomen, were observed (Fig. 2).

Laboratory tests from day 0 revealed anemia, leukocytosis (white blood cell count of 18.6 × 10³/µL with a left shift), increased liver enzymes (aspartate aminotransferase = 156 U/L, alanine aminotransferase = 99 U/L, alkaline phosphatase = 81 U/L, gamma-glutamyl transferase = 92 U/L), total bilirubin of 6.28 mg/dl (direct = 4.98 mg/dl and indirect = 1.3 mg/dl), and elevated markers of inflammatory activity (C-reactive protein = 168 mg/dl). Serological tests for hepatitis A, B and C, syphilis, HIV, mononucleosis, toxoplasmosis, cytomegalovirus, and brucellosis were negative. Computed tomography (CT) of the abdomen on day +2 showed hepatosplenomegaly, retroperitoneal and periportal coalescent lymphadenopathy, and intra-abdominal free fluid; intrahepatic and extrahepatic biliary ductal dilatation was not observed. These findings were confirmed by abdominal magnetic resonance imaging (MRI), and a diagnostic hypothesis of lymphoproliferative disease was suggested (Fig. 3). On chest radiography, a prominent pulmonary hilum and a slight reduction in the transparency of the base of the right hemithorax, with apparent elevation of the diaphragm, were observed.
Chest CT showed centrilobular micronodules scattered in the lungs, predominantly in the middle lobes and on the right, mediastinal lymphadenopathy, and a right pleural effusion (Fig. 4). Biopsies of cervical and inguinal lymph nodes and of two abdominal papules were performed (day +4). Microscopic examination of the cervical and inguinal lymph node and skin biopsies demonstrated multiple narrow-based budding yeast cells, with the "steering wheel" appearance of Paracoccidioides spp., on Grocott's methenamine silver (GMS) staining (Figs. 5 and 6). The diagnosis of acute/subacute paracoccidioidomycosis, juvenile type, was established. Treatment with intravenous amphotericin B (0.75 mg/kg/day) was started, and a cumulative dose of 1.3 g was reached on day +40 with relevant clinical improvement. He was discharged (day +50) with instructions to return for follow-up in the outpatient clinic, and amphotericin B was replaced with itraconazole 200 mg/day. Complete blood count, liver enzymes, and inflammatory activity tests have remained at normal levels. UGE and abdominal CT performed in the sixth month of follow-up were normal. The patient has been treated with itraconazole 200 mg/day for one year and remains asymptomatic.

Discussion

PCM is an endemic infection in Latin America, of special importance due to the severity and clinical impact of some of its clinical forms [7,9]. In this report, we describe the diagnostic challenge of a severe and atypical acute systemic disease caused by Paracoccidioides spp., identified through biopsies of cervical and inguinal lymph nodes and of two abdominal papules. Paracoccidioidomycosis is predominantly a rural infection, but the patient came from an urban area and denied visiting or previously having lived in rural areas. The Paracoccidioides species complex causes a localised infection that may progress to systemic granulomatous disease with tegumentary and visceral involvement [2,7]. Agents of systemic mycoses, such as P. brasiliensis and P. lutzii, express factors that facilitate their survival under severe conditions inside host cells and tissues and, as such, benefit the disease's development (fungal dimorphism, alpha-glucan in the yeast cell wall, etc.). PCM is considered a neglected infectious disease, despite being the first cause of death among all systemic mycoses in immunocompetent patients and the eighth among chronic or recurrent infectious and parasitic diseases in Brazil [2,7].

The acute/subacute paracoccidioidomycosis (juvenile type) clinical presentation is responsible for 3-5% of cases of the disease, predominantly in children and adolescents. Eventually it may affect individuals up to 35 years old [2,9]. This clinical form is characterized by fast onset and progression of the mycosis. Patients usually seek medical attention between 4 and 12 weeks of illness [7,9]. In decreasing order of frequency, enlarged lymph nodes, digestive symptoms, hepatosplenomegaly, osteo-articular involvement, and skin lesions are the main forms of presentation of this systemic mycosis [7,9-11]. Skin lesions include papules, papules with crusts, and acneiform lesions with papules and pustules. The face is the most frequently affected site. Scattered lesions all over the body suggest severe disease. Nodular, verrucous lesions with or without ulceration may occur, as well as elevated plaques. Bowel involvement (duodenal ulcers), observed in this case, also occurred in 15% of subjects in a cohort of 20 children with paracoccidioidomycosis [7].
Enlargement of lymph nodes near the hepatic hilum can lead to obstructive jaundice [12-14]. Liver involvement in PCM is frequently seen in chronic multifocal (disseminated) forms. Extrinsic compression of the common bile duct by lymph nodes is followed by jaundice. Other causes of jaundice are intraluminal granulomatous lesions in the common bile duct, hepatitis caused by the Paracoccidioides spp. complex, or pancreatic PCM [13,14]. A variety of diseases are included under the umbrella term cholangitis, including hepatobiliary diseases with an autoimmune pathogenesis and disease processes associated with intraductal stones and infectious etiologies [15]. Although PCM rarely involves the intrahepatic bile ducts, in endemic areas it should be considered in the differential diagnosis of obstructive jaundice.

In contrast to other fungal infections such as cryptococcosis, histoplasmosis, and disseminated candidiasis, PCM is not usually associated with immunosuppressive diseases [2,7]. However, an acute and subacute profile of paracoccidioidomycosis has been described in HIV co-infected patients [6].

The gold standard for PCM diagnosis is the detection of fungal elements suggestive of Paracoccidioides spp. in fresh examination of sputum samples or other clinical specimens (lymph node aspirate) and/or biopsy specimens, as well as mycologic culture with fungal isolation. Serological diagnosis (anti-Paracoccidioides spp. specific antibodies) has limited value because of cross-reactivity with other fungi; it may have value in assessing prognosis, and serologic tests are useful for therapeutic follow-up. Criteria for cure and discharge include clinical, serological, and radiologic evaluation [2,8]. Endoscopic evaluation in this case showed two lesions in the intestinal mucosa, presumably duodenal PCM. The therapeutic response to amphotericin B, demonstrated by follow-up endoscopy showing involution of the ulcers, further supports this diagnosis.

PCM is the main systemic mycosis in South America, with a heterogeneous distribution, and must be included in the differential diagnosis of patients with lymphadenopathy associated with systemic symptoms. Highlights of this case report are the acute onset with epigastric pain followed by rapid health deterioration; the initial clinical picture suggesting cholangitis, with associated ascites, pleural and pericardial effusions; and the cutaneous papular lesions of juvenile paracoccidioidomycosis. The authors emphasize that clinicians should consider acute paracoccidioidomycosis in the differential diagnosis of jaundice and/or signs/symptoms of cholangitis developing in young patients from paracoccidioidomycosis-endemic regions.

Fig. 4. CT scan images after administration of intravenous contrast: reconstruction in the coronal plane in pulmonary window (B), axial section with mediastinal window (C), and maximum intensity projection in the axial plane with pulmonary window (D), demonstrating the centrilobular micronodules scattered in the lungs (B, D), pulmonary hilum and infracarinal lymphadenopathy (arrowheads), and a small right-sided pleural effusion (C).
Novel RGD-lipid conjugate-modified liposomes for enhancing siRNA delivery in human retinal pigment epithelial cells

Background: Human retinal pigment epithelial cells are promising target sites for small interfering RNA (siRNA) that might be used for the prevention and/or treatment of choroidal neovascularization by inhibiting the expression of angiogenic factors, for example, by downregulating expression of the vascular endothelial growth factor gene.

Methods: A novel functional lipid, DSPE-PEG-RGD, an Arg(R)-Gly(G)-Asp(D) motif peptide conjugated to 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[maleimide (polyethylene glycol)-2000], was synthesized for the preparation of siRNA-loaded RGD-PEGylated liposomes to enhance uptake of encapsulated siRNA in retinal pigment epithelial cells. Various liposomes, with 1 mol% and 5 mol% PEGylated lipid or 1 mol% and 5 mol% RGD-PEGylated lipid, were fabricated.

Results: Characterization of the liposomes, including siRNA entrapment efficiency, average particle size, and ζ-potential, gave the following values: >96%, 129.7 ± 51 to 230.7 ± 60.7 nm, and 17.3 ± 0.6 to 32 ± 1.3 mV, respectively. In the in vitro retinal pigment epithelial cell studies, the RGD-PEGylated liposomes showed high delivery efficiency, with an approximately four-fold increase in siRNA delivery compared with the PEGylated liposomes. Comparison of the various liposomes showed that the 1 mol% RGD-modified liposome had less cytotoxicity and higher siRNA delivery efficiency than the other liposomes. An antibody blocking assay confirmed that uptake of the 1 mol% RGD-PEGylated liposome occurred via integrin receptor-mediated endocytosis in retinal pigment epithelial cells.

Conclusion: The results of this study suggest that RGD-PEGylated liposomes might be useful for siRNA delivery into retinal pigment epithelial cells by integrin receptor-mediated endocytosis.

Introduction

Gene therapy is thought to have great potential for treatment of acquired diseases as well as inherited genetic diseases. 1 Many human diseases, e.g., cancer and neovascularization, have been associated with overexpression of specific genes. The delivery of a normally functioning gene to the nucleus or cytoplasm of malfunctioning cells may restore the targeted abnormal gene to a normal state. 2,3 Human retinal pigment epithelial cells play an important role in angiogenesis of the choroid through overexpression of angiogenic factors.

Small interfering RNA (siRNA) is a double-stranded structure of 21-23 base pairs that can target its complementary mRNA, thereby hastening mRNA degradation and inhibiting translation of a targeted gene. 4-6 With high specificity and easy application, siRNA has gradually replaced more conventional gene knockdown techniques for target-specific gene silencing. The ability of siRNA to silence gene expression has become a useful tool for investigating gene function; in addition, it is used in the pharmaceutical industry to develop pharmacological agents. 7 Thus, retinal pigment epithelial cells might be a promising target site for siRNA used to prevent or treat choroidal neovascularization in patients with age-related macular degeneration by inhibiting expression of angiogenic factors. However, use of siRNA as a therapeutic agent has been complicated by problems such as poor formulation stability, difficulty penetrating biological barriers, and low delivery efficiency at the target site.
Therefore, it is important to develop a delivery system that serves as an effective vector for siRNA and provides optimal delivery to the targeted site. 6,8 Cationic liposomes are a promising nonviral delivery system and have been extensively studied for siRNA and plasmid DNA delivery. 9 The addition of polyethylene glycol (PEG) to liposomes can decrease reticuloendothelial system recognition, thereby prolonging the circulation time in biological fluids. Although PEGylated liposomes provide a more stable preparation, the steric barrier of the grafted PEG moiety on the surface of the liposomes reduces interaction between the delivery system and the cell surface. Many strategies have been successfully implemented to improve the cell delivery efficiency of PEGylated nanoparticles, including transient PEG coating, attachment of targeting ligands at the distal end of the PEG moieties, and the incorporation of cell-penetrating peptides. 10,11

Currently, liposome studies are focusing on targeted delivery, which maximizes delivery of the carried therapeutic agent to the target site while minimizing potential side effects. The addition of specific antibodies or a biological adhesive cell ligand to the liposome can enhance its binding to specific tissues, cells, or pathogens, and consequently achieve higher delivery efficiency. To enhance siRNA delivery into retinal pigment epithelial cells, ligands of the surface receptors of these cells might be used to modify the PEGylated lipid of the liposome. Previous studies have shown that integrin receptors are upregulated in ocular neovascularization. Specifically, integrin αVβ3 is overexpressed in the ocular tissues of patients with age-related macular degeneration, whereas αVβ3 and αVβ5 are also found in the vascular tissues of patients with proliferative diabetic retinopathy. 12 Other studies showed that, in retinal pigment epithelial cells, the αVβ3 and αVβ5 receptors were expressed on the apical side, while integrin α5β1 receptors were expressed on the basolateral side. 13 Expression of α5β1 integrins on the basolateral side of retinal pigment epithelial cells has been characterized and shown to be important for attachment of cells to Bruch's membrane. Expression of αVβ3 and αVβ5 integrins on the apical side has been associated with maintenance of diurnal phagocytosis of the shed photoreceptor outer segment fragments. 14

In order to enhance the delivery of siRNA-loaded liposomes to retinal pigment epithelial cells, liposomes bearing an integrin receptor ligand might be used for targeted delivery of siRNA to these cells for the prevention and/or treatment of choroidal neovascularization in patients with age-related macular degeneration. Moreover, peptides with the RGD motif have been shown to bind specifically to αV- or α5-associated integrins. 15,16 Thus, modified liposomes with the RGD motif peptide might be constructed to carry siRNA for targeted delivery.

Preparation of siRNA-loaded liposomes

PEGylated cationic liposomes containing DC-cholesterol, DOPE, and DSPE-PEG(2000) carboxy, with molar ratios of 50/49/1 and 50/45/5, were prepared by the thin-film hydration method according to a previously reported procedure. 18 Briefly, the lipids were dissolved in chloroform in a rotary bottle (total lipids 3.6 µmol). After evaporation, a thin lipid film was formed and further dried under a vacuum for eight hours to remove residual solvent.
The dry lipid film was hydrated for eight hours using 2.4 mL of sterile phosphate-buffered solution containing siRNA or FAM-labeled siRNA. The dispersion was sonicated for 10 minutes and then passed 10 times through a polycarbonate membrane (100 nm pore size) using an extruder (Avanti Polar Lipids, Mini-Extruder, Birmingham, AL). The total lipid concentration of the prepared liposomes was 1.5 mM.

Characterization of siRNA-loaded liposomes

Particle size

The prepared siRNA-loaded liposomes (1.5 mM) were diluted to 1 mM using an appropriate volume of sterile phosphate-buffered solution (pH 7.3). The volume-average hydrodynamic diameter of the various siRNA-loaded liposomes was determined by a dynamic light scattering method using a particle analyzer (Horiba Instruments Limited, LB-500, Tokyo, Japan). A laser diode was used for dynamic light scattering, with the LB-550 software system from Horiba Instruments Limited. The apparatus consisted of a digital correlator and a signal processor incorporated in a computer. Measurements were made at 650 nm, with the detector (photomultiplier tube) at a 90° angle. The mean particle size and size distribution of the siRNA-loaded liposomes were obtained for each sample measurement at room temperature.

Zeta potential

The prepared siRNA-loaded liposomes (1.5 mM) were diluted to 1 mM using an appropriate volume of sterile phosphate-buffered solution (pH 7.3). The zeta potential of the siRNA-loaded liposomes was measured by determining the electrophoretic mobility using a zeta potential analyzer (Brookhaven Instruments, ZetaPlus, Long Island, NY) at room temperature. Triplicate measurements were performed for each sample.

Transmission electron microscopy

For transmission electron microscopy, the siRNA-loaded liposome solution (8 µL) was placed on a formvar/carbon film-coated copper grid obtained from Ted Pella (Redding, CA). After removal of excess sample with filter paper, the sample was air-dried for 10 minutes at room temperature. The siRNA-loaded liposomes were then visualized using 0.5% uranyl acetate negative staining for 15 seconds at room temperature and imaged on a transmission electron microscope (Hitachi, H-7650, Tokyo, Japan). Images were taken at 50,000× magnification.

Entrapment efficiency

For determination of the entrapment efficiency of the PEGylated liposomes and the RGD peptide-modified PEGylated liposomes, the amount of negative control siRNA in the external phase was quantified according to previously reported methods. 19,20 The liposomes were spun down at 20,000 × g for 30 minutes; the supernatants were diluted (if necessary), stained with SYBR Gold dye, and analyzed for siRNA content by measuring the fluorescence intensity with an enzyme-linked immunosorbent assay reader (Biochrom Ltd, Anthos 2010, Cambridge, UK) at 537 nm against a standard curve. The standard curve was established by analyzing a series of 200 µL siRNA reference solutions with the enzyme-linked immunosorbent assay reader and determining the fluorescence intensity at 537 nm. The regression equation was: y = 11.343x − 221.91, R² = 0.9992. The regression line had good linearity over the range from 31.25 to 2000 ng/mL of siRNA.
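As a minimal illustration of how such a standard curve is applied in practice, the sketch below inverts the stated regression (y = 11.343x − 221.91, with y the fluorescence intensity at 537 nm and x the siRNA concentration in ng/mL) to convert measured supernatant fluorescence into free-siRNA concentration. The dilution factor and the example readings are hypothetical placeholders, not values from this study.

import numpy as np

# Standard curve reported above: y = 11.343 * x - 221.91
SLOPE, INTERCEPT = 11.343, -221.91
LINEAR_RANGE = (31.25, 2000.0)  # ng/mL, stated range of good linearity

def sirna_conc_from_fluorescence(intensity, dilution_factor=1.0):
    """Invert the standard curve and correct for any supernatant dilution."""
    conc = (intensity - INTERCEPT) / SLOPE
    if not (LINEAR_RANGE[0] <= conc <= LINEAR_RANGE[1]):
        raise ValueError("Reading outside the validated linear range; re-dilute the sample.")
    return conc * dilution_factor

# Hypothetical triplicate readings for one supernatant, diluted 2-fold:
readings = [1350.0, 1402.5, 1377.1]
concs = [sirna_conc_from_fluorescence(y, dilution_factor=2.0) for y in readings]
print(f"free siRNA: {np.mean(concs):.1f} ± {np.std(concs, ddof=1):.1f} ng/mL")

The free-siRNA amount recovered in this way feeds directly into the entrapment-efficiency formula given below.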
The entrapment efficiency of the siRNA-loaded liposomes was calculated relative to the amount of siRNA used for preparation of the initial mixture as follows: entrapment efficiency (%) = (siRNA_t − siRNA_f)/siRNA_t × 100, where siRNA_t is the total amount of siRNA used for preparation of the initial mixture and siRNA_f is the amount of free siRNA recovered in the supernatant. All measurements were done in triplicate.

ARPE-19 cell cultures

ARPE-19 cells were obtained from the American Type Culture Collection and cultured in a 1:1 mixture of Dulbecco's modified Eagle's medium (DMEM) and Ham's F12 with 15 mM HEPES buffer, 2 mM L-glutamine, 56 mM sodium bicarbonate, and 10% fetal bovine serum. The cells were maintained at 37 °C in a humidified atmosphere with 5% CO2. 21 Cells at passages 25 to 32 were used in the experiments, and seeding cells were counted in a hemocytometer with Trypan blue staining.

Liposomal cytotoxicity

ARPE-19 cells were seeded in 96-well plates at a density of 1.6 × 10⁴ cells per well and incubated for 18 hours. Before the MTS assay, the cells were washed with 100 µL of phosphate-buffered solution. The prepared liposomes (1.5 mM total lipid) were diluted to 51.2, 12.8, and 3.2 µM with serum-free culture medium, and the siRNA-loaded liposomes were added in 100 µL of serum-free culture medium. After four hours, the cells were washed with 100 µL of phosphate-buffered solution, and 100 µL of culture medium was added. Next, 20 µL of MTS was added to the cells, which were incubated for two hours at 37 °C. An enzyme-linked immunosorbent assay reader, set at 490 nm, was used to read the absorbance of formazan in each well to determine the quantity of mitochondrial dehydrogenases in the viable cells. The percent cell viability was calculated as: cell viability (%) = [(ABS_sample − ABS_blank)/(ABS_control − ABS_blank)] × 100, where ABS_sample, ABS_control, and ABS_blank represent the absorbance of wells exposed to the liposomal dispersions, treated with serum-free culture medium, and treated with serum-free culture medium but without cells, respectively. 22

Confocal laser scanning microscopy

The ARPE-19 cells were seeded in six-well plates in DMEM medium at a density of 2 × 10⁵ cells per well and incubated for 24 hours to achieve 50% confluence. Before the uptake study, the culture medium was removed and the cells were washed with phosphate-buffered solution. The phosphate-buffered solution was then removed, and the prepared FAM-siRNA-loaded liposomes, in 1 mL of serum-free culture medium, were added. After four hours, the cells were washed three times with phosphate-buffered solution. The cells were then fixed with 3.7% formaldehyde for 10 minutes and washed three times with 1 mL of phosphate-buffered solution. Finally, cellular permeability was increased by adding 0.1% Triton for five minutes, followed by washing three times with 1 mL of phosphate-buffered solution. The cells were incubated with Hoechst 33342 for five minutes, which was then replaced by BODIPY phalloidin for 20 minutes. Confocal laser scanning microscopy (Jeol Ltd, Leica TCS SP5, Tokyo, Japan) was performed to obtain optical images of the distribution of FAM-siRNA in the ARPE-19 cells. Images from the bottom of the coverslip to the top of the cells were recorded by confocal laser scanning microscopy (CLSM). The number of speckled green fluorescence spots (FAM-siRNA) located within the cytoplasm was counted from the Z-series images at every 1 µm of depth.
To determine the amount of FAM-siRNA localized within the cells, the number of speckled green fluorescence spots counted was divided by the total cell number observed in each image. The figure plots the distance along the Z-axis from the bottom of the coverslip versus the number of speckled green fluorescence (FAM-siRNA) spots counted in the cytoplasm of each cell.

Flow cytometry

The ARPE-19 cells were seeded in six-well plates in DMEM medium at a density of 3 × 10⁵ cells per well and incubated for 24 hours to achieve 75% confluence. Before the uptake study, the culture medium was removed and the cells were washed with phosphate-buffered solution. FAM-siRNA-loaded liposomes in 1 mL of serum-free culture medium were then added. After four hours, the cells were washed three times with phosphate-buffered solution, detached using 0.05% trypsin/0.02% EDTA, washed with phosphate-buffered solution, and resuspended in 1 mL of phosphate-buffered solution for the flow cytometric assay. FAM-siRNA-loaded liposome uptake was measured using a flow cytometer (Becton Dickinson, FACScan flow cytometer, Heidelberg, Germany). The ARPE-19 cells were analyzed at 488 nm excitation with a 530 nm band-pass filter in the emission path. Forward and side light scatter was used to gate the desired scatter events of normal cells, dead cells, and cell debris.

Previous studies have shown that ARPE-19 overexpresses various integrins and serves as an in vitro cell model for gene delivery. 23,24 In order to confirm that receptor-bearing cells take up the RGD-modified PEGylated liposome, ARPE-19 cells were incubated with FAM-siRNA-loaded 1 mol% RGD-PEGylated liposomes at 37 °C. To determine which specific receptors were involved in the uptake of the 1 mol% RGD-PEGylated liposomes by ARPE-19, the cells were pretreated with antibodies that block the αVβ3 (MAB1976), α5β1 (MAB1969), and α5 (CBL497) integrins. All antibodies were used at 5 µg/mL. 25 The ARPE-19 cells were seeded in six-well plates in DMEM medium at a density of 3 × 10⁵ cells per well and incubated for 24 hours to achieve 75% confluence. Before treatment with the FAM-siRNA-loaded liposomes, the cells were preincubated separately with the αVβ3 antibody, α5 antibody, or α5β1 antibody for one hour, and then the FAM-siRNA-loaded liposomes were added. After two hours, the cells were washed three times with phosphate-buffered solution, detached using 0.05% trypsin/0.02% EDTA, washed with phosphate-buffered solution, and resuspended in 1 mL of phosphate-buffered solution for the flow cytometry assay.

Statistical analysis

The results are expressed as the mean ± standard deviation. The Student's t-test was used to determine the statistical significance of comparisons between two means. Differences were considered significant at P < 0.05.

Synthesis and identification of DSPE-PEG-RGD

Under reaction conditions of 12 hours at 4 °C, pH 8, and gentle mixing, DSPE-PEG(2000)-RGD (molecular weight 3741.69 Da) was successfully synthesized from DSPE-PEG(2000) maleimide (molecular weight 2922.79 Da) and thiolated RGD peptide (molecular weight 818.9 Da) by a Michael addition reaction between the activated maleimide and a thiol group (Figure 1A). Formation of the desired compound was further confirmed by MALDI-TOF mass spectrometry.
Peaks of the parental DSPE-PEG(2000) maleimide completely vanished and shifted to the right, to regions offset by approximately the molecular weight of the thiolated RGD peptide (Figure 1B).

Characterization of siRNA-loaded liposomes

The characterization of the liposomes, including particle sizes, zeta potential, and entrapment efficiency, is summarized in Table 1. The mean particle sizes for the 1 mol% and 5 mol% PEGylated liposomes with siRNA were 129.7 nm and 147.2 nm, respectively. The mean particle size increased slightly with increasing molar ratio of the PEGylated lipids. In addition, the RGD-PEGylated liposomes with siRNA had larger mean particle sizes than the PEGylated liposomes. The transmission electron microscopic images of the 1 mol% PEGylated liposomes and 1 mol% RGD-PEGylated liposomes showed a particle size distribution similar to that measured by the dynamic light scattering method (Figures 2A and B). The siRNA-loaded 1 mol% PEGylated liposome had the highest positive zeta potential (32 ± 1.3 mV) of all the liposomes studied. When a high molar ratio of DSPE-PEG(2000) carboxy was incorporated, the zeta potential was reduced to 25.7 ± 1 mV for the 5 mol% PEGylated liposomes. In addition, incorporation of different molar ratios of the synthesized lipid, DSPE-PEG(2000)-RGD, reduced the zeta potential to 24.9 ± 1.5 mV for the 1 mol% RGD-PEGylated and to 17.3 ± 0.6 mV for the 5 mol% RGD-PEGylated liposomes. The PEGylated liposomes and RGD-PEGylated liposomes showed high siRNA entrapment efficiencies of more than 96% by the thin-film hydration method.

Liposome cytotoxicity

The cytotoxicity of the liposome formulations was evaluated in ARPE-19 cells at total liposomal lipid concentrations of 3.2, 12.8, or 51.2 µM; cells incubated with serum-free medium alone, with no liposome treatment, served as a control with cell viability set at 100% (Figure 3). The viability of ARPE-19 cells treated with 12.8 µM of the 1 or 5 mol% RGD-PEGylated liposomal suspensions remained essentially unchanged. However, relative cell viability was significantly reduced when the cells were treated with 12.8 µM of the 1 mol% and 5 mol% PEGylated liposomes. There was no cytotoxicity among the groups treated with 3.2 µM, but significant cytotoxicity was observed at 51.2 µM. These results show that RGD peptide modification made the cationic PEGylated liposomes less cytotoxic than the unmodified cationic PEGylated liposomes.

CLSM imaging study

The distribution of FAM-siRNA-loaded liposomes was studied to determine their intracellular distribution with or without DSPE-PEG-RGD; Figure 4 shows the confocal images.

Liposome uptake by ARPE-19 cells

The FAM-siRNA uptake intensity in the ARPE-19 cells was studied by flow cytometry to compare the PEGylated liposomes and the RGD-PEGylated liposomes. Comparison of the formulations with and without the RGD modification showed that the 1 mol% and 5 mol% RGD-PEGylated liposomes achieved FAM-siRNA delivery efficiencies about 3.6 times and 4.2 times greater than the 1 mol% and 5 mol% PEGylated liposomes, respectively (Figures 6B and C). For the 1 mol% and 5 mol% PEGylated liposomes, the FAM-siRNA delivery efficiency decreased as the percentage of PEGylated lipids in the formulation increased. For the 1 mol% and 5 mol% RGD-PEGylated liposomes, the FAM-siRNA delivery efficiency likewise decreased as the percentage of DSPE-PEG-RGD in the formulation increased.
Similar trends were observed in the results of the CLSM imaging study. These findings might be explained by the RGD peptide modification of the PEGylated liposomes enhancing the delivery efficiency of the cargo gene, FAM-siRNA, into the ARPE-19 cells. In the forward and side light scatter images, cell debris was clearly observed for the 5 mol% PEGylated liposomes but was not obvious for the 5 mol% RGD-PEGylated liposomes. These findings suggest that the cationic PEGylated liposome with a higher percentage of PEGylated lipids might have damaged cell integrity (Figure 6A). However, RGD peptide modification of the cationic PEGylated liposomes appears to have reduced this damage. These findings suggest that the PEGylated lipid interferes with cell integrity and that DSPE-PEG-RGD is less prone to induce cell damage.

Antibody blocking assay

The uptake of FAM-siRNA-loaded 1 mol% RGD-PEGylated liposomes was compared between cells with and without antibody pretreatment; the group without antibody pretreatment was used as a control, with the FAM-siRNA uptake intensity set at 100%. With pretreatment by anti-αVβ3, -α5β1, or -α5 antibody, the uptake percentage decreased to 23.2% (P < 0.01), 31.9% (P < 0.001), and 34.1% (P < 0.001), respectively (Figure 7). When pretreated with a mixture of the -αVβ3, -α5β1, and -α5 antibodies, the uptake percentage decreased to 49.5% (P < 0.001) compared with the control without antibody pretreatment. These data further confirmed that the RGD-PEGylated liposomes were internalized by integrin receptor-mediated endocytosis in ARPE-19 cells.

Discussion

Because it is difficult for free siRNA to enter cells, lipid-based carriers are often used to improve the cellular uptake of siRNA. 26 Previous studies have reported that cationic liposomes can be used for gene delivery; they are easy to prepare, reasonably inexpensive, and can achieve ideal transfection. 27,28 Currently, targeted liposome design is of great interest for the transport of genes into desired target cells; this approach has the potential to produce greater and more selective therapeutic activity.
This approach involves the coupling of targeting moieties capable of recognizing target cells, binding to them, and triggering internalization of the liposomes and the encapsulated gene. A variety of targeting liposomes can be developed for this purpose. For example, an antibody-mediated liposome, or immunoliposome, can be constructed to target HER2-overexpressing tumors using anti-HER2 liposomes. Other ligands, such as folate, transferrin, RGD peptide, and the epidermal growth factor receptor, have also been studied. 29 However, previous studies have usually prepared the nanoparticles first and then added a targeting molecule for conjugation on the surface of the nanoparticle. 17,29,30 This method is limited by the difficulty of controlling the number of targeting moieties conjugated to the constructed nanoparticles. In this report, synthesis of the ligand-conjugated lipid, DSPE-PEG-RGD, was performed by a Michael addition reaction between the activated maleimide and thiol groups prior to preparation of the liposomes (Figure 1A). DSPE-PEG-RGD was easy to synthesize, allowing precise addition of the targeting molecule in a variety of liposome preparations. In this study, MALDI-TOF mass spectrometry, a powerful tool for identification of a compound's molecular weight, was used to determine the molecular weights of DSPE-PEG maleimide and DSPE-PEG-RGD. MALDI-TOF mass spectrometry showed that the DSPE-PEG-RGD peaks were all shifted by approximately 818.9 Da, the molecular weight of the thiolated RGD peptide, relative to the corresponding DSPE-PEG maleimide peaks (Figure 1B). After the reaction, the parental DSPE-PEG maleimide peaks were almost gone and DSPE-PEG-RGD peaks were observed instead. These results showed that the DSPE-PEG maleimide reacted quantitatively with one thiolated RGD peptide.

The PEG moiety of the liposome has been demonstrated to make it more stable, so that it remains in the circulation for a longer period in vivo, because the PEG moiety provides efficient steric hindrance that prevents the binding of plasma proteins to the liposomes and thereby also interferes with uptake by the reticuloendothelial system. 10,29,31 In this study, the zeta potential of the siRNA-loaded 1 mol% PEGylated liposome was highly positive (32 ± 1.3 mV). However, when a high molar ratio of DSPE-PEG was incorporated, it was reduced to 25.3 ± 1 mV for the 5 mol% PEGylated liposomes. Incorporating different molar ratios of DSPE-PEG-RGD resulted in further reductions of the zeta potential, to 24.9 ± 1.5 mV for the 1 mol% RGD-PEGylated liposomes and to 17.3 ± 0.6 mV for the 5 mol% RGD-PEGylated liposomes (Table 1). These findings show that the presence of a lipid-bound PEG moiety shielded the positive charge of the cationic lipid, and the presence of an RGD peptide at the end of the PEG moiety shielded the positive charge further. For the 1 mol% PEGylated liposomes and 1 mol% RGD-PEGylated liposomes, the hydrodynamic size was in the range of 129.7 to 156.4 nm, with narrow size distributions of 51 and 37.5 nm, respectively. The presence of the RGD peptide on the liposome surface slightly increased the size compared with liposomes without the RGD peptide. Theoretically, for a 100 nm liposomal particle modified with DSPE-PEG2000, the PEG moiety is arranged in the mushroom mode at <4 mol% DSPE-PEG2000, is in a phase transition at 4 to 8 mol% modification, and is in the brush mode at >8 mol% PEGylation. 32 The effect of grafted PEG on liposome size is mainly a change in the spatial structure of the DSPE-PEG molecule, which depends on whether the grafted PEG adopts the mushroom or the brush configuration. 33 The transmission electron microscopic assay clearly showed images of small unilamellar vesicles comprising a lipid bilayer, with diameters in the range of 50-250 nm and a narrow size distribution (Figure 2). Small unilamellar vesicles contain a large aqueous core and are preferentially used for encapsulating water-soluble drugs. 10 These findings indicate that the DSPE-PEG-RGD incorporated into the liposome produced sterically stabilized liposomes, as did the PEGylated lipid. In this study, the siRNA entrapment efficiencies were all higher than 96%. This was primarily due to the electrostatic interaction between the dimethylaminoethane of the DC-cholesterol and the phosphate groups of the siRNA at a +/− charge ratio of 4, consistent with previously reported findings from Zhang et al. 34 Liposomes are composed of phospholipids and cholesterols, which are components of the cell; these compounds are biocompatible and nontoxic particulates that can be used to construct gene carriers.
However, some components have been reported to be associated with cytotoxicity. For example, positively charged liposomes containing DC-cholesterol showed increased toxicity when their dimethylaminoethane content was increased; 35 the liposome formulations therefore included a lipid bearing a PEG moiety to reduce the surface charge density of the cationic lipid. However, as the PEG molecular weight of a PEGylated nanoparticle increases, the cytotoxicity also increases. 36 In this study, the cytotoxicity caused by the RGD-PEGylated liposome was lower than that of the PEGylated liposome; these findings suggest that RGD peptide modification reduced the toxicity of the PEG portion of the liposome structure. With the concentration of the RGD-PEGylated liposome in the culture medium kept below 12.8 µM, the biosafety of the delivery system in ARPE-19 cells was improved (Figure 3). Hence, this safe concentration was used in the subsequent siRNA delivery assay.

The liposomal surface charge is considered one of the most important parameters governing the cellular uptake of nanoparticles, which occurs through electrostatic interactions between oppositely charged cells and liposomes. 37,38 PEGylated liposomes decreased the uptake of the liposome and its cargo gene, FAM-siRNA (Figures 4A and C); this resulted not only from the lower positive charge on the liposome surface, but also from the ability of the PEG moiety of the liposome to prevent contact with the cell surface. These properties minimize nonspecific binding of liposomes to the cell surface. 39,40 However, in order to accumulate more liposomal siRNA in the retinal pigment epithelial cells, with the goal of producing greater and more selective therapeutic activity, the use of actively targeted liposomes bearing the RGD peptide has been suggested. This involves coupling RGD moieties capable of recognizing and binding to the integrin receptors of the retinal pigment epithelial cells and then inducing uptake of the liposome and its loaded siRNA. Expression of α5β1 integrins on the basolateral side of retinal pigment epithelial cells has been characterized and shown to be important for attachment of cells to Bruch's membrane, while expression of the αVβ3 and αVβ5 integrins on the apical side maintains diurnal phagocytosis of the shed outer segment fragments of the photoreceptors. 14 An RGD-containing peptide can be used as a specific ligand for integrin αVβ3, αVβ5, or α5β1. Therefore, the synthesized DSPE-PEG-RGD was used in this study to formulate targeted liposomes by covalent binding of the active maleimide at the PEG end with the specific thiol group of the ligand.

For siRNA to exert its interference mechanism, it must be successfully delivered into the cytoplasm of target cells. In further analysis of the CLSM images, comparing the 1 mol% RGD-PEGylated liposomes with the 1 mol% PEGylated liposomes, the total number of 1 mol% RGD-PEGylated liposomes localized within the cells was greater than for the 1 mol% PEGylated liposomes (Figures 5A and B). Therefore, a large amount of the FAM-siRNA-loaded 1 mol% RGD-PEGylated liposomes was confirmed to be effectively delivered into the cytoplasm of the ARPE-19 cells. However, siRNA delivery is a multistep process with several potential problems, including biological barriers to penetration, cellular internalization, endosomal escape, and cytoplasmic trafficking.
5,34 In the future, the knockdown effect of targeted liposomes encapsulating a functional siRNA and the in vivo therapeutic effects of targeted delivery will require further evaluation. Previous reports have shown that a suitable size and a positive zeta potential are essential for enhanced delivery of particles into cells. 41 The results of this study show that the RGD-PEGylated liposomes had a larger size and a smaller zeta potential, yet increased siRNA delivery efficiency in ARPE-19 cells, compared with the PEGylated liposomes (Figure 6). Further demonstrating that binding was mediated by RGD-associated integrin receptors, the data from the integrin -αVβ3, -α5β1, and -α5 antibody blocking assay showed that the RGD-modified PEGylated liposomes were internalized by integrin receptor-mediated endocytosis in the retinal pigment epithelial cells. However, the fluorescence of the FAM-siRNA-loaded 1 mol% RGD-PEGylated liposomes was not completely inhibited by pretreating the cells with blocking antibodies against the αVβ3, α5β1, and α5 integrins (Figure 7). The integrin family includes not only αVβ3 and α5β1, but also α8β1, αIIbβ3, αVβ6, and αVβ8, which recognize RGD-containing peptides. 42

Conclusion

The results of this study showed successful synthesis of the functional lipid DSPE-PEG-RGD, which could be incorporated into a liposome formulation prepared and loaded with siRNA for effective delivery into ARPE-19 cells. The prepared siRNA-loaded 1 mol% RGD-PEGylated liposomes demonstrated a positive charge (24.9 ± 1.5 mV), a nanosize of 156.4 nm (with a narrow distribution width of 37.5 nm), high entrapment efficiency (98.83% ± 0.01%), low cytotoxicity, and a high level of siRNA delivery efficiency compared with the other investigated liposome vehicles. The antibody blocking assay results confirm that receptor-mediated endocytosis was involved in the cellular uptake of the RGD-PEGylated liposomes and the loaded siRNA. These results provide a methodology for siRNA-specific delivery to retinal pigment epithelial cells using RGD-modified PEGylated liposomes.
Comparative Loss Evaluation of Si IGBT Versus SiC MOSFET (Silicon Carbide) for a 3-Phase SPWM Inverter

Introduction

In 1992, the UMOSFET was the first SiC MOSFET technology to be developed, by Cree. 1 Later, in 1997, the ACCUFET (accumulation-channel FET) was developed at NC State University by Dr. Baliga's group using a 6H-SiC DMOS geometry; it was the first high-voltage (350 V) planar vertical SiC device (ACCUFET), with a buried implanted region for shielding the gate oxide. 2 In 2001, a 2.4 kV 4H-SiC DiMOSFET with a specific on-state resistance of 42 mΩ·cm² was demonstrated by Cree. 3 In 2004, Dr. James Cooper's group at Purdue University developed 3 kV and 5 kV 4H-SiC UMOSFETs with junction termination extension (JTE) and trench oxide protection. 4,5 Also in 2004, a 10 kV 4H-SiC device was developed by Cree. 6 A large effort has been put into SiC research and development in recent years, as the critical electric field of 4H-SiC is 8.2 times larger than that of Si. SiC devices therefore offer advantages in electric breakdown field, electron saturated drift velocity, thermal conductivity, and irradiation tolerance, making SiC suitable for high-voltage, high-temperature, and high-frequency operation combined with low power loss. 8,9 Consider the case of 22 kW inverters used to drive motors: they conventionally employ Si IGBTs that operate at maximum temperatures of 125 °C and must be mounted on a large heat sink. Because Si IGBTs produce more losses as the switching frequency increases and must be cooled by forced air or water, their performance is limiting, and SiC has therefore gained more attention.

Heat Sink

The higher melting point and wider band gap of SiC-based devices allow higher-temperature operation, enabling the use of a smaller heat sink compared to Si-based devices.

Device Count

The higher breakdown field results in a higher breakdown voltage per device, reducing the device count.
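To give a quantitative feel for why the higher critical field matters, the sketch below evaluates the textbook expression for the ideal drift-region specific on-resistance of a vertical unipolar device, R_on,sp = 4·V_B²/(ε_s·µ_n·E_c³), for Si and 4H-SiC. Both the expression and the material constants used (relative permittivity, electron mobility, critical field) are standard textbook approximations, not values taken from this paper.

# Ideal drift-region specific on-resistance:
#   R_on,sp = 4 * V_B^2 / (eps_s * mu_n * E_c^3)
EPS0 = 8.854e-14  # vacuum permittivity, F/cm

materials = {
    # name: (relative permittivity, electron mobility cm^2/Vs, critical field V/cm)
    "Si":     (11.7, 1350.0, 3.0e5),
    "4H-SiC": (9.7,   900.0, 2.5e6),  # critical field roughly 8x that of Si
}

V_B = 1200.0  # blocking voltage (V), matching the 1200 V devices considered here

for name, (eps_r, mu_n, e_c) in materials.items():
    r_on_sp = 4.0 * V_B**2 / (eps_r * EPS0 * mu_n * e_c**3)
    print(f"{name:7s} ideal R_on,sp at {V_B:.0f} V: {r_on_sp*1e3:8.3f} mΩ·cm²")

With these approximate constants, the ideal 4H-SiC on-resistance comes out two to three orders of magnitude below that of Si at the same blocking voltage, which is the underlying reason a SiC MOSFET can match the voltage class of a Si IGBT with far lower conduction and switching losses.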
Efficiency

The wider band gap, higher breakdown field, and higher thermal conductivity are the reasons SiC devices have lower losses and higher efficiency.

Speed

The higher thermal conductivity and higher saturation carrier velocity allow a smaller device size and high-speed operation.

System Specification

Two 3-phase SPWM inverters are designed, one based on Si IGBTs and one on SiC MOSFETs. Figure 1 shows the circuit of the Si IGBT-based two-level 3-phase SPWM inverter, and Figure 2 shows the circuit of the SiC MOSFET-based two-level 3-phase SPWM inverter. For the Si IGBT 3-phase inverter, six 1200 V, 40 A discrete Si IGBTs are considered. The inverter is operated at switching frequencies of 5 kHz, 10 kHz, and 15 kHz and is controlled using the SPWM technique. For ease of evaluation, the power factor and the modulation index are both taken as unity; the system specification is given in Table 1.

Device Parameters

The device characteristics were taken from the datasheets of the respective Si IGBT and SiC MOSFET. Figure 3 shows the forward characteristics of the Si IGBT used for the evaluation.

Loss Calculation

The conversion losses in the inverter can be divided into two categories.

Conduction Losses

The conduction losses are due to the device on-state voltage drop. They are calculated by averaging the conduction losses over each switching cycle, as in the equation below:

P_C = (1/T) ∫ V_f(ωt) · i(ωt) dt

where P_C is the total device conduction loss, T is the switching period, V_f(ωt) is the forward voltage of the device, and i(ωt) is the current flowing through the device during the conduction period. The value of V_f(ωt) is calculated as follows:

V_f(ωt) = V_f0 + r_f · i(ωt)

where V_f0 is the device forward voltage at no load and r_f is the device forward resistance. The values of V_f0 and r_f are extracted from the device characteristics provided in the manufacturers' datasheets, as shown in Figure 3.

Switching Losses

The switching losses are the sum of the turn-on and turn-off switching losses. They depend on the device characteristics, the switching frequency, and the device current. The switching energy is expressed as a function of the device current as:

E_sw(i) = E_on(i) + E_off(i)

Evaluation of Conversion Losses in Two-Level Converters

If the load current is assumed to be I_a(ωt) = I_m sin(ωt − θ), then the leg phase voltage is defined as V_a(ωt) = V_m sin(ωt), and the duty cycle for the device switches is:

d(ωt) = (1 + M sin(ωt))/2

where M is the modulation index. The average and rms currents for IGBTs T1 and T2 are calculated from the load current and the duty cycle defined above:

I_avg = I_m (1/(2π) + (M cos θ)/8)
I_rms = I_m √(1/8 + (M cos θ)/(3π))

The average and rms currents for the lower freewheeling diode are similar to those of the upper IGBT device but of opposite sign, therefore:

I_D,avg = I_m (1/(2π) − (M cos θ)/8)
I_D,rms = I_m √(1/8 − (M cos θ)/(3π))

The freewheeling diode switches on and off very quickly compared to the IGBT, so its switching losses are relatively small compared to those of the IGBT and are not considered in the calculation. The switching losses for the IGBT are calculated as:

P_sw = f_sw (E_on + E_off) (1/π)(I_m/I_ref)

where f_sw is the switching frequency and I_ref is the reference current at which the switching energies are specified in the datasheet.

Comparisons

By substituting the values of the device parameters into the loss evaluation formulae, the respective loss values have been tabulated in Tables 4, 5, 6, and 7 for switching frequencies of 5 kHz, 8 kHz, 10 kHz, and 15 kHz. The modelled Si IGBT-based two-level three-phase SPWM inverter was simulated and compared with the SiC MOSFET-based two-level three-phase SPWM inverter using OrCAD PSpice as the simulation tool. Figures 5 and 6 show the two scenarios, one with the Si IGBT and one with the SiC MOSFET, at a switching frequency of 10 kHz.
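A short numerical sketch of the analytical loss model above may help make it concrete. The device parameters used here (V_f0, r_f, combined switching energy, and reference current) are hypothetical placeholders of roughly the right order of magnitude for a 1200 V / 40 A class device, not the datasheet values used in this study.

import math

def igbt_losses(V_f0, r_f, E_on_off, I_ref, I_m, M, cos_theta, f_sw):
    """Per-device conduction + switching losses for one IGBT in a
    two-level SPWM leg, using the closed-form currents given above."""
    i_avg = I_m * (1.0 / (2.0 * math.pi) + M * cos_theta / 8.0)
    i_rms_sq = I_m**2 * (1.0 / 8.0 + M * cos_theta / (3.0 * math.pi))
    p_cond = V_f0 * i_avg + r_f * i_rms_sq            # P = Vf0*Iavg + rf*Irms^2
    p_sw = f_sw * E_on_off * (I_m / I_ref) / math.pi  # linear-in-current scaling
    return p_cond, p_sw

# Unity power factor and M = 1, as assumed in the paper's evaluation:
for f_sw in (5e3, 8e3, 10e3, 15e3):
    p_c, p_s = igbt_losses(V_f0=0.9, r_f=0.03, E_on_off=4e-3,
                           I_ref=40.0, I_m=40.0, M=1.0, cos_theta=1.0, f_sw=f_sw)
    print(f"{f_sw/1e3:>4.0f} kHz: conduction {p_c:5.1f} W, switching {p_s:6.1f} W")

Because the conduction loss is frequency-independent while the switching loss scales linearly with f_sw, the loss gap between the two devices widens with frequency, which mirrors the 29-42% reductions reported for the SiC MOSFET in the conclusion below.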
Power losses were compared in OrCAD PSpice at the various switching frequencies; the average power loss was measured across each switch over 20 ms and compared.

Conclusion

In sum, a 22 kW SiC MOSFET-based 3-phase SPWM inverter system was designed and compared with a Si IGBT-based three-phase SPWM inverter at switching frequencies of 5 kHz, 8 kHz, 10 kHz, and 15 kHz. It was observed that the losses of the 3-phase SPWM SiC MOSFET inverter were 29% lower at a 5 kHz switching frequency, 34% lower at 8 kHz, 37% lower at 10 kHz, and 42% lower at 15 kHz than those of the 3-phase SPWM inverter based on the Si IGBT. This shows that the Si IGBT can be replaced by the SiC MOSFET in a 3-phase SPWM inverter system for better efficiency. The PSpice simulations showed similar results, confirming that the SiC MOSFET-based three-phase SPWM inverter is more efficient than the Si IGBT-based one.
Use of a language intervention to reduce vaccine hesitancy Vaccine hesitancy is a major global challenge facing COVID-19 immunization programs. Its main source is low public trust in the safety and effectiveness of the vaccine. In a preregistered experimental study, we investigated how using a foreign language when communicating COVID-19 vaccine information influences vaccine acceptance. Hong Kong Chinese residents (N = 611) received COVID-19 vaccine information either in their native Chinese or in English. English increased trust in the safety and effectiveness of the vaccine and, as a result, reduced vaccine hesitancy. This indicates that language can impact vaccine attitudes and demonstrates the potential of language interventions as a low-cost, actionable strategy to curtail vaccine hesitancy amongst bilingual populations. Language interventions could contribute towards achieving the United Nations Sustainable Development Goal of health and well-being. Earlier work found greater acceptance of novel products, such as recycled wastewater, when these were described in a foreign language rather than in the participants' native tongue 17. The foreign language promoted more positive feelings towards these products, which resulted in higher acceptance. Other studies have also shown that how people feel about novel products is driven by differences in trust 18,19. For example, enhanced social trust increased positive feelings and decreased negative feelings towards the novel avian flu vaccine, which, in turn, increased intentions to get the vaccine 18. Given that trust and feelings are closely related concepts 19, and that people particularly rely on trust when judging things that are novel to them 20, these results raise the possibility that communicating COVID-19 vaccine information in a foreign language might reduce mistrust in the vaccine and, therefore, decrease vaccine hesitancy. Hong Kong Chinese bilinguals provided an ideal opportunity to test this theory for a couple of reasons. First, COVID-19 vaccine hesitancy was relatively high in Hong Kong compared to other countries 10,12,21. Second, Chinese and English are official languages in Hong Kong and, hence, many government and healthcare resources are readily available in both languages. Therefore, the language manipulation represents an actionable intervention to reduce COVID-19 vaccine hesitancy in Hong Kong. To study the impact of foreign language use, we provided COVID-19 vaccine information to unvaccinated Hong Kong residents and randomly assigned them to receive the information either in their native Chinese or in English. We then asked them whether they intend to get the vaccine and how much they trust the vaccine, among other questions. Results The data, analysis script and materials are publicly available online on the Open Science Framework. The study design and number of participants were preregistered on www.AsPredicted.org. Descriptive data of vaccine hesitancy. Dovetailing with existing research, we found a high degree of COVID-19 vaccine hesitancy. Out of the 611 participants, only 36.0% (220) said they plan to get vaccinated, 45.2% (276) indicated that they were unsure, and 18.8% (115) indicated that they would not get vaccinated. Most importantly, vaccine hesitancy depended on language as we predicted.
The use of English reduced vaccine hesitancy, with more people saying they intend to get vaccinated in the English (39.9%) than in the Chinese (32.5%) condition, and fewer saying they are unsure in English (41.2%) than in Chinese (48.8%). Language did not impact the rate of outright refusal ("No": English: 18.9%, Chinese: 18.8%). In sum, the foreign language English helped turn hesitancy into acceptance (see Fig. 1). Predicting vaccine hesitancy by language. Our main interest was to examine whether language affects vaccine hesitancy. Hence, we grouped responses into ones that indicated no hesitancy (0 = Yes) and ones that indicated hesitancy or refusal (1 = Unsure or No). A binary logistic regression was conducted, examining the dichotomous COVID-19 vaccine hesitancy variable as a function of language (0 = Chinese, 1 = English), gender (0 = female, 1 = male), age, education, and general health. The logistic regression model was statistically significant, χ²(5) = 27.98, P < 0.001, and correctly classified 62.4% of the cases. As anticipated, language accounted for a significant proportion of the variance in vaccine hesitancy (B = 0.41, Wald = 5.43, P = 0.020, odds ratio = 0.67). Participants reading the COVID-19 vaccine information in English were less hesitant about getting the vaccine (Mean = 0.60) than were those reading the same information in their native Chinese (Mean = 0.68). Gender also accounted for significant variance in vaccine hesitancy (B = 0.54, Wald = 9.19, P = 0.002).
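For illustration, the reported regression can be reproduced along the following lines. This is a minimal sketch, not the authors' script (their data and analysis code are on OSF), and the file and column names here are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names, for illustration only.
# hesitant: 1 = "Unsure" or "No", 0 = "Yes"
# language: 0 = Chinese, 1 = English; gender: 0 = female, 1 = male
df = pd.read_csv("vaccine_survey.csv")

model = smf.logit(
    "hesitant ~ language + gender + age + education + general_health",
    data=df,
).fit()
print(model.summary())
print(np.exp(model.params))  # coefficients expressed as odds ratios
```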
Discussion Vaccine hesitancy presents a major barrier to improving health and well-being around the globe. We investigated how the language used to communicate COVID-19 vaccine information influences vaccine hesitancy. We provide evidence that the use of a foreign language increased trust in the safety and effectiveness of the vaccine compared to identical information communicated in the native language. In turn, the higher trust associated with the foreign language reduced COVID-19 vaccine hesitancy. These findings suggest that feelings of trust when making health decisions depend not only on the content of health information but also on the nature of the language used to communicate it. Studies have shown that foreign language use can influence judgment and decision making in different domains, including risk taking 22 and morality 23. Here we demonstrate that language can also influence an extremely consequential health decision, namely whether to get vaccinated during a pandemic. Research has further suggested that language is a powerful social cue that can influence trust 24. We showed that the language used in communications can influence trust and, as a result, the decision to vaccinate. Because English was a foreign language for our participants, they might have experienced more disfluency when comprehending the information in English than in Chinese. This could have influenced perceived trust in the vaccine and consequently willingness to be vaccinated. However, this account would predict the opposite of what we found: disfluency decreases trust rather than increases it 25. For example, in the "trust game", players show lower trust in people with disfluent names than in those with fluent names 26. Hence, if English communication is more disfluent, then it should have prompted lower trust in the vaccine and thus increased vaccine hesitancy. However, the opposite was true. The impact of language on vaccine hesitancy should depend on how it affects trust. Here we showed that when the native language context is associated with relatively low trust in the vaccine, the use of a foreign language increases trust and, therefore, reduces vaccine hesitancy. But in situations where the native language is associated with higher trust than the foreign language, we would expect the opposite. For example, consider the case of first-generation immigrant communities, such as Arab immigrants in Europe. For such communities, trust in information provided in their native tongue, Arabic, might be higher than in information provided in the local language. In such cases, communication through the foreign language would be predicted to lead to lower trust in the vaccine, thereby increasing vaccine hesitancy. In this sense, language interventions should consider local conditions by understanding how each language impacts trust. Other determinants of trust associated with a given language could be further explored in the service of reducing vaccine hesitancy. For example, it is possible that when one language of bilinguals has a higher status, people will trust the information provided in it more. In Hong Kong, English does not have higher status than Cantonese 27, which suggests that the effect we found is not a function of differential language status. Yet in situations where one language has a higher status than the other, the higher-status language may increase trust in the information, thereby reducing hesitancy. Limitations. This study examined a particular population, and a specific native-foreign language combination. It would be important to further investigate the generalizability of the current findings in different populations and with different native-foreign language combinations. Furthermore, the intervention that we identified only applies to bilingual populations. However, estimates show that more than half of the global population uses two or more languages in everyday life 28, suggesting that this language intervention could be widely actionable. In monolingual populations, other language interventions could be explored, such as using a dialect towards which people have positive attitudes, which might lead to higher trust 29 and reduced vaccine hesitancy. Conclusion We provide evidence for a low-cost and actionable language intervention to reduce vaccine hesitancy amongst Hong Kong Chinese residents. Such language interventions can influence other health decisions and extend to other cultures. However, the selection of language should consider the local conditions. In cases where the native language context is associated with low trust, the use of a foreign language can enhance trust and reduce vaccine hesitancy. In cases where the foreign language is associated with low trust, the native language should be preferred. Public health campaigns therefore could use such language interventions strategically to boost vaccination uptake and other beneficial preventative behaviors, such as cancer screening. Such language strategies can promote the United Nations Sustainable Development Goal 3 of "good health and well-being" 30. Methods The study design, sample size and materials were preregistered on www.AsPredicted.org. The data and study materials are available online in the Supplementary Materials and on https://osf.io/bdhvx/?view_only=02c423439eac40af8a9a57c580bb0588. All participants provided written informed consent prior to participation.
All procedures were approved by the Social and Behavioral Sciences Institutional Review Board at the University of Chicago. All methods were carried out in accordance with the Declaration of Helsinki. Materials and measures. Participants read about why they should get the vaccine, how the vaccine works, and possible side effects of the vaccine, in either their native Chinese or in English, adapted from the Hong Kong Department of Health (for the full descriptions see Table S1 in the Supplementary Materials available online). At the time of the study, Hong Kong residents could select the type of vaccine with which to be inoculated, so we did not mention a specific vaccine in the information 31. We measured intention to vaccinate by asking participants, "If a vaccine that protects you from COVID-19 disease was available free of charge, would you get it?" (Yes, Unsure, No). We also asked participants to evaluate their trust in the effectiveness and safety of the vaccine, "Overall, how much do you trust that the COVID-19 vaccine will be effective?" and "Overall, how much do you trust that the COVID-19 vaccine will be safe?" (1 = Do not trust at all, 2 = Hardly trust, 3 = Trust a little, 4 = Mostly trust, 5 = Completely trust). Furthermore, we collected a number of exploratory measures of secondary interest (see Supplementary Information for the full set). In order to ensure that all participants had sufficient proficiency in English to understand the materials, at the end of the study all participants, regardless of language condition, were asked to translate three key sentences from English to Chinese. Finally, participants were asked about their general health, age, gender, and education level.
2023-02-22T15:43:09.841Z
2022-01-07T00:00:00.000
{ "year": 2022, "sha1": "f8a99d4ce36e2e9516ef2b6a2927b52aaf99fc28", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-021-04249-w.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "f8a99d4ce36e2e9516ef2b6a2927b52aaf99fc28", "s2fieldsofstudy": [ "Linguistics", "Medicine" ], "extfieldsofstudy": [] }
50029747
pes2o/s2orc
v3-fos-license
On slope genera of knotted tori in 4-space In this note, we investigate genera for the slopes of a knotted torus in the 4-sphere analogous to the genus of a classical knot. We compare various formulations of this notion, and use this notion to study the extendable subgroup of the mapping class group of the knotted torus.

Introduction
In the classical knot theory, the genus of a knot in the 3-sphere is a basic numerical invariant which has been well studied. In this note, we investigate some analogous notions for the slopes of a knotted torus in the 4-sphere S^4. These reflect certain essential differences between knotted tori and knotted spheres. Similar phenomena arise in the case of knotted surfaces in S^4, but the discussion would require more general treatments. We focus on the torus case in this note for the sake of simplicity. A knotted torus in S^4 is a locally flat subsurface homeomorphic to the torus. Without loss of generality, we may fix a choice of marking (cf. Subsection 2.2); then throughout this note, a knotted torus in S^4 means a locally flat embedding:

K : T^2 ↪ S^4

from the torus to the 4-sphere. By slightly abusing the notation, we often write the image of K still as K. For any slope (i.e. an essential simple closed curve) c ⊂ K, it makes sense to define the genus:

g_K(c)

of c as the smallest possible genus of all the locally flat, orientable, compact subsurfaces F ↪ S^4 whose image bounds c and meets K exactly in c. The genus of a slope is clearly an isotopy invariant of the knotted torus, and indeed, it is invariant under extendable automorphisms. More precisely, if τ is an automorphism (i.e. an orientation-preserving self-homeomorphism up to isotopy) of T^2 that can be extended over S^4 as an orientation-preserving self-homeomorphism, then c and τ(c) must have the same genus for any slope c ⊂ K. It is clear that all such automorphisms form a subgroup:

E_K ≤ Mod(T^2)

of the mapping class group Mod(T^2), called the extendable subgroup with respect to K. See Section 3 for more details. A primary motivation of our study is to understand E_K with the aid of the slope genera. Natural as it is, the genus of a slope of a knotted torus is usually hard to capture. In contrast, two weaker notions yield much more interesting applications. One of them is called the singular genus of a slope c, denoted as g^⋆_K(c). It is defined by loosening the locally-flat-embedding condition on the bounding surface F above, only requiring F → S^4 to be continuous. Another is called the induced seminorm on H_1(T^2), denoted as ‖·‖_K. This is an analogue of the (singular) Thurston norm in the classical context. In Section 4, we prove an inequality relating the seminorms associated with the satellite construction, which is analogous to the classical Schubert inequality for knots in S^3. A simple observation at this point is that both the singular genus and the seminorm of a slope are group-theoretic notions, which can be rephrased in terms of the commutator length and the stable commutator length in the fundamental group of the exterior of the knotted torus, respectively (Remarks 3.3, 4.5). As an application of these results, we study braid satellites in Section 5. In particular, this allows us to obtain examples of knotted tori with finite extendable subgroups. In Section 6, we exhibit examples where the singular genus is positive for a slope with vanishing seminorm. This implies that the singular genus is strictly stronger than the seminorm as an invariant associated to slopes.
We also relate the vanishing of the singular genus for a slope c ⊂ K to the extendability of the Dehn twist τ_c ∈ Mod(T^2) along c in a stable sense (Lemma 6.2). Section 2 surveys results relevant to our discussion. A few questions related to slope genera and the extendable subgroups will be raised in Section 7 for further studies. Acknowledgement. The second author was partially supported by an AIM Five-Year Fellowship and NSF grant numbers DMS-1021956 and DMS-1103976. The fourth author was partially supported by grant No.10631060 of the National Natural Science Foundation of China. The authors are grateful to Seiichi Kamada for clarifying some points during the development of the paper and for very helpful guidance to the literature. The authors also thank David Gabai, Cameron Gordon, and Charles Livingston for suggestions and comments, and thank the referee for encouraging us to improve the structure of this paper.

Background
This section briefly surveys the history relevant to our topic in several aspects. We hope that it will supply the reader some context for our discussion. However, the reader may safely skip this part for the moment, and perhaps come back later for further references. We thank the referee for suggesting that we include some of these materials. 2.1. Genera of knots. For a classical knot k in S^3, one of the most important numerical invariants is its genus g(k), introduced by Herbert Seifert in 1935 [Se]. It is naturally defined as the smallest genus among that of all possible Seifert surfaces of k; recall that a Seifert surface of k is an embedded compact connected surface in S^3 whose boundary is k. In other words, if k is not the unknot, the smallest possible complexity of a Seifert surface is 2g(k) − 1 > 0. In 3-dimensional topology, a suitable generalization of this notion for any orientable compact 3-manifold M is the Thurston norm. It was introduced by William Thurston in 1986 [Th]. Thurston discovered that the smallest possible complexity of properly embedded surface representatives for elements of H_2(M, ∂M; Z) can be extended linearly and continuously over H_2(M, ∂M; R) to be a seminorm. It is actually a norm in certain cases, for example, if M is hyperbolic of finite volume. Thurston then asked if this notion coincides with the one defined similarly using properly immersed surfaces, which was later known as the singular Thurston norm. The question was answered affirmatively by David Gabai [Ga] using his Sutured Manifold Hierarchy. As an immediate consequence, it was made clear that there is only one notion of genus (or complexity) for classical knots, whether we consider connected or disconnected, properly immersed or embedded Seifert surfaces. Generally speaking, the genus of a knot is quite accessible. For a (p, q)-torus knot, where p, q are coprime positive integers, the genus is well known to be (p − 1)(q − 1)/2. For a satellite knot, the Schubert inequality yields a lower bound ĝ_p + |w| · g_c on the genus, in terms of the genus g_c of the companion knot, the genus ĝ_p of the desatellite knot, and the winding number w of the pattern [Sc1]. Furthermore, the genus of a knot is known to be algorithmically decidable [Sc2]. In fact, certifying an upper bound is NP-complete [AHT]. The genus can also be bounded and detected in terms of other more powerful algebraic invariants, such as knot Floer homology [OS] and twisted Alexander polynomials [FV]. 2.2. Knotting and marking.
One of the classical problems in topology is the Knotting Problem, namely, "are two embeddings of a given space into the n-space isotopic?" Usually, the given space is a connected closed m-manifold M where m < n, the embedding is locally flat, and the question can be made precise most naturally in the piecewise-linear or the smooth category. When the codimension is high enough, for example, if n = 2m + 1 and m > 1, all embeddings are isotopic to one another, so they 'unknot' in this sense [Wu]. However, below the stable range the Knotting Problem becomes very interesting, as we have already seen in the classical knot case. Regarding an embedding of M^m into R^n as a marking of its image, the Knotting Problem may be phrased as identifying or distinguishing knotting types (i.e. isotopy classes) of marked submanifolds. Somewhat more naturally, one can ask if two unmarked knotted submanifolds are isotopic to each other, or precisely, if two embeddings are isotopic up to precomposing an automorphism of M in the given category. Suppose we have already solved the Knotting Problem; then the latter question amounts to asking whether two markings differ only by an extendable automorphism, cf. [DLWY, Lemma 2.5]. Therefore, with or without marking makes no difference if M has a trivial mapping class group in the category, for example, in the cases of classical knots and 2-knots, but it does in general if the extendable subgroup is a proper subgroup of the mapping class group, cf. [DLWY, Hir1, Hir2, Mo]. We refer the reader to the survey [Sk] for the Embedding Problem and the Knotting Problem in general dimensions. 2.3. Knotted surfaces. The study of knotted surfaces can be suitably tagged as the mid-dimensional knot theory. In this transitional zone between the low-dimensional case and the high-dimensional (2-codimensional) case, we find both geometric-topological and algebraic-topological methods with interesting interaction. For extensive references on this topic, see the books [Kaw, Hil, CS, CKS, Kam3]. With an auxiliary choice of marking, let us write a knotted surface as a locally flat embedding K : F ↪ R^4, where F is a closed surface. We can visualize a knotted surface by drawing a diagram obtained via a generic projection of K onto a 3-subspace, or by displaying a motion picture of links in R^3, obtained via a generic line projection that is Morse when restricted to K, cf. [CS, KSS]. The fundamental group of the exterior is called the knot group of K, denoted as π_K. Similar to the classical case, π_K has a Wirtinger type presentation in terms of its diagram [Ya], and π_K can be isomorphically characterized by having an Artin type presentation, described in terms of 2-dimensional braids [Kam3]. Exteriors of knotted surfaces form an interesting family of 4-manifolds. The fundamental group of any such manifold is nontrivial, and it contains much information about the topology. For instance, it has been suspected for orientable knotted surfaces that having an infinite cyclic knot group implies unknotting, namely, that K bounds an embedded handlebody [HK]. By deep methods of 4-manifold topology, this has been confirmed for knotted spheres in the topological category [FQ, Theorem 11.7A]. In earlier studies of knotted surfaces, a frequent topic was to look for examples with prescribed properties of the knot group, such as required deficiency [Fo, Le, Kan], or required second homology [BMS, Go2, Lit2, Mae].
In some other constructions of particular topological significance, combinatorial group theory again plays an important role in the step of verification [Go1, Kam1, Liv1, Liv2]. Many of these constructions implement satellite knotting at various stages. The idea of such an operation is to replace a so-called companion knotted surface with another one that is embedded in the regular neighborhood of the former, often in a more complicated pattern. Basic examples of satellite knotting include the knot connected sum of knotted surfaces, and Artin's spinning construction [Ar], as well as its twisted generalizations [Ze, Lit1]. Generally speaking, satellite knotting leads to an increase of genus, under certain natural assumptions such as nonzero winding number. However, this can be avoided if we are only concerned with knotted spheres or tori (cf. Subsection 4.2). Like in the classical case, satellite knotting only changes the knot group by a van Kampen type amalgamation. Therefore, it is usually an approach worth considering if one wishes to maintain some control on the group level during the construction. As far as we are concerned, the first explicit formulation of the satellite construction of n-knots in the literature was due to Yaichi Shinohara, in his 1971 paper [Sh] about generalized Alexander polynomials and signatures; the satellite construction of knotted tori in R^4 first appeared in Richard Litherland's 1981 paper [Lit2], where he studied the second homology of the knot group.

Genera of slopes
In this section, we introduce the genus and the singular genus for any slope of a knotted torus K in S^4. We provide criteria about finiteness associated to the extendable subgroup E_K and the stable extendable subgroup E^s_K of Mod(T^2) in terms of these notions. 3.1. Genus and singular genus. Let K : T^2 ↪ S^4 be a knotted torus in S^4, i.e. a locally flat embedding of the torus into the 4-sphere. Let:

X_K = S^4 − K

be the exterior of K, obtained by removing an open regular neighborhood of K. Lemma 3.1. Let F^2_g be the closed orientable surface of genus g, and Y be a simply connected closed 4-manifold. Suppose K : F^2_g ↪ Y is a null-homologous, locally flat embedding. Write X = Y − K for the exterior of K in Y. Then ∂X is canonically homeomorphic to F^2_g × S^1, up to isotopy, such that the homomorphism H_1(F^2_g) → H_1(X) induced by including F^2_g as the first factor F^2_g × pt is trivial. In particular, every essential simple closed curve c ⊂ F^2_g bounds a locally flat, properly embedded, orientable compact surface S ↪ X_K with ∂S embedded as c × pt. Proof. This is well known, following from an easy homological argument. In fact, since K is null-homologous, the normal bundle of K in Y is trivial, so ∂X has a natural circle bundle structure p : ∂X → F^2_g over F^2_g which splits. The splittings are given by framings of the normal bundle, which are in natural bijection with the homomorphisms ι : H_1(F^2_g) → H_1(∂X) such that p_∗ ∘ ι is the identity. Using the Poincaré duality and excision, it is easy to see H^1(X) ≅ Z and H^1(X, ∂X) = 0. Thus the homomorphism H^1(X) → H^1(∂X) is injective, and the generator of H^1(X) induces a homomorphism α : H_1(∂X) → Z. It is straightforward to check that α sends the circle-fiber of ∂X to ±1, so the kernel of α projects isomorphically onto H_1(F^2_g) via p_∗. This gives rise to the canonical splitting ∂X = F^2_g × S^1. It follows clearly from the construction that H_1(F^2_g) → H_1(X) is trivial.
Moreover, if c × pt is an essential simple closed curve on K × pt, it is homologically trivial in X, so it represents an element [a_1, b_1] ⋯ [a_k, b_k] in the commutator subgroup of π_1(X). We take a compact orientable surface S′ of genus k with exactly one boundary component, and there is a map j : S′ → X sending ∂S′ homeomorphically onto c × pt. By a general position argument we may assume j to be a locally flat proper immersion, and doing surgeries at double points yields a locally flat, properly embedded, orientable compact surface S ↪ X bounded by c × pt. This allows us to make the following definition: Definition 3.2. Let K : T^2 ↪ S^4 be a knotted torus. For any slope, i.e. an essential simple closed curve, c ⊂ K, the genus:

g_K(c)

of c is defined to be the minimum of the genus of F, as F runs over all the locally flat, properly embedded, orientable, compact subsurfaces of X_K bounded by c × pt ⊂ ∂X_K (cf. Lemma 3.1). The singular genus:

g^⋆_K(c)

of c is defined to be the minimum of the genus of F, as F runs over all the compact orientable surfaces with connected nonempty boundary such that there is a continuous map F → X_K sending ∂F homeomorphically onto c × pt. Remark 3.3. Recall that for a group G and any element u in the commutator subgroup [G, G], the commutator length cl(u) of u is the smallest possible integer k ≥ 0 such that u can be written as a product of commutators [a_1, b_1] ⋯ [a_k, b_k], where a_i, b_i ∈ G, and i = 1, ⋯, k. Note that elements of [G, G] that are conjugate in G have the same commutator length. As indicated in the proof of Lemma 3.1, it is clear that the singular genus g^⋆_K(c) is the commutator length cl(c), regarding c as an element of the commutator subgroup of π_1(X_K). 3.2. Extendable subgroup and stable extendable subgroup. Let Mod(T^2) be the mapping class group of the torus, which consists of the isotopy classes of orientation-preserving self-homeomorphisms of T^2. Fixing a basis of H_1(T^2), one can naturally identify Mod(T^2) with SL(2, Z). We often refer to the elements of Mod(T^2) as automorphisms of T^2, and do not distinguish elements of Mod(T^2) from their representatives. For any knotted torus K : T^2 ↪ S^4, an automorphism τ ∈ Mod(T^2) is said to be extendable with respect to K if τ can be extended as an orientation-preserving self-homeomorphism of S^4 via K. Note that this notion does not depend on the choice of the representative of τ, cf. [DLWY, Lemma 2.4]. It is also clear that all the extendable automorphisms form a subgroup of Mod(T^2). Definition 3.4. For a knotted torus K : T^2 ↪ S^4, the extendable subgroup with respect to K is the subgroup of Mod(T^2) consisting of all the extendable automorphisms, denoted as:

E_K ≤ Mod(T^2).

The extendable subgroup E_K reflects some essential differences between knotted tori and knotted spheres (i.e. 2-knots) in S^4. For instance, it is known that E_K is always a proper subgroup of Mod(T^2), of index at least three ([DLWY], cf. [Mo] for the diffeomorphism extension case). Moreover, index three is realized by any unknotted embedding, namely, one which bounds an embedded solid torus S^1 × D^2 in S^4 ([Mo], cf. [Hir2] for the general case of trivially embedded surfaces). In [Hir1], E_K has been computed for the so-called spun T^2-knots and twisted spun T^2-knots. It is also clear that taking the connected sum with a knotted sphere in S^4 does not change the extendable subgroup.
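For quick reference, the main objects introduced so far can be collected in displayed form; this is only a restatement of Definition 3.2 and Remark 3.3 in the notation above, not a new result:

```latex
g_K(c) = \min\{\operatorname{genus}(F) : F \hookrightarrow X_K \text{ locally flat, properly embedded, orientable},\ \partial F = c \times \mathrm{pt}\},
\qquad
g^{\star}_K(c) = \min\{\operatorname{genus}(F) : F \to X_K \text{ continuous},\ \partial F \xrightarrow{\ \cong\ } c \times \mathrm{pt}\}
= \operatorname{cl}\bigl([c]\bigr) \quad \text{in } \pi_1(X_K).
```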
However, for a general knotted torus in S^4, the extendable subgroup E_K is poorly understood. In the following, we introduce a weaker notion called the stable extendable subgroup. From our point of view, the stable extendable subgroup is more closely related to the singular genera than the extendable subgroup is, cf. Subsection 6.2. Suppose K : T^2 ↪ S^4 is a knotted torus in S^4, and Y is a closed simply connected 4-manifold. There is a naturally induced embedding:

K[Y] : T^2 ↪ Y,

obtained by regarding Y as the connected sum S^4 # Y and embedding T^2 into the first summand via K. This is well defined up to isotopy, and we call K[Y] the Y-stabilization of K. An automorphism of T^2 is said to be stably extendable with respect to K if it is extendable with respect to some Y-stabilization K[Y]; note that if two automorphisms are extendable over stabilizations K[Y] and K[Y′] respectively, then both are extendable over K[Y # Y′]. This means stably extendable automorphisms also form a subgroup of Mod(T^2). Definition 3.5. For a knotted torus K : T^2 ↪ S^4, the stable extendable subgroup with respect to K is the subgroup of Mod(T^2) consisting of all the stably extendable automorphisms, denoted as: E^s_K ≤ Mod(T^2). Proposition 3.6. Let K : T^2 ↪ S^4 be a knotted torus. Then the following statements are true: (1) If the singular genus g^⋆_K(c) takes infinitely many distinct values as c runs over all the slopes of K, then the stable extendable subgroup E^s_K is of infinite index in Mod(T^2); (2) If there are at most finitely many distinct slopes c ⊂ K with the singular genus g^⋆_K(c) at most C for every C > 0, then the stable extendable subgroup E^s_K is finite. Remark 3.7. Hence the same holds for the extendable subgroup E_K. Using a similar argument, one can also show that the statements remain true when replacing g^⋆_K with g_K, and E^s_K with E_K. Proof. First observe that the singular genus of a slope is invariant under the action of a stably extendable automorphism; namely, if τ ∈ E^s_K, then g^⋆_K(c) = g^⋆_K(τ(c)) for every slope c ⊂ K. This is clear because, by the definition, τ extends over X′_K = X_K # Y as a homeomorphism τ̃ : X′_K → X′_K, for some simply connected closed 4-manifold Y. This induces an automorphism of π_1(X′_K) ≅ π_1(X_K), which preserves the commutator length of c, or equivalently, the singular genus g^⋆_K(c) (Remark 3.3). To see Statement (1), note that Mod(T^2) acts transitively on the space C of all the slopes on T^2. It follows immediately from the invariance of singular genera above that the cardinality of the value set of g^⋆_K is at most the index [Mod(T^2) : E^s_K]. To see Statement (2), suppose τ ∈ E^s_K. By the assumption and the invariance of the singular genus under τ, for any slope c ⊂ K, there are at most finitely many distinct slopes in the sequence c, τ(c), τ^2(c), ⋯. Thus for some integers k > l ≥ 0, τ^k(c) = τ^l(c), and hence τ^{k−l}(c) = c. As c is arbitrary, τ is a torsion element in Mod(T^2), so E^s_K is a subgroup of Mod(T^2) consisting purely of torsion elements. It follows immediately that E^s_K is a finite subgroup from the well-known fact that Mod(T^2) ≅ SL(2, Z) is virtually torsion-free. Indeed, the index of any finite-index torsion-free normal subgroup of Mod(T^2) yields an upper bound on the size of E^s_K.

4. Induced seminorms on H_1(T^2; R)
In this section, we introduce the seminorm ‖·‖_K on H_1(T^2; R) induced from any knotted torus K : T^2 ↪ S^4. This may be regarded as a generalization of the (singular) Thurston norm in 3-dimensional topology. We prove a Schubert-type inequality in terms of seminorms associated with satellite constructions. 4.1. The induced seminorm. There are various ways to formulate the induced seminorm, among which we shall take a more topological one. Suppose K : T^2 ↪ S^4 is a knotted torus in S^4.
We shall first define the value of ‖·‖_K on H_1(T^2; Z), then extend it linearly and continuously over H_1(T^2; R). Recall that for a connected orientable compact surface F, the complexity of F is defined as χ_−(F) = max{−χ(F), 0}. In general, for an orientable compact surface F = F_1 ⊔ ⋯ ⊔ F_s, the complexity of F is defined as:

χ_−(F) = χ_−(F_1) + ⋯ + χ_−(F_s).

For any γ ∈ H_1(T^2), identified as an element of H_1(∂X_K), there exists a smooth immersion of pairs (F, ∂F) ↬ (X_K, ∂X_K) such that F is a (possibly disconnected) oriented compact surface, and that ∂F represents γ. We define the complexity of γ as:

x(γ) = min χ_−(F),

where F runs through all the possible immersed surfaces as described above. The fact below follows immediately from the definition. Lemma 4.1. For any γ, δ ∈ H_1(T^2) and any integer n ≥ 0, we have x(nγ) ≤ n·x(γ) and x(γ + δ) ≤ x(γ) + x(δ). Definition 4.2. For any γ ∈ H_1(T^2), define:

‖γ‖_K = inf_{m ≥ 1} x(mγ)/m.

Lemma 4.3. The following statements are true: (1) ‖nγ‖_K = n ‖γ‖_K, for any γ ∈ H_1(T^2) and any integer n ≥ 0; (2) ‖γ + δ‖_K ≤ ‖γ‖_K + ‖δ‖_K, for any γ, δ ∈ H_1(T^2). Proof. This follows from Lemma 4.1 and some elementary arguments. For any ε > 0, there is some m > 0 such that ‖γ‖_K > x(mγ)/m − ε. By Lemma 4.1, x(mnγ) ≤ n·x(mγ), so ‖nγ‖_K ≤ x(mnγ)/m ≤ n·x(mγ)/m < n(‖γ‖_K + ε). Moreover, for any m > 0, x(mnγ)/m = n·x(mnγ)/(mn) ≥ n ‖γ‖_K, so we see ‖nγ‖_K ≥ n ‖γ‖_K. Letting ε → 0 in the former estimate, ‖nγ‖_K = n ‖γ‖_K. This proves the first statement. To prove the second statement, for any ε > 0, choose m_1, m_2 > 0 with x(m_1γ)/m_1 < ‖γ‖_K + ε and x(m_2δ)/m_2 < ‖δ‖_K + ε, and let m be a common multiple of m_1 and m_2; by Lemma 4.1, ‖γ + δ‖_K ≤ x(m(γ + δ))/m ≤ (x(mγ) + x(mδ))/m < ‖γ‖_K + ‖δ‖_K + 2ε. Provided Lemma 4.3, we can extend ‖·‖_K radially over H_1(T^2; Q), then extend continuously over H_1(T^2; R). This uniquely defines a seminorm:

‖·‖_K : H_1(T^2; R) → [0, +∞).

Definition 4.4. Let K : T^2 ↪ S^4 be a knotted torus, and c ⊂ T^2 be a slope. Then the seminorm ‖c‖_K is defined as ‖[c]‖_K, where [c] ∈ H_1(T^2) is the class of c with either orientation. Remark 4.5. Recall that for a group G and any element u in the commutator subgroup [G, G], the stable commutator length is:

scl(u) = lim_{n→∞} cl(u^n)/n,

where cl(·) denotes the commutator length (Remark 3.3). It is not hard to see that for any slope c ⊂ K, the seminorm ‖c‖_K equals scl(c), regarding c as an element of the commutator subgroup of π_1(X_K) (cf. [Ca, Proposition 2.10]). The observation below follows immediately from the definition and Proposition 3.6: Lemma 4.6. For any slope c ⊂ K, ‖c‖_K ≤ 2 g^⋆_K(c) − 1 whenever g^⋆_K(c) > 0; consequently, the statements of Proposition 3.6 remain true with the singular genus g^⋆_K replaced by the seminorm ‖·‖_K.
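In displayed form, the definition and its group-theoretic reformulation (Definition 4.2 and Remark 4.5 above) combine into:

```latex
\|\gamma\|_K = \inf_{m \ge 1} \frac{x(m\gamma)}{m} \quad (\gamma \in H_1(T^2)),
\qquad
\|c\|_K = \operatorname{scl}\bigl([c]\bigr) = \lim_{n \to \infty} \frac{\operatorname{cl}\bigl([c]^n\bigr)}{n} \quad \text{in } \pi_1(X_K).
```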
4.2. The satellite construction. The satellite construction for knotted tori is analogous to that of classical knots in S^3, cf. Subsection 2.3 for historical remarks. Fix a product structure of T^2 ≅ S^1 × S^1. We shall denote the standardly parametrized thickened torus as:

Θ^4 = S^1 × S^1 × D^2.

The standard unknotted torus T_std : T^2 ⊂ S^4 is the smoothly embedded torus such that T_std bounds two smoothly embedded solid tori D^2 × S^1 and S^1 × D^2 in S^4, respective to the factors. It is unique up to diffeotopy of S^4. Let K_c : T^2 ↪ S^4 be a knotted torus. There is a natural trivial product structure on a compact tubular neighborhood N(K_c) ≅ T^2 × D^2 of K_c, so that c × ∗ is homologically trivial in the complement X_{K_c} for any slope c ⊂ T^2. Thus there is a natural identification:

N(K_c) ≅ Θ^4,

up to isotopy, as we fixed the product structure on T^2. Definition 4.7. A pattern knotted torus is a smooth embedding K_p : T^2 ↪ Θ^4. Definition 4.8. Let K_c : T^2 ↪ S^4 be a knotted torus and K_p : T^2 ↪ Θ^4 be a pattern knotted torus. After fixing a product structure on T^2, the satellite knotted torus, denoted as:

K_c · K_p : T^2 ↪ S^4,

is the composition of K_p : T^2 ↪ Θ^4 with the inclusion Θ^4 ≅ N(K_c) ⊂ S^4. We call K_c the companion knotted torus. The desatellite K̂_p : T^2 ↪ S^4 is the satellite T_std · K_p with the standard unknotted torus as companion. For any element γ ∈ H_1(T^2) and a pattern K_p : T^2 ↪ Θ^4, there is a pushforward element γ_c ∈ H_1(T^2) under the composition:

H_1(T^2) → H_1(Θ^4) ≅ H_1(T^2 × D^2) → H_1(T^2),

where the isomorphism respects the choice of the product structure on T^2, and the last map is the projection onto the T^2 factor. If K = K_c · K_p is a satellite with pattern K_p, one should regard γ as an element of H_1(K), and γ_c as an element of H_1(K_c). 4.3. A Schubert type inequality. The theorem below is analogous to the Schubert inequality in the classical knot theory ([Sc1, Kapitel II, §12]). Theorem 4.9. Suppose K = K_c · K_p is a satellite knotted torus in S^4. Then for any γ ∈ H_1(T^2; R),

‖γ‖_K ≥ ‖γ‖_{K̂_p}.

Moreover, if the winding number w(K_p) is nonzero, then:

‖γ‖_K ≥ ‖γ‖_{K̂_p} + ‖γ_c‖_{K_c}.

We prove Theorem 4.9 in the rest of this subsection. Let X_K be the complement of the satellite knot K = K_c · K_p in S^4. The satellite construction gives a decomposition:

X_K = Y ∪ X_{K_c},

glued along the image of ∂Θ^4. Y is diffeomorphic to the complement of K_p in Θ^4, so it has two boundary components, namely the satellite boundary ∂_sY, which is ∂X_K, and the companion boundary ∂_cY, which is the image of ∂Θ^4. Similarly, the complement X_{K̂_p} can be decomposed as Y ∪ X_{T_std}. The first inequality is proved in the following lemma: Lemma 4.10. ‖γ‖_K ≥ ‖γ‖_{K̂_p} for any γ ∈ H_1(T^2; R). Proof. We equip X_{K_c} with a finite CW complex structure such that there is only one 0-cell and the 0-cell is contained in ∂X_{K_c}, which is a subcomplex of X_{K_c}. Let X^{(q)}_{K_c} be the union of ∂X_{K_c} and the q-skeleton of X_{K_c}. We may extend the identity map on Y to a continuous map f : Y ∪ X^{(2)}_{K_c} → X_{K̂_p}. To see this, note that the inclusion map ∂X_K → X_K induces a surjective map on H_1 for any K : T^2 → S^4, so the identity map on ∂X_{K_c} induces a natural isomorphism H_1(X_{K_c}) ≅ H_1(X_{T_std}). Every 1-cell in X_{K_c} represents a 1-cycle, so we can extend id_{∂_cY} to a map f| : X^{(1)}_{K_c} → X_{T_std}, so that the induced map H_1(X^{(1)}_{K_c}) → H_1(X_{T_std}) agrees with the map on the first homology induced by X^{(1)}_{K_c} ↪ X_{K_c}. It is easy to see X_{T_std} ≃ S^1 ∨ S^2 ∨ S^2, so π_1(X_{T_std}) ≅ Z. Hence the previous f| can be further extended as f| : X^{(2)}_{K_c} → X_{T_std}, as the boundary of any 2-cell is mapped to a null-homotopic loop in X_{T_std} by the construction. Thus we obtain a map f : Y ∪ X^{(2)}_{K_c} → X_{K̂_p} by the map above and the identity on Y. Let j : F ↬ X_K be an immersed compact orientable surface such that j(∂F) ⊂ ∂X_K. We may assume F meets ∂_cY transversely. We homotope j to an immersion j′ with j′(F ∩ X_{K_c}) ⊂ X^{(2)}_{K_c}. Then we obtain a map f ∘ j′ : F → X_{K̂_p}, which may be homotoped to an immersion. As F is arbitrary, this clearly implies ‖γ‖_K ≥ ‖γ‖_{K̂_p} by the definition of the seminorm. Now we proceed to consider the case when w(K_p) ≠ 0. The image of pt × pt × ∂D^2 ⊂ Y under the natural inclusion Y ⊂ X_K will be denoted µ_c. We call µ_c the companion meridian. The following lemma follows immediately from the construction: Lemma 4.11. Identify H_1(X_{K_c}) ≅ Z and H_1(X_K) ≅ Z; then H_1(X_{K_c}) → H_1(X_K) is the multiplication by w(K_p). Proof. Note µ_c represents a generator of H_1(X_{K_c}). By the definition of w(K_p), µ_c is homologous to w(K_p) times the meridian of K. The lemma follows as the meridian of K generates H_1(X_K) ≅ Z by the Alexander duality. Lemma 4.12. If w(K_p) ≠ 0, then H_2(Y, ∂_cY) is finite. Proof. By the Poincaré–Lefschetz duality and excision, H_2(Y, ∂_cY) ≅ H^2(Y, ∂_sY) ≅ H^2(Θ^4, K_p), which is finite if and only if H_2(Θ^4, K_p) is. The long exact sequence of the pair (Θ^4, K_p) is induced by the inclusion K_p ⊂ Θ^4 (or equivalently by K_p : T^2 ↪ Θ^4). Since Θ^4 ≃ T^2, K_p induces a map h : T^2 → T^2. It is also clear that w(K_p) is the degree of h. Since w(K_p) ≠ 0, the map h_∗ : H_∗(T^2) → H_∗(T^2) is injective in all dimensions, and so must be H_∗(K_p) → H_∗(Θ^4). Thus H_2(Θ^4, K_p) is finite from the long exact sequence. We conclude that H_2(Y, ∂_cY) is finite, as desired. Note it suffices to prove Theorem 4.9 for γ ∈ H_1(T^2; Z). Remember that we regard γ as in H_1(K), identified as the kernel of H_1(∂X_K) → H_1(X_K). For any ε > 0, let j : F ↬ X_K be a properly immersed orientable compact (possibly disconnected) surface, i.e.
j^{-1}(∂X_K) = ∂F, such that j_∗[∂F] = m γ for some integer m > 0, and that:

χ_−(F)/m ≤ ‖γ‖_K + ε.

We may assume F has no disk or closed component, so the complexity χ_−(F) = −χ(F). We may also assume F intersects ∂_cY transversely, so F_c = j^{-1}(X_{K_c}) and F_p = j^{-1}(Y) are compact subsurfaces of F, glued along the 1-manifold j^{-1}(∂_cY). Lemma 4.13. Let V be a compact orientable surface and let g : (V, ∂V) → (X_K, ∂_cY) be a map of pairs. Then g may be redefined, relative to its restriction to ∂V, to a map with image in X_{K_c}. Proof. We may take a collection of embedded arcs u_1, ⋯, u_n whose endpoints lie on ∂V, cutting V into a disk D. This gives a cellular decomposition of V. We may first extend the map g|_{∂V} over the arcs u_1, ⋯, u_n into ∂_cY, and then extend it over the disk D into X_{K_c}. Lemma 4.14. We may modify j : F ↬ X_K within the interior of F, so that every component of j^{-1}(∂_cY) that is inessential on F bounds a disk component of j^{-1}(X_{K_c}). Proof. Let a ⊂ j^{-1}(∂_cY) be a component inessential on F, and D ⊂ F be an embedded disk whose boundary is a. Suppose D is not contained in F_c; then D ∩ F_p ≠ ∅. Any component of D ∩ F_p must have all its boundary components lying on j^{-1}(∂_cY). By Lemma 4.13, we may redefine j on these components relative to their boundary so that they are all mapped into X_{K_c}. After this modification and a small perturbation, either a disappears from j^{-1}(∂_cY), or a bounds a disk component of j^{-1}(X_{K_c}). Thus the number of inessential components of j^{-1}(∂_cY) that fail the conclusion decreases strictly under this modification. Therefore, after at most finitely many such modifications, every inessential component of j^{-1}(∂_cY) bounds a disk component of j^{-1}(X_{K_c}). Without loss of generality, we assume that j : F ↬ X_K satisfies the conclusion of Lemma 4.14. Lemma 4.15. There is a finite cyclic covering κ : F̃ → F such that for every essential component a ⊂ j^{-1}(∂_cY) with [j(a)] ≠ 0 in H_1(X_K), and every component ã of κ^{-1}(a), the image j(κ(ã)) represents the same element in H_1(X_K) ≅ Z up to sign. Proof. Let a_1, ⋯, a_s be all the essential components of j^{-1}(∂_cY) such that [j(a_i)] ≠ 0 in H_1(X_K) ≅ Z. Let d > 0 be the least common multiple of all the [j(a_i)]'s. Consider the covering κ : F̃ → F corresponding to the preimage of the subgroup d · H_1(X_K) under π_1(F) → π_1(X_K) → H_1(X_K). It is straightforward to check that κ satisfies the conclusion. Let κ : F̃ → F be a covering as obtained in Lemma 4.15. Let d > 0 be the degree of κ, so χ_−(F̃) = d · χ_−(F). Clearly j_∗κ_∗[∂F̃] = md γ, and also:

χ_−(F̃)/(md) = χ_−(F)/m ≤ ‖γ‖_K + ε.

Moreover, as any inessential component of j^{-1}(∂_cY) bounds a disk component of F_c, it is clear that any inessential component of (j ∘ κ)^{-1}(∂_cY) bounds a disk component of κ^{-1}(F_c). Therefore, instead of using j : F ↬ X_K, we may use j ∘ κ : F̃ ↬ X_K as well. From now on, we rewrite j ∘ κ as j, F̃ as F, and md as m, so j : F ↬ X_K satisfies the conclusions of Lemmas 4.14, 4.15. Let Q ⊂ F_c be the union of the disk components of F_c. Let F′_c be F_c − Q, and F′_p be F_p ∪ Q (glued up along adjacent boundary components). We have the decomposition:

F = F′_c ∪ F′_p.

Moreover, there is no inessential component of ∂F′_c by our assumption on F, so F′_c and F′_p are essential subsurfaces of F (i.e. subsurfaces whose boundary components are essential). Lemma 4.16. Suppose F is a compact orientable surface with no disk or sphere component, and E_1, E_2 are essential compact subsurfaces of F with disjoint interiors such that F = E_1 ∪ E_2. Then χ_−(F) = χ_−(E_1) + χ_−(E_2). Proof. Note χ(F) = χ(E_1) + χ(E_2). As each E_i is essential, there is no disk component of E_i, and by the assumption there is no sphere component either. Hence χ_−(E_i) = −χ(E_i), and the conclusion follows. The desatellite term in Theorem 4.9 comes from the following construction. Lemma 4.17. Under the assumptions above, there is a properly immersed compact orientable surface ĵ : F̂′_p ↬ X_{K̂_p} such that ĵ_∗[∂F̂′_p] = mγ and χ_−(F̂′_p) ≤ χ_−(F′_p). Proof. As F has been assumed to satisfy the conclusion of Lemma 4.15, there is an ω ∈ H_1(X_K) such that every component of ∂_cF′_p (i.e.
F′_p ∩ j^{-1}(∂_cY)) represents either ±ω or 0, and the algebraic sum over all the components is zero, since they bound j(F′_c) ⊂ X_K. Thus we may assume there are s components representing 0, t components representing ω, and t components representing −ω, where s, t ≥ 0. We construct F̂′_p by attaching s disks and t annuli to ∂_cF′_p, such that each disk is attached to a component representing 0, and each annulus is attached to a pair of components representing opposite ±ω-classes. Let D be the union of attached disks, and A be the union of attached annuli. The result is a compact orientable surface F̂′_p. To construct ĵ, we extend the map j| : F_p → Y ⊂ X_{K̂_p} = Y ∪ X_{T_std} over F̂′_p = F_p ∪ Q ∪ D ∪ A, using the fact that π_1(X_{T_std}) ≅ H_1(X_{T_std}) ≅ Z. Specifically, to extend the map over Q, let s be a component of ∂_cF_p bounding a disk component of Q. Then j_∗[s] = 0 in H_1(X_K). Hence it lies in the subgroup H_1(T^2 × pt) of H_1(∂Θ^4) ≅ H_1(∂_cY), and by the desatellite construction, ĵ(s) should also be null-homologous in X_{T_std}. We can extend ĵ over the disk of Q bounded by s. After extending for every component of Q, we obtain ĵ| : F_p ∪ Q → X_{K̂_p}. Similarly, we may extend ĵ| over D. To extend over A, let A′ ⊂ A be an attached annulus component as in the construction. Let ∂A′ = s_+ ⊔ s_−, such that j_∗[s_±] = ±ω in H_1(X_K), respectively. By the desatellite construction, ĵ_∗[s_±] = ±ω in H_1(X_{T_std}). Since π_1(X_{T_std}) ≅ H_1(X_{T_std}), ĵ(s_+) is freely homotopic to the orientation-reversal of ĵ(s_−). In other words, we can extend ĵ| over A′. After extending for every attached annulus, we obtain ĵ : F̂′_p → X_{K̂_p}. Since ĵ|_{∂F̂′_p} is the same as j|_{∂F} under the natural identification, ĵ_∗[∂F̂′_p] = mγ in H_1(T^2) (where H_1(T^2) may be regarded as either H_1(K) or H_1(K̂_p) under the natural identification). After homotoping ĵ : F̂′_p → X_{K̂_p} to a smooth immersion, we obtain the map as desired. The contribution of the companion term in Theorem 4.9 basically comes from F′_c. However, j_∗[∂F′_c] does not necessarily represent mγ_c, but may differ by a term of zero ‖·‖_{K_c}-seminorm. To be precise, the image of any component of ∂F′_c represents a class that differs from a multiple of γ_c by a term of vanishing seminorm (cf. Lemma 4.12). We have:

χ_−(F′_c) ≥ m ‖γ_c‖_{K_c}.

We are now ready to prove Theorem 4.9. Proof of Theorem 4.9. The first inequality follows from Lemma 4.10. In the rest, we assume w(K_p) ≠ 0. Let j : F ↬ X_K be a surface that ε-approximates ‖γ‖_K as before. We may assume j satisfies the conclusion of Lemma 4.14, possibly after a modification. Possibly after passing to a finite cyclic covering of F, we may further assume j satisfies the conclusion of Lemma 4.15, as we have explained. We have the decomposition F = F′_p ∪ F′_c of F into essential subsurfaces, so by Lemma 4.16,

χ_−(F) = χ_−(F′_p) + χ_−(F′_c).

Combining the estimates above, thus:

m(‖γ‖_K + ε) ≥ χ_−(F) = χ_−(F′_p) + χ_−(F′_c) ≥ m ‖γ‖_{K̂_p} + m ‖γ_c‖_{K_c}.

We conclude:

‖γ‖_K ≥ ‖γ‖_{K̂_p} + ‖γ_c‖_{K_c},

as ε > 0 is arbitrary.

Braid satellites
In this section, we introduce and study braid satellites. 5.1. Braid patterns. We shall fix a product structure on T^2 ≅ S^1 × S^1 throughout this section. By a braid we shall mean an embedding b : S^1 ↪ S^1 × D^2 whose image is a simple closed loop transverse to the fiber disks. We usually write k_b for the classical knot in S^3 associated to b, namely, the 'satellite' knot with the trivial companion and the pattern b. There is a family of patterns arising from braids: Definition 5.1. Let b : S^1 ↪ S^1 × D^2 be a braid. Define the standard braid pattern P_b associated to b as:

P_b = id_{S^1} × b : S^1 × S^1 ↪ S^1 × (S^1 × D^2) = Θ^4,

where Θ^4 = S^1 × S^1 × D^2 is the thickened torus. The standard braid torus K_b associated to b is defined as the desatellite T_std · P_b. Remark 5.2.
The standard braid torus K_b is sometimes called the spun T^2-knot obtained from the associated knot k_b. In [Hir1], the extendable subgroup E_{K_b} has been explicitly computed. Lemma 5.3. The winding number w(P_b) equals the winding number of the braid b in the solid torus; in particular, w(P_b) ≠ 0. Proof. This follows immediately from the construction and the definition of winding numbers. Proposition 5.4. Suppose b is a braid whose associated knot k_b is nontrivial. Then:

‖pt × S^1‖_{K_b} = 2g(k_b) − 1 and ‖S^1 × pt‖_{K_b} = 0.

Proof. For simplicity, we write K_b, k_b as K, k, respectively. To see ‖pt × S^1‖_K ≥ 2g(k) − 1, the idea is to construct a map between the complements f : X_K → M_k, where X_K = S^4 − K and M_k = S^3 − k. Let Y ⊂ X_K be the image of the complement Θ^4 − P_b, and N ⊂ M_k be the image of the complement S^1 × D^2 − b. There is a natural projection map f| : Y ≅ S^1 × N → N. As M_k − N is homeomorphic to the solid torus, which is an Eilenberg–MacLane space K(Z, 1), it is not hard to see that f| extends to a map f : X_K → M_k. Provided this, for any properly immersed compact orientable surface j : F ↬ X_K whose boundary represents m[c], the complexity of F is bounded below by the singular Thurston norm of the class [f ∘ j(F)]. As the singular Thurston norm equals the Thurston norm (cf. [Ga]), which further equals 2g(k) − 1 for nontrivial knots, we obtain ‖pt × S^1‖_K ≥ 2g(k) − 1. To see ‖pt × S^1‖_K = 2g(k) − 1, it suffices to find a surface realizing the norm. In fact, one may first take an inclusion ι : S^1 × (S^1 × D^2) ↪ S^1 × D^3, where S^1 × D^2 ⊂ D^3 is a standard unknotted embedding, i.e. one whose core is unknotted in D^3 and for which S^1 × pt ⊂ S^1 × ∂D^2 is the longitude. Then K_b factorizes through a smooth embedding S^1 × D^3 ↪ S^4 (unique up to isotopy) via ι ∘ P_b. This allows us to put a minimal genus Seifert surface of k into X_K so that it is bounded by the slope pt × S^1. Thus ‖pt × S^1‖_K = 2g(k) − 1. From the factorization above, we may also freely homotope (ι ∘ P_b)(S^1 × pt) within S^1 × D^3 to S^1 × {pt′}, along S^1 times an arc whose interior lies in D^3 − k. As S^1 × {pt′} bounds a disk outside the image of S^1 × D^3 in S^4, we see ‖S^1 × pt‖_K = 0. 5.2. Braid satellites. As an application of the Schubert inequality for seminorms, we estimate ‖·‖_K for braid satellites of braid tori. We need the following notation. Definition 5.5. Let K : T^2 ↪ S^4 be a knotted torus in S^4, and τ : T^2 → T^2 be an automorphism of T^2. We define the τ-twist K^τ of K to be the knotted torus:

K^τ = K ∘ τ : T^2 ↪ S^4.

It follows immediately that the seminorm changes under a twist according to the formula: ‖γ‖_{K^τ} = ‖τ(γ)‖_K. Fix a product structure T^2 ≅ S^1 × S^1 as before. We denote the basis vectors [S^1 × pt] and [pt × S^1] of H_1(T^2; R) as ξ, η, respectively. A braid satellite is a knotted torus of the form K^τ_b · P_{b′}, where b, b′ are braids with nontrivial associated knots, and τ ∈ Mod(T^2). It is said to be a plumbing braid satellite if τ(ξ) = η and τ(η) = −ξ. Proposition 5.6. Let K = K^τ_b · P_{b′} be a braid satellite. Then for any γ = x ξ + y η ∈ H_1(T^2; R),

‖γ‖_K ≥ (2g′ − 1)|y| + (2g − 1)|r x + s w′ y|.

Here g, g′ > 0 are the genera of the associated knots of b, b′, respectively, w′ is the winding number of b′, and r, s are the intersection numbers ξ · τ(ξ), ξ · τ(η), respectively. Moreover, the equality is achieved if K^τ_b · P_{b′} is a plumbing braid satellite. We remark that one should not expect the seminorm lower bound to be realized in general. For instance, in the extremal case when τ is the identity, π_1(X_K) is exactly the knot group of the satellite k_b · b′ of classical knots, and the lower bound for the longitude slope is given by the classical Schubert inequality, which is not realized in general. However, the plumbing case is a little special. It provides examples of slopes on which the seminorm is not realized by the singular genus.
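To make the plumbing formula concrete, here is a worked example with a hypothetical choice of braids: take b and b′ to be braids whose associated knots are trefoil knots, so g = g′ = 1. Then the plumbing case of Proposition 5.6 gives

```latex
\|x\xi + y\eta\|_K = (2g - 1)|x| + (2g' - 1)|y| = |x| + |y|.
```

For the slope c representing ξ + η this yields ‖c‖_K = 2. A connected singular surface of genus h bounding c has complexity 2h − 1, which is odd, so the even value ‖c‖_K can never be realized by such a surface; Proposition 5.11 below then pins down g^⋆_K(c) = ‖c‖_K/2 + 1 = 2.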
In fact, when c ⊂ K is a slope representing x ξ + y η ∈ H_1(T^2), where x, y are coprime odd integers, the formula yields that ‖c‖_K is an even number, so the integer g^⋆_K(c) can never be (‖c‖_K + 1)/2, i.e. the singular genus never realizes the seminorm. We shall give some estimates of the singular genus and the genus for plumbing braid satellites in Subsection 5.3. The corollary below follows immediately from Proposition 5.6 and Lemma 4.6: Corollary 5.7. With the notation of Proposition 5.6, if τ is an automorphism of T^2 not fixing ξ up to sign, then the stable extendable subgroup E^s_K of Mod(T^2) with respect to K, and hence the extendable subgroup E_K, is finite. In the rest of this subsection, we prove Proposition 5.6. Lemma 5.8. With the notation of Proposition 5.6, ‖γ‖_K ≥ (2g′ − 1)|y| + (2g − 1)|r x + s w′ y|. Proof. By Lemma 5.3 and Theorem 4.9,

‖γ‖_K ≥ ‖γ‖_{K̂_{b′}} + ‖γ_c‖_{K^τ_b}.

Note that we are writing γ_c with respect to K_b · P_{b′}, so the second term equals the corresponding term in Theorem 4.9 with respect to the twisted satellite K^τ_b · P_{b′} via an obvious transformation. By Proposition 5.4,

‖γ‖_{K̂_{b′}} = (2g′ − 1)|y|.

As b′ is a braid, P_{b′} : T^2 → Θ^4 ≃ T^2 implies γ_c = x ξ + w′ y η. Write τ as the matrix (p q; r s) in SL(2, Z) under the given basis ξ, η. Note this agrees with the notation r, s in the statement. Then it is easy to compute that: τ(γ_c) = (p x + q w′ y) ξ + (r x + s w′ y) η. By Proposition 5.4 again,

‖γ_c‖_{K^τ_b} = ‖τ(γ_c)‖_{K_b} = (2g − 1)|r x + s w′ y|.

Combining these calculations, we obtain the estimate as desired. Lemma 5.9. If K is a plumbing braid satellite, then ‖x ξ + y η‖_K ≤ (2g − 1)|x| + (2g′ − 1)|y|. Proof. Because ‖·‖_K is a seminorm (Lemma 4.3), it suffices to prove ‖ξ‖_K ≤ 2g − 1 and ‖η‖_K ≤ 2g′ − 1. The complement X_K is the union of the companion piece X_{K^τ_b} and the pattern piece Y = S^1 × R_{b′}, where R_{b′} denotes the exterior of the braid b′ in the solid torus S^1 × D^2. From the construction it is clear that π_1(Y) → π_1(X_K) factors through the desatellite on the first factor, namely, Z × π_1(M_{k_{b′}}), so the commutator length of η in π_1(X_K) is at most that of η in π_1(M_{k_{b′}}), which is g′. Moreover, the slope ξ ⊂ ∂X_K can be freely homotoped to a slope ξ_c on ∂X_{K_b}, since it is a fiber of Y = S^1 × R_{b′}; and by the construction, it is clear that ξ_c represents the longitude slope of π_1(∂M_{k_b}) in π_1(M_{k_b}) ≅ π_1(X_{K_b}), so the commutator length of ξ in π_1(X_K) is at most that of ξ_c in π_1(M_{k_b}), which is g. This proves the lemma, because the commutator length equals the singular genus g^⋆_K, which gives upper bounds for the seminorm ‖·‖_K on slopes (Remark 3.3 and Lemma 4.6). Now Proposition 5.6 follows from Lemmas 5.8, 5.9. 5.3. On genera of plumbing braid satellites. In this subsection, we estimate the singular genera and the genera of slopes for plumbing braid satellites. While we obtain a pretty nice estimate for the singular genera, with the error at most one, we are not sure how close our genus upper bound is to being the best possible. Proposition 5.11. Suppose b, b′ are braids with nontrivial associated knots, and K is the plumbing braid satellite K^τ_b · P_{b′}. Then for every slope c ⊂ K, the following statements are true: (1) The singular genus satisfies:

(‖c‖_K + 1)/2 ≤ g^⋆_K(c) ≤ (‖c‖_K + 3)/2.

In particular, if c represents x ξ + y η with both x and y odd, then g^⋆_K(c) = ‖c‖_K/2 + 1. (2) If c represents x ξ + y η, where x, y are coprime integers, then the genus satisfies:

g_K(c) ≤ g|x| + g′|y| + (|xy| − |x| − |y| + 1)/2,

where g, g′ > 0 denote the genera of the associated knots k_b, k_{b′} in S^3, respectively. We prove Proposition 5.11 in the rest of this subsection. We shall rewrite the slopes S^1 × pt, pt × S^1 ⊂ T^2 as c_ξ, c_η, respectively. We need the notion of the Euler number to state the next lemma. Let Y be a simply connected, closed oriented 4-manifold, and let K : T^2 ↪ Y be a null-homologous knotted torus embedded in Y. Let X = Y − K be the compact exterior of the knotted torus.
For any locally flat, properly embedded compact oriented surface with connected boundary, F ↪ X, such that ∂F is mapped homeomorphically onto a slope c × pt of K × pt (which exists by Lemma 3.1), we may take a parallel copy c × pt′ ⊂ K × pt′ of the slope, and perturb F to another locally flat, properly embedded copy F′ ↪ X bounded by c × pt′, so that F and F′ are in general position. The algebraic sum of the intersections between F and F′ gives rise to an integer:

e(F; K),

which is known as the Euler number of the normal framing of F induced from K. In fact, one can check that e(F; K) only depends on the class [F] ∈ H_2(X, K × pt). If Y is orientable but has no preferred choice of orientation, we ambiguously speak of the Euler number up to sign. Lemma 5.12. There exist two disjoint, properly embedded, orientable compact surfaces E, E′ ↪ X_K, bounded by the slopes c_ξ × p, c_η × p′ in two parallel copies of the knotted torus K × p, K × p′ ⊂ ∂X_K, respectively. Moreover, the genera of E, E′ are g, g′, respectively, and the Euler numbers of the normal framings are e(E; K) = e(E′; K) = 0. Proof. Regarding K as T_std · P^τ_b · P_{b′}, there is a natural decomposition:

X_K = X_0 ∪ Y ∪ Y′,

where X_0 is the compact complement of the unknotted torus T_std in S^4, and Y, Y′ are the exteriors of P_b, P_{b′} in the thickened torus Θ^4, respectively. Moreover, Y (resp. Y′) has a natural product structure c_η × R_b (resp. c_ξ × R_{b′}), where R_b (resp. R_{b′}) denotes the exterior of the braid b (resp. b′) in the solid torus S^1 × D^2. As before, ∂Y (resp. ∂Y′) has two components ∂_cY and ∂_sY (resp. ∂_cY′ and ∂_sY′). From the classical knot theory, there is a Seifert surface S of k_b properly embedded in M_{k_b} = S^3 − k_b of genus g, and one can arrange S so that it intersects S^1 × D^2 in a finite collection of n ≥ w disjoint parallel fiber disks. Thus S_b = S ∩ R_b is a connected properly embedded orientable compact surface, such that ∂S_b has one component on ∂_sR_b parallel to the longitude s, and n components c_1, ⋯, c_n on ∂_cR_b parallel to pt × ∂D^2. Similarly, take a connected subsurface S_{b′} ⊂ R_{b′} with n′ boundary components c′_1, ⋯, c′_{n′} on the companion boundary, and one boundary component s′ on the satellite boundary, and let E′_{Y′} ⊂ Y′ be the product of S_{b′} with some point in c_ξ. Construct a properly embedded compact annulus E_{Y′} in Y′ = c_ξ × R_{b′} by taking the product of c_ξ with an arc in R_{b′} − S_{b′} whose endpoints lie on ∂_cR_{b′} and ∂_sR_{b′}, respectively. Similarly, construct a properly embedded compact surface E_Y in Y = c_η × R_b by taking the product of S_b with some point in c_η; and construct a union of n′ annuli E′_Y by taking the product of c_η with n′ disjoint arcs α′_1, ⋯, α′_{n′} in R_b − S_b, each of whose endpoints lie on ∂_cR_b and ∂_sR_b, respectively. Under the gluing, we obtain two disjoint properly embedded surfaces E_Y ∪ E_{Y′} and E′_Y ∪ E′_{Y′}, whose remaining boundary components lie on ∂X_0. It is not hard to see that one can cap off these other boundary components with disjoint properly embedded disks in X_0. In fact, we may regard T_std : T^2 ↪ S^4 as the composition: T^2 ≅ c_ξ × c_η ↪ c_ξ × D^3 ↪ S^4, where c_η is a trivial knot in D^3. Thus the components of ∂(E′_Y ∪ E′_{Y′}) that lie on ∂X_0 can be capped off in c_ξ × D^3 disjointly. Moreover, the components of ∂(E_Y ∪ E_{Y′}) lying on ∂X_0 can be isotoped to the boundary of c_ξ × D^3, so that they are all c_ξ-fibers. Because S^4 − c_ξ × D^3 is homeomorphic to D^2 × S^2, we may further cap off these fibers in the complement of c_ξ × D^3 in S^4. It is straightforward to check that capping off E_Y ∪ E_{Y′} and E′_Y ∪ E′_{Y′} results in the surfaces E and E′ respectively, as desired.
Note that e(E; K) vanishes because we can perturb the construction above to obtain a surface disjoint from E bounding a slope parallel to c_ξ × pt in K × pt. For the same reason, e(E′; K) = 0 as well. Proof of Proposition 5.11. (1) It suffices to show the upper bound. By Lemma 5.12, there are properly embedded surfaces E, E′ in X_K bounded by c_ξ × pt, c_η × pt, respectively, and the complexities of E and E′ realize ‖c_ξ‖_K and ‖c_η‖_K, respectively (Proposition 5.6). Suppose c ⊂ K is a slope representing xξ + yη. By the main theorem of [Ma], there exists an |x|-sheeted connected covering space Ẽ of E, which has exactly one boundary component if x is odd, or two boundary components if x is even. By the same method, there is also Ẽ′, a connected |y|-sheeted covering of E′ with one or two boundary components. Since x and y are coprime, at most one of them is even, so ∂Ẽ ∪ ∂Ẽ′ has at most three components. Then there are immersions of these surfaces into X_K, and by homotoping the images of their boundaries to K × pt and taking band sums to make them connected, we obtain an immersed surface F in X_K bounding the slope c. Since we need to add up to two bands to make the boundary of F connected, this yields: 2g⋆_K(c) − 1 ≤ (−χ(E)) · |x| + (−χ(E′)) · |y| + 2 = ‖c‖_K + 2. Note that the last equality follows from Proposition 5.6, as we assumed K is the plumbing braid satellite. This proves the first statement. The 'in particular' part is also clear, because when x, y are both odd, ‖c‖_K is an even number by the formula, so ‖c‖_K/2 + 1 is the only integer satisfying our estimate. (2) In this case, we take |x| copies of the embedded surface E and |y| copies of the embedded surface E′ in X_K. Because the Euler numbers of the normal framings are zero for E and E′, we may assume these copies to be disjoint. Isotoping their boundaries to K × pt in ∂X_K, we see |x| slopes parallel to c_ξ and |y| slopes parallel to c_η. As there are |xy| intersection points, we take |xy| band sums to obtain a properly embedded surface F ↪ X_K bounding the slope c. There are |x| + |y| − 1 bands that contribute to making the boundary of F connected, and each of the other |xy| − |x| − |y| + 1 bands contributes one half to the genus of F. This implies: g_K(c) ≤ |x|g + |y|g′ + (|xy| − |x| − |y| + 1)/2, as desired. 6. Miscellaneous examples In this section, we exhibit examples to show the differences between the concepts introduced in this note. 6.1. Slopes with vanishing seminorm but positive singular genus. Note that we have already seen slopes whose singular genus does not realize the nonvanishing seminorm for plumbing braid satellites (cf. Proposition 5.6). There are also examples where the seminorm vanishes on some slope with positive singular genus, as follows. Our construction is based on the existence of incompressible knotted Klein bottles. Denote the Klein bottle by Φ². A knotted Klein bottle in S⁴ is a locally flat embedding K : Φ² ↪ S⁴. We usually denote its image also by K, and the exterior X_K = S⁴ − K is obtained by removing an open regular neighborhood of K from S⁴, as before in the knotted torus case. We say a knotted Klein bottle K is incompressible if the inclusion ∂X_K ⊂ X_K induces an injective homomorphism between the fundamental groups. There exist incompressible Klein bottles in S⁴; see [Kam1, Lemma 4]. Incompressible knotted Klein bottles give rise to examples of slopes on knotted tori which have vanishing seminorm but positive singular genus. Specifically, let K : Φ² ↪ S⁴ be an incompressible knotted Klein bottle.
Suppose κ : T² → Φ² is a two-fold covering of the Klein bottle Φ². Perturbing K ∘ κ : T² → S⁴ in the normal direction of K gives rise to a knotted torus K̃ : T² ↪ S⁴. Lemma 6.1. With the notation above, K̃ has a slope c such that ‖c‖_K̃ = 0, but g⋆_K̃(c) > 0. Proof. Let α ⊂ Φ² be an essential simple closed curve on K so that κ⁻¹(α) has two components c, c′ ⊂ T². Then c, c′ are parallel on T². We choose orientations on c, c′ so that they are parallel as oriented curves. Let N(K) be a compact regular neighborhood of K, so that Y = N(K) − K̃ is a pair-of-pants bundle over K. Then c is freely homotopic to the orientation-reversal of c′ within Y. This implies that 2[c × pt] ∈ H₁(X_K̃) is represented by a properly immersed annulus A in X_K̃ whose boundary, with the induced orientation, equals c ∪ c′. Therefore, ‖c‖_K̃ equals zero. However, note that X_K̃ = X_K ∪ Y, glued along ∂X_K = ∂N(K). Since K is incompressible, ∂X_K is π₁-injective in X_K. It is also clear that both components of ∂Y are π₁-injective in Y. It follows that π₁(Y) injects into π₁(X_K̃), and also that π₁(∂X_K̃) injects into π₁(X_K̃). Therefore, the slope c × pt in ∂X_K̃ ≅ K̃ × S¹ is homotopically nontrivial in π₁(X_K̃), so g⋆_K̃(c) cannot be zero. 6.2. Stably extendable but not extendable automorphisms. It is clear that the stable extendable subgroup E^s_K contains the extendable subgroup E_K for any knotted torus K : T² ↪ S⁴. They are in general not equal. In fact, we show that the Dehn twist along a slope with vanishing singular genus is stably extendable (Lemma 6.2). In particular, it follows that for any unknotted embedded torus K, the stable extendable subgroup E^s_K equals Mod(T²). However, in this case, the extendable subgroup E_K is a proper subgroup of Mod(T²) of index three [DLWY, Mo]. Thus there are many automorphisms that are stably extendable but not extendable for the unknotted embedding. Fix an orientation of the torus T². For any slope c ⊂ T² on the torus, we denote the (right-hand) Dehn twist along c by τ_c. More precisely, the induced automorphism on H₁(T²) is given by τ_{c*}(α) = α + I([c], α)[c] for all α ∈ H₁(T²), where I : H₁(T²) × H₁(T²) → Z denotes the intersection form. Note that the expression is independent of the choice of the direction of c. The criterion below is inspired by techniques in the paper of Susumu Hirose and Akira Yasuhara [HY]. However, the reader should beware that our notion of stabilization in this paper does not change the fundamental group of the complement, so it is slightly different from their definition. Lemma 6.2. Let K : T² ↪ S⁴ be a knotted torus. Suppose c ⊂ T² is a slope with singular genus g⋆_K(c) = 0. Then the Dehn twist τ_c ∈ Mod(T²) along c belongs to the stable extendable subgroup E^s_K. Proof. The idea of this criterion is that, for a closed simply connected oriented 4-manifold Y, to have the Dehn twist τ_c extendable over Y via the Y-stabilization K[Y] : T² ↪ Y, we need c to bound a locally flat, properly embedded disk of Euler number ±1 in the complement of K[Y] in Y. Such a Y can always be chosen to be a connected sum of copies of CP² or of CP² with reversed orientation. Recall that we introduced the Euler number of a surface bounding a slope in Subsection 5.3, before the statement of Lemma 5.12. Suppose D is a locally flat, properly embedded disk in X = Y − K[Y] bounded by a slope c × pt on K[Y] × pt ⊂ ∂X with e(D; K[Y]) = ±1.
We claim in this case the Dehn twist τ c ∈ Mod(T 2 ) along c can be extended as an orientation-preserving self-homeomorphism of Y . In fact, following the arguments in the proof of [HY,Theorem 4.1], we may take the compact normal disk bundle ν D of D, identified as embedded in X such that ν D ∩ (K[Y ] × pt) is an interval subbundle of ν D over ∂D. Then e(D; K[Y ]) = ±1 implies that ν D ∩ (K[Y ] × pt) is a (positive or negative) Hopf band in the 3-sphere ∂ν D , whose core is c × pt. Thus τ c extends over Y as a self-homeomorphism by [HY,Proposition 2.1]. Now it suffices to find a Y fulfilling the assumption of the claim above. Suppose c ⊂ K is a slope with the singular genus g ⋆ K (c) = 0, then there is a map j : D 2 → X K so that ∂D 2 is mapped homeomorphically onto c × pt in ∂X K ∼ = K × S 1 . We may also assume j to be an immersion by the general position argument. Blowing up all the double points of j(D 2 ), we obtain an embedding j ′ : D 2 ֒→ X K #(CP 2 ) #r for some integer r ≥ 0. Suppose e(j ′ (D); K[(CP 2 ) #r ]) equals s ∈ Z. If s > 1, we may further blow up s − 1 points in j ′ (D) ⊂ X K #(CP 2 ) #r . This gives rise to j ′′ : D 2 ֒→ X K #(CP 2 ) #(r+s−1) satisfying the assumption of the claim, so the Dehn twist τ c is extendable over X = X K #(CP 2 ) #(r+s−1) , or in other words, it is Y -stably extendable where Y = (CP 2 ) #(r+s−1) . If s < 1, a similar argument using negative blow-ups shows that τ c is Y -stably extendable, where Y = (CP 2 ) #(1−s) #(CP 2 ) #r . Further questions In conclusion, for a knotted torus K : T 2 ֒→ S 4 , the seminorm and the singular genus of a slope are meaningful numerical invariants which are sometimes possible to control using group theoretic methods. However, the genera of slopes seem to be much harder to compute. It certainly deserves further exploration how to combine the group-theoretic methods with the classical 4-manifold techniques when the fundamental group comes into play. We propose several further questions about genera, seminorm and extendable subgroups. Suppose K : T 2 ֒→ S 4 is a knotted torus. Question 7.1. When is the unit disk of the seminorm · K a finite rational polygon, i.e. bounded by finitely many segments of rational lines? (Cf. Remark 5.10.) Question 7.2. If the index of the extendable subgroup E K in Mod(T 2 ) equals three, is K necessarily the knot connected sum of the unknotted torus with a knotted sphere? Question 7.3. If the stable extendable subgroup E s K equals Mod(T 2 ), does the singular genus g ⋆ K vanish for every slope? Question 7.4. If K is incompressible, i.e. ∂X K is π 1 -injective in the complement X K , is the stable extendable subgroup E s K finite? Question 7.5. For plumbing knotted satellites, does the upper bound in Proposition 5.11 (2) realize the genus of the slope?
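For the reader's convenience, before leaving this topic, here are the plumbing braid satellite estimates of Subsections 5.2 and 5.3 in display form. These are restatements as reconstructed from the proofs above (the displays in the statements of Propositions 5.6 and 5.11 are elided in this copy), so the precise form should be checked against the original statements. For a slope c ⊂ K representing xξ + yη with x, y coprime:

\[ \|c\|_K = (2g - 1)\,|x| + (2g' - 1)\,|y| , \]
\[ \frac{\|c\|_K + 1}{2} \;\le\; g^{\star}_K(c) \;\le\; \frac{\|c\|_K + 3}{2} , \qquad g_K(c) \;\le\; |x|\,g + |y|\,g' + \frac{(|x|-1)(|y|-1)}{2} . \]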
2013-02-07T00:23:06.000Z
2011-10-10T00:00:00.000
{ "year": 2013, "sha1": "c604f33437388c8f259579862d87079c28a3201e", "oa_license": null, "oa_url": "http://msp.org/pjm/2013/261-1/pjm-v261-n1-p07-s.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "c604f33437388c8f259579862d87079c28a3201e", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
213149283
pes2o/s2orc
v3-fos-license
IT governance model for state entities as support for compliance with the information security and privacy component in the framework of the digital government policy This article proposes an information technology (IT) governance model that facilitates the direction, control and fulfillment of the objectives of implementing and maintaining the information security and privacy program that the Colombian Ministry of Information Technology and Communications proposes to state organizations in the framework of the digital government policy. Starting from the identification of the Colombian state regulations for the implementation of the information security and privacy model, we analyze the existing IT governance frameworks that converge on the governance and management objectives of that model, and finally we structure the governance model. The proposal contributes to the technological development of Colombian society through an innovative tool that facilitates the direction, control and rapid adoption of the information security and privacy management system defined by the Colombian state, which results in the assurance and privacy of the information of all Colombians held by state entities, reducing fraud, the exposure of personal information and other risks. 1. Introduction Government IT governance is a concept that, "with the promise of making visible the value it generates, has been taking shape to be better interpreted, implemented and applied globally" [1]. There are currently many definitions. In [2], IT governance is framed as a structure of relationships to direct and control the role of IT within an organization in order to achieve its objectives, balancing the aggregation of value against risk with respect to the return on IT and its processes. Part of IT governance lies in designing, applying and evaluating a set of criteria to govern the IT function optimally; [3] explains it as a set of rules, principles, policies or organizational charts that define or limit the scope of area managers. In parallel, [4] defines IT governance as a set of institutionalized practices or activities that minimize uncertainty and achieve better performance in the outsourcing relationship between IT service providers and subcontractors. Relatedly, [5] notes that the IT Governance Institute (ITGI) established five coverage domains: strategic alignment of IT with the business, value delivery, risk management, resource management and performance measurement. "IT governance is the responsibility of the executives of the board of directors and contemplates leadership, structures and operational processes to ensure that the company's IT supports the organizational strategies and objectives." In this sense, information is the most important asset an organization has, and the organization's duty is to protect it. Its security depends on ensuring confidentiality, integrity and availability, the fundamental pillars of information [6]. At present, organizations are aware that internet interconnectivity, business automation and online processes are essential to remain current and to expand their markets. This trend of "techno-dependence", as well as bringing benefits, also implies risks for information assets, as a consequence of vulnerabilities in the software and hardware of technological products.
Thus, immersion in the network of networks, besides expanding the business, also exposes the information assets to a greater number of threats, because it enlarges the area to be controlled. Therefore, implementing mechanisms to safeguard IT is currently a necessity for organizations [6]. In line with new technological trends, and with the objective of guiding public entities at the national and territorial levels in improving their information security standards, the Colombian government, through the Ministry of Information Technology and Communications (MinTIC) and in the framework of the online government strategy (GEL), designed the information security and privacy model (MSPI). The model is based on ISO/IEC 27001 version 2013, the National Information Solutions Cooperative (NISC) cybersecurity framework, and the legal basis of the law on protection of personal data [7] and the law on transparency and access to public information [8], among other references relevant to managing the security of information assets [9]. In the international context [10], a model for cloud governance was proposed as a holistic framework that addresses IT governance through a pattern covering requirements management for information security, life cycle, risk management and compliance, all from an IT governance perspective. In Colombia, MinTIC designed the MSPI for the online government strategy 2.0 with a sustainable approach that ranges from preparing the entity to begin implementation and defining gaps, up to alignment with the information security management system (ISMS) [11]. In this area, the Escuela Colombiana de Ingeniería Julio Garavito presented a methodology for measuring the effectiveness of the MSPI through management indicators [12]. The authors of [13] propose an integration model of MECI and COBIT for public entities, covering the relationship matrix between the components of the standard internal control model and the COBIT control objectives, in order to align the controls applicable to IT with the processes stipulated for these entities. In addition, they develop a maturity model that makes it possible to know at which stage of IT governance adoption the organization stands. For its part, [14] describes the design of a proposed IT governance framework for the Ministry of Higher Education, Science, Technology and Innovation (SENESCYT), based on best practices; its purpose is to develop a proposal that uses best practices to generate optimal value from IT, seeking to maintain the balance between institutional strategic goals and the generation of benefits. Similarly, at the Universidad de Los Llanos, an IT governance model was applied as a case study in support of administrative processes, aiming to sensitize senior managers of the educational institution to the need for the effective use of technologies [15]. The extensive use of technological tools in fulfilling the institutional mission obliges most organizations to implement new patterns, framed in standards, that allow them to manage the security of their information. The MSPI, designed by MinTIC, is aligned with the IT architecture reference framework and transversally supports the components of the GEL strategy: ICT for services, ICT for open government and ICT for management [9].
Although MinTIC has made a guide available, at implementation time there are difficulties in carrying out the stages; among them, and possibly the main one, is the lack of trained professionals with experience in implementing an ISMS within public entities [11]. In addition, senior management does not assume the leadership and sponsorship role that corresponds to it, so the project's development is affected because it is not given the required importance [11]. This allows us to deduce that the MSPI is a management-oriented model, revealing the absence of direct controls that would help exercise direction and complement the management of new technologies in public administration. Moreover, the IT area, especially in state entities, needs control instruments related to key IT processes that allow senior management to monitor, prevent failures, observe trends and find opportunities for improvement [13]. In this regard, [16] highlights the importance of IT managers in achieving the alignment, synchronization and convergence of technology and business, as well as the ability to manage them. "Organizations are increasingly dependent on IT for decision making in order to sustain business growth" [5].

2. Methodology

The development of the proposed IT governance model for state entities, in support of compliance with the MSPI component in the framework of the digital government strategy, was carried out under a quantitative research approach with a positivist paradigm and a descriptive scope. It was supported by field activities that facilitated the collection and analysis of information on the different IT governance models, which in turn facilitated the characterization of the model required for the effective adoption of the MSPI by government sector entities. Documentary analysis provided the methodological support and the epistemic basis for approaching the phenomenon under study.

3. Results

The development of the proposed IT governance model for state entities, in support of compliance with the information security and privacy component in the framework of the digital government policy, is a process comprising a series of activities, described below.

3.1. Identification of the regulatory framework of the Colombian state for the implementation of the security and privacy model

The national government established the guidelines for the online government strategy in decrees 1151 of 2008, 2573 of 2014 and 1078 of 2015, and the digital government policy in decree 1008 of 2018. The digital government policy aims to take advantage of technology on the part of the state, citizens and interest groups, so that they acquire the specific competences and capacities needed to fulfill needs and to solve public problems [17]. The scope of application is maintained. The principles regulated in article 3 of law [18] and article 3 of law [19] prevail, and the principles of sector innovation, ICT competitiveness, proactivity and information security are added. The structure changes from the four components of the GEL strategy to the two components of the digital policy (ICT for the state and ICT for society). It adds a series of enabling elements that allow entities, regardless of their resources and capacity, to carry out the implementation according to their needs and characteristics.
Institutional development within the state is established as responsible for implementation. Roles and responsibilities are assigned to the actors involved in the implementation of the digital government policy. An institutional scheme is developed in alignment with decree [20], in which the direction and coordination instances of the planning and management system are adjusted and the institutional, departmental, district and municipal management and performance committees are created [21]. Monitoring and evaluation are the responsibility of the entities, which report the achievement of the policy's goals based on the projects and initiatives that make use of ICTs; progress in implementation is measured against the goals reported in each term [15].

3.2. Existing IT governance frameworks that converge on governance and management objectives

The mapping between the principles of ISO/IEC 38500 and their alignment with the processes, enablers and other elements of the COBIT 5 reference framework is presented (Figure 1).

3.3. Design of the governance model that facilitates the evaluation, direction and control of the security and privacy program for the information assets of Colombian state entities

The structure of the proposed model aligns the phases of the MSPI, oriented mainly to managing information asset security, with the IT goals and processes of the COBIT 5 management domain, in order to facilitate applying the main governance processes proposed by ISACA to the MSPI. The model consists of three stages that include governance and management activities, through to the monitoring that follows the commissioning of the security and privacy management system proposed by the Colombian state (see Figure 2). The stages of the proposed model are:
• GAP analysis
• Model adoption
• Monitoring and maintenance
The stages are aligned with the IT governance and management domains, the COBIT 5.0 IT goals and those required for putting the MSPI into production.

3.4. Alignment of the processes and activities of the proposed governance model

In order to identify the relevance of the governance processes proposed by COBIT 5.0 that facilitate the management, evaluation and monitoring of the MSPI implementation under the proposed model, the processes of the selected governance standard are aligned with the activities required for state organizations to adopt the information security and privacy model defined by the digital government policy.

3.5. Implementation plan

The implementation of the IT governance model in support of the MSPI for Colombian state entities begins with the creation of the appropriate environment for applying the model, which allows the initiative to be backed by stakeholders. The base implementation guide for the proposed model is the one proposed by the ITGI [22], in which seven (7) phases are established.

3.5.1. Phase 1: Obtain the commitment of senior management. The objective of this phase is to obtain the endorsement and support of the organization's top management for the implementation of the IT governance model in support of the MSPI, as well as its dissemination among stakeholders (interested parties).

3.5.2. Phase 2: Determine the current status. The objective of this phase is to establish, through a diagnosis, the current maturity level of IT governance in the organization and the desired level to be achieved.

3.5.3. Phase 3: Establish the desired future state.
Determine the desired maturity status of IT governance for the organization, according to the diagnosis made in the previous phase.

3.5.4. Phase 4: Identify the gaps. Based on the diagnosis made in the two previous phases, by means of which the current and desired levels of IT maturity in the organization are obtained, the identification of the gaps to be closed is performed (as sketched in the example below) to give continuity to the implementation of the model.

3.5.5. Phase 5: Define the implementation plan. The objective of this phase is to determine the implementation plan or program to follow in order to achieve the proposed objectives.

3.5.6. Phase 6: Develop the implementation plan. In this phase, execution of the plan established in the previous phase begins.

3.5.7. Phase 7: Monitor and control the performance of the implementation. Establish a periodic review program for each of the projects, which allows validating compliance with the proposed objectives.
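To make Phases 2 to 4 concrete, the following is a minimal sketch, in Python, of the gap analysis between the current and desired IT governance maturity levels. The process identifiers and maturity values are hypothetical placeholders for illustration only; they are not prescribed by the model described above.

# Minimal gap-analysis sketch for Phases 2 to 4 (hypothetical data).
# Maturity is scored on a 0-5 scale, as commonly used with COBIT
# process capability assessments.

CURRENT = {  # Phase 2: diagnosed current maturity (hypothetical values)
    "EDM01 Governance framework": 1,
    "EDM03 Risk optimisation": 2,
    "APO13 Security management": 1,
    "DSS05 Security services": 2,
}

DESIRED = {  # Phase 3: target maturity agreed with senior management
    "EDM01 Governance framework": 3,
    "EDM03 Risk optimisation": 3,
    "APO13 Security management": 4,
    "DSS05 Security services": 3,
}

def identify_gaps(current, desired):
    """Phase 4: return (process, current, desired, gap) rows, largest gap first."""
    rows = [
        (name, current[name], desired[name], desired[name] - current[name])
        for name in desired
    ]
    return sorted(rows, key=lambda row: row[3], reverse=True)

if __name__ == "__main__":
    for name, cur, tgt, gap in identify_gaps(CURRENT, DESIRED):
        print(f"{name}: current={cur} desired={tgt} gap={gap}")

The output of such a script would feed Phase 5, where closing the largest gaps is scheduled first in the implementation plan.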
2019-11-28T12:48:26.323Z
2019-11-01T00:00:00.000
{ "year": 2019, "sha1": "de7975998be189609875527bd3beefe127407d41", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1409/1/012005", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "2e703a7302bc9283dd60df2686213bdcecd03822", "s2fieldsofstudy": [ "Computer Science", "Political Science" ], "extfieldsofstudy": [ "Business", "Physics" ] }
119732622
pes2o/s2orc
v3-fos-license
Maximal $L^p$-regularity for perturbed evolution equations in Banach spaces The main purpose of this paper is to investigate the concept of maximal $L^p$-regularity for perturbed evolution equations in Banach spaces. We mainly consider three classes of perturbations: Miyadera-Voigt perturbations, Desch-Schappacher perturbations, and more general Staffans-Weiss perturbations. We introduce conditions under which maximal $L^p$-regularity is preserved under these kinds of perturbations. We give examples for a boundary perturbed heat equation in $L^r$-spaces and a perturbed boundary integro-differential equation. We mention that our results mainly extend those in the works: [P. C. Kunstmann and L. Weis, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 30 (2001), 415-435] and [B.H. Haak, M. Haase, P.C. Kunstmann, Adv. Differential Equations 11 (2006), no. 2, 201-240]. Introduction In this paper we investigate the maximal L^p-regularity of evolution equations of the type ż(t) = A_m z(t) + P z(t) + f(t), t ≥ 0, z(0) = 0, (1.1) where A_m : Z ⊂ X → X is a linear closed operator in a Banach space X with domain D(A_m) = Z, P : Z → X is an additive linear perturbation of A_m, G, K : Z → U are linear boundary operators (U is another Banach space), and f ∈ L^p(R_+, X), where p ≥ 1 is a real number. Actually, we assume that A := A_m with domain D(A) = ker(G) is the generator of a strongly continuous semigroup T := (T(t))_{t≥0} on X. The concept of maximal regularity has been the subject of several works for many years, e.g. [8,12,9,10], and the monograph [11]. The main purpose of these works is to give sufficient conditions on the operator A so that the problem (1.1) with P ≡ 0 and K ≡ 0, which can be written as ż(t) = A z(t) + f(t), z(0) = 0, (1.2) has the maximal L^p-regularity, and so that the perturbed problem has the same property as well. In addition, if we assume that (A, B, P|_{D(A)}) generates a regular linear system on X, U, X, then P is a p-admissible observation operator for A, and then the problem (1.4) is well-posed and has the maximal L^p-regularity; see Theorem 3.4 and Theorem 4.15. Let us now assume that the boundary operator K is unbounded, K : Z ⊂ X → U. This situation is quite difficult and needs additional assumptions in order to treat well-posedness and maximal L^p-regularity. According to [17], if we assume that (A, B, K|_{D(A)}) is regular on X, U, U with I_U : U → U as an admissible feedback, then the problem (1.5) is well-posed on the Banach space X. Moreover, if the problem (1.2) has the maximal L^p-regularity and ‖λD_λ‖ ≤ κ for any Re λ > λ₀, where λ₀ ∈ R and κ > 0 are constants, then the problem (1.5) also has the maximal L^p-regularity on a non-reflexive Banach space X; see Theorem 4.17. On the other hand, assume that (A, B, P|_{D(A)}) generates a regular linear system on X, U, X. Then, in Theorem 3.4, we prove that the problem (1.4) is well-posed, and Corollary 4.22 shows that the problem (1.4) has the maximal L^p-regularity. If X is a UMD space, then we use R-boundedness to prove the maximal L^p-regularity for the evolution equation (1.4); see Theorem 4.20 and Corollary 4.22. We mention that in [18], the authors proved perturbation theorems for sectoriality and R-sectoriality in general Banach spaces. They give conditions on intermediate spaces Z and W such that, for an operator S : Z → W of small norm, the operator A + S is sectorial (resp. R-sectorial) provided A is sectorial (resp. R-sectorial). Their results are obtained by factorizing S = BC. As R-sectoriality implies maximal regularity in UMD spaces, these theorems yield maximal-regularity perturbation results only in UMD spaces. In Section 5, we use product spaces and Bergman spaces to reformulate boundary perturbed integro-differential equations as our abstract boundary evolution equation (1.1). This allows us to translate the results on well-posedness and maximal L^p-regularity obtained for the problem (1.1) to integro-differential equations. In the next section, we first recall the necessary material about feedback theory of infinite-dimensional linear systems. We then use this theory to prove the well-posedness of the evolution equation (1.1) in Section 3. Our main results on maximal L^p-regularity for the problem (1.1) are gathered in Section 4. The last section is devoted to applying the obtained results to perturbed integro-differential equations. Given a semigroup T := (T(t))_{t≥0} generated by an operator A : D(A) ⊂ X → X, we will always denote by ω₀(T) (or ω₀(A)) the growth bound of this semigroup. The resolvent set of A is denoted by ρ(A). We denote the resolvent operator of A by R(λ, A) := (λ − A)⁻¹ for any λ ∈ ρ(A), where the notation λ − A means λI − A.
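Since several displays in this introduction are garbled in this copy, we record a reconstruction of the problem family referred to above. The boundary condition in (1.1) and the exact shapes of (1.4) and (1.5) are inferences from the roles assigned to G, K, P and A in the surrounding text, not verbatim quotations:

\[ (1.1)\quad \dot z(t) = A_m z(t) + P z(t) + f(t), \quad G z(t) = K z(t), \quad t \ge 0, \qquad z(0) = 0, \]
\[ (1.2)\quad \dot z(t) = A z(t) + f(t), \quad z(0) = 0, \qquad A := A_m|_{\ker G}, \]
\[ (1.4)\quad \dot z(t) = (\mathcal{A} + P)\, z(t) + f(t), \quad z(0) = 0, \]
\[ (1.5)\quad \dot z(t) = \mathcal{A} z(t) + f(t), \quad z(0) = 0, \qquad \mathcal{A} := A_m|_{\{x \in Z \,:\, G x = K x\}} . \]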
In Section 5, we have used product spaces and Bergman spaces to reformulate boundary perturbed intego-differential equations as our abstract boundary evolution equation (1.1). This allows us to translate the results on well-posedness and maximal L p -regularity obtained for the problem (1.1) to intego-differential equations. In the next section, we first recall the necessary material about feedback theory of infinite dimensional linear systems. We then use this theory to prove the well-posedness of the evolution equation (1.1) in Section 3. Our main results on maximal L p -regularity for the problem (1.1) are gathered in Section 4. The last section is devoted to apply the obtained results to perturbed intego-differential equations. Given a semigroup T := (T(t)) t 0 generated by an operator A : D(A) ⊂ X → X, we will always denote by ω 0 (T)(or ω 0 (A)) the growth bound of this semigroup. The resolvent set of A is denoted by ρ(A). Preferably, we denote the resolvent operator of A by R(λ, A) := (λ − A) −1 for any λ ∈ ρ(A), where the notation λ − A means λI − A. Feedback theory of infinite dimensional linear systems In this section, we gather definitions and results from feedback theory of infinite dimensional linear systems mainly developed in the references [30,31,32,37]. We also give some new development of this theory. Hereafter, X and U are Banach spaces and p ∈ [1, ∞[. It is known (see e.g. [30,31]) that partial differential equations with boundary control and point observation can be reformulated as the following distributed linear system where A : D(A) ⊂ Z ⊂ X → X is the generator of a strongly continuous semigroup T := (T(t)) t 0 on X with Z is a Banach space continuously and densely embedded in X, B ∈ L(U, X −1 ) is a control operator such that R(λ, A −1 )B ∈ L(U, Z), λ ∈ ρ(A), and K ∈ L(Z, U) is an observation operator. Here X −1 is the completion of X with respect to the norm R(λ, A) · . We recall that we can extend T to another strongly continuous semigroup T −1 := (T −1 (t)) t 0 on X −1 with generator A −1 : X → X −1 , the extension of A to X (see e.g. [13, chap.2]). The mild solution of the system (2.1) is given by: where the integral is taken in X −1 . Formally, the well-posedness of the system (2.1) means that the state satisfies x(t) ∈ X for any t 0, the observation function y is extended to a locally p-integrable function y ∈ L p loc ([0, ∞), U) satisfying the following property: for any τ > 0, there exists a constant c τ > 0 such that for any initial state x 0 ∈ X and any control function u ∈ L p loc ([0, ∞), U). In order to mathematically explain this concept, let us define We also need the following definition. (i) B ∈ L(U, X −1 ) is called p-admissible control operator for A, if there exists t 0 > 0 such that : . We also say that (A, B) is p-admissible. (ii) C ∈ L(D(A), Y ) is called p-admissible observation operator for A, if there exist α > 0 and κ := κ α > 0 such that: for all x ∈ D(A). We also say that (C, A) is p-admissible. Let us now describe some consequences of this definition. If B is p-admissible control operator for A, then by the closed graph theorem one can see that for any t 0, This implies that the state of the system (2.1) satisfies x(t) = T(t)x 0 + Φ t u ∈ X for any t 0, x 0 ∈ X and u ∈ L p loc ([0, ∞), U). According to [36], for all 0 < τ 1 τ 2 , Now if C is p-admissible observation operator for A, then due to (2.3), the map Ψ ∞ : For any x ∈ X and t 0, we define the family Then for all t 0, ) . 
On the other hand, let us consider the linear operator Clearly, D(A) ⊂ D(C Λ ) and C Λ = C on D(A). This shows that C Λ is in fact an extension of C, called the Yosida extension of C w.r.t. A. We note that if C is p-admissible for A, then T(t)X ⊂ D(C Λ ) and for any x ∈ X and a.e. t > 0. In the sequel, we assume that B and C are p-admissible for A and set Remark that for any u ∈ W 2,p 0,loc ([0, ∞), U), t 0 and by assuming 0 ∈ ρ(A) (without loss of generality) and using an integration by parts, we have On the other hand, using the fact that KR(0, A −1 )B ∈ L(U), CR(0, A) ∈ L(X, U) and (2.4), the application (t → KΦ t u) ∈ L p loc ([0, ∞), U) for any u ∈ W 2,p 0,loc ([0, ∞), U). Thus we have defined an application Definition 2.2. [35] Let B and C be p-admissible control and observation operators for A, respectively. We say that the triple (A, B, C) generates a well-posed system Σ on X, U, U, if the operator F ∞ defined by (2.5) satisfies the following property: For any α > 0 there exists a constant ϑ α > 0 such that for all u ∈ W 2,p 0,loc ([0, ∞), U), The operator F ∞ is called the extended input-output operator of Σ. If (A, B, C) generates a well-posed system Σ on X, U, U, then we have two folds: first the state of (2.1) satisfies x(t) ∈ X for all t 0, and second F ∞ have an extension F ∞ ∈ L(L p loc ([0, ∞), U)), due to (2.6). Observe that the observation function y verifies y(·) := y(·; for all . We now turn out to give a representation of the observation function y in terms of the observation operator C and the state x(·). To that purpose Weiss [37,38] introduced the following subclass of well-posed linear systems. Definition 2.3. Let (A, B, C) generates a well-posed system Σ on X, U, U with extended input-output operator F ∞ . This system is called regular (with feedthrough D = 0) if : with u z 0 (s) = z 0 for all s 0, is a constant control function. According to Weiss [37,38], if (A, B, C) generates a regular system Σ on X, U, U, then the state and the observation function of the linear system (2.1) satisfy for any initial state x(0) = x 0 ∈ X, any control function u ∈ L p ([0, ∞), U) and a.e. t 0. Definition 2.4. Let a triple (A, B, C) generates a well-posed system Σ on X, U, U with extended input-output operator F ∞ . Define The identity operator I U : U → U is called an admissible feedback for Σ if the operator , U) admits a (uniformly) bounded inverse for some t 0 > 0 (hence all t 0 > 0). A consequence of Definition 2.4 is that the feedback law u = y(·; x 0 , u) has a sense. In fact, due to (2.7) this is equivalent to ( , then the equation u = y(·; x 0 , u) has a unique solution and this solution u ∈ L p ([0, τ ], U) is given also by a.e. t 0, due to (2.8). Using (2.2), the state x(·) satisfies the following variation of constants formula for any x 0 ∈ X and any t 0. Now we set Then by using the definition of C 0 -semigroups one can see that (T cl (t)) t 0 is a C 0semigroup on X. More precisely, we have the following perturbation theorem due to Weiss [37] in Hilbert spaces and to Staffans [31,Chap.7] in Banach spaces. generates a C 0 -semigroup (T cl (t)) t 0 on X such that range(T cl (t)) ⊂ D(C Λ ) for a.e. t > 0, and for any α > 0, there exists c α > 0 such that for all x 0 ∈ X, Moreover, this semigroup satisfies In addition (A cl , B, C Λ ) generates a regular system Σ cl . Definition 2.6. Let (A, B, C) generates a regular linear system on X, U, U with the identity operator I U : U → U as an admissible feedback. The operator is called the Staffans-Weiss perturbation of A. 
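Two operators are elided in the preceding passage: the Yosida extension of C with respect to A, and the operator named in Definition 2.6. We record them here in their standard form; the domain in the second display is our reading of Theorem 2.5 and Definition 2.6, to be checked against [31, 37]:

\[ C_\Lambda x := \lim_{\lambda \to +\infty} C\, \lambda R(\lambda, A)\, x, \qquad D(C_\Lambda) := \{ x \in X : \text{the limit exists in } U \}, \]
\[ A^{cl} x := (A_{-1} + B\, C_\Lambda)\, x, \qquad D(A^{cl}) := \{ x \in D(C_\Lambda) : (A_{-1} + B C_\Lambda)\, x \in X \}. \]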
It is not difficult to see that if one of the operators B or C is bounded (i.e. B ∈ L(U, X) or C ∈ L(X, U)) and the other is p-admissible then the triple (A, B, C) generates a regular linear system on X, U, U with the identity operator I U : U → U as an admissible feedback. As application of the Staffans-Weiss theorem (Theorem 2.5), we distinct two subclasses of perturbations as follows: Remark 2.7. (i) We take B ∈ L(X, U) and C ∈ L(D(A), U) a p-admissible observation operator for A. According to Theorem 2.5, the operator A cl := A + BC with domain D(A cl ) = D(A) is a generator of a strongly continuous semigroup T cl := (T cl (t)) t 0 on X such that T cl (t)X ⊂ D(C Λ ) for a.e. t > 0, the estimate (2.10) holds, and On the other hand, it is shown in [15], that the semigroup T cl satisfy also the following formula Using Hölder inequality on can see that there exists α 0 > 0 and γ ∈ (0, 1) such that for all x ∈ D(A). The following operator is a Miyadera-Voigt perturbation for A; (see e.g. [13, p.195]). (ii) We take C ∈ L(X, U) and B ∈ L(U, X −1 ) a p-admissible control operator for A. Then the part of the operator A −1 + BC in X generates a strongly continuous semigroup on X satisfying all properties of Theorem 2.5. In this case the operator P ds := BC : X → X −1 is called Desch-Schappacher perturbation for A (see e.g. [13, p.182]). Well-posedness of perturbed boundary value problems The object of this section is to investigate the well-posedness of the perturbed boundary value problem defined by (1.1). We first rewrite (1.1) as non-homogeneous perturbed Cauchy problem of the form (1.4). Then the well-posedness of (1.1) can be obtained if for example the operator generates a strongly continuous semigroup on X and that P is a p-admissible observation operator for A (see Remark 2.7 (i)). Recently, the authors of [17] introduced conditions on A m , G and K for which A is a generator. To be more precise, assume that (H1) G : Z → U is onto, and (H2) the operator defined by A := A m| ker(G) and D(A) := ker(G), generates a C 0 -semigroup (T(t)) t 0 . 8 According to Greiner [14], these conditions imply that for any λ ∈ ρ(A) the restriction of G to ker(λ − A m ) is invertible. We then define This operator is called the Dirichlet operator. Define the operators : where i is the canonical injection from D(A) to Z. In the rest of this paper, C Λ denotes the Yosida extension of C with respect to A. It is shown in [17, lem.3.6] that if A, B, C as above and if (A, B, C) generates a regular linear system Σ on X, U, U, then we have If H is the transfer function of Σ and α > ω 0 (A) then for any λ ∈ C with Reλ > α. Moreover, we have We have the following perturbation theorem (see [17] for the proof). Under the assumptions of Theorem 3.1, the mild solution of the problem (1.5) is given by for any t 0, x ∈ X and f ∈ L p (R + , X). Before giving another useful expression of z in term of the semigroup T, we need the following very useful result proved in [ Lemma 3.2. let (S(t)) t 0 be a strongly continuous semigroup on X with generator (G, D(G)). Let Υ ∈ L(D(G), X) be a p-admissible observation operator for G. Denote by Υ Λ the Yosida extension of Υ with respect to G. Then Proof. Let, by Theorem 3.1, T cl the semigroup generated by A and let z : [0, +∞) → X be the mild solution of the problem (1.5) given by (3.6). According to Theorem 2.5, we know that C Λ is an admissible observation operator for A. We denote by C Λ,A the Yosida extension of C Λ with respect to A. 
Then D(C Λ,A ) ⊂ D(C Λ ) and C Λ,A = C Λ on D(C Λ,A ). In fact, let x ∈ D(C Λ,A ) and s > 0 sufficiently large. Then by first taking Laplace transform on both sides of (2.11) and second applying sC Λ , we obtain where we have used (3.4). Remark that Hence, by (3.5) and the fact that The fact that C Λ is p-admissible for A, then by using (3.6) and Lemma 3.2, we obtain z(t) ∈ D(C Λ,A ) for a.e. t > 0. This shows that z(t) ∈ D(C Λ ) and C Λ z(t) = C Λ,A z(t) for a.e. t > 0. The estimation in (3.7) follows immediately from (2.10) and Lemma 3.2. Let us prove the last property in (3.7). By density there exists Using Hölder inequality, it is clear that z n (t) − z(t) → 0 as n → ∞. Now let us prove that z n satisfies the third assertion in (3.7). In fact, the estimate in (3.7) implies that On the other hand, using the expression of the semigroup T cl given in (2.11), change of variable and Fubini theorem we obtain (3.10) For simplicity we assume that 0 ∈ ρ(A). We then have Now replacing this in (3.10), and using (3.9), we have Put Then for any t ∈ [0, α], we have due to the admissibility of B for A and Hölder inequality. This shows that z n (t)−ϕ(t) → 0 as n → ∞, and hence z = ϕ. (ii) The boundary problem (1.1) is well-posed and has a mild solution z : [0, +∞) → X satisfying: Proof. (i) We first remark from (3.3) that Z ⊂ D(P 0,Λ ) and P = P 0,Λ on Z, where P 0,Λ denotes the Yosida extension of P w.r.t. A. Let x ∈ D(A) and α > 0. The facts that (A, B, P) is regular and (2.10), we have where β α > 0 is a constant. On the other hand, by (2.11), we have Hence the p-admissibility of P for A follows by (3.11) and the p-admissibility of P for A. Thus, according to Remark 2.7 (i), the operator (A + P, D(A)) generates a strongly continuous semigroup on X. The assertion (ii) follows from [15, thm.5.1] 4. Perturbation Theorems for maximal regularity 4.1. Maximal regularity. Let G : D(G) ⊂ X → X be the generator of a strongly continuous semigroup S := (S(t)) t 0 on a Banach space X. Consider the following nonhomogeneous abstract Cauchy problem: By "maximal" we mean that the applications f , Gz and z have the same regularity. Due to the closed graph theorem, if G ∈ MR p (0, T ; X) then for a constant C > 0 independent of f . It is known that a necessary condition for the maximal L p -regularity is that G generates an analytic semigroup. According to De Simon [10] this condition is also sufficient if X is a Hilbert space. On the other hand, it is shown in [12] . It is know ((see [33] (2.a) or [23] 1.5)) that G has maximal L p -regularity on [0, T ] if and only if (S(t)) t 0 is analytic and the operator R defined by extends to a bounded operator on L p ([0, T ]; X). As we will see in our main results, this characterization is very useful if one works in general Banach spaces. (ii) It is known (see [12]) that if G ∈ MR(0, T ; X) then for every λ ∈ C, G + λ ∈ MR(0, T ; X), hence without lost of generality, we will assume through this paper that our generators satisfy ω 0 (G) < 0. In order to recall another characterization of maximal regularity, we need some definitions. where S(R, X) is the Schwartz space. Classical UMD-spaces are Hilbert spaces and L p -spaces, where p ∈ (1, ∞). It is to be noted that every UMD-space is a reflexive space (see [2]). Rademacher variables). 13 where C Λ is the Yosida extension of C with respect to A, see Section 2. It is shown in [19, p.513 The following result is due to Weis [34] Theorem 4.6. 
Let G be the generator of a bounded analytic semigroup in a UMD-space The following remark will be useful in the last section Remark 4.7. Let X, Z, U be a Banach spaces such that Z ⊂ X with dense and continuous embedding, A m : Z → X be a closed differential operator and G : Z → U be a linear surjective operator. We assume that the following operator generates a strongly continuous semigroup T := (T(t)) t 0 on X. let D λ the Dirichlet operator associated with A and G (see Section 3). Moreover, we assume that the following operator is a p-admissible control operator for A. In addition, we assume that A has the maximal L p -regularity on X. Let us first show that the operator (−A) θ , for some θ ∈ (0, 1 p ), coincides with its Yosida extension with respect to A, that is: Finally, let us show that the triple (A, B, (−A) θ ) generates a regular system. In fact, we first prove that range(D µ ) ⊂ D((−A) θ ) (which is equivalent to the regularity of the system generated by (A, B, (−A) [13]), we have range(D µ ) ⊂ D((−A) θ ) and the closed graph theorem asserts that (−A) θ D µ ∈ L(U, X). By virtue of analyticity of the semigroup generated by A, ((−A) θ , A) are p-admissible. To show the well-posedness of the system generated by (A, B, (−A) θ ) we have only to show that the operator F ∞ defined by: is well defined and extends to a bounded operator on L p loc ([0, ∞), U). In fact, by integration by parts and assuming that 0 ∈ ρ(A) we have Maximal regularity of A shows the boundedness of F ∞ . This finishes the proof. 4.2. Perturbations that are p-admissible observation operators. In this part, we investigate maximal L p -regularity for the problem (1.1) in the case K = 0. This is equivalent to study a such property for the evolution equation (1.3). As we have seen in the introductory section, we continue to assume that P : Z ⊂ X → X and A := A m with domain D(A) = ker(G) is the generator of strongly continuous semigroup T := (T(t)) t 0 on X. We define P := P ı with ı : D(A) → X is a continuous injection. So that P ∈ L(D(A), X). We recall from Remark 2.7 (i) that if P is a p-admissible observation operator for A, then the following operator A P := A + P = (A + P )ı with domain D(A P ) := D(A) is the generator of a strongly continuous semigroup T p := (T P (t)) t 0 on X such that T P (t)X ⊂ D(P Λ ) for a.e. t > 0, and for all x ∈ X and t 0, where P Λ is the Yosida extension of P with respect to A. On the other hand, as shown in [15] for any f ∈ L p ([0, T ]) with T > 0, the mild solution of the evolution equation (1.3) satisfies z(s) ∈ D(P Λ ) for a.e. s 0, for any t 0. In addition if we denote by P Λ,A P the Yosida extension of P with respect to A p , then P Λ,A P = P Λ on D(P Λ ). So by using (4.4) and Lemma 3.2, there exists a constant c T > 0 independent of f such that (4.5) We now state the main result of this paragraph. Proof. Assume that A ∈ MR(0, T ; X), so that A generates an analytic semigroup on X. This shows that there exists ω ∈ R such that C ω := {λ ∈ C, Reλ > ω} ⊂ ρ(A) and for every λ ∈ C ω we have: On the other hand, for λ ∈ ρ(A), By the admissibility of P for A, there existsM > 0 such that Finally, for λ ∈ C α 0 we have This implies, by [29,Thm.12.13]; that (T P (t)) t 0 is analytic. We now define, for any f ∈ C([0, T ], D(A)), Due to (4.4), we obtain Using Remark 4.2 (i), the estimate (4.5) and Lemma 3.2; there exists a constantc T > 0 independent of f such that This ends the proof, due to Remark 4.2. 
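Several statements around Subsection 4.1 are elided in this copy. For reference, we record them here in standard form; the wording of Theorem 4.8 is our reconstruction from its proof and from the abstract:

\[ (4.1)\quad \dot z(t) = G z(t) + f(t), \quad t \in [0, T], \qquad z(0) = 0, \]

and G ∈ MR_p(0, T; X) means that for every f ∈ L^p([0, T]; X) the mild solution z of (4.1) satisfies z(t) ∈ D(G) for a.e. t ∈ [0, T] and ż, Gz ∈ L^p([0, T]; X). The operator R in the characterization quoted from [33] and [23] is

\[ (R f)(t) := G \int_0^t S(t - s)\, f(s)\, ds . \]

Theorem 4.6 (Weis [34], standard form). Let G generate a bounded analytic semigroup on a UMD space X and let p ∈ (1, ∞). Then G ∈ MR_p(0, T; X) if and only if the set \( \{ s\, R(is, G) : s \in \mathbb{R} \setminus \{0\} \} \) is R-bounded.

Theorem 4.8 (reconstructed). Let P ∈ L(D(A), X) be a p-admissible observation operator for A. If A ∈ MR(0, T; X), then A_P := A + P with D(A_P) = D(A) belongs to MR(0, T; X).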
(1) In the proof of Theorem 4.8, we have proved that for p-admissible observation operators P for A, the operator A generates an analytic semigroup on a Banach space X if and only if it is so for the operator A p . Hence if X is a Hilbert space, the maximal L p -regularity of A P is automatically guaranted by [10]. (2) As explained in Remark 2.7 (i), p-admissible observation operators are also Miyadera-Voigt perturbations operators for A. We mention that the authors of [24,Cor.4] have obtained a result on maximal L p -regularity under Miyadera-Voigt perturbations, where it is assumed that the state space X is reflexive (or UMD) and the perturbation P is a closed and densely defined operator and satisfies a very special Miyadera-Voigt condition. In our Theorem 4.8, X is supposed to be a general Banach space and the perturbation P is not closed and then with even minimum conditions we have obtained the maximal L p -regularity for A P . In the sequel we will also compare our result Theorem 4.8 with a result in [24, Thm.1] about small perturbations. To that purpose we need the following lemma. Next we will show that for x ∈ D(A) we have Jx = C(−A) −β x. This is equivalent to show that the operator C and the integral Γ (−µ) −β R(µ, A)xdµ commute. Since for all x ∈ D(A). This ends the proof. This implies that there exists a constant c > 0 such that P x c (−A) β x . As (−A) β is a small perturbation for A, then P is so. Now by applying [24, Thm.1], the operator A P is sectorial as well. But if A has the maximal L p -regularity, the result of [24, Thm.1] confirms that A p has also the maximal L p -regularity only if the state space X is a UMD space. However Theorem 4.8 shows that the maximal L p -regularity is preserved for A p even if we work in a general Banach space. This confirms that the p-admissibility for the perturbation operator is a very powerful tool to prove maximal L p -regularity in Banach spaces. 4.3. Desch-Schappacher perturbation. In this section we will discuss maximal L pregularity of the perturbed boundary problem (1.1) (or equivalently (1.4)) under conditions (H1) and (H2) as in Section 3 and when the boundary perturbation K satisfies the condition (H3) K : X → U is linear bounded (i.e. K ∈ L(X, U)). On the other hand, let B as in (3.2). We shall also consider the following assumption (H4) B is a p-admissible control operator for A. We first study the maximal L p -regularity for the evolution equation (1.5), where the operator (A, D(A)) is defined by (3.1). As K is bounded then, under the above conditions, the triple operator (A, B, K) generates a regular linear system on X, U, U with I U : U → U as admissible feedback. By Theorem 3.1, the operator A generates a strongly continuous semigroup T cl := (T cl (t)) 0 on X and then the unique mild solution (1.5) is given by for any t 0 and f ∈ L p ([0, T ], X) with T > 0. According to Proposition 3.3, this mild solution satisfies also Remark 4.12. Let us assume that T is an analytic semigroup on X and B satisfies the condition for any ω > ω 0 (A) and a constant κ > 0. Then without assuming the condition (H4), one can use the same argument as in [24,Thm.8] (in the case α = 0) and the fact that A coincides with the part of the operator A −1 + BK on X to prove that A generates an analytic semigroup on X. In the absence of analyticity of T one cannot prove this generation result. Observe that our condition (H4) implies the estimate (4.11), see e.g. [32, chap.3]. 
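The display (4.11), elided where Remark 4.12 introduces it, and the closed-loop resolvent identity used in the proof of Theorem 4.13 below, can be reconstructed as follows. The first is the standard estimate implied by the p-admissibility of B (see [32, chap. 3]); the second follows because A is the part of A_{-1} + BK in X:

\[ (4.11)\quad \| R(\lambda, A_{-1}) B \|_{\mathcal{L}(U, X)} \le \kappa\, (\mathrm{Re}\,\lambda - \omega)^{-1/p}, \qquad \mathrm{Re}\,\lambda > \omega, \]
\[ R(\lambda, \mathcal{A}) = \big( I - R(\lambda, A_{-1})\, B K \big)^{-1} R(\lambda, A) \quad \text{for } \mathrm{Re}\,\lambda \text{ sufficiently large}. \]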
With conditions (H1) to (H4) we have showed that A is a generator on X without assuming any analyticity of T, see Remark 2.7 (ii). Proof. Let us first show that the semigroup T cl generated by A is analytic. The condition A ∈ MR(0, T ; X) implies that the semigroup T is analytic. Hence, by [29,Thm. 12.31], we can find ω > ω 0 (A) and a constant c > 0 such that for and λ ∈ C such that Reλ > ω. Now due to (4.11), for Reλ > ω + (2κ K ) p =:ω, we have 1 ∈ ρ(R(λ, A −1 )BK) and (I − R(λ, A −1 )BK) −1 2. According to Theorem 3.1, we have {λ ∈ C : Reλ >ω} ⊂ ρ(A) and for some constant Reλ >ω, due to (4.12). This shows that (T cl (t)) t 0 is analytic, by [29,Thm 12.31 ]. Now define the following linear operators On the other hand, taking in to account that the function z is the solution of the evolution equation (1.5), using an integration by parts and the fact that range(D µ ) ⊂ ker(µ − A m ) for any µ ∈ ρ(A) we have for almost every t 0, and all f ∈ C([0, T ], D(A)). Now the identity (4.13) becomes for almost every t 0, all µ ∈ ρ(A) and f ∈ C([0, T ], D(A)), where g µ := (I + D µ K)f + µKz(·). By assumption there exists c T > 0 such that Let ω > max{ω 0 (A), ω 0 (A)} and choose and fix µ > ω + (2c T κ K ) p , where the constant κ > 0 is given in (4.11). Then we have Now using (4.15), (4.9) and Hölder inequality, we obtain Now we define the operator for any t 0 and measurable functions g : [0, T ] → X. Using (4.14), we obtain Remark that the restriction of I − F µ to L p ([0, T ], X) is invertible, since by (4.15), we have Now as Rg µ ∈ L p ([0, T ], X) then by (4.17), we have R cl f ∈ L p ([0, T ], X) and Finally, using (4.16), we obtain The required result now follows by density. Remark 4.14. In [24,Rem.11], the authors showed that if A has a maximal L p -regularity on a UMD space X and a perturbation P : X → X −1 satisfies (−A −1 ) −1 P η with η small in some sense (see condition (7) in [24]), then the part of A −1 + P on X has also the maximal L p -regularity on X. The UMD property is an essential condition in [24] due to a Weis' perturbation theorem [34]. In our case, X is a general Banach space (not necessarily UMD). However, instead of the above condition on P we have assumed that the operator P = BK is p-admissible control operator for A (which is the case when B is so). This condition together with (4.11) easily imply the condition (7) in [24]. We now state the result giving the maximal L p -regularity for the systems (1.1) (or equivalently for the equation (1.4)) in the case when K ∈ L(X, U). This is equivalent to the Theorem 4.15. Let X, Z, U be Banach spaces such that Z ⊂ X (with continuous and dense embedding ), p ∈ (1, ∞) and consider the evolution equation (1.1) with bounded boundary perturbation operator K ∈ L(X, U). Assume that the conditions (H1) to (H4) are satisfied. Moreover, we assume that the triple (A, B, P) generates a regular linear system on X, U, X where P is the restriction of P to D(A). Then the operator (A + P, D(A)) is the generator of a strongly continuous semigroup on X. Moreover, if A ∈ MR(0, T ; X) then A + P ∈ MR(0, T ; X) (and hence the evolution equation (1.1) has the maximal L p -regularity). Proof. The fact that (A + P, D(A)) is a generator on X is already proved in Theorem 3.4. Now if A ∈ MR(0, T ; X), then A ∈ MR(0, T ; X), by Theorem 4.13. On the other hand, Theorem 3.4 shows that P is p-admissible observation operator for A. So, thanks to Theorem 4.8 we also have A + P ∈ MR(0, T ; X). Staffans-Weiss perturbation. 
In this part, we study maximal L p -regularity for the boundary perturbed equation (1.1) in the general case when the boundary perturbation K is unbounded. We then assume, as in the previous part of this paper, that (H1) and (H2) are satisfied. In addition we suppose the following condition (H3)' K : Z → U is linear bounded (i.e. K ∈ L(Z, U)). On the other hand, let B and C as in (3.2). We shall also consider the following assumption (H4)' the triple (A, B, C) generates a regular linear system on X, U, U with I U : U → U as an admissible feedback operator. 22 Theorem 4.16. Let X, Z, U be Banach spaces such that Z ⊂ X (with continuous and dense embedding) and let conditions (H1), (H2), (H3) ′ and (H4) ′ be satisfied. Then the operator (A, D(A)) defined by (3.1) generates a strongly continuous semigroup which is analytic whenever the semigroup generated by A is. Proof. According to Theorem 3.1 (i) A is a generator of a strongly continuous semigroup T cl := (T cl (t)) t 0 on X. Now assume that A generates an analytic semigroup T on X. Then there exist constants β ∈ R and M 1 > 0 such that C β ⊂ ρ(A) and On the other hand, let us prove that the admissibility of B and C for A, imply that where 1 p + 1 q = 1. In fact, we will give a slight modification of the proof given in [7, lem.1.6]. Since A generates a bounded analytic semigroup there exist ω ∈] π 2 , π[ such that σ(A) ⊂ C\Σ ω . Let γ ∈] π 2 , ω[ and Γ the path defined by Γ = {re ±iγ , r > 0} We can see easily that |Rez| |z| = sin γ. By virtue of the resolvent equation and using the analyticity of the semigroup, we obtain: is analytic, then for all z ∈ C 0 we have Since ϕ is bounded on Γ, then it is bounded on C 0 and this is what we want. The other estimation is obtained by the same arguments. Let ω 1 := max{ω 0 (A), ω 0 (A)}. From Theorem 3.1 (ii) we know that for any λ ∈ C ω 1 , we have On the other hand, (I U − C Λ D λ ) −1 = I + H cl (λ), where H cl is the transfer function of the (closed-loop) regular linear system generated by (A, B, C Λ ). Hence there exists α > ω 0 (A) such that Now let ω 2 := max{0, α, β, ω 1 }. Then by using (4.18), (4.19), (4.20) and (4.21), we obtain The following result shows the maximal regularity of the perturbed boundary value problem (1.1) in the case of P ≡ 0. If A ∈ MR(0, T ; X) then A ∈ MR(0, T ; X). Proof. As A ∈ MR(0, T ; X), there exists R ∈ L(L p ([0, T ], X)) such that for all f ∈ L p ([0, T ], X) and a.e t 0. By Theorem 4.16, A generates an analytic semigroup T cl on X. We then can define the following operator Our objective is to show that the operator R cl admits a bounded extension on L p ([0, T ]; X). On the other hand, we define the Yosida approximation operators of A by A n := nAR(n, A n ) = n 2 R(n, A n ) − nI, for any n ∈ N such that n > ω 0 (A). From (4.20) and for any sufficiently integer n, one can write A n = nAR(n, A) + n 2 D n (I − C ∧ D n ) −1 CR(n, A) . We also set 24 for f ∈ C([0, T ]; D(A)) and n ∈ N such that n > ω 0 (A). We have (see [13]), for every t ∈ [0, T ]. Using Proposition 3.3, we have Using (4.23), for large n, , for a constant c T > 0 independent of f . On the other hand, (4.21), (4.22) and Lemma 3.2, where c T,1 is a constant independent of f . We estimate I 3 n (t) by 25 due to (4.22) and Proposition 3.3, where c T,2 > 0 is a constant independent of f . Similarly, for some constantC p > 0 depending on p and independent of f . Thus R cl can be extended to a bounded operator on L p ([0, T ]; X). Remark 4.18. 
Remark 4.18.

Here we will show that the result of Theorem 4.17 can hold only in non-reflexive Banach spaces. To that purpose, we define, for α ∈ (0, 1), the Favard space of order α associated to A. Now the assumption sup Reλ>λ 0 ∥λR(λ, A −1 )B∥ < +∞ in the previous theorem implies that range(B) ⊂ F 1 A −1 (see [25, Remark 10]), and it is important to remark that the control operator B is strictly unbounded, i.e. range(B) ∩ X = {0}, since it comes from the boundary (see [30]). These facts force us to work in non-reflexive Banach spaces, because if X is reflexive, it is well known that F 1 A −1 = X; thus range(B) ⊂ X, and this cannot be true.

Now we state the previous theorem in the case of X being a GT-space (e.g. if X = L 1 or X = C(K); see for instance [21]), which is a non-reflexive space.

Corollary 4.19. Let X, Z, U be Banach spaces such that Z ⊂ X (with continuous and dense embedding) and let conditions (H1), (H2), (H3)' and (H4)' be satisfied, such that either X or X * is a GT-space. Let the operator (A, D(A)) be defined by (3.1). Assume additionally that there exists λ 0 ∈ R such that sup Reλ>λ 0 ∥λR(λ, A −1 )B∥ < +∞ and that A is an H ∞ -sectorial operator on X with ω H (A) < π/2. Then A ∈ MR(0, T ; X).

Proof. According to Theorem 7.5 in [21], if A is an H ∞ -sectorial operator on X with ω H (A) < π/2 then A has maximal L p -regularity for all 1 < p < ∞, which implies, by Theorem 4.17, that A cl ∈ MR(0, T ; X).

The next theorem presents a perturbation result on UMD spaces.

Proof. Let A ω and A ω be the corresponding shifted operators, respectively. These operators are generators of analytic semigroups on X. We first observe that A ω ∈ MR(0, T ; X). To prove our theorem it suffices to show that A ω ∈ MR(0, T ; X). It is not difficult to show that (A ω , B, C) is also a regular linear system on X, U, U with the identity operator I U : U → U as an admissible feedback. Now, according to Theorem 2.5, the corresponding closed-loop operator is a generator, and A cl,ω = A ω due to Theorem 3.1 (i). As in (4.20), we have the resolvent identity, where H ω (λ) = C Λ R(λ, A ω −1 )B, λ ∈ ρ(A), is the transfer function of the regular linear system generated by (A ω , B, C). Using the assumptions, the equation (4.24) and Theorem 4.6, it suffices to show that the set {(I − H ω (is)) −1 : s ≠ 0} is R-bounded. In fact, by Theorem 3.1 (i) and the condition (H4)', the triple operator (A ω , B, C Λ ) generates a regular linear system with transfer function H cl,ω , which implies that (I − H ω (is)) −1 = I + H cl,ω (is). According to Remark 4.5, the set {H cl,ω (is) : s ≠ 0} is R-bounded. Hence {(I − H ω (is)) −1 : s ≠ 0} is R-bounded. This ends the proof.

The next result assumes, in addition, that there exist constants ω > max{ω 0 (A), ω 0 (A)} and α ∈ (1/p, 1) such that the set {s α R(ω + is, A −1 )B : s ≠ 0} is R-bounded; under this assumption, if A ∈ MR(0, T ; X) then A ∈ MR(0, T ; X).

Proof. Let the operators A ω and A ω be as in the proof of Theorem 4.20. Let s ∈ R\{0} and α ∈ (1/q, 1). According to Theorem 4.6 it suffices to show that the set {sR(is, A ω ) : s ≠ 0} is R-bounded. In fact, as in (4.24), we obtain the corresponding identity. By the proof of Theorem 4.20, we know that the set {(I − H ω (is)) −1 : s ≠ 0} is R-bounded. Now, as by assumption the set {s α R(is, A ω −1 )B : s > 0} is R-bounded, it suffices to show that the set {s 1−α CR(is, A ω ) : s > 0} is R-bounded. By [24, Lemma 10], the relevant sets are R-bounded. On the other hand, by Lemma 4.10 the operator C(−A ω ) −α has a bounded extension to X. Hence the set {s 1−α CR(is, A ω ) : s > 0} is R-bounded. This ends the proof.

We end this section with the following result, giving maximal L p -regularity for the evolution equation (1.1) (or, equivalently, (1.4)).
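The displayed definition of the Favard space invoked in Remark 4.18 above is missing from this copy. For reference, the standard definition (see, e.g., the semigroup literature) reads as follows in LaTeX; it is consistent with the reflexivity argument used in the remark.

```latex
% Favard space of order \alpha \in (0,1] associated with A (semigroup T):
F_\alpha^{A} := \Big\{ x \in X \;:\; \sup_{t \in (0,1]} t^{-\alpha} \, \| T(t)x - x \| < \infty \Big\}.

% Applied to the extrapolated semigroup T_{-1} on X_{-1}, one always has
% X \subset F_1^{A_{-1}}, while for reflexive X
F_1^{A_{-1}} = D(A_{-1}) = X,
% which is why range(B) \subset F_1^{A_{-1}} together with
% range(B) \cap X = \{0\} forces X to be non-reflexive.
```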
Applications

The object of this section is to apply our results to the problem of maximal L p -regularity for integro-differential equations and boundary integro-differential equations. We thereby extend some results of [4].

5.1. Maximal regularity for free-boundary integro-differential equations. Let X 0 , U 0 , Z 0 be Banach spaces such that Z 0 ⊂ X 0 with continuous and dense embedding, and let q ∈ (1, ∞). We consider the following problem, where A m : Z 0 → X 0 is a closed linear differential operator, F : Z 0 → X 0 a linear operator, G : Z 0 → U 0 is a (trace) linear boundary operator, and a(·) a certain measurable function. The main purpose of this subsection is to apply the abstract results on maximal regularity developed in Section 4 to the integro-differential equation (5.1). We first assume that the following conditions hold. On the other hand, we denote F 0 := F ı, where ı : D(A 0 ) → X 0 is the continuous injection. We now introduce the Banach product space X q := X 0 × B q h,X 0 and consider the matrix operator A on X q .

Theorem 5.3. Let X 0 be a UMD space, s > 1, and let h be an admissible function satisfying the appropriate condition. Let p > 1 and set q = ps/(s − 1). Assume that a(·) ∈ B q h,C and that F 0 ∈ L(D(A 0 ), X 0 ) is a p-admissible observation operator for A 0 . If A 0 has maximal L p -regularity on X 0 , then A has maximal L p -regularity on X q .

Proof. In [5], the author showed that (d/dz, D(d/dz)) has maximal L p -regularity on B q h,X 0 . By assumption, A 0 has maximal L p -regularity on X 0 ; it is then easy to see that A has maximal L p -regularity on X q . Since P is p-admissible for A and A = A + P, Theorem 4.8 guarantees that A has maximal L p -regularity on X q .

Remark 5.4. With the notation of [5, Thm. 3.3], we have B(·) = a(·)F; but with the concept of admissibility of the operator F, and according to Lemma 4.10 and Remark 4.11, B(·) becomes small with respect to A 0 in the sense of (i) in [5, Thm. 3.3]. Thus the result of [5] can be obtained for this class of Volterra equations. Further, the author of [5] proved that condition (ii) of [5, Thm. 3.3] is also sufficient to obtain the required result, but it is not clear to us how the author of [5, Thm. 3.3] can use [23, Corollary 12] to obtain it. However, under condition (ii), the author of [5, Thm. 3.3] can get the desired result by observing only that (ii) easily implies (i). Our result on maximal L p -regularity of the free-boundary Volterra equation (5.1) is not very strong, since the author of [5] shows it for a class of operators F that are not necessarily admissible. However, we show that the next example cannot be treated by [5, Thm. 3.3].

5.2. Boundary perturbation of an integro-differential equation: We are still working in the setting of the previous section, and we now study the problem (5.6). This integro-differential equation is similar to that investigated in the previous subsection. We then use the same notation for the Bergman space and the product spaces.
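The product-space and matrix-operator displays in these subsections are missing from this copy. The following LaTeX sketch records the standard construction by which such Volterra problems are recast as an abstract Cauchy problem on a product space; the coupling entries shown are schematic, and the paper's exact operator may differ in detail.

```latex
% State space: pair the instantaneous state with a history function,
\mathcal{X}_q := X_0 \times B^q_{h, X_0},
% and consider a matrix operator whose diagonal carries A_0 and the
% generator d/dz of the history shift, coupled through a(\cdot)F:
\mathcal{A} := \begin{pmatrix} A_0 & \delta_0 \\ a(\cdot)\, F & \frac{d}{dz} \end{pmatrix},
% so that the integro-differential equation becomes the Cauchy problem
\dot{w}(t) = \mathcal{A}\, w(t), \qquad w(0) = w_0 \in \mathcal{X}_q .
```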
On X = X 0 × B q h,X 0 , let us define the matrix operator G, where Υx = a(·)F x for x ∈ Z 0 . As discussed in the previous subsection, the maximal L p -regularity of the integro-differential equation (5.6) reduces to finding conditions under which the operator G is a generator on X q and has maximal L p -regularity on X q . We introduce the following assumptions. We also need the following hypotheses:

(A3) the triple operator (A 0 , B 0 , K 0 ) generates a regular linear system on X 0 , U 0 , U 0 with the identity operator I U 0 : U 0 → U 0 as an admissible feedback.

(A4) the triple operator (A 0 , B 0 , F 0 ) generates a regular linear system on X 0 , U 0 , X 0 .

The following result shows the generation property of the operator (G, D(G)).

Proposition 5.5. Let assumptions (A1) to (A4) be satisfied. Then the operator (G, D(G)) generates a strongly continuous semigroup on X q .

Proof. Assumptions (A1) to (A3) show that the operator (A, D(A)) generates a strongly continuous semigroup on X 0 ; see Theorem 3.1. On the other hand, if in addition we consider assumption (A4), then, as in the proof of Theorem 3.4 (i), one can see that the operator F ∈ L(D(A), X 0 ) is a p-admissible observation operator for A. Hence the rest of the proof follows in exactly the same way as in the previous subsection.

The main result of this subsection is the following.

Theorem 5.6. Let p > 1 and set q = ps/(s − 1). Suppose that a(·) ∈ B q h,C , that assumptions (A1) to (A4) are satisfied, that A 0 generates a bounded analytic semigroup, and that there exists ω > max{ω 0 (A 0 ); ω 0 (A)} such that the sets {s 1/p R(ω + is, A 0,−1 )B 0 : s ≠ 0} and {s … } are R-bounded. Then G ∈ MR(0, T ; X q ).

Proof. According to Theorem 4.20, we have A ∈ MR(0, T ; X 0 ). On the other hand, as mentioned in the proof of Proposition 5.5, the operator F is a p-admissible observation operator for A. We then follow the same technique as in the proof of Theorem 5.3 to conclude that G ∈ MR(0, T ; X q ).

Remark 5.7. When perturbing the boundary conditions of the integro-differential Volterra equation (5.1), even if the operator F is a small perturbation of A 0 , which in turn implies that B(·) = a(·)F satisfies condition (i) of [5, Thm. 3.3] with respect to A 0 , B(·) does not necessarily satisfy condition (i) of [5, Thm. 3.3] with respect to A. Hence, the result of [5, Thm. 3.3] is not applicable. It is to be noted that if we assume that A 0 generates an analytic semigroup on X 0 and that F is a p-admissible observation operator for A 0 (in particular, it is a small perturbation of A 0 , due to Remark 4.11), then F is p-admissible for A (of course under conditions (A3) and (A4)), so that F can be considered a small perturbation of A, due to Remark 4.11. Even so, one cannot use [5, Thm. 3.3] to conclude that the integro-differential Volterra equation (5.6) has maximal L p -regularity, because for Barta [5] it is not clear that A has maximal L p -regularity on X.

5.3. Example of an IDE with perturbed boundary conditions: Let r, q ∈ (1, ∞) such that 1/r + 1/q = 1. In this section, we first deal with the following PDE:
2018-10-21T14:47:17.000Z
2018-10-21T00:00:00.000
{ "year": 2018, "sha1": "207fc2cd9d66e4f91347712515afbc1720523950", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "207fc2cd9d66e4f91347712515afbc1720523950", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
7318745
pes2o/s2orc
v3-fos-license
The impact of the time of drug administration on the effectiveness of combined treatment of hypercholesterolemia with Rosuvastatin and Ezetimibe (RosEze): study protocol for a randomized controlled trial Background Hypercholesterolemia is one of the main risk factors for cardiovascular disease. The first-line treatment for hypercholesterolemia is statin therapy. When the expected low-density lipoprotein cholesterol (LDL-C) concentration is not achieved, the pharmacotherapy may be extended by combining the statin with the cholesterol absorption inhibitor ezetimibe. Methods/design The study is designed as a randomized, open-label, single-center, crossover study evaluating the effectiveness of combined therapy with rosuvastatin and ezetimibe for hypercholesterolemia. The study is planned to include 200 patients with hypercholesterolemia ineffectively treated with statins for at least 6 weeks. After enrollment, participants are randomized into one of two arms receiving rosuvastatin and ezetimibe. In the first arm the study drug is administered in the morning (8:00 am) for 6 weeks and then in the evening for the next 6 weeks; in the second arm the study drug is administered at first in the evening (8:00 pm) for the first 6 weeks and then in the morning for the following 6 weeks. In order to minimize non-adherence to the treatment, all patients will receive the study drug free of charge. The primary outcome of the study is change in LDL-C at 6 and 12 weeks of the treatment, depending on the time of day of study drug administration. The secondary endpoints include change in total cholesterol, high-density lipoprotein (HDL) cholesterol, triglycerides, apolipoproteins ApoB and Apo AI, non-HDL cholesterol, small, dense (sd)-LDL cholesterol, lipoprotein(a), glucose, glycated hemoglobin, high-sensitivity C-reactive protein, aspartate aminotransferase, alanine aminotransferase, gamma-glutamyl transferase, and creatine kinase at 6 and 12 weeks of the study drug treatment, as well as assessment of plasma fluorescence using stationary and time-resolved fluorescence spectroscopy at baseline and at 6 and 12 weeks of the therapy. Discussion The RosEze trial is expected to demonstrate whether there is a significant difference in the effectiveness of the lipid-lowering therapy in reducing the concentration of cholesterol when the medications are taken in the morning compared with the evening time of day. Trial registration ClinicalTrials.gov, NCT02772640. Registered on 28 March 2016. Electronic supplementary material The online version of this article (doi:10.1186/s13063-017-2047-8) contains supplementary material, which is available to authorized users. Background Hypercholesterolemia is one of the main risk factors for cardiovascular disease (CVD) [1]. Despite enormous progress in the treatment of coronary artery disease (CAD), patients remain at risk of recurrence after surviving their first episode [2]. Hypercholesterolemia is a modifiable risk factor for CVD. Lifestyle changes [3], increased daily physical activity [4][5][6][7], as well as optimized diet [8][9][10][11][12] may lead to normalization of specific cholesterol fractions. This strategy, however, often fails or is not sufficient, thus creating the need for pharmacotherapy. The current guidelines recommend statins as the first-choice drugs for the treatment of hypercholesterolemia up to the highest recommended dose or the highest tolerable dose (class of recommendation I, level of evidence A) [2].
According to a meta-analysis of studies assessing statins, each 1.0 mmol/L (~40 mg/dL) reduction in low-density lipoprotein cholesterol (LDL-C) corresponds to a 10% reduction in all-cause mortality and a 20% reduction in the number of deaths from CAD [13]. Furthermore, each 1 mmol/L (40 mg/dL) reduction in LDL-C translates into a 23% and 17% reduction of the risk of major coronary events and stroke, respectively. Similar results concerning the efficacy and safety of lipid-lowering therapy using statins were obtained in meta-analyses of studies on primary prevention [14][15][16]. Statins are a heterogeneous group of drugs with respect to their LDL-C reduction power. To date, the most potent statin is rosuvastatin. However, despite intensive statin therapy, only a small group of patients (approximately 20%) reach the therapeutic lipid-lowering goal [17][18][19][20]. When the LDL-C goal is not achieved, the combination of a statin with a cholesterol absorption inhibitor, ezetimibe, may be considered (class of recommendation IIa, level of evidence B) [2]. Statin dose titration seems to be less effective compared with the combined therapy with statin and ezetimibe [21]. The combination of statin with ezetimibe reduces the LDL-C by an additional 15-20% [22]. Unfortunately, despite a wealth of evidence on the efficacy and effectiveness of statins in both primary and secondary prevention, statin adherence remains a consistent barrier, with rates below 50% demonstrated in several studies [2,23]. Adherence declines over the duration of treatment [2,[24][25][26][27][28], and this phenomenon is even more pronounced in patients treated for primary compared with secondary prevention of CVD [2]. It was demonstrated in a systematic review and meta-analysis that poor adherence is not limited to statins but extends to all medications used in secondary prevention for CVD [2,29,30]. Furthermore, non-adherence translates into increased healthcare costs, morbidity, hospital readmissions, and mortality [31][32][33][34][35]. There are many determinants of non-adherence to different medications, including statins [36][37][38]. One of the reasons for non-adherence is the large number of drugs taken daily by the patient. Thus, more benefit is achieved with combination drugs containing a statin and ezetimibe. Tablets comprising both of these drugs (statin and ezetimibe) simplify drug administration and increase the probability of compliance. Furthermore, the use of these tablets may translate into an increased probability of achieving therapeutic goals in hypercholesterolemia treatment [39]. Taking into account the metabolism of cholesterol and possible drug-drug interactions, it is recommended to administer simvastatin in the evening [40]. Rosuvastatin can be administered at any time of the day [41]. In our everyday practice we meet many patients with hypercholesterolemia treated with statins. All of them take the statin in the evening, whereas for combined treatment with ezetimibe they take the latter in the morning. Until now there have been no studies assessing the effectiveness of the combined treatment of hypercholesterolemia with rosuvastatin and ezetimibe according to the timing of drug administration. To fill this evidence gap, the goals of this study are to determine whether the time of day of rosuvastatin and ezetimibe administration plays any role in the effectiveness of the drug, and to identify side effects of combined therapy with rosuvastatin and ezetimibe administered at the same time of the day.
Furthermore, we aim to assess whether administration of lipid-lowering drugs in the morning improves adherence compared with their evening administration. Considering the potency of rosuvastatin, which is further enhanced when administered in combination with ezetimibe, we expect significant reduction of LDL-C concentration. However, the role of the time of drug administration in this case is questionable and worth further evaluation. Methods The study is designed as a randomized, open-label, single-center, crossover study evaluating the effectiveness of combined therapy with rosuvastatin and ezetimibe for hypercholesterolemia. The study is conducted with full respect to regulations established in the Declaration of Helsinki. The eligibility criteria for enrollment into the study include adult patients with hypercholesterolemia defined according to the European guidelines [2] and ineffectiveness of statin monotherapy in the treatment of hypercholesterolemia after at least 6 weeks. All study participants will have been on statin therapy due to secondary prevention indications. Furthermore, they are eligible for the study when, despite statin monotherapy, the LDL-C concentration is higher than 70 mg/dL. Key exclusion criteria include the following: active liver disease; unexplained persistent increase in serum transaminase levels, including an increase of one of them to more than three times the upper limit of normal; severe renal impairment (creatinine clearance <30 mL/min); myopathy; concomitant treatment with cyclosporine and gemfibrozil; pregnancy or lactation; women of childbearing age not using effective methods of contraception; symptoms of muscle damage after using statins or fibrates in the past; activity of creatine kinase of more than five times the upper limit of normal. The study is conducted at the Department of Cardiology, Antoni Jurasz University Hospital No. 1 in Bydgoszcz, Poland. After enrollment, all participants are randomized into one of two arms receiving rosuvastatin and ezetimibe. The study drug (rosuvastatin with ezetimibe) is given: (1) in the morning (8:00 am) for 6 weeks and then in the evening for the next 6 weeks in the first arm; (2) in the evening (8:00 pm) for the first 6 weeks and then in the morning for the following 6 weeks in the second arm. In order to encourage adherence to the treatment, all patients will receive the study drug free of charge over the entire observational period. We plan to enroll 200 patients with ineffectively treated hypercholesterolemia. The scheme and detailed plan of the study are presented in Figs. 1 and 2, respectively. The Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklist is provided as Additional file 1. Endpoints The primary outcome of the study is change in LDL cholesterol (LDL-C) at 6 (0 vs 6) and 12 (0 vs 12) weeks, as well as between the 6th and 12th weeks, of study drug treatment (combination of ezetimibe and rosuvastatin), depending on the time of day of study drug administration.
The secondary endpoints include:
- Change in total cholesterol, high-density lipoprotein cholesterol (HDL-C), triglycerides (TGs), apolipoprotein B (ApoB), apolipoprotein AI (Apo AI), non-HDL-cholesterol, small, dense-LDL-cholesterol (sd-LDL-cholesterol), and lipoprotein(a) at 6 (0 vs 6) and 12 (0 vs 12) weeks, as well as between the 6th and 12th weeks of study drug treatment (combination of ezetimibe and rosuvastatin), depending on the time of day of study drug administration
- Assessment of glucose metabolism parameters: glucose and glycated hemoglobin (HbA1c) at baseline and at 6 (0 vs 6) and 12 (0 vs 12) weeks, as well as between the 6th and 12th weeks of treatment with study drug
- Assessment of high-sensitivity C-reactive protein (hsCRP) at baseline and at 6 (0 vs 6) and 12 (0 vs 12) weeks, as well as between the 6th and 12th weeks of treatment with study drug
- Assessment of aspartate aminotransferase (AST), alanine aminotransferase (ALT), gamma-glutamyl transferase (GGT), and creatine kinase (CK) at baseline and at 6 (0 vs 6) and 12 (0 vs 12) weeks, as well as between the 6th and 12th weeks of treatment with study drug
- Assessment of plasma fluorescence using stationary and time-resolved fluorescence spectroscopy at baseline, at 6 (0 vs 6) and 12 (0 vs 12) weeks, as well as between the 6th and 12th weeks of treatment with study drug

Fig. 1 Scheme of the trial

Apart from the analysis in the whole population, the above-mentioned endpoints will be analyzed in subgroups depending on age, sex, and presence of other comorbidities. Blood sample collection and laboratory measurements Blood collection using an intravenous catheter (VACUTAINER, Becton Dickinson, Franklin Lakes, NJ, USA) is scheduled on the day of enrollment and then during two follow-up visits, after 6 and 12 weeks. Laboratory tests are performed with the use of whole blood, serum, and plasma. Blood is collected in a fasting state, at least 12 h after the last meal, from the ulnar vein, in a volume of approximately 10 mL. Patients are also advised to abstain from alcohol and avoid excessive physical effort within 48 h preceding the blood collection. Serum tubes are allowed to clot for 30 min in a vertical position at room temperature. Serum is separated from venous blood samples by centrifugation for 10 min at 3000 × g at room temperature. Following the centrifugation, routine laboratory measurements are performed in fresh serum (glucose, creatinine, basic lipid profile [total cholesterol, LDL-C, HDL-C, TGs], AST, ALT, GGT, CK), and only HbA1c is measured in whole blood. All remaining serum is aliquoted and stored at -80 °C until assayed for hs-CRP, Apo AI, ApoB, lipoprotein(a), and sd-LDL-C. All measurements (except for HbA1c) are performed on the Horiba ABX Pentra 400 analyzer (Horiba ABX, Montpellier, France). LDL-C is measured directly and non-HDL-C is calculated. Reagents for lipoprotein(a) and sd-LDL-C (direct automated sdLDL-C kit) are supplied by Randox Laboratories (Crumlin, UK). HbA1c is measured on the BIO-RAD D-10™ Hemoglobin Testing System using high-performance liquid chromatography (HPLC). Laboratory measurements are performed at the Department of Laboratory Medicine, Nicolaus Copernicus University, Collegium Medicum, Bydgoszcz, Poland, which maintains national and international quality-control procedures for its assays. Blood samples for fluorescence measurements are collected at predefined time points. Plasma is separated from venous blood samples by centrifugation for 10 min at 3000 × g at room temperature.
Plasma is filtered with the use of micro-dialyzers (Xpress Micro Dialyzer MD100, cut-off 12-14 kDa) before fluorescence measurements. It is important to apply this preliminary fractionation to remove the majority of unnecessary particles from plasma just before the final measurement. To measure the fluorescence lifetime of samples, the stationary fluorescence spectrometer Hitachi F7000 and the time-resolved spectrometer Life Spec II (Edinburgh Instruments Ltd) with a subnanosecond pulsed EPLED diode emitting light of wavelength λ = 360 nm are used. The Life Spec II spectrometer is equipped with an electronically cooled photomultiplier Hamamatsu R928 connected to a TCC900 PC Card, which incorporates all the electronic modules required for time-correlated single photon counting (TCSPC). Additionally, the concentration of hydroxyproline in all samples is determined. Assessment of plasma fluorescence is performed at the Department of Pharmacology and Therapy, Nicolaus Copernicus University, Collegium Medicum. The statistical analysis Since there is no reference study examining the effectiveness of combined treatment of hypercholesterolemia with rosuvastatin and ezetimibe according to the timing of drug administration, we decided to perform an internal pilot study of 20 patients to estimate the final sample size. The means and standard deviations of reduction in LDL-C were 53.25 ± 31.49 mg/dL and 57.71 ± 30.35 mg/dL during morning and evening administration, respectively. The correlation coefficient between total cholesterol reduction during morning and evening drug administration was 0.901. Based on these results and assuming a two-sided alpha value of 0.005, we calculated, using the t test for dependent variables, that enrollment of 157 patients would provide 98% power to demonstrate a significant difference in total cholesterol level. To compensate for potential withdrawal of consent or loss of study participants due to other reasons, we plan to enroll 200 patients. The statistical analysis will be carried out using the Statistica 12.0 package (StatSoft, Tulsa, OK, USA). Normal distribution of quantitative variables will be assessed with the Shapiro-Wilk test. Depending on its results, parametric tests (the Student t test, one-way analysis of variance (ANOVA)) or non-parametric tests (the Mann-Whitney U test, Wilcoxon's signed-rank test, or the Kruskal-Wallis ANOVA with a multiple comparison test) will be used. The χ2 test, the χ2 test with the Yates correction, or Fisher's exact test will be used for qualitative variables, depending on subgroup size. To assess factors influencing plasma fluorescence parameters, correlation analysis and multiple regression analysis will be conducted. Two-sided differences will be considered significant at p < 0.05. Safety of the trial The study is limited only to patients with diagnosed hypercholesterolemia in whom statin monotherapy has not achieved the therapeutic goals. Moreover, all participants receive medications from all other groups recommended by the European Society of Cardiology guidelines, according to their comorbidities. Discussion The RosEze trial is a phase IV, single-center, randomized, open-label, crossover study evaluating the effectiveness of combined therapy with rosuvastatin and ezetimibe for hypercholesterolemia, depending on the time of day of administration of the study treatment.
This trial will reveal whether there is a significant difference in the effectiveness of lipid-lowering therapy in reducing cholesterol concentration when the medications are taken in the morning compared with the evening. Considering that most medications are taken in the morning, it is possible that compliance with administration targets will improve if an effective dose can be taken in the morning instead of the evening. The study status The study is currently recruiting participants. It was registered in the ClinicalTrials.gov database with identifier NCT02772640.
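As a back-of-envelope check on the pilot-based sample-size calculation described in the statistical analysis section above, the following Python sketch applies the usual normal-approximation formula for a paired design. The inputs are the pilot figures quoted in the protocol, but the authors' exact t-based computation (and whether the LDL-C or total-cholesterol correlation was used) is not fully specified, so this illustrates the method rather than reproducing n = 157.

```python
# Normal-approximation sample size for a paired (crossover) comparison,
# using the pilot statistics quoted in the protocol. Illustrative only.
from scipy.stats import norm

s1, s2, rho = 31.49, 30.35, 0.901        # pilot SDs and correlation
delta = 57.71 - 53.25                    # mean paired difference (mg/dL)
sd_diff = (s1**2 + s2**2 - 2 * rho * s1 * s2) ** 0.5

alpha, power = 0.005, 0.98               # two-sided alpha, target power
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n = (z * sd_diff / delta) ** 2           # required number of paired subjects
print(f"sd_diff ≈ {sd_diff:.1f} mg/dL, n ≈ {n:.0f}")
```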
2017-07-18T11:11:17.132Z
2017-07-11T00:00:00.000
{ "year": 2017, "sha1": "e6ca475e5664bc3308efd1ec1fbcba63991036eb", "oa_license": "CCBY", "oa_url": "https://trialsjournal.biomedcentral.com/track/pdf/10.1186/s13063-017-2047-8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e6ca475e5664bc3308efd1ec1fbcba63991036eb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232321732
pes2o/s2orc
v3-fos-license
A PCR-RFLP method for genotyping of inversion 2Rc in Anopheles coluzzii Background Genotyping of polymorphic chromosomal inversions in malaria vectors such as An. coluzzii Coetzee & Wilkerson is important, both because they cause cryptic population structure that can mislead vector analysis and control and because they influence epidemiologically relevant eco-phenotypes. The conventional cytogenetic method of genotyping is an impediment because it is labor intensive, requires specialized training, and can be applied only to one gender and developmental stage. Here, we circumvent these limitations by developing a simple and rapid molecular method of genotyping inversion 2Rc in An. coluzzii that is both economical and field-friendly. This inversion is strongly implicated in temporal and spatial adaptations to climatic and ecological variation, particularly aridity. Methods Using a set of tag single-nucleotide polymorphisms (SNPs) strongly correlated with inversion orientation, we identified those that overlapped restriction enzyme recognition sites and developed four polymerase chain reaction (PCR) restriction fragment length polymorphism (RFLP) assays that distinguish alternative allelic states at the tag SNPs. We assessed the performance of these assays using mosquito population samples from Burkina Faso that had been cytogenetically karyotyped as well as genotyped, using two complementary high-throughput molecular methods based on tag SNPs. Further validation was performed using mosquito population samples from additional West African (Benin, Mali, Senegal) and Central African (Cameroon) countries. Results Of four assays tested, two were concordant with the 2Rc cytogenetic karyotype > 90% of the time in all samples. We recommend that these two assays be employed in tandem for reliable genotyping. By accepting only those genotypic assignments where both assays agree, > 99% of assignments are expected to be accurate. Conclusions We have developed tandem PCR-RFLP assays for the accurate genotyping of inversion 2Rc in An. coluzzii. Because this approach is simple, inexpensive, and requires only basic molecular biology equipment, it is widely accessible. These provide a crucial tool for probing the molecular basis of eco-phenotypes relevant to malaria epidemiology and vector control. Supplementary Information The online version contains supplementary material available at 10.1186/s13071-021-04657-x. Background Chromosomal inversions are taxonomically ubiquitous structural rearrangements resulting from the breakage and end-to-end reversal of a chromosome segment [1]. Growing numbers of studies powered by genomic sequencing suggest that inversions play important roles in sex chromosome evolution, speciation, and environmental adaptation, primarily because they suppress recombination between the inverted and corresponding non-inverted regions in heterozygotes [1][2][3]. Adaptive allelic combinations between inversion breakpoints are preserved as haplotype blocks against recombination with other genetic backgrounds. Balancing selection acting on environmentally adaptive variation appears to be a major force responsible for the maintenance of inversions and their involvement in local adaptation [1][2][3][4]. In most cases, however, the specific targets of selection within inversions and underlying molecular mechanisms controlling alternative phenotypes remain unknown. 
The Afrotropical Anopheles gambiae complex radiated within the last 0.5 million years [5] into at least nine morphologically identical species [6][7][8][9], three of which are major vectors of human malaria. Across the complex, more than 120 inversion polymorphisms have been detected in natural populations, and an additional ten inversions are fixed between species [10]. Of note, the central part of chromosome 2R that overlaps the 2Rc inversion in three taxa [An. gambiae (s.s.) (henceforth, An. gambiae), An. coluzzii, and An. arabiensis] is disproportionately involved in both fixed and polymorphic inversions in the species complex, potentially implicating the 2Rc region in oviposition site specializations that distinguish taxa in this group [10]. Historically, fixed inversions provided crucial taxonomic tools for identification of these isomorphic species, while analysis of inversion polymorphisms such as 2Rc segregating nonrandomly within species led to the recognition of assortatively mating 'chromosomal forms' presumed to be in the incipient stages of ecological speciation. One of these, the MOPTI chromosomal form (previously considered an incipient species of An. gambiae but now regarded as an arid-associated ecotype of An. coluzzii [6,10]), was characterized in Mali by three main alternative whole-arm karyotypic arrangements: 2Rbc, 2Ru, and 2R + [11]. Importantly, the bc karyotypic arrangement (i.e. co-segregation of the adjacent 2Rb and 2Rc inversions on one arm) was found to be significantly correlated with climatic and ecological variation [11], as bc regularly increased in frequency both in the dry season and in dry geographic localities (Sahelian and Saharan). Mosquito carriers of the bc arrangement escape rain dependence through successful exploitation of irrigated sites for larval development and adult tolerance of extreme aridity. In an extensive review of these findings, the authors conclude: "The evidence for consistent temporal and spatial adaptive changes in (2Rbc) inversion frequencies is unquestionable" [p.505, ref . 11]. Until the advent of molecular approaches, cytotaxonomy was the only practical method for distinguishing species of An. gambiae (s.l.), ensuring the preservation of cytogenetic karyotyping skills despite the drawbacks. Cytogenetic karyotyping is labor-intensive, requires specialized training, and is applicable only to properly preserved ovarian tissue from adult An. gambiae (s.l.) females at the half-gravid gonotrophic stage. The development of an rDNA-based PCR diagnostic assay beginning in 1993 [12] eliminated the requirement for cytotaxonomy and led to declining cytogenetic expertise, especially after the rDNA assay was extended to An. coluzzii (formerly M-molecular form, including MOPTI karyotypic arrangements) and An. gambiae (formerly S-molecular form, including mainly SAVANNA arrangements) [13]. Cytogenetic karyotyping had another important application in addition to species identification. Until recently, it was the only available tool for genotyping of inversion polymorphisms known to be significant predictors of epidemiologically relevant eco-phenotypes in An. gambiae (s.l.). As inversion polymorphism is a form of cryptic population substructure, unrecognized heterogeneities in population samples can bias genome-wide association studies as well as vector surveillance and control [14]. 
To address the need for an inversion genotyping method that does not rely on cytogenetics, we have been developing genomic approaches to identify and detect tag single nucleotide polymorphisms (SNPs) that are highly predictive of inversion orientation [15][16][17]. In addition to high-throughput molecular and in silico methods that can comprehensively genotype multiple tags from multiple inversions in parallel [15][16][17], we have also designed more conventional PCR-RFLP assays for individual inversions that, because they do not require expensive equipment or services, are both field- and budget-friendly [18]. Here, we complement our recently developed PCR-RFLP assays for inversion 2Rb in An. gambiae and An. coluzzii [18] with assays for 2Rc in An. coluzzii. Mosquito sampling and processing Anopheles coluzzii mosquitoes used in this study were from various historical Afrotropical collections motivated by polytene chromosome analysis of An. gambiae (s.l.). We focused our primary effort on an An. coluzzii population sampled from Burkina Faso, which lies in the arid Sudan savanna belt of West Africa. This geographic region was chosen because local An. coluzzii populations are characterized by high 2Rc chromosomal inversion polymorphism [19]. Furthermore, the particular Burkina Faso population sample analyzed in this study (n = 463; Additional file 1: Table S1) had been previously genotyped for chromosomal inversions using each of three independent methodologies that collectively provide a robust basis for evaluating performance of the individual PCR-RFLP assays developed here: (i) classical cytogenetics based on phase contrast microscopy, and two molecular approaches based on the detection of tag single nucleotide polymorphisms (SNPs) strongly correlated with inversion orientation [15], namely (ii) amplicon sequencing (GT-Seq) and (iii) array hybridization (TaqMan Open Array). Molecular identification to species DNA individually extracted by one of various methods was used as template in a PCR reaction for An. coluzzii species identification, based on the SINE200 assay [23] or ribosomal DNA (rDNA) [24,25]. Inversion genotyping via cytogenetics and multilocus tags Polytene chromosome preparations from preserved ovarian nurse cells followed della Torre [26]. Paracentric inversion karyotypes were scored according to the An. gambiae cytogenetic map [10,27,28] and established nomenclature [11]. High-throughput molecular inversion genotyping based on the simultaneous detection of multiple tag SNPs was conducted previously via amplicon sequencing or probe hybridization to arrays, as detailed in Love et al. [15]. Assay design for PCR-RFLP genotyping of An. coluzzii 2Rc Tag SNPs predictive of 2Rc genotype were computationally identified previously [15,16] in the Ag1000G database of natural genomic variation [29,30], a database constructed from deeply sequenced wild-caught individual An. gambiae and An. coluzzii mosquitoes. At the time of our tag SNP ascertainment, population samples of An. coluzzii represented in Ag1000G came from Angola, Burkina Faso, Cameroon, Cote d'Ivoire, Ghana, Guinea, and Mali (we omitted samples from The Gambia and Guinea Bissau that were admixed with An. gambiae), and only a subset of those from Burkina Faso, Cameroon, and Mali carried metadata about cytogenetic karyotype. Accordingly, tag SNPs for An.
coluzzii 2Rc were identified in the pooled Burkina Faso and Angola Ag1000G samples following Ma and Amos [31], who showed that the application of principal components analysis (PCA) to SNP genotypes within the local window of the genome containing an inverted region produces a pattern indicative of two distinct "populations" of inversion homozygotes (inverted and standard) and their 1:1 admixture (inversion heterozygotes), a pattern of population substructure created by suppressed recombination in the inverted region. Briefly, to apply this approach, we created a matrix of one-digit genotypes at biallelic SNPs within 2Rc for each mosquito. One-digit SNP genotypes represent the count (0, 1, or 2) of alternate alleles (i.e. variants with respect to the PEST reference sequence at a SNP position). PCA of the resulting data matrix allowed computational imputation of the 2Rc inversion genotype for mosquitoes in the sample. Individual SNPs capable of accurately predicting inversion genotype (tag SNPs) should have allelic states that are strongly correlated with inversion genotype. The correlation at individual candidate tags was measured as the percentage of the total mosquito sample with matching inversion and SNP genotypes. To be conservative, we calculated the correlation separately for each of the three inversion genotypes, adopting the minimum value across the three genotypes as the final genotypic concordance value. Candidate tags were ranked based on the final genotypic concordance values. Unlike high-throughput molecular inversion genotyping based on tens of tags per inversion, conventional PCR-RFLP genotyping necessarily relies on one or a few tags, each of which should thus have the highest genotypic concordance. Although we prioritized such tags, suitable candidates for RFLP also depend on the serendipitous overlap of the tag SNP with the recognition site of a commercially available restriction enzyme, such that recognition and cleavage depend upon allelic status of the tag on both chromosomes in a diploid mosquito. Additional constraints include the ability to design suitable flanking PCR primers (e.g. free of possible hairpin or primer-primer interactions), which anneal to high-complexity, non-repetitive template DNA to reduce off-target mis-priming. Applying a minimum threshold of 85% genotypic concordance to the ranked tags, we screened the qualified tags for those that overlapped restriction enzyme recognition sites, using NEBcutter v2.0 [32]. Using the An. gambiae PEST reference genome accessed through VectorBase [33] and Primer3Plus v2.4.2 [34], we designed primer pairs that flanked each tag SNP and produced amplicons 200-300 bp in length. We avoided any primer binding sites and restriction enzyme recognition sites that contained high frequency variants (> 5%, as judged from Ag1000G variation data) and also excluded primer binding sites that overlapped repetitive sequence (as judged from softmasking of AgamP4). Assays whose electrophoretic profiles provided the best separation between inversion genotypes were prioritized. PCR-RFLP genotyping PCR was carried out in 25 µl reactions containing 20 mM Tris-HCl (pH 8.3), 50 mM KCl, 200 µM of each dNTP, 2 mM MgCl 2 , 10 pmol of each primer, 1 U of Taq polymerase, and 1 µl of template genomic DNA. PCR conditions included an initial incubation at 94 °C for 2 min, 35 cycles of 94 °C for 30 s, 57 °C for 30 s, and 72 °C for 45 s, followed by 72 °C for 2 min and a 4 °C hold. 
PCR amplification was confirmed via gel electrophoresis on 1% agarose gels stained with SYBR Safe at 135 V in 0.5× TBE buffer. An 8 µl aliquot of the resulting PCR product was digested in 20 µl reactions with 0.5 µl restriction enzyme and 1× CutSmart buffer following the recommendations of the manufacturer (New England Biolabs, Ipswich, MA, USA): HinfI and HaeII reactions were incubated at 37 °C for 1 h, then heat inactivated at 80 °C for 20 min; Cac8I was incubated at 37 °C for 1 h, then heat inactivated at 65 °C for 20 min; BstUI was incubated at 60 °C for 1 h without heat inactivation. Optionally, SDS loading dye was prepared with 10 µl of 10% SDS and 1 ml of 6× loading dye to mitigate protein-DNA interactions and improve band quality. Digest products were analyzed via gel electrophoresis on 2.5-3% agarose gels stained with SYBR Safe at 100 V in 0.5× TBE buffer. Amplicon sequencing Enzymatic cleanup of the amplified PCR product was achieved in reactions containing 2 U of Exonuclease I (USB Corporation, Cleveland, OH), 1 U of Shrimp Alkaline Phosphatase (USB), 1.8 µl of ddH2O, and 8 µl of the PCR product. After incubation at 37 °C for 15 min, the enzymes were inactivated at 80 °C for 15 min. Sanger sequencing was performed directly on the resulting samples, using the forward PCR primer (Table 1) and the ABI 3730XL DNA Analyzer Platform (ThermoFisher Scientific, Waltham, MA), by the Genomics and Bioinformatics Core Facility at the University of Notre Dame or by Eurofins Genomics Italy. Sequences were deposited with GenBank under accessions MW158473-MW158543. Results and discussion Four candidate tag SNPs met the four design criteria for PCR-RFLP assays (see "Methods"): (i) ≥ 85% genotypic concordance in the Ag1000G database; (ii) overlap with a restriction enzyme recognition site; (iii) PCR primers fulfilling Primer3Plus default parameters; (iv) clearly distinguishable electrophoretic profiles among inversion genotypes (Table 1, Fig. 1). For simplicity, we named these assays according to the restriction enzyme employed: Cac8I, BstUI, HaeII, and HinfI. Note that, owing to distinct filtering requirements among molecular approaches, two of the four tag SNPs targeted by these assays differ from those employed in the respective sets of 23 and 11 tags developed for high-throughput amplicon sequencing and array hybridization genotyping of 2Rc [15] (see Table 1), though the common principles underlying all 2Rc tags suggest that their performance should be similar. Validation in the Burkina Faso sample We previously genotyped 2Rc in a sample of 463 An. coluzzii collected from Burkina Faso, using three independent methods: cytogenetic karyotyping, high-throughput amplicon sequencing, and TaqMan array hybridization (see Methods). Three-way concordance among the methods exceeded 94%, and two-way agreement between the molecular approaches was even higher (97.7%), despite the fact that only two tag SNPs were shared between them [15]. Genomic DNA remaining from specimens subjected to this three-fold genotyping was employed in the validation of the four PCR-RFLP assays. We defined the reference 2Rc genotype for each specimen (the 'gold standard' against which PCR-RFLP assays were compared) as the consensus genotype indicated by at least two of the three previous methods (Additional file 1: Table S1). Aggregate and individual PCR-RFLP assay genotypes were compared against this reference 2Rc genotype.
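The simple decision rules used in this comparison (the conservative tag-concordance score from the assay-design step, the majority rule defined next, and the tandem Cac8I + HinfI filter recommended in the Discussion) can be sketched in a few lines of Python. This is an illustrative sketch rather than the authors' pipeline; the array and dictionary layouts are assumptions.

```python
# Illustrative sketch (not the authors' code) of the three decision rules
# described in the text; `tag_calls`/`inversion_calls` are hypothetical
# NumPy arrays of 0/1/2 genotypes, and `assay_calls` a per-specimen dict.
import numpy as np

def min_genotypic_concordance(tag_calls, inversion_calls):
    # Concordance computed separately for genotypes 0, 1 and 2; the minimum
    # of the three is the conservative score used to rank candidate tags.
    scores = [float(np.mean(tag_calls[inversion_calls == g] == g))
              for g in (0, 1, 2) if np.any(inversion_calls == g)]
    return min(scores)

def majority_rule(assay_calls):
    # Aggregate genotype: accept only if at least 3 of the 4 assays agree.
    votes = [c for c in assay_calls.values() if c is not None]
    for g in set(votes):
        if votes.count(g) >= 3:
            return g
    return None  # no aggregate call (as for 26/463 Burkina Faso specimens)

def tandem_call(assay_calls):
    # Recommended field strategy: keep a specimen only when Cac8I and HinfI
    # agree, trading some 'false negatives' for very few 'false positives'.
    c, h = assay_calls.get("Cac8I"), assay_calls.get("HinfI")
    return c if c is not None and c == h else None

specimen = {"Cac8I": 1, "HinfI": 1, "BstUI": 0, "HaeII": 1}
print(majority_rule(specimen), tandem_call(specimen))  # -> 1 1
```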
The aggregate PCR-RFLP assay genotype was defined by majority rule, i.e. at least three of four PCR-RFLP assays agreed. There were 26 of 463 specimens (5.6%) for which no aggregate genotype could be defined, either because two different genotypes were each supported by two assays or three different genotypes were supported among the four (Additional file 1: Table S1). Of the remaining 437 specimens that could be assigned aggregate genotypes, all matched the corresponding reference 2Rc genotype (Table 2; Additional file 1: Table S1). Not surprisingly, the concordance between individual PCR-RFLP assays and the reference 2Rc genotype was imperfect, varying from ~80% to ~97% (Table 2). The most important factor underlying incomplete congruence is simply that none of the tag SNPs detected by PCR-RFLP assays are deterministic for inversion orientation in the Ag1000G variation database, presumably owing to low levels of recombination and gene conversion between opposite orientations in heterozygotes (see Ref. 36). As such, no single assay will unerringly predict the correct inversion genotype. Moreover, the percent concordance between tag and chromosomal arrangement observed in different population samples is expected to vary at least somewhat from the value observed in the Ag1000G database, for stochastic reasons alone, if not due to temporal, geographic, or other population genetic differences. Additional (non-exclusive) more minor sources of disagreement between the reference genotype and the genotype predicted by PCR-RFLP assays include SNP variation in the enzyme recognition site at positions other than the tag itself, SNP variation in the primer binding sites on one or both chromosomal arrangements that reduce or preclude primer binding (often referred to as 'allelic dropout' and typically recognized as a heterozygote deficit), and technical problems with restriction digestion (partial or complete failure of the restriction enzyme to cleave an intact target site). The two assays that were least concordant with the reference genotype in the Burkina Faso sample were BstUI (80.3%) and HaeII (88.6%). In neither case did we find evidence for significant heterozygote deficits or any striking imbalance in the distribution of discordances across genotypes. In fact, the HaeII assay in our sample slightly outperformed its predicted genotypic concordance (87% based on the Ag1000G database; Table 1), suggesting that this factor alone is sufficient to explain the HaeII assay's performance. The BstUI assay, by contrast, underperformed its predicted genotypic concordance in Ag1000G (87%; Table 1). We sequenced a subset of 19 BstUI amplicons from specimens whose PCR-RFLP assay disagreed with the reference genotype (Additional file 1: Table S1). We identified three cases in which an additional SNP destroyed the BstUI restriction target site and abrogated cleavage (despite the allelic state of the tag SNP matching the reference genotype in all three cases), which at least partly explains the apparent underperformance of this assay.

Table 1 PCR-RFLP genotyping assays for inversion 2Rc in An. coluzzii. Tag position (breakpoints): chromosome coordinates of tag SNP and estimated breakpoint positions from Sangaré [35]; Concord: minimum percent concordance of tag with inversion genotype in Ag1000G based on Love et al. [15]; Ref/Alt: reference and alternate allele at tag SNP; RE: restriction enzyme; chr cut: chromosome (inverted or standard) expected to be cleaved in the assay. a Tag SNP also employed in both high-throughput genotyping panels as described in Love et al. [15]. b Tag SNP also employed in the amplicon sequencing genotyping panel as described in Love et al. [15].

The remaining two assays, Cac8I and HinfI, agreed more often with the reference genotype (96.5% and 92.4%, respectively; Table 2). The Cac8I assay performed considerably better in the Burkina Faso sample than predicted based on the genotypic concordance of the tag in Ag1000G (89.5%; Table 1). Of the 16 specimens with discrepant genotypes, we sequenced the Cac8I amplicons of 12 (Additional file 1: Table S1), finding three whose discrepancies were explained not by the allelic state of the tag SNP (which agreed with the reference genotype in all three specimens) but by a different SNP that destroyed the restriction target site. The performance of HinfI closely matched expectations based on its tag in Ag1000G (94.1%; Table 1); we did not perform sequencing on HinfI amplicons from Burkina Faso owing to COVID-19 restrictions, but we have some insight based on sequencing data from other population samples (see below). Validation in other population samples To extend our analysis spatially, we analyzed samples from four additional countries in West and Central Africa (Mali, Senegal, Benin, Cameroon) that had been subject to cytogenetic analysis, but not high-throughput molecular genotyping approaches. For these specimens, the reference 2Rc genotype was based solely on cytogenetics, although this method is not without human error (up to 4% or more, depending upon degree of training and experience; [16,18]). We compared the cytogenetic genotype to the aggregate (majority rule) and individual PCR-RFLP assay genotypes in pooled (Table 3) and individual (Additional file 2: Table S2) population samples. Due to small sample size and lack of 2Rc inversion polymorphism in some individual population samples, we present below the results based on pooled samples. Similar to the Burkina Faso population sample, we found nearly perfect concordance (98%) between cytogenetics and the aggregate genotype (184 of 187), after excluding nine specimens whose four PCR-RFLP assays did not produce a majority genotype. Among the three discrepancies (Table 3), one specimen showed a '0' karyotype (i.e. homozygous standard, confirmed cytogenetically) contradicted by three of four PCR-RFLP assays showing a genotype of '1' (i.e. heterozygote, with the BstUI genotypes confirmed by sequencing). The other two specimens showed a '1' karyotype, contradicted by all four PCR-RFLP assays indicating a '0' genotype. In both specimens, an inversion loop in the 2Rc region was confirmed cytogenetically but, due to the relatively low quality of the polytene chromosomes, it was not possible to rule out that the loop corresponds to a rare inversion in the same chromosomal region [28]. Qualitatively, the performance of individual PCR-RFLP assays was comparable inside and outside of Burkina Faso. The same two assays with concordances < 90% in Burkina Faso (BstUI and HaeII) also performed below 90% elsewhere (Table 3). Considering that the concordance of their tag SNPs was ~87% in Ag1000G, these assays actually met expectations. However, the higher correlation between the tag SNPs of the other two assays (Cac8I and HinfI) and inversion status in Ag1000G makes them better prospective candidates than BstUI and HaeII. Indeed, in agreement with results from Burkina Faso, both Cac8I and HinfI were superior at genotyping elsewhere in West and Central Africa.
Although the Cac8I assay in these other samples did not match its 96.5% performance in Burkina Faso, it was nevertheless concordant with cytogenetics > 91% of the time. Sequencing the PCR amplicons of a subset of nine specimens with discordances between the Cac8I assay and cytogenetics revealed that in five cases the tag SNP genotype actually agreed with cytogenetics. In two of those cases, the PCR-RFLP assay disagreement was caused by a different SNP that destroyed the restriction site. Similarly, the HinfI assay was ~ 93% concordant with cytogenetics in these same samples. Sequencing of 11 PCR amplicons from specimens with genotypic discordances between the assay and cytogenetics revealed no additional polymorphisms in the HinfI recognition site. Our previous efforts to develop tag SNPs for inversion genotyping were directed toward maximizing geographic and taxonomic inclusion based on An. coluzzii and An. gambiae samples represented in Ag1000G at the time [16]. For three inversions (2La, 2Rb, and 2Ru), a single set of tags was identified that successfully genotyped both sister species [15,16]. By contrast, population structure between An. coluzzii and An. gambiae in the 2Rj, 2Rd, and 2Rc arrangements dictated the development of taxon-specific tags [15,16]. For 2Rj and 2Rd, the tags are specific for An. gambiae and are not applicable in An. coluzzii. In the case of inversion 2Rc, separate tag sets were successfully developed for in silico genotyping of both taxa [16]. However, application of these tags for high-throughput molecular genotyping of Burkina Faso samples (independent of Ag1000G) revealed that only An. coluzzii tags performed faithfully against cytogenetically karyotyped An. coluzzii specimens; An. gambiae tags applied to An. gambiae specimens were inadequate [15]. Hence, in the present work, using these same Burkina Faso samples (and others), we focused our PCR-RFLP assay development exclusively on An. coluzzii. The reasons for heterogeneous tag performance across different taxa and even among population samples of the same taxon have yet to be examined in detail, but are likely caused by population structure, which uncouples the correlation between a tag SNP and inversion orientation. Possible (non-exclusive) sources of population structure, aside from taxonomic boundaries themselves, include geography, different selective regimes on allelic targets within inversions, and/or different molecular origins of the inversion. With respect to the last factor, indirect evidence based on allelic variation near the 2Rc breakpoints is consistent with the idea of multiple origins [16], although computational haplotype phasing or long molecule sequencing leading to molecular breakpoint characterization will be required for a confident resolution. Inversion 2Rc is also peculiar in that it is almost never found alone on chromosome 2R. Instead, 2Rc is in nearly perfect linkage disequilibrium with the inverted arrangement of either 2Rb (i.e. 2Rbc) or 2Ru (i.e. 2Rcu) [10,16]. Of note, cu is a characteristic SAVANNA karyotypic arrangement common in many populations of An. gambiae (s.s.), while bc is a characteristic MOPTI karyotypic arrangement that predominates in arid populations of An. coluzzii. Except for a highly endemic chromosomal form of An. gambiae (s.s.) known as BAMAKO [10,11], An. gambiae carriers of the cu arrangement are underrepresented in the Ag1000G database. Future development of 2Rc tags and genotyping assays in non-BAMAKO An. 
gambiae may benefit from additional whole-genome sequencing of An. gambiae carriers of cu. Several factors, most importantly the non-deterministic nature of tag SNPs predictive of 2Rc genotype, operate to prevent any single tag (and any single PCR-RFLP assay dependent on that tag) from unerringly predicting the correct inversion genotype. However, we have shown here that the joint application of multiple PCR-RFLP assays targeting different tags can substantially improve genotypic concordance. For laboratories unwilling or unable to invest in high-throughput genotyping, the most efficient strategy for accurate genotyping of 2Rc, one which minimizes 'false positives' at the cost of some 'false negatives,' would be to apply both the Cac8I and HinfI assays jointly to each specimen in a population sample, preserving only those specimens with genotypes supported by both assays. Had this approach been adopted in the Burkina Faso sample of 463 An. coluzzii, 414 specimens (89.4%) would have had concordant Cac8I and HinfI assay genotypes. Of those 414, all genotypes except one (99.8%) would have agreed with the reference 2Rc genotype. From the original sample of 463, 49 specimens (10.6%) would have been excluded because of conflicting PCR-RFLP assay genotypes. There are several limitations of any PCR-RFLP approach. First, and arguably most important, this method is premised on a naturally occurring restriction site overlapping the tag SNP of interest, severely limiting the choice of amenable tag SNPs. Second, the process requires two steps, hence more time: PCR, followed by restriction digestion. Third, restriction enzymes may be costly, difficult to obtain commercially, and labile even if handled carefully. Finally, genotyping errors result both from technical failures of restriction digestion even if the cut site is intact, and from additional polymorphisms arising in the restriction enzyme recognition sequence that destroy the site. We recommend the consistent use of positive controls as indicators of successful restriction enzyme activity. However, beyond this best practice, the other limitations remain. Overcoming these limitations requires the development of a genotyping assay that dispenses with the need for restriction digestion of the amplicon. Quite recently, a rapid and inexpensive approach termed "SuperSelective (SS) PCR" was developed to genotype single nucleotide variants in Caenorhabditis elegans directly following endpoint PCR [37]. This approach has broad application in any genetic system including An. coluzzii. Moreover, it can be developed
Antimicrobial Activity against Foodborne Pathogens and Antioxidant Activity of Plant Leaves Traditionally Used as Food Packaging In accordance with Thai wisdom, indigenous plant leaves have been used as food packaging to preserve freshness. Many studies have demonstrated that both antioxidant and antimicrobial activities contribute to protecting food from spoilage. Hence, the ethanolic extracts of leaves from selected plants traditionally used as food packaging, including Nelumbo nucifera (1), Cocos nucifera (2), Nypa fruticans (3), Nepenthes mirabilis (4), Dendrocalamus asper (5), Cephalostachyum pergracile (6), Musa balbisiana (7), and Piper sarmentosum (8), were investigated to determine whether they have antioxidant and antimicrobial activities against spoilage microorganisms and foodborne pathogens that might be beneficial for food quality. Extracts 1–4 exhibited high phenolic content at 82.18–115.15 mg GAE/g and high antioxidant capacity in the DPPH, FRAP, and SRSA assays at 14.71–34.28 µg/mL, 342.92–551.38 µmol Fe2+/g, and 11.19–38.97 µg/mL, respectively, while leaf extracts 5–8 showed lower phenolic content at 34.43–50.08 mg GAE/g and lower antioxidant capacity in the DPPH, FRAP, and SRSA assays at 46.70–142.16 µg/mL, 54.57–191.78 µmol Fe2+/g, and 69.05 to >120 µg/mL, respectively. Extracts 1–4 possessed antimicrobial activities against food-relevant bacteria, including Staphylococcus aureus, Bacillus cereus, Listeria monocytogenes, and Escherichia coli. Only N. mirabilis extract (4) showed antimicrobial activities against Salmonella enterica subsp. enterica serovar Abony and Candida albicans. Extracts 5–8 showed slight antimicrobial activities against B. cereus and E. coli. As the growth and activity of microorganisms are the main cause of food spoilage, N. fruticans (3) was selected for bioassay-guided isolation to obtain 3-O-caffeoyl shikimic acid (I), isoorientin (II), and isovitexin (III), which are responsible for its antimicrobial activity against foodborne pathogens. N. fruticans was identified as a new source of natural antimicrobial compounds I–III, among which 3-O-caffeoyl shikimic acid was proven to show antimicrobial activity for the first time. These findings support the use of leaves for wrapping food and protecting food against oxidation and foodborne pathogens through their antioxidant and antimicrobial activities, respectively. Thus, leaves could be used as a natural packaging material and natural preservative. Introduction The main spoilage mechanisms of a great variety of food products, such as nuts, fish, meat, whole-milk powder, sauces, and oils, usually involve oxidation and microbial growth [1]. Food spoilage causes losses of both sensory and nutritional quality [2]. Foodborne pathogens, such as Staphylococcus aureus, Bacillus cereus, Listeria monocytogenes, Escherichia coli, Salmonella spp., and Aspergillus spp., could possibly contaminate and develop during food production, processing, storage, and transportation [3]. Some bacteria and fungi also produce toxins, leading to chemical and biological food poisoning outbreaks [4]. To prevent food spoilage, some synthetic antioxidants, such as butylhydroxyanisole (BHA) and butylhydroxytoluene (BHT), have been added to foods.
However, because of the potential health hazards of these chemical additives, there have been many attempts to seek alternative food packaging materials with antioxidant and antibacterial activities that extend the shelf life of food without the addition of chemical additives [5]. In Asian countries, leaves from local plants have been used as packaging for foods or desserts (Figure 1A-H) to ensure safe food handling and facilitate convenient consumption. Their waxy and waterproof surfaces protect food from excessive moisture and retard the spoilage process [6]. Leaf-packaged food also has an attractive shape and preserved freshness. Previous studies have demonstrated that some leaf extracts have antioxidant and antimicrobial activities that play an important role in food spoilage prevention [7,8]. In Thailand, several plant leaves have been used as food packaging. However, there are no studies that demonstrate both antioxidant and antimicrobial activities against spoilage organisms and foodborne pathogens of plant leaves traditionally used as food packaging. Herein, leaves from selected plants, namely sacred lotus (Nelumbo nucifera Gaertn.; 1), coconut (Cocos nucifera L.; 2), Nypa fruticans (3), Nepenthes mirabilis (4), Dendrocalamus asper (5), Cephalostachyum pergracile (6), Musa balbisiana (7), and wild betal (Piper sarmentosum Roxb.; 8), were subjected to testing for their antioxidant activity. They were also subjected to antimicrobial activity testing against selected foodborne pathogens, including Gram-positive bacteria (S. aureus, B. cereus, and L. monocytogenes), Gram-negative bacteria (E. coli and Salmonella enterica subsp. enterica serovar Abony (Salmonella Abony)), and fungi (Candida albicans and Aspergillus niger), to evaluate and compare the bioactivity of these plant leaves. In addition, the active compounds responsible for the antimicrobial activity of frequently used leaves, N. fruticans, were isolated using bioassay-guided isolation, and their chemical structures were subsequently elucidated. Plant Collection and Extraction Leaves of N. nucifera (1) and N. mirabilis (4) were collected from Queen Sirikit Botanic Garden, Chiang Mai, Thailand. C. nucifera (2), N. fruticans (3), D. asper (5), and C. pergracile (6) leaves were collected from Lung Choke Garden, Nakhon Ratchasima, Thailand. M. balbisiana (7) and P. sarmentosum (8) leaves were collected from the Medicinal Plant Garden, Chulalongkorn University. The taxonomic identification of the plants was confirmed by Assoc. Prof. Thatree Phadungcharoen, a botanist at Chulalongkorn University. The voucher specimens were deposited at the Herbarium of Natural Medicines, Faculty of Pharmaceutical Sciences, Chulalongkorn University, Bangkok, Thailand (Table 1). The plant leaf materials were dried at 50 °C overnight, ground into small pieces, and successively extracted by maceration in 95% ethanol until exhausted. The leaf extracts were filtered and evaporated using a rotary evaporator to obtain ethanolic extracts 1-8. HPTLC Analysis and HPTLC-DPPH Bioautography of Leaf Extracts Each leaf extract was dissolved in EtOH to afford a concentration of 10 mg/mL. Ten microliters of extracts 1-8 were spotted on HPTLC glass plates (20 cm × 10 cm) using an HPTLC applicator (CAMAG, Muttenz, Switzerland) with a 6 mm band width. The starting position was 15 mm from the edge and 10 mm from the bottom of the plate. The HPTLC plates were developed using a mixture of EtOAc-MeOH-formic acid (9:1:1, v/v/v) in a developing chamber.
To visualize flavonoids and vegetable acids in the extracts, a developed HPTLC plate was derivatized with natural product (NP) reagent (a mixture of 1 g diphenylborinic acid aminoethyl ester in 200 mL of EtOAc and 1 g PEG 400 in 20 mL of solvent). The derivatized plates were then observed under UV light at 365 nm. TLC bioautography based on the DPPH assay was carried out to observe the antioxidant compounds. A developed HPTLC plate of leaf extracts was sprayed with DPPH solution (10 mM in EtOH) and kept in the dark for 5 min. The antioxidant components in the leaf extracts were observed as yellow spots. Total Phenolic Content Assay The total phenolic content of the extracts was determined using the Folin-Ciocalteu (FC) assay [9], with some modifications. The FC reagent was used at tenfold dilution in water. Briefly, 20 µL of test extracts (1.0 mg/mL in ethanol) and 100 µL of FC reagent were added together in a 96-well microplate, and then 80 µL of 7.5% (w/v) Na2CO3 solution was added. The microplate was incubated at room temperature for 30 min with occasional shaking. The absorbance was measured at 765 nm using a microplate reader. The absorbance values of several concentrations of gallic acid (20-160 µg/mL) were plotted as a standard curve to identify the total phenolic content of the leaf extracts. The results were presented as milligrams of gallic acid equivalent (GAE) per gram of dried extract. The assay was performed in triplicate. DPPH Radical Scavenging Assay of Leaf Extracts The DPPH radical scavenging assay used to assess the antioxidant capacity of leaf extracts [10] was performed with some modifications. Briefly, 50 µL of the leaf extract in ethanol (EtOH) at various concentrations (20-400 µg/mL) was added to 100 µL of 0.1 mM DPPH solution in a 96-well microplate. The microplate was incubated for 30 min in the dark at room temperature. The absorbance was measured at 510 nm using a Victor 3 multilabel plate reader (PerkinElmer, Waltham, MA, USA). EtOH and ascorbic acid were used as a blank and positive control, respectively. The DPPH radical scavenging activity was determined using the following formula: scavenging activity (%) = (Ac − As)/Ac × 100, where Ac is the absorbance of DPPH without sample, and As is the absorbance of the samples mixed with DPPH solution. The assay was performed in triplicate. Ferric Reducing Antioxidant Power Assay The ferric reducing antioxidant power (FRAP) assay was performed to identify the reducing ability of the leaf extracts [11]. The FRAP reagent was freshly prepared by mixing 300 mM acetate buffer (pH 3.6), 10 mM TPTZ in 40 mM HCl, and 20 mM FeCl3·6H2O, at a ratio of 10:1:1. In a 96-well microplate, 10 µL of each leaf extract sample (0.5 mg/mL) and 190 µL of FRAP reagent were mixed together and incubated at 37 °C for 30 min in the dark. The absorbance was measured at 595 nm using a microplate reader. The absorbance values of FeSO4·7H2O standard solutions (100-1400 µM) were plotted as a standard curve for the determination of ferric reducing capacity. The results are presented as mean ± SD (n = 3) of micromoles (µmol) of Fe2+ per gram of dried extract. The assay was performed in triplicate. Superoxide Radical Scavenging Assay Formazan generation was measured in terms of the reduction in nitro blue tetrazolium (NBT) via the scavenging of superoxide radicals from a riboflavin-light-NBT system [12].
A mixture of 20 µL of a leaf extract sample, 100 µL of 50 mM phosphate buffer, 40 µL of 1 mM EDTA in phosphate buffer, 20 µL of 0.75 mM NBT in phosphate buffer, and 20 µL of 226 µM riboflavin in phosphate buffer, was added in a 96-well microplate. The reaction was induced via illumination with a 5 W LED warm lamp (15 cm height from plate level) for 5 min. The absorbance was measured at 595 nm, with Trolox and quercetin used as standards, and the superoxide radical scavenging activity was calculated as scavenging activity (%) = (Ac − As)/Ac × 100, where Ac is the absorbance of the control, and As is the absorbance of the leaf extract samples or standards. The assay was carried out in triplicate. Antimicrobial Assay against Foodborne Pathogens All the leaf extracts were dissolved in DMSO. Each extract solution was dropped onto a 6.0 mm Whatman paper disc at 10 mg of extract per disc, the maximum solubility of all extracts, and then all discs were dried in a laminar flow cabinet. Gentamicin (10 µg/disc) and amphotericin B (10 µg/disc) were used as positive controls. The cell suspensions of five foodborne bacteria, S. aureus ATCC 25923, B. cereus ATCC 11778, L. monocytogenes ATCC 7644, E. coli ATCC 25922, and Salmonella Abony NCTC 6017, were prepared to 0.5 McFarland turbidity standard (1.5 × 10^8 CFU/mL). The fungal suspensions of C. albicans ATCC 10231 and A. niger ATCC 16404 were adjusted to a concentration of 1.5 × 10^6 CFU/mL [13]. Twenty milliliters of Mueller-Hinton agar (MHA) and Sabouraud dextrose agar (SDA) were added into Petri dishes (9 cm diameter) for bacterial and fungal tests, respectively. Each pathogenic suspension was spread over the surface of an agar plate with a sterile cotton swab. The tested discs were placed on the spread plates and left for prediffusion for 1 h. The bacterial plates were incubated at 37 °C for 18-24 h, whereas the fungal plates were incubated at 30 °C for 1-3 days. After the incubation period, the inhibition zone diameter in millimeters was measured using a Vernier caliper. The assays were performed in triplicate. Bioassay-Guided Isolation of N. fruticans Extract to Obtain Antimicrobial Components The ethanolic extract of N. fruticans (20 g) was suspended in a mixture of water and MeOH (7:3) and sonicated for 1 h at room temperature. The mixture was partitioned with 250 mL of hexanes, dichloromethane, EtOAc, and BuOH to provide a hexane fraction (F1), dichloromethane fraction (F2), EtOAc fraction (F3), BuOH fraction (F4), and water fraction (F5). Fractions F1-F5 were evaporated to dryness and tested for antimicrobial activity against foodborne pathogens in comparison to N. fruticans extract using the disc diffusion method. The fraction with the most effective antimicrobial activity against foodborne pathogens was further separated via column chromatography to obtain pure compounds. Each isolated compound was identified via nuclear magnetic resonance spectroscopy and compared with previous reports. Evaluation of Antimicrobial Activity of Compounds I-III against Foodborne Pathogens The antimicrobial efficacy of compounds I-III was investigated using the dilution method with test tubes [14]. Isolated compounds I-III were dissolved in a 20% DMSO-water solution. Five foodborne pathogens, namely S. aureus, B. cereus, L. monocytogenes, E. coli, and Salmonella Abony, were prepared to 0.5 McFarland turbidity standard (1.5 × 10^8 CFU/mL) in Mueller-Hinton Broth. The test solutions (0.5 mL) were added to pathogenic suspensions (0.5 mL) in a test tube. The final concentrations of the isolated compounds were 1000, 800, 500, 400, 250, 125, 100, 62.5, 31.3, and 15.1 µg/mL.
Gentamicin in phosphate buffer (pH 4.5) was used as a positive control. The final concentrations of gentamicin after mixing with each pathogenic suspension were 15.17, 7.59, 3.79, 1.90, 0.95, 0.47, 0.24, 0.12, and 0.059 µg/mL. DMSO was used as a negative control. All test tubes were incubated at 37 °C for 18-24 h, and the turbidity was checked to determine the minimum inhibitory concentration (MIC) value. Statistical Analysis The data are presented as the means of three replicates ± standard deviations (SDs). The results were subjected to analysis of variance (ANOVA), and mean comparisons were performed with Tukey's honestly significant difference test using GraphPad Prism 9 software. Differences between means were considered significant at a p-value < 0.05. HPTLC Profiles and HPTLC-DPPH Bioautograms of Leaf Extracts Leaves of all plant samples (Figure 1) were macerated with 95% EtOH until exhausted to obtain ethanolic extracts (Table 1). The greatest extraction yield of 39.32% was observed for N. nucifera, followed by extraction yields of 19.91, 17.47, 17.08, 14.15, 11.41, 7.93, and 5.20% for C. nucifera, N. mirabilis, P. sarmentosum, N. fruticans, D. asper, C. pergracile, and M. balbisiana, respectively. All extracts were analyzed using HPTLC. Each HPTLC plate was developed and sprayed with DPPH and NP reagents (Figure 2). Phytochemical screening by spraying NP reagent on an HPTLC plate disclosed a variety of fluorescence spots under UV 365 nm (Figure 2A). Extracts 1, 3, 4, 6, and 7 showed yellow and orange bands, while extracts 2, 3, and 6 showed light blue spots. However, the fluorescence band was absent for extract 5. In addition, extracts 1-4 revealed a greater number of fluorescence spots than extracts 5-8. To observe antioxidant compounds, an HPTLC plate was sprayed with the DPPH reagent (Figure 2B). Extracts 1-4 revealed more intense yellow spots than extracts 5-8. The HPTLC results showed that extracts 1-4 contained many spots of chemical constituents with antioxidant activity.
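Before turning to the assay results, it is worth noting how IC50 values like those reported below are obtained from raw plate readings. The sketch that follows applies the scavenging formula from the Methods; the absorbance values are hypothetical placeholders, and linear interpolation of the 50% point is one simple choice among several curve-fitting options.

```python
import numpy as np

def scavenging_pct(a_control, a_sample):
    """% scavenging = (Ac - As) / Ac * 100, per the Methods above."""
    return (a_control - a_sample) / a_control * 100.0

def ic50_interpolated(concs_ug_ml, a_samples, a_control):
    """Linearly interpolate the concentration giving 50% scavenging.
    Assumes scavenging increases monotonically with concentration."""
    pct = np.array([scavenging_pct(a_control, a) for a in a_samples])
    order = np.argsort(concs_ug_ml)
    concs = np.asarray(concs_ug_ml, dtype=float)[order]
    return float(np.interp(50.0, pct[order], concs))

# Hypothetical dilution series (20-400 ug/mL, as in the Methods):
concs = [20, 50, 100, 200, 400]
absorbances = [0.62, 0.48, 0.31, 0.15, 0.06]   # hypothetical readings
print(f"IC50 ~ {ic50_interpolated(concs, absorbances, a_control=0.70):.1f} ug/mL")
```

With these placeholder readings the interpolation returns roughly 88 µg/mL; the same bookkeeping applies to the SRSA IC50 values.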
From the DPPH radical scavenging assay, the N. nucifera extract presented the greatest DPPH scavenging activity among all extracts, with the lowest IC50 (IC50 values for extracts 1-4 ranged from 14.71 to 34.28 µg/mL). According to the FRAP assay, all extracts showed antioxidant efficacy, which was in agreement with the DPPH results. The highest ferric reducing antioxidant power was observed from the N. mirabilis extract with a value of 551.38 ± 4.11 µmol Fe2+/g, followed by the N. nucifera, N. fruticans, and C. nucifera extracts with values of 545.72 ± 10.80, 529.36 ± 5.44, and 342.92 ± 8.51 µmol Fe2+/g, respectively. The other leaf extracts (M. balbisiana, C. pergracile, P. sarmentosum, and D. asper) showed lower FRAP activities (<200 µmol Fe2+/g). In the superoxide radical scavenging assay, the N. nucifera extract possessed the highest superoxide radical scavenging activity, with an IC50 value of 11.19 ± 0.63 µg/mL, followed by the N. mirabilis, N. fruticans, C. nucifera, and P. sarmentosum extracts with IC50 values of 20.16 ± 1.43, 27.89 ± 1.84, 38.97 ± 1.05, and 69.05 ± 1.5 µg/mL, respectively. According to the results, the D. asper, C. pergracile, and M. balbisiana extracts were observed to have the lowest scavenging activities among the test extracts, with IC50 values greater than 120 µg/mL. The results showed that extracts 1-4 contained higher phenolic content and antioxidant activity than extracts 5-8 in all assays. Antimicrobial Activity of Leaf Extracts The antimicrobial activities of the leaf extracts were examined by measuring the diameters of the inhibition zones of Gram-positive bacteria, namely S. aureus, B. cereus, and L. monocytogenes, and Gram-negative bacteria, namely E. coli and Salmonella Abony (Figure 3). The fungi C. albicans and A. niger were included in the test (Figure 3). Among all the extracts, the N. nucifera, C. nucifera, N. fruticans, and N. mirabilis extracts exhibited antimicrobial activity against various types of bacteria, namely S. aureus, B. cereus, L. monocytogenes, and E. coli. Notably, N. mirabilis extract showed the greatest activity against S. aureus, B. cereus, and L. monocytogenes, with 17.77 ± 0.15, 13.10 ± 0.70, and 10.93 ± 0.61 mm inhibition zones, respectively (Table 3). N. mirabilis extract was the only extract that exhibited antimicrobial activity against Salmonella Abony and C. albicans, with 8.70 ± 0.53 and 24.10 ± 0.46 mm inhibition zones, respectively. D. asper displayed the greatest microbiostatic activity against E. coli, with a 14.67 ± 0.21 mm inhibition zone, and also showed a 7.80 ± 0.40 mm inhibition zone against B. cereus. However, the D. asper extract showed no inhibition zone against the other tested microbes. C. pergracile, M. balbisiana, and P. sarmentosum extracts displayed antimicrobial activity against only E. coli, with smaller inhibition zones when compared with the others. The results showed that extracts 1-4 exhibited antimicrobial activity against S. aureus, B. cereus, L. monocytogenes, and E. coli (Table 3). Figure 3. Inhibition zones of the N. nucifera (1), C. nucifera (2), N. fruticans (3), N. mirabilis (4), D. asper (5), C. pergracile (6), M. balbisiana (7), and P. sarmentosum (8) extracts (10 mg/disc) against S. aureus, B. cereus, L. monocytogenes, E. coli, Salmonella Abony, C. albicans, and A. niger. DMSO was used as a control; A: antibiotic and antifungal, which were gentamycin and amphotericin B, respectively, at 10 µg/disc. Bioassay-Guided Isolation of N. fruticans Leaf Extract
The ethanolic extract of N. fruticans (3) was further partitioned with hexanes, dichloromethane, EtOAc, BuOH, and water, to afford five fractions. The extraction yields of the hexane (F1), dichloromethane (F2), EtOAc (F3), BuOH (F4), and water (F5) fractions were 22.3, 15.9, 4.1, 7.3, and 31.4%, respectively. All fractions were tested for antimicrobial activity against foodborne pathogens using the disc diffusion method (Figure 4). The results revealed that fraction F1 had antimicrobial activity against S. aureus, B. cereus, and L. monocytogenes, with inhibition zone diameters of 8.93 ± 0.35, 8.27 ± 0.12, and 8.53 ± 0.15 mm, respectively (Table 4). Fractions F4 and F5 showed larger inhibition zones against S. aureus, B. cereus, L. monocytogenes, and E. coli than fraction F1, with the inhibition zone diameters varying from 7.83 ± 0.85 to 15.67 ± 0.86 mm. However, the inhibition zones of fractions F1 and F2 against S. aureus, B. cereus, and L. monocytogenes were not significantly different. Among all the tested fractions, fraction F3 was the most active fraction against all microbes in the test. Fraction F3 was purified using a Sephadex LH-20 column (MeOH was used as a mobile phase). Subfractions were further purified via column chromatography using a reversed-phase C-18 column, eluted with MeOH-water (60:40) as a mobile phase, to obtain the compounds, which were characterized as 3-O-caffeoyl shikimic acid (I), isoorientin (II), and isovitexin (III) (Figure 5). Table 4 note: the concentration of N. fruticans extract and fractions was 10 mg/disc; DMSO was used as a vehicle control. Values with the same superscript letter within a column are not significantly different at p < 0.05; (-): no inhibition zone; (nt): not tested. Compound I was obtained as a bright yellow amorphous powder and showed a bright blue spot (Rf = 0.56) on a reversed-phase TLC plate under 365 nm UV light (mobile phase: 50% MeOH in water). Compound I was confirmed to be 3-O-caffeoyl shikimic acid by comparing 1H and 13C NMR spectroscopic data with those previously reported (Figures S1 and S2 and Table S1). Compound II was obtained as a yellow amorphous powder and appeared as a yellow spot (Rf = 0.47) on a reversed-phase TLC plate under 365 nm UV light (mobile phase: 50% MeOH in water). Compound II was proven to be isoorientin by comparing 1H and 13C NMR spectroscopic data with those previously reported (Figures S3-S5 and Table S2). Compound III was isolated as a yellow amorphous powder and displayed an orange spot (Rf = 0.33) on a reversed-phase TLC plate under 365 nm UV light (mobile phase: 50% MeOH in water). Compound III was identified as isovitexin by comparing 1H and 13C NMR spectroscopic data with those previously reported (Figures S6 and S7 and Table S3). Evaluation of the Antimicrobial Activity of Isolated Compounds I-III To test whether the isolated compounds had antibacterial activity, the MIC values of 3-O-caffeoyl shikimic acid (I), isoorientin (II), and isovitexin (III) (Table 5) against S. aureus, B. cereus, L. monocytogenes, and E. coli were evaluated using the tube dilution method. Based on our observations, S. aureus was less susceptible to III than to I and II, with a MIC value greater than 1000 µg/mL. Compounds I and III exhibited greater antimicrobial activity against B. cereus, with a MIC value of 800 µg/mL, in contrast with that of 1000 µg/mL for II. E.
coli was more vulnerable to III, with a MIC value of 800 µg/mL, compared with that of 1000 µg/mL for I and II. Compound III displayed the greatest antimicrobial activity against E. coli, with a MIC value of 800 µg/mL, but showed the least efficacy against Salmonella Abony, with a MIC value greater than 1000 µg/mL. L. monocytogenes was equally vulnerable to all the tested compounds, with a MIC value of 1000 µg/mL. The results showed that the isolated compounds exhibited antimicrobial activity against S. aureus, B. cereus, L. monocytogenes, E. coli, and Salmonella Abony. Gentamycin was used as a positive control (Table 5). Discussion Selected plant leaves, namely N. nucifera (1), C. nucifera (2), N. fruticans (3), N. mirabilis (4), D. asper (5), C. pergracile (6), M. balbisiana (7), and P. sarmentosum (8), which have been used to wrap food (Figure 1A-H) in Thai ethnic culture, were studied for their potential as natural food preservatives. Interestingly, our selected plants have also been used in other countries. A Chinese rice pudding, zongzi, is wrapped with N. nucifera or C. nucifera leaves [15]. N. fruticans leaves are popularly used for wrapping desserts called khanom chak in Thailand (Figure 1C) and a type of rice cake, ketupat, in Malaysia [16]. Bamboo leaves, such as D. asper and C. pergracile, are used for wrapping steamed rice cakes in Japan [16].
N. mirabilis is an exotic plant, and its leaves develop into pitchers in order to trap insects. The pitchers of N. mirabilis are used as containers for a rare traditional dessert found in the southern part of Thailand (Figure 1D) [17]. Banana leaves, M. balbisiana, are used for wrapping a grilled fish fillet in India [18] and traditional desserts in Thailand (Figure 1G). P. sarmentosum leaves are used for wrapping a snack called miang kham in Thailand (Figure 1H). Previous studies suggested that plant leaves as a wrapping material could extend the storage duration of foods [16,18], while several of these leaves possess useful biological activities such as antioxidant and antibacterial activities [18]. There is evidence showing that the factors that contribute to the spoilage of food are oxidation and microbial contamination [1]. For instance, the oxidation of lipids, especially unsaturated lipids, by atmospheric oxygen leads to changes in the lipid molecular structure to hydroperoxide and other free radicals [19]. The final products of the oxidation process continuously facilitate protein oxidation, resulting in protein carbonylation, polymerization, and coagulation [20]. These changes lead to the deterioration of the odor, taste, texture, and nutritional value of foods. Thus, in this study, plant leaves that have been used as food packaging were investigated for their antioxidant and antimicrobial activities. Selected plants, namely N. nucifera (1), C. nucifera (2), N. fruticans (3), N. mirabilis (4), D. asper (5), C. pergracile (6), M. balbisiana (7), and P. sarmentosum (8) (Table 1), were investigated in terms of their chemical profile, antioxidant compounds, and antioxidation capacity. To classify the types of chemical components, the HPTLC plate of extracts was sprayed with the NP reagent to show different fluorescent colors depending on the type of phenolic compounds. The NP-sprayed HPTLC plate of extracts 1, 3, 4, 6, and 7 showed yellow and orange bands (Figure 2A), representing flavonoids and flavonoid glycosides, e.g., hyperoside, isoquercitrin, luteolin, luteolin 7-O-glucoside, rutin, quercetin, quercitrin, and vitexin [21], while extracts 2, 3, and 6 showed light blue spots (Figure 2A), pointing to the presence of phenolics, e.g., caffeic acid and chlorogenic acid [21]. The DPPH-sprayed HPTLC plate of leaf extracts 1-4 displayed far more yellow spots than that of extracts 5-8 (Figure 2B), in both number and intensity, suggesting that extracts 1-4 may have stronger antioxidant activity than extracts 5-8. These yellow spots indicated the presence of antioxidant components, which were possibly flavonoids, saponins, or phenolic compounds [22]. The overlapping bands between the DPPH- and NP-sprayed HPTLC plates, notably observed for extracts 1-4, confirmed the antioxidant activity of several flavonoid and phenolic compounds. Since phenolic compounds are usually associated with antioxidant and antimicrobial properties [23], the total phenolic content of extracts 1-8 was further evaluated in parallel with their antioxidant activities (Table 2). The FC assay, a method for the determination of phenolic content, is used to measure the antioxidant capacity of samples through the reduction of Mo6+ to Mo5+ [24]. The FC assay is quite rapid and reproducible and can be used to show a correlation between antioxidant activity and total phenolic content [25].
However, the FC assay is sensitive not only to phenolics but also to other reducing compounds, i.e., reducing sugars and ascorbic acid, leading to biased FC results [26]. Due to this limitation, different types of antioxidation assays were needed. The DPPH assay is typically used to evaluate antioxidation activity through the reduction of the 2,2-diphenyl-1-picrylhydrazyl radical [27]. The reduction mechanism of DPPH could be either a single electron transfer (SET) or hydrogen atom transfer (HAT) mechanism [24]. The performance of this method is limited by the reaction kinetics of DPPH, which depend on the type of antioxidants. Some antioxidants, such as ascorbic acid, react rapidly with DPPH, while some other antioxidants react slower or are even inert toward DPPH [28]. Moreover, the reaction of DPPH with some compounds is reversible, resulting in falsely low readings for antioxidant capacity [28]. Thus, a FRAP assay based on the reduction of Fe3+ to Fe2+ was performed [29]. However, if any compounds in the reaction have redox potentials lower than that of Fe3+ (0.70 V), then they can reduce Fe3+, leading to the overestimation of antioxidant activity [29]. Thus, the superoxide radical scavenging assay was applied in this study to evaluate antioxidant activity against superoxide (O2•−), which is produced from light-activated riboflavin [12]. Unlike DPPH, which is a synthetic radical, superoxide is a reactive radical species involved in lipid peroxidation [24]. Taken together, to obtain reliable total antioxidant capacity results, various antioxidant assays with different mechanisms were conducted in parallel [28]. According to our study, extracts 1-4 were proven to display higher phenolic content and antioxidant activity than extracts 5-8. There was also a direct correlation between the phenolic content and antioxidant activity of extracts 1-4, which is consistent with the results revealed by other research groups [30,31]. The correlation between total phenolic content and antioxidant activity was also in agreement with the results of the DPPH- and NP-HPTLC screening of extracts 1-4, which indicated that several types of phenolic compounds exhibited strong radical scavenging activity that reflected the ability of the leaf extracts to prevent or delay food spoilage. Microbial contamination is the main factor that leads to food spoilage and food poisoning [1]. Bacteria cause spoilage by consuming nutrients and moisture in foods, and most of them are also pathogens for humans. Some strains of Gram-negative E. coli produce enterotoxins and Shiga toxin, which cause diarrheal illness [32], dysentery, and hemolytic uremic syndrome [33]. S. aureus is a Gram-positive bacterium found in meat and poultry [1]. S. aureus infects humans and produces toxins that cause many diseases, from mild skin infections to severe pneumonia [34]. Salmonella species are Gram-negative bacteria responsible for salmonellosis, resulting in mild diarrhea to acute gastroenteritis [35]. A. niger is a type of mold that forms black colonies on spoiled foods. This microbe can secrete ochratoxin A, which is recognized as a nephrotoxin and a carcinogen [36]. According to the information above, extracts 1-8 were investigated for antimicrobial activity against S. aureus, B. cereus, L. monocytogenes, E. coli, Salmonella Abony, C. albicans, and A. niger (Figure 3).
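The phenolic-activity correlations invoked above (and the phenolic-antimicrobial relationship discussed next) are straightforward to quantify once Table 2 is transcribed. The sketch below computes a Pearson coefficient; all paired values are illustrative placeholders except the four FRAP means quoted in the Results.

```python
import numpy as np

# Quantifying the correlation between total phenolic content (TPC) and
# antioxidant activity discussed above. TPC values are placeholders
# standing in for Table 2 (mg GAE/g); the first four FRAP values are the
# means quoted in the Results, the last four are placeholders (<200).

tpc  = np.array([110.0, 95.0, 100.0, 85.0, 40.0, 36.0, 48.0, 50.0])   # extracts 1-8
frap = np.array([545.72, 342.92, 529.36, 551.38, 150.0, 120.0, 90.0, 180.0])

r = np.corrcoef(tpc, frap)[0, 1]   # Pearson correlation coefficient
print(f"Pearson r (TPC vs FRAP): {r:.2f}")
```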
Interestingly, there was a relationship between the total phenolic content and antimicrobial activity of the extracts, as observed for extracts 1-4, which displayed higher phenolic contents and higher antimicrobial efficacies than extracts 5-8 (Table 3). The correlation between antimicrobial activity and phenolic content was also observed by Nsor-Atindana et al. [37] and Jalal et al. [38], who reported correlations between the total phenolic content and antimicrobial activity of Theobroma cacao and Artocarpus altilis extracts. Several studies suggested that the antimicrobial activity of phenolic compounds depends on the type of phenolic compound, the type of tested bacteria, including Gram-positive or Gram-negative bacteria, and the mechanisms of action. Phenolic compounds such as phenolic acids and flavonoids can damage and disrupt membrane functions and inhibit bacterial enzymes, leading to bacterial cell death [39]. Thus, the phenolic content of the plant extracts could be the main factor contributing to antimicrobial activity. However, there are other chemical constituents that may also contribute to antimicrobial activity. Regarding extracts 1-4, the antioxidant and antimicrobial activities of the N. nucifera (1), C. nucifera (2), and N. fruticans (3) extracts were reported by several research groups. In previous reports, N. nucifera leaf extract exhibited antioxidant activity observed in DPPH and ABTS assays [40], while the ethanolic extract of its flowers was proven to inhibit S. aureus, P. aeruginosa, and C. albicans [41]. Another research group highlighted the antioxidant and antimicrobial activities of C. nucifera leaf extract against Acinetobacter spp., B. cereus, E. coli, S. dysenteriae, S. typhi, and A. niger [42]. N. fruticans leaf extract was shown to have antimicrobial activity against S. aureus, E. coli, K. pneumoniae, S. epidermidis, and P. aeruginosa [43]. Although the N. mirabilis extract (4) exhibited the greatest antimicrobial activity in this study, the use of N. mirabilis as food packaging is exceptionally rare compared with that of N. fruticans. Furthermore, the distribution, usage by the population, and biomass of N. fruticans are much greater than those of N. mirabilis, so studying the biological activity and developing the use of N. fruticans is much more applicable. Thus, the bioassay-guided isolation of N. fruticans extract (3) was carried out to identify the bioactive compounds that are responsible for its antimicrobial activity. Based on the results of bioassay-guided fractionation, the EtOAc fraction exhibited the highest antimicrobial activity among the five fractions (Table 4 and Figure 4), followed by the water, BuOH, dichloromethane, and hexane fractions. The difference in the antimicrobial activity of each fraction could be attributed to the type and concentration of phenolic compounds, which may be most favored by the polarity of EtOAc and least favored by the polarity of hexanes. Using the results of the antimicrobial analysis, the EtOAc fraction was further separated to obtain 3-O-caffeoyl shikimic acid (I), isoorientin (II), and isovitexin (III). Compound I has been found in many plants, such as Phyllostachys pracecox [44], which belongs to the Poaceae family, and Phoenix dactylifera [45], which belongs to the Arecaceae family like N. fruticans. A previous study proved that 3-O-caffeoyl shikimic acid possessed antioxidant activity [46]. Here, this is the first time that 3-O-caffeoyl shikimic acid was isolated from N.
fruticans and identified for its antibacterial activity (Table 5). Isoorientin (II) is a flavonoid glycoside found in many plants, such as Rhapis excelsa in the Arecaceae family [47], and Stellaria nemorum and Stellaria holostea in the Caryophyllaceae family [48]. Previous studies also reported various biological activities of isoorientin, such as anti-inflammatory [49], antioxidant, and antibiotic activities [50]. Isovitexin (III) has been isolated from some plants, such as the aerial part of Lythrum salicaria [51] and the leaves of Gentiana spp. [52]. In a previous study, isovitexin and isoorientin were proven to show antioxidant and antibacterial activities against various types of bacteria, including S. aureus, E. faecalis, E. coli, and P. aeruginosa [53]. In this study, 3-O-caffeoyl shikimic acid (I), isoorientin (II), and isovitexin (III) were proven to show antibacterial activity against five foodborne pathogens, suggesting that compounds I-III are responsible for the antimicrobial activity of the N. fruticans ethanolic leaf extract. Even though all three bioactive components have been isolated from N. fruticans leaves, for practical food industry applications the crude N. fruticans extract is preferred over the pure compounds. Thus, additional tests are required to study the antimicrobial activity of the isolated compounds and the crude extract. A limitation of our study is that not all of the bioactive components in the N. fruticans ethanolic extract were extensively identified. Some fractions were not subjected to bioactive compound isolation because their bioactivity was lower than that of the EtOAc fraction. However, those fractions still exhibited antimicrobial activity against some bacterial strains, suggesting that different compounds may contribute to the bioactivity of N. fruticans extract. Further studies are needed to identify compounds with antimicrobial activity against food spoilage organisms and foodborne pathogens. Also, the retention of the antimicrobial activities of the extracts or isolated compounds in the food packaging material over a period of time is worth investigating in the future. Conclusions In this study, the leaf ethanolic extracts of N. nucifera, C. nucifera, N. fruticans, and N. mirabilis displayed distinctive total phenolic contents, strong antioxidant activity, and potent antimicrobial activity against food spoilage microbes and food pathogens, namely S. aureus, B. cereus, L. monocytogenes, and E. coli. The results support the traditional use of N. nucifera, C. nucifera, N. fruticans, and N. mirabilis as natural food packaging, which can maintain the freshness of foods. In addition, 3-O-caffeoyl shikimic acid (I), isoorientin (II), and isovitexin (III) were isolated from N. fruticans leaf extract and tested in terms of their antimicrobial activity for the first time. This study demonstrates that the biological activity of leaves that are used in traditional food packaging helps retard food spoilage. The selected plants and their chemical constituents could be developed into biofilm packaging and natural food preservatives to maintain food quality, ensure food safety, and prolong the shelf life of food products. This study also promotes the traditional use of plants and adds value to these plants as natural packaging resources. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/foods12122409/s1: Table S1.
1H and 13C NMR signals of compound I in comparison with those from a previous report. Table S2. 1H and 13C NMR signals of compound II in comparison with those from a previous report. Table S3. 1H and 13C NMR signals of compound III in comparison with those from a previous report. Figure S1. 1H NMR spectrum of compound I in DMSO. Figure S2. 13C NMR spectrum of compound I in DMSO. Figure S3. 1H NMR spectrum of compound II in DMSO. Figure S4. 13C NMR spectrum of compound II in DMSO. Figure S5. DEPT-135 spectrum of compound II in DMSO. Figure S6. 1H NMR spectrum of compound III in DMSO. Figure S7. 13C NMR spectrum of compound III in DMSO.
A Dynamic Game Formulation for Cooperative Lane Change Strategies at Highway Merges: A dynamic game framework is put forward to derive the system optimum strategy for a network of cooperative vehicles interacting at a merging bottleneck with a simplified vehicle dynamics model. Merging vehicles minimize the distance travelled on the acceleration lane in addition to the same cost terms as the mainline vehicles, taking into account the predicted reaction of mainline vehicles to their merging actions. An optimum strategy is found by minimizing the joint cost of all interacting vehicles while respecting behavioral and physical constraints. The full dynamic game is cast as a set of sub-problems regularly expressed as standard optimal control problems that can be solved efficiently. Numerical examples show the feasibility of the approach in capturing the nature of conflict and cooperation during the merging process. INTRODUCTION The societal and economical impact of traffic congestion and accidents has encouraged the development of automated driving systems, where the planning, design, and deployment of such systems face new challenges every day (Hanappe et al., 2018). In particular, when multiple vehicles interact, the problem of decision-making under competition and cooperation with multiple players appears, especially at network discontinuities such as highway on-ramps (Rios-Torres and Malikopoulos, 2017b). In order to optimize the utility of the road network at merges, vehicular flow control has been proposed on the infrastructure side via ramp metering and variable speed limit strategies (Papageorgiou et al., 2003). Several strategies have been reported to deal with the merging situation, most of which act on longitudinal speed regulation. Ntousakis et al. (2016) proposed an optimal acceleration trajectory planning method for merging vehicles, relying on a passing order decided by a higher decision layer. A specific trajectory design is proposed and fuzzy controllers were used as regulation strategies in (Milanés et al., 2011). Ge and Murray (2019) used control improvisation to synthesize lane change policies for an automated vehicle in various traffic conditions. The scalability of this approach to multiple CAVs remains an open question. For a more complete literature review on this topic, we refer the reader to Rios-Torres and Malikopoulos (2017a). The merge situation can be seen as a negotiation process between vehicles on the main carriageway and vehicles on the on-ramp willing to join the highway (see Fig. 1). Wang et al. (2015) proposed a game theoretical framework where interacting CAVs predictively determine discrete desired lane sequences and continuous accelerations to minimize a cost function reflecting undesirable future situations. The computational load of this approach makes real-time application a daunting task. Fabiani and Grammatico (2018) also considered a similar approach where the lane change constraints are formulated as a Mixed Logical Dynamical (MLD) model introduced by Bemporad and Morari (1999) and the final control problem is cast via Mixed Integer Linear Programming (MILP). That framework assumes a non-cooperative nature of the automated vehicles.
This paper puts forward a dynamic game framework to derive system optimum strategies for a network of cooperative vehicles interacting at a merging bottleneck. Cooperative vehicles on the highway mainline seek optimal strategies (i.e. whether and when to perform a courtesy lane change to facilitate the merging vehicle) to minimize their cost, which penalizes deviations from their desired driving conditions while taking into account the predicted actions of merging vehicles. An optimum strategy is found by minimizing the joint cost of the interacting vehicles while respecting behavioral and physical constraints. Properties of the games and the existence of solutions are provided in this work. To solve the problem, a simplified discrete formulation of the longitudinal vehicle dynamics is used. The longitudinal model is distributed, i.e. vehicles interact only under a predecessor-follower topology, and it can be easily adapted to capture platooning system dynamics. The full dynamic game is then cast as a set of sub-problems regularly expressed as standard optimal control problems that can be solved by mixed-integer quadratic/linear programming. Several examples at simulation level show the feasibility of the approach in capturing the nature of cooperation. The operational assumptions and problem setup are explained in more detail in Section 2; the model, including longitudinal and lateral dynamics, is explained in Section 3. The lane change decision is cast as a dynamic game in Section 4. Section 5 details the approach to solve the merging problem, with numerical examples in Section 6. PROBLEM FORMULATION In this paper we consider the situation shown in Fig. 1. Let V = {1, . . ., n} be a group of CAVs traveling along a road infrastructure composed of specific lanes labeled σ ∈ {1, 2, 3} ⊂ N from right to left. Let σ_i(k) denote the lane occupied by vehicle i at a specific instant of time k. Two vehicles i, j traveling in different lanes (σ_i ≠ σ_j) are going to perform a merging negotiation at a current time k_0 over a time horizon of N steps. Two dimensions of maneuvers are possible in this case. First, as shown in Fig. 1a, the i-th vehicle in the platoon can modify its lateral position (in discrete lanes) to a new state σ_i(k) = σ_i(k_0) + 1, while the other vehicles in the platoon keep the same position, σ_l(k) = σ_l(k_0) for l ≠ i. In this case, a lateral decision operates on vehicle i. A second situation can be envisaged, as shown in Fig. 1b: the decision is taken at the level of the longitudinal control, where a vehicle i performs a maneuver to pass vehicle j or yields in courtesy to open a gap in which vehicle j will insert in front of vehicle i. Control maneuvers for this situation can be designed under knowledge of the state of the inserting vehicle j (Duret et al., 2019). In this case a longitudinal decision operates on vehicle i.
The decision-making and control system follows a hierarchical setting, where the decision-making module is placed on top of a motion control module (Duret et al., 2019). The decision-making is based on a dynamic game framework (Wang et al., 2015). It takes into account the current state information of the dynamic driving environment, which consists of surrounding cooperative/non-cooperative vehicles. The interacting vehicles negotiate and jointly decide whether and when to change lane to optimize a joint cost/payoff function, taking into account the dynamic process as a response to the lane change actions. The control problem can be cast as follows: determine the lateral optimal control strategy such that a joint payoff/cost for vehicles i and j is maximized/minimized. Longitudinal dynamics The headway space and longitudinal position for vehicle i are considered as p_i(k+1) = p_i(k) + Δt v_i(k), v_i(k+1) = v_i(k) + Δt u_i(k), s_i(k) = p_l(k) − p_i(k), (1) where k ∈ Z+ denotes the discrete time index and Δt is the time step size. The collections p, s, v ∈ R^n denote the vehicles' positions, headway spaces, and longitudinal speeds, respectively. Define the errors e_{0,i}(k) = v_{0,i} − v_i(k) and e_{l,i}(k) = v_l(k) − v_i(k), (2)-(3) where v_{0,i} denotes the desired speed of vehicle i and the subscript l ∈ V ∪ {j} denotes the index of the direct leader of vehicle i. A feedback control law can then be formulated as u_i(k) = k_0 e_{0,i}(k) + k_l e_{l,i}(k), (4) where k_0, k_l are feedback gains for the errors to the desired speed and the predecessor speed, respectively. The vehicle dynamics are subject to the following linear constraints: a_min ≤ u_i(k) ≤ a_max, (5) v_min ≤ v_i(k) ≤ v_max, (6) s_i(k) ≥ s_0 + t_min v_i(k), (7) where t_min denotes the minimum time gap between two vehicles on the same lane and s_0 denotes the minimum spacing between two vehicles. Constraint (7) states that any leader-follower space headway should keep some safe distance at any time instant k; a_min, a_max, v_min, v_max represent the boundaries on acceleration and speed, correspondingly. We choose (2) to capture the heterogeneous choice of desired speed by system users, while acknowledging that this is not the unique model for CAV platoons. If we use instead the gap error e_{s,i}(k) = s_i(k) − s_0 − t_d v_i(k), where t_d denotes the desired time gap of ACC/CACC systems and k_s denotes the corresponding feedback gain, the model can describe CACC platoon dynamics with proper tuning of the feedback gains (Milanés and Shladover, 2014). Lateral dynamics We use the discrete lane change decision δ as the control decision variable, δ_i ∈ D := {−1, 0, 1}, where {−1, 0, 1} := {change right, no lane change, change left}. In the paper we assume only one lane change during the prediction horizon, but the framework is general enough to include multiple lane changes in the horizon (Wang et al., 2015). This single switch aims to reduce the computational burden of the approach. We use the travel lane of vehicle i, σ_i(k), as the discrete state variable at time k. The dynamics of the lateral behavior are determined by σ_i(k+1) = σ_i(k) + δ_i(k). (8) We assume a lane change can take place as long as the gap is sufficiently large according to (7). Lane change and dynamic communication topology The leader-follower pairing is dynamic as a result of lane changes for the group of n CAVs. Let G = {V, E} be a graph, where V represents the set of nodes consisting of all CAVs within the network and E ⊆ V × V is the set of edges representing the relationship between leaders and followers. Then ε_il = 1 if vehicle l is the leader of vehicle i at a specific sample time k, and 0 otherwise. The adjacency matrix of G is collected in the square matrix A_g = [ε_il] ∈ R^{n×n}. In general, thanks to the lane change model (8), the edge set E is dynamic in time.
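As a sanity check on the longitudinal layer, the closed loop (1)-(6) as reconstructed above can be simulated in a few lines. The sketch below is illustrative only: the gains, bounds, time step, and initial states are arbitrary assumptions (not values from the paper), chosen to respect the stability condition discussed next, and the safety constraint (7) is not enforced.

```python
import numpy as np

# Minimal simulation of the reconstructed longitudinal closed loop:
# each vehicle tracks its desired speed and its predecessor's speed.
# All numbers are illustrative assumptions; gains satisfy |k_l - k_0| < 1.

dt, k0, kl = 0.1, 0.5, 0.4
a_min, a_max, v_min, v_max = -3.0, 2.0, 0.0, 35.0

v0 = np.array([30.0, 28.0, 29.0])     # desired speeds v_{0,i} (m/s)
p  = np.array([120.0, 80.0, 40.0])    # positions; index 0 is the platoon leader
v  = np.array([25.0, 25.0, 25.0])     # initial speeds

for k in range(600):
    u = k0 * (v0 - v)                      # error to desired speed, eq. (4)
    u[1:] += kl * (v[:-1] - v[1:])         # error to predecessor (chain topology)
    u = np.clip(u, a_min, a_max)           # acceleration bounds, eq. (5)
    v = np.clip(v + dt * u, v_min, v_max)  # speed update and bounds, eq. (6)
    p += dt * v                            # position update, eq. (1)

print("speeds:", np.round(v, 2), "gaps:", np.round(p[:-1] - p[1:], 1))
```

Each follower settles at a weighted compromise between its own desired speed and its leader's speed, which is exactly the consensus behavior the cost term (13a) below rewards.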
Stability of the closed loop dynamics In the following we describe a set of characteristics of the closed loop system, in particular the stability property. Remark 1 (Stability of the longitudinal control). The control law (4) can verify the constraints (5)-(7) in a uniformly asymptotically stable setting. Suppose a uniform formation where the desired speeds of all vehicles are the same and constant, v_{0,i} = v̄_0. For system (1), combined with (4), it is possible to write the closed loop for each vehicle i and, gathering all individual systems into one algebraic equation, to express it in the form v(k+1) = Ā v(k) + T K_0 v̄_0, (10) where K_0, K_l, T are diagonal matrices in R^{n×n} with the corresponding elements k_0, k_l, Δt on their diagonals, I and 0 are the identity and zero matrices of corresponding dimensions, v̄_0 ∈ R^n is the constant vector containing v̄_0 in each element, and A_g is the adjacency matrix of the network topology (see Section 3.2). System (10) is stable if and only if the spectral radius ρ(Ā) ≤ 1, with ρ(A) := max{|λ| : λ = eig(A)}. This condition can be translated into ρ(K_l(A_g − I) − K_0) < 1. For a single lane, the matrix A_g − I is lower triangular by construction; in particular, eig(A_g − I) = {0, −1}. Given the diagonal nature of K_0, K_l, for stability it is necessary to guarantee ρ(K_l) < 1. Given the diagonal construction of these matrices, the necessary condition for stability is then given by |k_l − k_0| < 1. At the same time, by inserting (1) into (5)-(7) it is possible to construct a system of linear matrix inequalities (LMIs), (11). For fixed values {k_l, k_0}, all values that satisfy the LMI (11) make the system uniformly and asymptotically stable. GAME THEORETIC FORMULATION OF THE LANE CHANGE DECISION PROBLEM In this section we propose the dynamic game formulation for the lane change control maneuver. 4.1 Dynamic lane change game formulation Definition 1 (Lane change strategy). A vehicle lane change strategy from lane σ → σ+ is defined as the sequence ξ_δ associated to a particular lateral control δ(k), which induces the chosen lane change maneuver at time k in the horizon N. Consider the case of Fig. 2. The objective of the dynamic game is to create a decision block that considers the trade-off between two possible cases: first, the situation in which, within a finite time horizon, vehicle i performs a lane change maneuver to create the necessary gap for insertion, as depicted in Fig. 1a; and second, a situation where vehicle j waits for the mainline vehicle to yield the necessary gap so that the merging maneuver is performed without violating constraints. The cost for each vehicle is measured by undesirable situations through the running cost terms (13a)-(13e), where β_g, g ∈ {1, 2, 3, 4, 5, 6}, are the weights on the different cost terms and p_{j,end} denotes the position of the end of the mandatory lane change section for vehicle j. The running cost function can be interpreted as follows: • (13a) encourages the vehicle to travel at its desired speed; the second term of (13a) encourages consensus on speed for each leader-follower pair; • (13b) favors smooth speed changes and hence discourages sharp acceleration and deceleration; • (13d) penalizes deviation from the desired lane σ*_i, and the fifth term penalizes lane changes; • (13e) penalizes potential failure of the mandatory lane change: it favours early mandatory lane changes and increases as the distance to the end of the merging lane p_end decreases.
The optimal control problem can be cast as an optimization of the running cost L_i for each one of the players while the other players have already decided. A dynamic game can be integrated within an optimal control problem where each one of the players fixes a specific strategy, in particular for the lane change, by targeting the specific value σ*_i. Notice that each player i has a finite number of strategies to choose from by selecting a specific δ_i. In particular, when playing the game between vehicle i and vehicle j, it is possible to write a finite horizon problem (14), subject to the dynamics and constraints (1) and (5)-(8). The objective of this optimal control problem is to promote the minimization of the individual costs. This is formulated as an optimization problem where one seeks the optimal lane change decision trajectories for each vehicle i in a prediction horizon N to maximize the payoff function of the whole group. In fact, each one of the players should maximize a payoff given by (16). The dynamic game entails prediction of the payoff over a time horizon with N steps, [0, N]. We consider N to be sufficiently large and therefore set the terminal cost Φ = 0. Player i will select a strategy among a finite set D of strategies. Consider vehicle i and the possible set of finite strategies A = {a_1, a_2, . . ., a_r} to be chosen for the lateral decisions, and let B = {b_1, b_2, . . ., b_q} be the possible decisions for vehicle j traveling in the on-ramp lane. It is worth remarking that vehicles i, j have at most one possibility to change lane during the future finite horizon. Definition 2 (Payoff function). Let J^A_i(p(k), v(k), a_δ, b_δ) be the function defining the payoff after a player decides among the set of strategies A, accumulated over the horizon: J^A_i = Σ_{k=0}^{N−1} L_i(p(k), v(k), a_δ, b_δ) + ψ_i(p(N), v(N)). (16) In Definition 2, L_i is defined as the running cost while ψ_i is called the final cost. Assumption 1 (Available game information). System (1) is fixed for each participant of the game; the same holds for the lateral dynamics (8), and the sample time k is considered synchronous between vehicles i and j. Properties of the dynamic lane change game Consider the full dynamics expressed in equation (10) jointly with (8), enclosed in the form x(k+1) = f(x(k), δ_1(k), δ_2(k)). (17) In the particular case where two players define the game, it is possible to split the dynamics and running costs per player, as in (18). Remark 2 (Finding equilibrium via PMP). Consider the system (17) with the associated running cost (18). Let x*(·), δ*_1(·), δ*_2(·) be, respectively, the trajectory and the open-loop controls of the two players in a Nash equilibrium. By definition, these two controls provide the corresponding solutions to the associated optimal control problems for each player. Applying the Pontryagin Maximum Principle (PMP), the following are necessary conditions for the Nash equilibrium (Lewis et al., 2012).
In order to derive the conditions for the Nash equilibrium, define the Hamiltonian for the control problem (14) based on (17) and (18). By considering the costate condition for λ ∈ R^n from the PMP (Lewis et al., 2012), with the final condition λ(T) = 0, it is possible to obtain the conditions in (20) and (21). The optimality condition is derived from the fact that, for a fixed lateral control δ̄_2(·), the optimal δ_1(·) can be found via a minimization that can be transformed into a maximization problem in which the player maximizes a payoff function similar to (16), leading to (26). The stationarity condition is necessary for optimality; then, by introducing (23) into (26), we obtain (22). In the same way, the second equation can be obtained when the first player fixes its own strategy to the value δ̄_1 = δ*_1. The Nash equilibrium is obtained when the payoff of player 1 in (26) is maximized against the best reply of player 2, and vice versa (Bressan, 2010). In other words, no player can increase his payoff by unilaterally changing his strategy, as long as the other player sticks to the equilibrium strategy.

SOLUTION ALGORITHM

The problem formulation in this case brings inherent complexity to the solution of the game, which in fact cannot be found in explicit form due to the nature of the control signal δ(k). To solve the optimal control problem (14), Algorithm 1 is proposed. In general, the game presented here is a non-zero-sum game, as the players in fact cooperate towards the common objective given by the successful lane change. On the other hand, the scalability of this approach may suffer with long time horizons; in this case we propose a heuristic way to solve the algorithm. A specific reduction of the search space via integer programming is left for future research.

Experimental setting

To test the dynamic game framework, we conducted numerical examples. The scenario is set up as in Fig. 2. We simulate 3 vehicles, with Vehicle 2 (red) and Vehicle 3 (turquoise) interacting with each other in the merging section, starting from the chosen initial conditions. The core of Algorithm 1 (closed-loop operation of the proposed control strategy) is, at every step, to select the lateral decisions and evolve system (17) with δ_1(k), δ_2(k).

6.2 Scenario: delayed merge

In this scenario, the system-optimal strategy is for Vehicle 2 to stay in its lane and pass Vehicle 3. Vehicle 3 waits for Vehicle 2 to pass until a sufficient safety gap develops in front, and changes lane at k = 7 s. Interestingly, a first-in-first-out strategy, as widely used in cooperative merging systems (Rios-Torres and Malikopoulos, 2017a), leads to the feasible strategy that is best for Vehicle 3, but not the best for the collective vehicle group.

Fig. 3 shows the system-optimal solution, where the error on the desired speed e_{v_0}, the speed error to the predecessor e_{v_l}, the vehicle speed and the lane sequence are depicted. Note that the change in the rate of increase of the speed of Vehicle 3 is due to the fact that, before the lane change, Vehicle 3 has no leader and only accelerates towards its desired speed. When it changes lane, both the error on the desired speed and the speed error to the predecessor demand that it accelerate, resulting in an increase in the speed change rate.
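Since the lateral decision sets are finite, Algorithm 1 can be approximated, for short horizons, by exhaustively enumerating the joint strategy sequences and keeping the pair that maximizes the group payoff. The sketch below is such a brute-force baseline; the `step` and `running_cost` callables, and the cooperative (sum-of-payoffs) selection rule, are assumptions for illustration rather than the paper's exact algorithm.

```python
import itertools

def simulate(x0, a_seq, b_seq, step, running_cost, N):
    """Roll out assumed discrete dynamics x(k+1) = step(x, a, b) over the
    horizon and accumulate each player's payoff (negated running cost)."""
    x, J1, J2 = x0, 0.0, 0.0
    for k in range(N):
        c1, c2 = running_cost(x, a_seq[k], b_seq[k], k)
        J1, J2 = J1 - c1, J2 - c2
        x = step(x, a_seq[k], b_seq[k])
    return J1, J2

def best_joint_strategy(x0, A, B, step, running_cost, N):
    """Exhaustive search over the finite lateral-decision sequences.
    Complexity is O(|A|^N * |B|^N), which is exactly why the paper
    resorts to heuristics and integer programming for long horizons."""
    best_value, best_pair = float("-inf"), None
    for a_seq in itertools.product(A, repeat=N):
        for b_seq in itertools.product(B, repeat=N):
            J1, J2 = simulate(x0, a_seq, b_seq, step, running_cost, N)
            if J1 + J2 > best_value:
                best_value, best_pair = J1 + J2, (a_seq, b_seq)
    return best_pair, best_value
```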
CONCLUSION

We proposed a dynamic game formulation for cooperative lane change maneuvers of automated vehicles at highway merges. Simplified vehicle longitudinal and lateral dynamics models are used to predict the system evolution under different lane change strategies. The framework captures the competitive and cooperative nature of the interactions between the merging vehicle and the mainline vehicle, and renders the design tractable to a range of mathematical tools related to optimal control and integer programming. The discrete dynamic model with control input substantially reduces the computational load of the dynamic merging game compared to previous work. Numerical examples demonstrate the potential of the approach in generating system-optimal strategies, as opposed to existing non-cooperative merging algorithms.

Future research is directed to the scalability analysis of the proposed framework, efficient solution algorithms for large networks of cooperative vehicles, and the assessment of the effect of this framework on traffic operations.

Fig. 1. Control actions for cooperative lane change maneuvers. In this case the red CAV illustrates two behaviors to open gaps for the inserting vehicle in green.

4.1 Dynamic lane change game formulation. Definition 1. (Lane change strategy). A vehicle lane change strategy from lane σ to σ+ is defined as the sequence ξ_δ.

Fig. 2. Lane change dynamic game. The controlled CAV in red optimizes the decision between yielding at the merging time and changing lane.

Fig. 3. Delayed merge: Vehicle 3 is 5 meters in front of Vehicle 2 but with a slower speed.

Fig. 4. The resulting cost of all vehicles and the costs of Vehicle 3 and Vehicle 2. The first-in-first-out alternative leads to a cost of 50; this overall cost is not the optimum for the whole vehicle group, since from the collective system perspective the best strategy is that Vehicle 2 stays in its lane.
2021-05-22T00:05:46.753Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "0fd601ababe14e5897bf2e9ea6d0236fd9124b12", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.ifacol.2020.12.2026", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "5e7981ac1c6f7db2c2acd6cf8b2f2e9e44c46659", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
52190366
pes2o/s2orc
v3-fos-license
Mother’s knowledge on prevention of mother-to-child transmission of HIV, Ethiopia: A cross sectional study

Objective: To identify the proportion of, and factors associated with, comprehensive knowledge on prevention of mother-to-child transmission of HIV among pregnant women attending antenatal care in Northern Ethiopia.

Methods: A total of 416 pregnant women were interviewed between October 2012 and May 2013. Logistic regression analysis was used to identify factors associated with comprehensive knowledge on prevention of mother-to-child transmission of HIV.

Results: The proportion of pregnant women who had comprehensive knowledge on prevention of mother-to-child transmission of HIV was 52%. The odds of having comprehensive knowledge were higher among pregnant women who were younger (16 to 24 years old) (Adjusted Odds Ratio (AOR) = 2.95; 95%CI: 1.20, 7.26), urban residents (AOR = 2.45; 95%CI: 1.39, 4.32), had attended secondary education or above (AOR = 4.43; 95%CI: 2.40, 8.20), were employed (AOR = 4.99; 95%CI: 2.45, 10.16), had five children or more (AOR = 9.34; 95%CI: 3.78, 23.07), had a favorable attitude towards HIV positive living (AOR = 2.53; 95%CI: 1.43, 4.44) and had perceived susceptibility to HIV (AOR = 10.72; 95%CI: 3.90, 29.39).

Conclusion: The proportion of women who have comprehensive knowledge on prevention of mother-to-child transmission of HIV in this study setting was low. Measures that will escalate mothers' knowledge on prevention of mother-to-child transmission of HIV should be emphasized. Efforts to improve this knowledge should target women who are of older age (>= 35 years), rural residents, unemployed, without formal education, primigravid, without a favorable attitude towards HIV positive living and without perceived susceptibility to HIV.

Background

Globally, significant numbers of people are living with HIV. More than 90 percent of new pediatric HIV infections are in sub-Saharan Africa [1]. In Ethiopia, an enormous number of children acquire HIV infection every year [2]. Mother-to-child transmission (MTCT) is the most common means of acquiring pediatric HIV infection, since more than 90% of new HIV infections among children occur through mother-to-child transmission. Without any intervention measures to prevent the transmission, the risk of MTCT ranges from 20% to 40%. However, mother-to-child transmission can be reduced to less than 2% in non-breastfeeding populations.
In breastfeeding populations, the transmission can be reduced to less than 5% with effective interventions during the periods of pregnancy, labor, delivery and breastfeeding [3]. Prevention of mother-to-child transmission (PMTCT) is one of the fundamental approaches to control the HIV epidemic [1]. In early 2013, Ethiopia launched option B+ implementation; i.e., all HIV infected pregnant women receive triple ARV (anti-retroviral) drugs without an initial CD4 test [4]. To control the risk of MTCT of HIV, the World Health Organization has launched a program for the virtual elimination of pediatric HIV. Four-pronged approaches are incorporated as components of the virtual elimination of pediatric HIV: primary prevention of HIV infection among women of childbearing age; preventing unintended pregnancies among women living with HIV; preventing HIV transmission from a woman living with HIV to her infant; and providing appropriate treatment, care and support to mothers living with HIV, their children and families [3].

For implementing prevention of mother-to-child transmission of HIV, one of the major problems is poor awareness and knowledge of the people about MTCT and PMTCT. In particular, mothers' knowledge on PMTCT plays a significant role in realizing preventive measures and utilizing the service [5]. Mothers' knowledge on prevention of mother-to-child transmission of HIV is essential in order to use the available prevention options [6]. Women who have adequate knowledge on HIV prevention protect themselves, their husbands and their children from HIV infection and are more likely to undergo HIV testing than women who do not have adequate knowledge on HIV [7]. On the other hand, women who do not understand mother-to-child transmission of HIV and its prevention have limited uptake of PMTCT services [8]. Despite the widespread extension of PMTCT services, women's knowledge on PMTCT is not satisfactory [9,10]. Investigating the proportion of and predictors for mothers' knowledge on prevention of mother-to-child transmission of HIV in resource-limited settings has many benefits: it is a critical requirement for enhancing mothers' knowledge on PMTCT, it will help to escalate utilization of PMTCT services, and it will ultimately be used to prevent and control transmission of HIV.

Study design and study settings

We conducted a facility-based cross sectional study from October 2012 to May 2013 among pregnant women attending antenatal care in East Gojjam, Northern Ethiopia. In Northeast Gojjam Zone, 18 governmental health institutions provide primary healthcare. Antenatal care service is provided in all primary healthcare centers, and all mothers who come for antenatal care are linked to integrated PMTCT services [6]. Nurses, midwives and health officers provide counselling and the other four-pronged PMTCT services. Four governmental health institutions in four districts were included (Motta, Gedieweyne, Debrewerk and Bichena).

Sample size estimation

Sample size was estimated using a single population proportion estimation formula and calculated using Epi Info 7 with a 50% proportion, 5% absolute precision, a 95% confidence interval and a non-response rate of 10%. The calculated overall sample size was 422. The sample size was calculated for different objectives, and we took the largest estimated sample.
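The single population proportion formula described above can be reproduced in a few lines; the sketch below assumes the usual n = z²p(1−p)/d² formula with a 10% non-response inflation (the exact rounding convention used by Epi Info may differ slightly).

```python
import math

def sample_size(p=0.50, d=0.05, z=1.96, nonresponse=0.10):
    """Single population proportion formula n = z^2 * p * (1 - p) / d^2,
    inflated for anticipated non-response."""
    n = (z ** 2) * p * (1 - p) / d ** 2   # 384.16 for p = 0.5, d = 0.05
    return math.ceil(n * (1 + nonresponse))

print(sample_size())  # 423; the paper reports 422, a rounding difference
```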
Sampling procedures

From the 18 primary healthcare centers in Northeast Gojjam Zone, four primary healthcare centers (Motta, Gendowin, Bitchena and Debrewerk) were selected by lottery method and therefore included in the study. Pregnant women who were attending the first antenatal care visit of their current pregnancies were recruited. Systematic random sampling with a sampling interval of three was used to select pregnant women from each health institution.

Data collection and analysis

Questionnaires were first prepared in English, then translated to Amharic and back-translated to English. The study questionnaire was pretested twice in settings similar to the study area for consistency and ease of understanding. First, a pretest was conducted at Bahir Dar Zuria Woreda primary healthcare center, West Gojjam Zone, among 20 pregnant women attending the first antenatal care visit of their current pregnancy; in this case, the questionnaire was validated for ease of communication, and vague words, phrases and sentences were corrected. Second, the study questionnaire was retested at Meray Woreda primary healthcare center, West Gojjam Zone, in 20 pregnant women attending the first antenatal care visit of their first pregnancy to endorse consistency of understanding among respondents; almost all respondents fully understood the questions. The final version of the survey questionnaire was used to collect the actual data.

Data were collected from pregnant women during their first antenatal care visit. Trained nurses conducted face-to-face interviews with study participants using the study questionnaires. Data were collected to assess comprehensive knowledge on prevention of mother-to-child transmission of HIV and its associated factors (age, education, residence, employment, perceived susceptibility to HIV and attitude towards HIV positive living). Data were entered into Epi Info 7 and analysis was done using STATA 12. Frequencies and proportions were used to describe the study subjects in relation to the studied variables. A logistic regression model was used to examine the relation between the explanatory variables and comprehensive knowledge on prevention of mother-to-child transmission of HIV. Bi-variable logistic regression models were fitted for all explanatory variables. Odds ratios with 95% confidence intervals and p-values were used to measure the strength of association and to identify the statistical significance of results. Confounding variables were identified using the logistic regression model; predictors such as health institution and other covariates were adjusted for confounding. A p-value of less than 0.2 was used as the cutoff point to carry explanatory variables from the bi-variable models into the multivariable model; in this data set, all explanatory variables fitted in the bi-variable models were also fitted in the multivariable logistic regression model. The Hosmer-Lemeshow test was applied to check the fit of the model (a poor fit if p < 0.05, a good fit if p > 0.05); in this study the model fitted the data adequately (p = 0.93). The clustered nature of the data was taken into account in the analysis; however, a multilevel logistic regression model was not fitted, because the ICC (intra-class correlation coefficient) estimate of these data was 0.006, indicating that only 0.6% of the variation is due to differences between health institutions.
Most of the variation (99.4%) was explained at the lower level (pregnant women). As the intra-class correlation coefficient was below the conventional 5-10% threshold, hierarchical modeling was not required [26].

Operational definitions

Comprehensive knowledge of PMTCT. Pregnant women were classified as knowledgeable if they knew at least one means of mother-to-child transmission of HIV (during pregnancy, delivery or breastfeeding) and a method of prevention of mother-to-child transmission of HIV (antiretroviral therapy for the mother and for the baby).

Favorable attitude towards persons living with HIV. Defined as a woman who perceived that her husband and/or other family member(s) would care for her if her HIV test result were positive. The following question was asked: "If your result turns out to be positive, what would be the likely reaction of your husband or relatives?" There were four possible answers: no one will believe the results; I will be thrown out of home; I will be physically violated/abused; or he/they will start to care for me.

Perceived susceptibility to HIV. Pregnant women who perceive a risk of acquiring HIV infection. The following question was asked: "Do you think you have a risk of acquiring HIV?" The possible answer was Yes or No.

Ethical considerations

Ethics approval was received from the Bahir Dar University College of Medicine and Health Science research and ethical review committee. Written permission to conduct the study was obtained from each health institution involved in the study. In addition, informed written consent was obtained from study participants aged 18 years or older. This study also included participants between 16 and 18 years of age; therefore, written assent from the teenagers and written permission from their parents were obtained. Since there were illiterate participants and parents, the data collectors informed participants and parents about the informed consents and assents. Willingness to participate in the study and parental permission were confirmed by signing (a fingerprint for those who could not sign) the informed consent sheet.

Demographic characteristics and perceptions of study participants

A total of 416 pregnant mothers were included in the analysis. The mean age of the study participants was 28.2 years (Standard Deviation (SD): 6.15 years). Table 1 presents the frequency distribution of factors for comprehensive knowledge on prevention of mother-to-child transmission of HIV. The majority of pregnant women (55%) came from rural areas. More than half of the study participants (54%) had not attended formal education. A high proportion of mothers (63%) did not have a favorable attitude towards HIV positive living. Only 15% of pregnant women perceived a risk of acquiring HIV infection.

Proportion of comprehensive knowledge on prevention of mother-to-child transmission of HIV

In this study, 52% of pregnant women had comprehensive knowledge on prevention of mother-to-child transmission of HIV. A higher proportion of younger (16 to 24 years) women (59%) had comprehensive knowledge on prevention of mother-to-child transmission of HIV. Similarly, the majority of women (85%) who perceived a risk of acquiring HIV infection had comprehensive knowledge on prevention of mother-to-child transmission of HIV. Table 2 shows the factors for comprehensive knowledge on prevention of mother-to-child transmission of HIV.
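The model-building strategy described in the Methods (bivariable screening at p < 0.2, then a single multivariable logistic model reported as adjusted odds ratios) can be sketched as below; the variable names are hypothetical, since the survey's coding scheme is not given in this extract.

```python
import numpy as np
import statsmodels.api as sm

def build_model(df, outcome, candidates, screen_p=0.2):
    """Bivariable screening at p < screen_p, then one multivariable
    logit; df is a pandas DataFrame with a 0/1 outcome and predictors."""
    kept = []
    for var in candidates:
        fit = sm.Logit(df[outcome], sm.add_constant(df[[var]])).fit(disp=0)
        if fit.pvalues[var] < screen_p:
            kept.append(var)
    final = sm.Logit(df[outcome], sm.add_constant(df[kept])).fit(disp=0)
    aor = np.exp(final.params)       # adjusted odds ratios
    ci = np.exp(final.conf_int())    # 95% CI on the odds-ratio scale
    return final, aor, ci

# Sanity check for Wald-type intervals such as those in Table 2: the point
# estimate should sit near the geometric mean of the CI bounds, e.g.
# (1.20 * 7.26) ** 0.5 ~= 2.95 for the youngest age group.
```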
Younger age (16 to 24 years old) (AOR = 2.95; 95%CI: 1.20, 7.26) was an independent factor for increased comprehensive knowledge on prevention of mother-to-child transmission of HIV compared with women older than 35 years. Urban residency (AOR = 2.45; 95%CI: 1.39, 4.32) was an independent factor for increased comprehensive knowledge on prevention of mother-to-child transmission of HIV. Similarly, women who attended secondary education and above (AOR = 4.43; 95%CI: 2.40, 8.20) were more likely to have comprehensive knowledge on prevention of mother-to-child transmission of HIV than women who did not attend formal education. Employed women (AOR = 4.99; 95%CI: 2.45, 10.16) were more likely to have comprehensive knowledge on prevention of mother-to-child transmission of HIV than unemployed women. Likewise, having five or more children (AOR = 9.34; 95%CI: 3.78, 23.07), a favorable attitude towards HIV positive living (AOR = 2.53; 95%CI: 1.43, 4.44) and perceived susceptibility to HIV (AOR = 10.72; 95%CI: 3.90, 29.39) were independently associated with increased comprehensive knowledge on prevention of mother-to-child transmission of HIV among pregnant women.

Discussion

This study described the proportion of mothers with comprehensive knowledge on PMTCT and identified predictors for mothers' knowledge on PMTCT. The proportion of mothers who have comprehensive knowledge was low. These findings have important public health implications for preventing and controlling the transmission of HIV, particularly in lower income settings, where HIV overwhelms the already limited health system. The study findings should alert public health agencies, since mothers' comprehensive knowledge on PMTCT is still unsatisfactory. Furthermore, as this study was a preliminary assessment, researchers should explore other means that can potentially enhance mothers' knowledge on PMTCT.

In this research, 52% of pregnant women had comprehensive knowledge on prevention of mother-to-child transmission of HIV. This finding is consistent with a study conducted in Tanzania, which reported that the proportion of women with adequate knowledge of PMTCT was 46% [21]. However, our finding was higher than that of a study conducted in the Gambela region, which showed that only 17% of pregnant women knew about prevention of mother-to-child transmission of HIV [27]. Variability of mothers' knowledge on prevention of mother-to-child transmission of HIV was observed among studies: the proportions of mothers with adequate knowledge range from 9% to 78% [17-19, 27, 28]. The variation in proportions among studies could be due to differences in study periods; increased proportions of women with comprehensive knowledge on prevention of mother-to-child transmission of HIV have been observed in recent times. Furthermore, differences in source populations may be linked with differences in the factors that affect mothers' knowledge on prevention of mother-to-child transmission of HIV.

In our study, younger age was an independent factor for increased comprehensive knowledge on prevention of mother-to-child transmission of HIV. This is consistent with a research finding that showed that older women had a lower level of knowledge on PMTCT [20]. This could be because younger women have better access to education, enabling them to use more information sources such as newspapers and social media.
In contrast, a study conducted in Kenya showed no statistically significant difference between the knowledge of teenage pregnant women and that of older pregnant women [14]. This could be due to the smaller sample size of the Kenyan study, which lowered its power to detect the association. Similar to studies conducted in Tanzania and Sudan [15,16], our study identified that women who live in urban areas were more likely to have knowledge on prevention of mother-to-child transmission of HIV than women who live in rural areas. Most mothers who live in rural areas have limited access to PMTCT services, and even when some women do have access to PMTCT services, their service utilization remains low. Being a rural resident was found to be a barrier to uptake of PMTCT services [22-24]. Therefore, mothers' knowledge on PMTCT could be affected by access to and utilization of the services. Mothers who attend PMTCT services also obtain PMTCT knowledge and counseling [4]. Correspondingly, mothers' knowledge contributes to utilization of PMTCT services: pregnant women who have comprehensive knowledge on prevention of mother-to-child transmission are more likely to be tested for HIV than women who do not have such knowledge [7].

Mothers in this study who had attended formal education were more likely to have knowledge on prevention of mother-to-child transmission of HIV than mothers who had not. Level of education is linked with understanding of HIV/AIDS transmission and its prevention; populations who attended formal education have demonstrated adequate understanding of HIV transmission and its prevention [29]. Limited knowledge on PMTCT, on the other hand, was identified in pregnant women with a lower level of education [30]. Similarly, a low level of education was associated with decreased uptake of PMTCT services [25], and missed opportunities for PMTCT, in health facilities as well as in community engagement services, were higher among pregnant women with a low level of education [31]. Maternal knowledge about PMTCT could be directly affected by school training that enables mothers to understand disease transmission and its prevention. Educated mothers are also privileged in accessing and utilizing PMTCT services, which further enhances mothers' knowledge on PMTCT [7].

This research finding revealed that unemployed women were less likely to have knowledge on prevention of mother-to-child transmission of HIV than employed women. Often, employed women are at an advantage in terms of income and social networking compared with unemployed women. The income level of the mother is linked with the mother's knowledge on prevention of mother-to-child transmission of HIV [32]; acquiring HIV-related knowledge is associated with income level in that higher income groups have better access to information and services on PMTCT [33,34]. Employed women are associated with a higher income level, and employment is coupled with access to and use of favorable social networking. Employment enables women to have social links with groups who have better information on HIV/AIDS, and knowledge is shared within group members and among different groups. Among working groups, the interchange of ideas and learning about health services has been evidenced in various types of working environments [35-37].
Women who have good knowledge on HIV/AIDS, such as adequate knowledge on antiretroviral therapy and on HIV positive living, are able to cope with stigma, discrimination and stereotyping [37]. Likewise, a favorable attitude towards HIV positive living was independently associated with increased knowledge on prevention of mother-to-child transmission of HIV. Perceived susceptibility to HIV was associated with increased awareness of HIV services and higher utilization of health services. On the other hand, those who do not have adequate knowledge on HIV believed that they have no risk of acquiring HIV infection once they are married. Nevertheless, newly HIV-infected pregnant women have been identified among married couples in similar settings [38].

Limitations

Since this research was a health institution based cross-sectional study, the findings cannot be inferred to the general population and cannot be extrapolated to pregnant women who did not attend the health institutions. Since the temporal relationship between the outcome and exposure variables can hardly be established, this study does not establish causation. However, the multivariable analysis indicated strong associations between the exposure and outcome variables; therefore, this study provides valuable information that will help enhance mothers' knowledge on prevention of mother-to-child transmission of HIV.

Conclusion

The proportion of women who have comprehensive knowledge on prevention of mother-to-child transmission of HIV in this study was low. Measures that will escalate mothers' knowledge on prevention of mother-to-child transmission of HIV should be emphasized. Efforts to improve mothers' knowledge should target women who are of older age (>= 35 years), live in rural areas, are unemployed, did not attend formal education, are primigravid, do not have a favorable attitude towards HIV positive living and have not perceived susceptibility to HIV.

Supporting information

S1 File. The English version of the study questionnaire. (DOCX)

S2 File. The Amharic version of the study questionnaire. (DOCX)

S3 File. All relevant data of this study (de-identified). (XLSX)
2018-09-16T07:03:53.540Z
2018-09-11T00:00:00.000
{ "year": 2018, "sha1": "2d86acc7b931fd9115f0abb6d2242330b6c54a63", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0203043&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ee9cc8712aa99948c051573fd18554e1cccd756a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
251304110
pes2o/s2orc
v3-fos-license
Electrocatalytic performance of MoS2 nanosheets grown on a carbon nanotubes/carbon cloth substrate for the hydrogen evolution reaction

Two-dimensional (2D) layered molybdenum sulfide (MoS2) exhibits unique advantages as an electrocatalyst for the hydrogen evolution reaction (HER). However, the low conductivity of MoS2 itself still limits the overall HER rate. In this work, MoS2 nanosheets grown on a carbon nanotubes (CNTs)/carbon cloth (CC) substrate were used as a cathode electrode for the HER under alkaline electrolysis. The MoS2/CNTs/CC electrodes were prepared by a hydrothermal method using a precursor solution of sodium molybdate (Na2MoO4·2H2O), thiourea (SC(NH2)2), oxalic acid (C2H2O4·2H2O) and deionized water. The oxalic acid is used as a reducing agent, and its concentration plays a significant role in controlling the size and the HER performance of the MoS2 nanosheets. The MoS2/CNTs/CC catalyst prepared with an oxalic acid concentration of 5.625 mM shows the optimal HER activity, exhibiting an overpotential of 134 mV at a current density of 10 mA cm⁻², a Tafel slope of 45.7 mV dec⁻¹, an electrochemical double-layer capacitance of 123.9 mF cm⁻², an electrochemical surface area of 3097.5 cm² and very high durability. The improved HER activity is attributed to the more exposed active sites of the smaller MoS2 nanosheets and the improved conductivity of the MoS2 nanosheets provided by the CNTs.

Introduction

The long-term use of fossil fuels has caused environmental pollution and energy shortages, so the search for and design of green, renewable energy conversion has received increasing attention. Hydrogen, as a clean energy source, has been considered a promising alternative to traditional fossil fuels due to its high energy density and low cost [1]. Among the various hydrogen production technologies, the hydrogen evolution reaction (HER) from water splitting has the advantages of high purity, a simple process and no pollution [2]. However, the hydrogen evolution reaction requires efficient, durable and low-cost catalysts; therefore, the design and preparation of high-efficiency non-precious-metal catalysts for the HER are essential to realizing a hydrogen economy and promoting sustainable development.

Two-dimensional (2D) layered transition metal sulfides (TMDs) are bonded through strong covalent bonds within the layers and through weak van der Waals forces between the layers, so that there is strong bonding within the layers and weak interlayer interaction. This unique structure and these unusual properties make 2D TMDs highly efficient catalysts for the HER [3-5]. Currently, MoS2 is the most widely studied 2D TMD as a catalyst for the HER. Both experimental and theoretical investigations have indicated that the HER activity of MoS2 correlates with the number of its unsaturated active edge sites [6-8]. Various synthetic methods have been reported to prepare MoS2 with various morphologies, such as ultrathin flakes [9], nanoflowers [10], thin films [11], nanoplates [12], nanoribbons [13], nanotubes [14], fullerene-like nanoparticles [15] and nanosheet arrays [16]. Li et al. prepared MoS2 nanoflowers constructed from nanosheets using a facile hydrothermal method [17]. The MoS2 nanoflower ink was prepared by adding MoS2 nanoflower powder into ethanol containing Nafion, and the working electrode for the HER was prepared by spreading the MoS2 nanoflower ink onto the surface of glassy carbon.
The obtained MoS2 nanoflower electrode exhibited an overpotential of 255 mV at a current density of 10 mA cm⁻² and a Tafel slope of 77.7 mV dec⁻¹ for the HER. Kong et al. synthesized MoS2 films with vertically aligned layers on mirror-polished glassy carbon substrates by rapid sulfurization of ultrathin Mo films. MoS2 thin films with vertically aligned layers maximally expose the edge sites on the surface and have excellent catalytic activity in the HER [18]; the exchange current density and Tafel slope are 2.2×10⁻⁶ A cm⁻² and 75 mV dec⁻¹, respectively, and the exchange current density correlates directly with the density of the exposed edge sites. Xie et al. put forward engineering the defect structure on the basal planes of MoS2 to increase the exposure of active edge sites [19]. They prepared defect-rich MoS2 nanosheets by designing a reaction with a high concentration of precursors and excess thiourea. The existence of rich defects in the ultrathin MoS2 nanosheets results in partial cracking of the catalytically inert basal planes, leading to the exposure of additional active edge sites. The defect-rich MoS2 ultrathin nanosheets exhibit excellent HER activity, with a small onset overpotential of 120 mV and a small Tafel slope of 50 mV dec⁻¹. Deng et al. developed a synergistic N doping and anion intercalation strategy to induce a phase transformation from 2H-MoS2 to 1T-MoS2 with a high conversion of about 41%, leading to superior HER performance with a low Tafel slope of 42 mV dec⁻¹ and an overpotential of 85 mV at a current density of 10 mA cm⁻² [20]. From the above, the HER activity of a catalyst can be improved not only by increasing the number of active sites through a phase transition or the introduction of defects and vacancies, but also by improving the conductivity of the catalyst [21,22].

In this work, a facile hydrothermal method was used to grow MoS2 nanosheets on a carbon nanotubes (CNTs)/carbon cloth (CC) substrate using a precursor solution of Na2MoO4·2H2O, SC(NH2)2, C2H2O4·2H2O and deionized water. Because the poor conductivity of MoS2 itself significantly limits the overall HER rate, a CNTs/CC substrate was used to enhance the HER activity of the MoS2 nanosheets. MoS2/CNTs/CC catalysts were prepared with different oxalic acid concentrations in the precursor solution, and the effects of the oxalic acid concentration on the electrocatalytic performance of the MoS2/CNTs/CC catalysts for the HER were studied.

Hydrothermal synthesis of MoS2 nanosheets on a CNTs/CC substrate

MoS2 nanosheets were synthesized on carbon nanotubes/carbon cloth (CNTs/CC) by a hydrothermal method. The CNTs/CC substrate was produced by Heshi New Materials Co. Ltd. The overall thickness of the carbon nanotubes/carbon cloth is 0.2 mm; the loading of carbon nanotubes is 3-4 mg cm⁻², the diameter of the carbon nanotubes is 25-50 nm, and the diameter of the carbon fibers is 10-25 μm. The conductivity of the carbon nanotubes/carbon cloth is 138-150 S cm⁻¹. A typical synthesis process is as follows: 0.4 mmol Na2MoO4·2H2O, 1.5 mmol SC(NH2)2 and 0.225 mmol C2H2O4·2H2O were dissolved and stirred in 40 mL deionized water as the reaction solution. The reaction precursor was then poured into a Teflon liner with a volume of 60 mL, and three CNTs/CC substrates (1×2 cm²) were placed diagonally against the inner wall of the Teflon liner.
The Teflon liner was loaded into a stainless steel autoclave and heated at 200 °C for 8 h, after which the autoclave was naturally cooled to room temperature. The MoS2/CNTs/CC samples were removed, washed with water and ethanol several times, and then dried with nitrogen gas. To study the effects of the oxalic acid concentration in the reaction solution on the morphology, crystallographic structure and electrocatalytic performance of MoS2/CNTs/CC, a series of samples was prepared by varying the oxalic acid concentration over 0, 1.875, 3.75 and 5.625 mM; the corresponding samples are denoted A1, A2, A3 and A4. The preparation parameters of the various samples are shown in Table 1. The loading mass of the MoS2 nanosheets grown on CNTs/CC was estimated to be 0.425 mg cm⁻².

Characterization of the MoS2 nanosheets/CNTs/CC

The morphology and elemental distribution maps of the MoS2 nanosheets were observed using scanning electron microscopy (SEM, SU8010, Hitachi) and energy-dispersive spectroscopy (EDS, SU8010, Hitachi). The crystal structures of the MoS2 nanosheets were characterized by X-ray diffraction (D/MAX-Ultima, Rigaku), Raman spectroscopy (LabRAM HR Evolution, HORIBA Jobin Yvon) and transmission electron microscopy (TEM, JEM-2100, JEOL). For TEM imaging, sample A1 was dispersed by ultrasound in 5 mL anhydrous ethanol for 15 min, then dropped onto a porous carbon coated 200-mesh copper grid and dried in ambient air at room temperature.

Electrochemical measurement of MoS2 nanosheets/CNTs/CC

All electrochemical measurements were carried out using an electrochemical workstation (CHI760E, Shanghai Chenhua Company). A three-electrode system was employed, in which a mercury-mercury oxide electrode (Hg/HgO) was used as the reference electrode, a graphite rod as the counter electrode, and a 1×1 cm² MoS2/CNTs/CC as the working electrode. 1 M KOH (pH 14) was used as the electrolyte. Before the HER tests, the MoS2/CNTs/CC electrodes were pretreated via a number of cyclic voltammetry (CV) cycles to activate and stabilize the catalysts. Linear sweep voltammetry (LSV) was performed at a scan rate of 5 mV s⁻¹; in the LSV tests, iR compensation was applied to the data, where i is the current and R is the solution resistance. CV was performed in the non-Faradaic range of -0.9 to -0.8 V (vs. Hg/HgO) at scan rates from 10 to 50 mV s⁻¹. Electrochemical impedance spectroscopy (EIS) was performed at a potential of -1.058 V (vs. Hg/HgO) in the frequency range from 100 kHz to 0.01 Hz with an AC amplitude of 5 mV. The long-term stability of the MoS2/CNTs/CC catalysts was characterized at a current density of 10 mA cm⁻² by chronopotentiometry. All potentials were converted to the reversible hydrogen electrode (RHE) according to the Nernst equation, E(RHE) = E(Hg/HgO) + E°(Hg/HgO) + 0.059 × pH.

Results and discussion

Fig. 1 shows SEM images of bare CNTs/CC and samples A1, A2, A3 and A4, respectively. It can be clearly seen in Fig. 1a that the surface of the carbon cloth is evenly covered with carbon nanotubes. As observed from Fig. 1b to Fig. 1e, the oxalic acid concentration in the precursor solution has a significant effect on the size and morphology of the MoS2 nanosheets. When the oxalic acid concentration is 0 (sample A1) or 1.875 mM (sample A2), many MoS2 nanosheets gather together perpendicular to a spherical surface and form flower-like nanosheet microspheres, which cover the surface of the CNTs/CC substrate.
The diameter of the MoS2 nanosheet microspheres is about 0.9 and 0.3 μm for samples A1 and A2, respectively. However, for sample A1 the coverage of the MoS2 nanosheet microspheres on the surface of the CNTs is poor, whereas for sample A2 the microspheres uniformly and densely cover the surface of the CNTs/CC substrate. When the oxalic acid concentration is increased to 3.750 mM (sample A3), only a small amount of MoS2 nanosheet microspheres is distributed on the surface of the CNTs/CC substrate, while fine MoS2 wraps around the CNTs. When the oxalic acid concentration is further increased to 5.625 mM (sample A4), fine MoS2 nanosheets are wrapped on the surface of the CNTs, as shown in Fig. 1e, and MoS2 nanosheet microspheres are no longer formed. Fig. 1f shows a low-magnification SEM image and the corresponding S, Mo and C elemental mapping images (EDX) of sample A4 prepared at an oxalic acid concentration of 5.625 mM; the S and Mo elements are homogeneously distributed throughout the CNTs.

In the precursor solution, SC(NH2)2 is used not only as a source of S²⁻ ions, but also as a reducing agent in the hydrothermal process. As can be seen from Fig. 1, when there is no oxalic acid in the precursor solution, flower-like MoS2 nanosheet microspheres with a larger diameter are formed (sample A1). The oxalic acid also acts as a reducing agent, which benefits the reduction of Mo⁶⁺ to Mo⁴⁺; the Mo⁴⁺ then combines with S²⁻ to form MoS2. The nucleation rate and reaction rate of MoS2 increase with increasing oxalic acid concentration, resulting in a decrease in the size of the MoS2 nanosheets. When the oxalic acid concentration is excessive, only small MoS2 particles are formed. A detailed reaction process and growth mechanism of the MoS2 nanosheet microspheres were proposed in previous work [23].

The XRD patterns of bare CNTs/CC and the MoS2/CNTs/CC prepared using different oxalic acid concentrations are shown in Fig. 2a. The diffraction peaks (marked by #) at 2θ = 25.6° and 43.5° correspond to the (0 0 2) and (1 0 0) crystal planes of the carbon cloth substrate [24]; the diffraction peaks of the CNTs are also very close to those of the carbon cloth [25]. The diffraction peaks at 2θ = 14.4°, 33.6°, 39.6° and 58.4° are assigned to the (0 0 2), (1 0 1), (1 0 3) and (1 1 0) planes of the hexagonal phase of MoS2 (JCPDS No. 37-1492), respectively. The morphology of sample A4 is very different from that of samples A1, A2 and A3, but the diffraction peaks of sample A4 are identical to those of these samples.

The Raman spectra of bare CNTs/CC and the as-prepared MoS2/CNTs/CC are shown in Fig. 2b. The Raman characteristic peaks at 1345.6 and 1580.8 cm⁻¹ can be ascribed to the D and G bands of the CNTs/CC substrate. I_D/I_G, the intensity ratio of the D peak to the G peak, generally indicates the degree of defect density: there are vacancies, edge defects, grain boundaries and disordered carbon in carbon nanotubes, and the higher the I_D/I_G ratio, the greater the degree of defects. The I_D/I_G ratios of CNTs/CC, A1, A2, A3 and A4 are 1.14, 1.10, 1.05, 1.06 and 1.05, respectively, because the defects of the CNTs participate in the MoS2 formation reaction during the hydrothermal process [26]. Two Raman characteristic peaks located at 368 and 401 cm⁻¹ correspond to the in-plane (E2g) and out-of-plane (A1g) vibration modes of hexagonal MoS2, respectively.
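As a small illustration of the I_D/I_G metric quoted above, the ratio can be extracted from a (baseline-corrected) Raman spectrum with a few lines of Python; the peak windows are assumptions based on the band positions reported here.

```python
import numpy as np

def id_ig_ratio(raman_shift, intensity, d_band=1345.6, g_band=1580.8,
                half_width=50.0):
    """Estimate I_D/I_G from a baseline-corrected Raman spectrum by
    taking the maximum intensity inside a window around each band."""
    def peak_height(center):
        mask = np.abs(raman_shift - center) < half_width
        return intensity[mask].max()
    return peak_height(d_band) / peak_height(g_band)
```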
High-resolution transmission electron microscopy (HRTEM) was used to further study the crystal structure of the MoS2 nanosheets. Fig. 2c and Fig. 2d provide TEM and HRTEM images of the MoS2 nanosheets (sample A1). A lattice spacing of 0.62 nm corresponds to the (0 0 2) interlayer spacing of the hexagonal phase of the MoS2 crystal, i.e., the distance between the layers of the two-dimensional MoS2 nanosheets. Each nanosheet is composed of approximately 10-15 layers.

Electrocatalytic performance of the MoS2/CNTs/CC catalyst in the HER

To study the effect of the substrate on the HER catalytic activity of the MoS2 nanosheets, MoS2 nanosheets were grown on CC and CNTs/CC substrates under the same conditions. Fig. 3a shows the LSV polarization curves of CC, CNTs/CC, MoS2/CC and MoS2/CNTs/CC, respectively. The overpotentials of CC, CNTs/CC, MoS2/CC and MoS2/CNTs/CC are 508, 223, 199 and 130 mV at a current density of 10 mA cm⁻², respectively. The pure CC electrode has poor HER activity. The overpotential of CNTs/CC is 285 mV less than that of the CC electrode, and the overpotential of MoS2/CNTs/CC is 69 mV less than that of MoS2/CC. Therefore, CNTs/CC is a more advantageous substrate for the HER than CC.

To study the effect of the oxalic acid concentration in the precursor solution on the electrocatalytic performance of the MoS2/CNTs/CC catalyst for the HER, the LSV polarization curves of samples A1, A2, A3 and A4 were measured; the results are shown in Fig. 3b. The overpotentials of samples A1, A2, A3 and A4 are 254, 174, 174 and 130 mV, respectively. The oxalic acid concentration in the precursor solution thus has an obvious effect on the overpotentials of the MoS2/CNTs/CC catalysts, which decrease gradually with increasing oxalic acid concentration.

The hydrogen evolution reaction from water splitting includes two steps. In the first step, hydrogen binds to the catalyst (Volmer adsorption); the second step involves the formation and desorption of hydrogen by either the Heyrovsky reaction or the Tafel reaction [27,28]. The Tafel slopes for the Volmer, Heyrovsky and Tafel reactions are 120, 40 and 30 mV dec⁻¹, respectively [27,29]. Therefore, the Tafel slope reflects the HER kinetic process and the charge transfer ability of electrocatalysts. To further study the HER kinetics of the MoS2/CNTs/CC, Tafel curves of the samples were plotted from the LSV polarization curves in Fig. 3b; they are shown in Fig. 3c. The slope of the linear portion of the Tafel curve is defined as the Tafel slope. The Tafel slopes for samples A1, A2, A3 and A4 are 104.8, 83.6, 80.0 and 45.7 mV dec⁻¹, respectively, indicating a typical Volmer-Heyrovsky route with the Volmer step as the rate-determining step. The Tafel slope is inversely proportional to the charge transfer coefficient of the electrocatalyst. Our SEM and Tafel slope analyses indicate that smaller MoS2 nanosheets are beneficial for hydrogen adsorption and desorption.

The CV curves of the samples were measured at scan rates of 10, 20, 30, 40 and 50 mV s⁻¹; Fig. 3d provides the CV curves of sample A4. From the CV curves, the dependence of the current density difference Δj (Δj = ja − jb, where ja and jb denote the current densities in the CV curves at a middle potential) on the scan rate can be obtained, as shown in Fig. 3e.
The current density difference Δj and the scan rate follow an almost linear relationship, and one-half of the slope is defined as the electrochemical double-layer capacitance Cdl. The Cdl values of samples A1, A2, A3 and A4 are 104.9, 48.9, 61.9 and 123.9 mF cm⁻², respectively. The electrochemical surface area (ECSA) of the catalysts can be calculated from the double-layer capacitance [30,31] as ECSA = Cdl/Cs (2), where Cs is the specific capacitance, generally in the range of 20-60 μF cm⁻²; a Cs value of 40 μF cm⁻² was used for the calculation of the ECSA [30]. The ECSA values of the samples, estimated using Cdl and Equation (2), are 2622.5, 1222.5, 1547.5 and 3097.5 cm² for samples A1, A2, A3 and A4, respectively. The larger the ECSA, the more active sites there are on the surface of the MoS2/CNTs/CC catalyst. Based on the above experimental results, we conclude that the oxalic acid concentration in the precursor solution plays a key role in the size of the MoS2 nanosheets and has an obvious effect on the HER activity of the MoS2/CNTs/CC catalyst. Sample A4, prepared with an oxalic acid concentration of 5.625 mM, exhibits the optimal electrocatalytic performance, yielding an overpotential of 134 mV at a current density of 10 mA cm⁻², a Tafel slope of 45.7 mV dec⁻¹, an electrochemical double-layer capacitance of 123.9 mF cm⁻² and the maximum electrochemical surface area of 3097.5 cm². For sample A4, the surface of the carbon nanotubes is densely and uniformly covered by small MoS2 nanosheets (Fig. 1). Compared with samples A1, A2 and A3, sample A4 not only provides more abundant catalytically active edge sites but also improves the electrical conductivity of the catalyst; therefore, it exhibits the optimal HER activity.

The charge transport capability of a catalyst has an important influence on its electrocatalytic performance. Electrochemical impedance spectroscopy (EIS) was employed to analyze the charge transport properties at the MoS2 catalyst/electrolyte interface. The Nyquist plots of the samples are shown in Fig. 3f; the inset in Fig. 3f is the equivalent circuit, where Rct is the charge transfer resistance at the MoS2/CNTs/CC electrode/electrolyte interface, CPE is analogous to the electrochemical double-layer capacitance, and Rs is the electrolyte resistance. Rct, CPE and Rs were obtained by fitting the equivalent circuit to the Nyquist plots using ZSimpWin software. The charge transfer resistance Rct for samples A1, A2, A3 and A4 is 8.0, 212.4, 191.2 and 135.0 Ω, respectively. Compared with samples A2 and A3, sample A4 has a smaller charge transfer resistance, which indicates that sample A4 has the fastest electron transfer and the fastest HER kinetics in a strong alkaline solution. Although Rct of sample A1 is the smallest, this is because of the low coverage of MoS2 on the CNTs surface, which leaves part of the CNTs exposed (as observed in Fig. 1b), resulting in a small Rct. In addition, the electrolyte resistance Rs is 1.8 Ω.

The stability of an electrocatalyst is also very important for industrial applications. The long-term stability of sample A4 was assessed using a chronopotentiometry test at 10 mA cm⁻² for 10 h; the time dependence of the potential is displayed in Fig. 4a. No obvious decay was observed for the electrode over the 10 h duration, demonstrating the very high durability of the MoS2/CNTs/CC catalyst. Furthermore, SEM images of sample A4 before and after the 10 h stability test are displayed in Fig. 4b and Fig. 4c.
The results reveal that the morphology is well maintained after the 10 h test, further suggesting its excellent structural stability.

Conclusion

In summary, MoS2 nanosheets were successfully grown on a CNTs/CC substrate by a hydrothermal method, and the effects of the oxalic acid concentration on the surface morphology, structure and electrocatalytic performance of MoS2/CNTs/CC were studied. With increasing oxalic acid concentration in the precursor solution, the size of the MoS2 nanosheets gradually decreases and the HER activity of the MoS2/CNTs/CC catalyst gradually improves. The MoS2/CNTs/CC catalyst prepared with an oxalic acid concentration of 5.625 mM has the optimal HER catalytic performance in 1 M KOH electrolyte, exhibiting an overpotential of 134 mV at a current density of 10 mA cm⁻², a Tafel slope of 45.7 mV dec⁻¹, an electrochemical double-layer capacitance of 123.9 mF cm⁻², an electrochemical surface area of 3097.5 cm² and a charge transfer resistance of 135.0 Ω. Meanwhile, it also shows relatively good durability in the continuous HER process. The oxalic acid acts as a reducing agent, which benefits the reduction of Mo⁶⁺ to Mo⁴⁺; the Mo⁴⁺ then combines with S²⁻ to form MoS2. Increasing the oxalic acid concentration in the precursor solution decreases the size of the MoS2 nanosheets; the smaller size maximally exposes more active edge sites and improves the HER catalytic performance.
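As a quick numerical check of two conversions used in this paper, the sketch below reproduces the reported ECSA values from the Cdl data (ECSA = Cdl/Cs with Cs = 40 μF cm⁻²) and applies the RHE conversion; the E°(Hg/HgO) value of 0.098 V is a commonly used constant and an assumption here, since the paper's exact value is not given in this extract.

```python
def to_rhe(e_hg_hgo, ph=14.0, e0_hg_hgo=0.098):
    """Nernst conversion E(RHE) = E(Hg/HgO) + E0(Hg/HgO) + 0.059 * pH."""
    return e_hg_hgo + e0_hg_hgo + 0.059 * ph

c_dl_mF = {"A1": 104.9, "A2": 48.9, "A3": 61.9, "A4": 123.9}  # mF cm^-2
for sample, c_dl in c_dl_mF.items():
    ecsa = (c_dl * 1000.0) / 40.0  # convert mF to uF, divide by Cs
    print(sample, ecsa)            # 2622.5, 1222.5, 1547.5, 3097.5 cm^2
```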
2022-08-04T15:06:36.993Z
2022-08-01T00:00:00.000
{ "year": 2022, "sha1": "ad2d29dd292ba9279dc01dcfd88f0ca1efb6b916", "oa_license": "CCBY", "oa_url": "https://doi.org/10.15251/djnb.2022.173.799", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ffb8dc482f5c228bd3b1e46efc8e7ee0708ba536", "s2fieldsofstudy": [ "Chemistry", "Materials Science", "Engineering" ], "extfieldsofstudy": [] }
49301113
pes2o/s2orc
v3-fos-license
Cysteine boosters the evolutionary adaptation to CoCl2 mimicked hypoxia conditions, favouring carboplatin resistance in ovarian cancer

Background: Ovarian cancer is the second most common gynaecologic malignancy and the most common cause of death from gynaecologic cancer, especially due to diagnosis at an advanced stage, when a cure is rare. As an ovarian tumour grows, cancer cells are exposed to regions of hypoxia. Hypoxia is known to be partially responsible for tumour progression, metastasis and resistance to therapies. This suggests that hypoxia entails a selective pressure in which the adapted cells have a fitness increase not only in the selective environment but also in non-selective adverse environments. Here, we used two different ovarian cancer cell lines, serous carcinoma (OVCAR3) and clear cell carcinoma (ES2), to address the effect of cancer cell selection under normoxia and under hypoxia mimicked by cobalt chloride on the evolutionary outcome of cancer cells.

Results: Our results showed that adaptation to normoxia and to CoCl2 mimicked hypoxia leads cells to display opposite strategies. Whereas cells adapted to CoCl2 mimicked hypoxia conditions tend to proliferate less but present increased survival in adverse environments, cells adapted to normoxia proliferate rapidly but at the cost of increased mortality in adverse environments. Moreover, the results suggest that cysteine allows a quicker response and adaptation to hypoxic conditions that, in turn, are capable of driving chemoresistance.

Conclusions: We showed that cysteine impacts the adaptation of cancer cells to a CoCl2 mimicked hypoxic environment, thus contributing to hypoxia-driven resistance to platinum-based chemotherapeutic agents and allowing the selection of more aggressive phenotypes. These observations support a role for cysteine in cancer progression, recurrence and chemoresistance.

Electronic supplementary material: The online version of this article (10.1186/s12862-018-1214-1) contains supplementary material, which is available to authorized users.

Background

Ovarian cancer is the major cause of death from gynaecologic disease and the second most common gynaecologic malignancy worldwide [1,2], especially due to late diagnosis and resistance to therapy [3]. Epithelial ovarian carcinoma (EOC) includes most malignant ovarian neoplasms [4], with high-grade ovarian serous carcinoma (OSC) being the most prevalent histological type [3], diagnosed at an advanced stage in approximately 70% of patients [5]. In contrast, ovarian clear cell carcinoma (OCCC) is a rather uncommon histological type of ovarian cancer that is frequently diagnosed at an initial stage [6]. However, these tumours present markedly different clinical behaviours compared to other epithelial ovarian cancers, generally with poor prognosis given their chemoresistance to conventional platinum drugs and taxane-based chemotherapy [6]. The standard of care for ovarian cancer is a combination of surgery and paclitaxel-carboplatin therapy [7]. However, despite the initial response, the disease recurs in over 85% of advanced ovarian cancer patients [8]. Usually, OSC shows an initial response to platinum-based therapy with further progression to resistance [9], while OCCC is intrinsically resistant to platinum salts [6,10,11]. Serpa and Dias have suggested that metabolic remodelling is determinant for tumour progression.
They proposed a model in which the selective pressure of the microenvironment, involving the switching of metabolic pathways, induces cell death in non-adapted cells and positively selects those cells with a growth advantage, increased invasion and altered adhesiveness. This allows local and angio (vascular) invasion, ultimately leading to cancer progression and distant metastasis [12]. Soon after this report, Hanahan and Weinberg also included reprogramming of energy metabolism as an emerging hallmark of cancer [13]. As a solid tumour grows, cancer cells are exposed to regions of hypoxia. The effects of intermittent hypoxia on cancer biology have been related to the aberrant blood circulation observed in solid tumours, which results in recurrent intra-tumoral episodic hypoxia and assaults metabolically less privileged cell niches. These studies showed that hypoxia is partially responsible for tumour progression, metastasis and resistance to therapies [14-17]. This evidence supports the view that hypoxia entails a selective pressure in which the adapted cells have a fitness increase not only in the selective environment but also in non-selective adverse environments. Moreover, Cutter et al. [18] recently reported that ovarian cancer cell lines subjected to hypoxia are more invasive, have increased migratory ability and display a transformed epithelial-mesenchymal transition (EMT) phenotype. Hence, ovarian cancer is a valuable model to address the metabolic evolution driven by hypoxia.

The contribution of cysteine to cancer cell survival has been explored mainly in relation to hydrogen sulphide (H2S) generation [19-24] and as a precursor of the antioxidant glutathione (GSH) [25-27]. We and others showed that increased levels of cytoplasmic thiol-containing species, especially glutathione or metallothioneins, are associated with resistance to platinum-based chemotherapy [25,28,29]. Our group also showed that different ovarian cancer histological types have different metabolic outcomes concerning thiols and chemoresistance [25]. Under normoxic conditions, OCCC cells were more resistant to carboplatin than OSC cells, and the inhibition of GSH production by buthionine sulphoximine (BSO) sensitized OCCC cells to carboplatin, in vitro and in vivo [25]. Those results suggest that the ability of cancer cells to metabolize thiols is directly linked to poorer disease outcome.

In this study, we used two cancer cell lines derived from two different histological types of ovarian cancer (OCCC and OSC) and addressed the effect of cell selection under normoxia and under hypoxia, mimicked by cobalt chloride (CoCl2), on the evolutionary outcome of cancer cells. Cobalt is known as a hypoxia mimicking agent both in vivo [30] and in cell culture [31-33]. Cobalt has been shown to alter several systemic mechanisms related to hypoxia [31-33], namely the stabilization of hypoxia inducible factor alpha (HIF-α), thus preventing its degradation [34]. Chemically, CoCl2 reacts with oxygen, impairing its dissolution and the oxygenation of aqueous solutions [35], and is thus a way of inducing oxygen unavailability in culture media. Herein, we hypothesised that selection under CoCl2 mimicked hypoxia and under normoxia leads cells to display different evolutionary outcomes, predicting that hypoxia selected cells would be more resistant to carboplatin than normoxia selected cells.
Moreover, we hypothesised that selection under hypoxia is linked to a higher bioavailability of cysteine, resulting in a poorer evolutionary outcome. Methods Prior to any experiment, cells were synchronized under starvation (culture medium without FBS) for 8 h at 37 °C and 5% CO2. After 24 h of exposure to the conditions, the medium was changed and fresh conditions were added, with the exception of the proliferation curve and cell cycle analysis, in which the medium was not changed. Cell lines selection ES2 and OVCAR3 cells (1 × 10^6 cells) were cultured in 25 cm^2 tissue culture flasks and selected under normoxia and under hypoxia mimicked with 100 μM CoCl2. After reaching confluency (≈2 × 10^6 cells), cells were trypsinised and cultured in 75 cm^2 tissue culture flasks under the selective conditions (hypoxia mimicked with 100 μM CoCl2). Every 48 h, cells underwent passaging if confluency reached ~80% (~7.5 × 10^6 cells); otherwise, only the culture medium was changed. As the proliferation and survival rates differed between cell lines, ES2 and OVCAR3 were selected for 63 and 84 days, respectively. After the period of selection, cells were expanded in baseline culture conditions to reach the number of cells needed to perform all the assays. Cobalt is a hypoxia-mimicking agent commonly used in both in vivo [30] and in vitro [31][32][33] studies. CoCl2 reacts with oxygen, preventing its dissolution and the oxygenation of aqueous solutions [35], and is thus a way of impairing the availability of oxygen in culture media. From now on, this condition will be designated as the CoCl2-mimicked hypoxia condition. Within each cell line, selection in normoxia and in CoCl2-mimicked hypoxia was performed simultaneously. The ancestral cell line was cultured in baseline conditions. Table 1 presents the selection and culture conditions for the ES2 and OVCAR3 cell lines. Proliferation curve assay Cells selected under normoxia and CoCl2-mimicked hypoxia (5 × 10^4 cells/well) were seeded in 24-well plates and cultured either in normoxia or exposed to 100 μM CoCl2. Cells were collected after 16 h, 32 h and 48 h of exposure to the conditions. Cells were trypsinized and resuspended in 200 μL of PBS 1×. A total of 15 μL was collected and 5 μL of trypan blue was added. Cells were immediately counted. The remaining cells were used for cell cycle analysis. This assay was performed with 63 days of selection for ES2 cells and 35 days of selection for OVCAR3 cells. 
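As an aside, the viable-cell concentrations behind trypan blue counts like those described in the proliferation assay above are conventionally derived with the standard haemocytometer formula; the sketch below is only illustrative (the default dilution factor mirrors the 15 μL sample + 5 μL trypan blue step mentioned above, and the function names are our own):

```python
def viable_cells_per_ml(live_count, squares=4, dilution=20 / 15):
    """Standard haemocytometer estimate: mean live count per large
    (0.1 uL) square x dilution x 1e4 gives cells/mL.  Trypan-blue-positive
    (dead) cells are excluded from live_count."""
    return (live_count / squares) * dilution * 1e4

def viability(live_count, dead_count):
    """Fraction of trypan-blue-negative (i.e. viable) cells."""
    return live_count / (live_count + dead_count)

print(viable_cells_per_ml(180), viability(180, 20))  # 600000.0 0.9
```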
Cell death assay Cells selected under normoxia and CoCl2-mimicked hypoxia (2 × 10^5 cells/well) were seeded in 12-well plates and cultured under normoxia and exposed to 400 μM L-cysteine and/or 100 μM CoCl2. In addition, cells were exposed to the previous conditions combined with carboplatin 25 μg/mL. Cells were collected after 48 h of the tested conditions. For the analysis of the dynamics of the response to carboplatin, the cells were collected after 16 h, 24 h and 48 h of exposure. The ancestral (not selected) cell lines were also tested. Half of the cells were used for cell death analysis and the other half was used for ROS quantification. This assay was performed with 43 days of selection for ES2 cells and 84 days of selection for OVCAR3 cells. Cells were harvested by centrifugation at 1200 rpm for 3 min and incubated with 1 μL annexin V-Alexa Fluor® 488 (640,906, BioLegend) in 100 μL annexin V binding buffer 1× (10 mM HEPES (pH 7.4), 140 mM sodium chloride (NaCl), 2.5 mM calcium chloride (CaCl2)) at room temperature and in the dark for 15 min. After incubation, samples were rinsed with 0.1% (w/v) BSA (A9647, Sigma) in PBS 1× and centrifuged at 1200 rpm for 3 min. Cells were suspended in 200 μL of annexin V binding buffer 1× and 5 μL Propidium Iodide (PI; 50 μg/mL). Acquisition was performed with a FACSCalibur (Becton Dickinson). Data were analysed with FlowJo software (www.flowjo.com). ROS quantification assay Cells selected under normoxia and CoCl2-mimicked hypoxia (2 × 10^5 cells/well) were seeded in 12-well plates and cultured in the control condition and exposed to 400 μM L-cysteine and/or 100 μM cobalt chloride and/or carboplatin 25 μg/mL. Cells were collected after 48 h of the tested conditions. The ancestral cell lines were also tested. This assay was performed with 43 days of selection for ES2 cells and 84 days of selection for OVCAR3 cells. Statistical analysis Data are presented as the mean ± SD, and all graphics were produced using the PRISM software package (PRISM 6.0 for Mac OS X; GraphPad Software, USA, 2013). Assays were performed with 3 replicates per treatment. For comparisons of two groups, a two-tailed independent-samples T-test was used. For comparisons of more than two groups, one-way analysis of variance (ANOVA) with Tukey's multiple-comparisons post hoc test was used. To assess the existence of a linear relationship between two variables, two-tailed Pearson correlation was used. Statistical significance was established as p < 0.05. All statistical analyses were performed using IBM SPSS Statistics software (IBM Corp.). 
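For reference, the statistical pipeline just described (two-tailed independent-samples t-test, one-way ANOVA with Tukey's post hoc test, two-tailed Pearson correlation, significance at p < 0.05) maps directly onto standard scientific Python calls; a minimal sketch with made-up replicate values (three replicates per treatment, as above):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical cell-death percentages, 3 replicates per condition
groups = {"N": [12.1, 13.4, 11.8], "H": [25.3, 27.9, 26.4], "HC": [15.2, 16.1, 14.7]}

# Two groups: two-tailed independent-samples t-test
t_stat, p_two = stats.ttest_ind(groups["N"], groups["H"])

# More than two groups: one-way ANOVA followed by Tukey's post hoc test
f_stat, p_anova = stats.f_oneway(*groups.values())
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 3)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Linear relationship between two variables: two-tailed Pearson correlation
r, p_r = stats.pearsonr([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.0])
print(t_stat, p_two, f_stat, p_anova, r, p_r)
```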
Adaptation to normoxia (N) confers a highly proliferative ability to ES2 cells We started by confirming the induction of HIF1α expression by CoCl2. In fact, HIF1α expression was increased in both cell lines upon exposure to CoCl2 (Fig. 1). Then, we assessed the selective effects of normoxia (N) and CoCl2-mimicked hypoxia (H) on the proliferation of ES2 (OCCC) and OVCAR3 (OSC) cells, measured by trypan blue staining and counting under a light microscope. The codes of each cell line and culture condition are presented in Table 1. The proliferation curves showed that ES2-N cells proliferated more than ES2-H, both in N and in H (Table 2 and Fig. 2a). In addition, ES2-NN tended to proliferate more than ES2-NH, which is supported by the cell cycle analysis, which showed that ES2-NN had a lower percentage of cells in G0/G1 than ES2-NH (Fig. 2b). Cell cycle analysis was performed by flow cytometry using PI staining in ethanol-fixed cells. Adaptation to normoxia (N) is accompanied by an evolutionary trade-off that is suppressed by cysteine under CoCl2-mimicked hypoxia (H) in ES2 cells Cell death, by flow cytometry using annexin V and propidium iodide (PI) staining, was used to assess the selective effects of N and H in ES2 (OCCC) and OVCAR3 (OSC). The codes of each cell line and culture condition are presented in Table 1. Cell death analysis showed that ES2-A, ES2-N and ES2-H have a trend to benefit from cysteine in normoxia. Notably, ES2-AH and ES2-NH benefit from cysteine, having lower cell death levels. ES2-N showed to be more sensitive to CoCl2-mimicked hypoxia than ES2-A, thus showing an evolutionary trade-off in the adaptation to N, and cysteine was able to suppress this trade-off (Fig. 3a, b and Table 3A, B). As expected, for ES2-H no Tukey test reached statistical significance amongst conditions, suggesting that ES2-H performed equally in all environments (Fig. 3a, b and Table 3A, B). In normoxia, OVCAR3-A, OVCAR3-N and OVCAR3-H showed no differences in the absence and presence of cysteine. OVCAR3-N also showed to be more sensitive to CoCl2-mimicked hypoxia than OVCAR3-A, thus showing again an evolutionary trade-off in the adaptation to normoxia (Fig. 3c, d and Table 3 C, D). Interestingly, only OVCAR3-A showed a benefit from cysteine in hypoxia, suggesting that selection under normoxia (OVCAR3-N) led to a decreased dependence on cysteine metabolism or to a loss of the capacity to take advantage of cysteine. OVCAR3-H was worse adapted to normoxia than OVCAR3-A and OVCAR3-N, but under H these cells performed better than OVCAR3-N (Fig. 3c, d and Table 3 C, D), thus suggesting that this cell line also presents an evolutionary trade-off, with adaptation to CoCl2-mimicked hypoxia being penalized under normoxia. However, like ES2-H, OVCAR3-H performed equally in all environments (Fig. 3c, d). We must highlight that there was no difference in the response to CoCl2-mimicked hypoxia between OVCAR3 cells and the respective ES2 cells. Nevertheless, OVCAR3 cells presented lower cell death levels in this condition (Additional file 1: Figure S1A and B and Additional file 2: Table S1). Metabolic evolution driven by CoCl2-mimicked hypoxia (H) provides stronger resistance to carboplatin Here, we assessed the effects of selection under N and H on the capacity of cells to survive upon carboplatin exposure. The codes of each cell line and culture condition are presented in Table 1, and cell death was assessed by flow cytometry using annexin V and propidium iodide (PI) staining. Upon carboplatin exposure, cell death levels increased for ES2-A cells in all treatments when compared to a drug-free environment. ES2-N cells showed a trend similar to ES2-A in all conditions, with the exception of ES2-NH, in which there was a tendency for higher cell death levels upon carboplatin exposure, though not statistically significant (Fig. 4a and Table 4 A). Nonetheless, cysteine was advantageous under H in the presence of carboplatin for both ES2-A and ES2-N (Additional file 3: Figure S2 and Additional file 2: Table S2A). Interestingly, for ES2-H cells, only ES2-HH showed a slight increase in cell death levels upon carboplatin exposure (Fig. 4a and Table 4 A). Hence, ES2-H cells present a higher survival capacity upon carboplatin exposure than ES2-A and ES2-N cells (Fig. 4b and Table 4; Additional file 3: Figure S2 and Additional file 2: Table S2). This fact is reinforced by the results of ES2 cells selected in normoxia (ES2-N), showing that these cells present a higher ratio of cell death when cultured in CoCl2-mimicked hypoxia with cysteine (ES2-NHC) versus without cysteine (ES2-NH), upon carboplatin (Fig. 4). Together, the results suggest that cysteine facilitates the adaptation to CoCl2-mimicked hypoxia, which, in turn, drives carboplatin resistance. On the contrary, long-term normoxia drives the selection of cells that have less capacity to benefit from cysteine protection under hypoxia and upon drug exposure. OVCAR3 cells presented higher cell death levels upon carboplatin exposure in OVCAR3-A, OVCAR3-N and OVCAR3-H cells, when compared to a drug-free environment, in all treatments (Fig. 4c, Additional file 3: Figure S2 and Additional file 2: Table S2 B). Interestingly, OVCAR3-HN cells presented a stronger survival ability upon carboplatin than OVCAR3-AN and OVCAR3-NN (Fig. 4d and Table 4 F). Taken together, the results suggest that H-selection can also be advantageous for OVCAR3 cells upon carboplatin exposure, nonetheless to a lesser extent than for ES2 cells. 
Carboplatin resistance driven by CoCl2-mimicked hypoxia is stronger in ES2 (OCCC) cells We next compared the dynamics of the response to carboplatin of ES2 and OVCAR3 ancestral and selected cells (Fig. 5c and Table 5 C). In ancestral cells, the dynamics of the carboplatin response were similar between ES2 and OVCAR3 cells, in which carboplatin induced cell death in a time-dependent manner (Fig. 6a and Table 6 A). However, ES2-NH cells showed a stable response to carboplatin over time, whereas OVCAR3-NH cells presented increased cell death levels with increasing time of carboplatin exposure (Fig. 6b and Table 6 B). In all conditions, ES2-H cells showed a stable carboplatin response, with the exception of ES2-HH, in which carboplatin induced a slight increase in cell death levels with increasing time of exposure. In OVCAR3-H cells, carboplatin induced cell death in a time-dependent manner in all treatments (Fig. 6c and Table 6 C). Taken together, the results suggest that CoCl2-mimicked hypoxia (H) drives carboplatin resistance in ES2 and, to a lesser extent, in OVCAR3 cells, thus pointing to a more aggressive phenotype in ES2-H than in OVCAR3-H cells. Since ES2-A cells and ES2-N cells were able to take advantage of cysteine in H (ES2-AH and ES2-NH), we propose that cysteine allows a quicker response and adaptation to H conditions that, in turn, drive carboplatin resistance. ES2 cells present metabolic diversity in adverse environments, favouring resistance to carboplatin The codes of each cell line and culture condition are presented in Table 1. In a drug-free environment, the analysis of ROS levels by flow cytometry allowed the observation of two distinct populations in ES2-NH (Fig. 7a), suggesting the existence of a glycolytic and an oxidative phosphorylative population of cells. Interestingly, CoCl2-mimicked hypoxia was especially disadvantageous for those cells, which presented the highest cell death levels in this condition (Fig. 3a). [Figure legend: HC – hypoxia mimicked with CoCl2 supplemented with cysteine. In a. and c., asterisks represent statistical significance compared to the respective control (cells cultured in the same experimental condition but in a drug-free environment) within each cell line. In b. and d., asterisks represent statistical significance compared to ancestral cells and cardinals (#) represent statistical significance compared to N-selected cells. Data were normalized to the respective control. Results are shown as mean ± SD. *p < 0.05, **p < 0.01, ***p < 0.001 or #p < 0.05, ##p < 0.01, ###p < 0.001 (a. and c.: independent-samples T test; b. and d.: one-way ANOVA with post hoc Tukey tests).] This suggests that metabolic diversity among ES2-N cells could be a strategy to cope with new adverse environments. In ES2-HH, we only observed one population, thus revealing a higher metabolic adaptive capacity to H (Fig. 7b). Interestingly, in OVCAR3-NH we were not able to distinguish two different populations of cells as in ES2-NH (Fig. 7c). Also, we observed a trend towards higher ROS levels in both ES2-N and ES2-H than in OVCAR3-N and OVCAR3-H, especially in conditions with cysteine supplementation (Additional file 4: Figure S3A to D and Additional file 2: Table S3 A to D). This might indicate that cysteine allows higher metabolic activity in ES2 cells, even under H. Moreover, the detection of ROS, using 2′,7′-dichlorofluorescin diacetate, never showed a correlation between higher ROS levels and higher cell death levels in any cell line. 
On the contrary, ROS showed a negative correlation with cell death. Upon carboplatin exposure, different populations were also observed for ES2-N cells under H (Fig. 7e), thus showing again that this cell line presents different cell populations with different metabolic states in an adverse environment. In addition, ES2-HH with cysteine showed a notable increase in ROS levels upon carboplatin exposure (Fig. 7f, Additional file 4: Figure S3E and F and Additional file 2: Table S3 E and F). Upon carboplatin exposure, OVCAR3-A, OVCAR3-N and OVCAR3-H cells did not show different populations in any treatment (Fig. 7g and h). Interestingly, OVCAR3-H selected cells showed no differences in ROS dynamics, thus suggesting that these cells do not present metabolic diversity (Fig. 7h). Taken together, the results suggest that ES2 cells present more diverse metabolic strategies under adverse environments when compared to OVCAR3 cells. This diversity possibly explains the increased capacity of ES2 cells to respond to the more stressful environments (CoCl2-mimicked hypoxia and carboplatin), whereas, in general, OVCAR3 cells failed to respond to them. Discussion Although the prognosis of OCCC and OSC has been a matter of controversy, it was shown that patients with OCCC had a significantly worse prognosis than patients with OSC when matched for age, stage, and level of primary surgical cytoreduction [36,37]. Moreover, while OCCC shows primary resistance to conventional platinum-based chemotherapy, OSC at first shows sensitivity [10,11], with the development of progressive resistance [9]. Here, we used two different cancer cell lines derived from these two histological types of ovarian cancer and addressed the effect of cell selection under normoxia and CoCl2-mimicked hypoxia on the evolutionary outcome of cancer cells, exploring also the role of cysteine in this adaptive process. It is widely accepted that adaptation to a specific environment is associated with deterioration in other, non-selective environments, being accompanied by an evolutionary trade-off [38][39][40][41]. In fact, our results suggest that there is an evolutionary trade-off in the adaptation of ovarian cancer cells to normoxia, in which cells adapted under normoxia proliferated rapidly but at the cost of increased mortality in adverse environments. Notably, in ES2 (OCCC) cells, cysteine was able to suppress this trade-off under CoCl2-mimicked hypoxia (ES2-NH versus ES2-NHC). Our previous data have shown that cysteine is able to protect cells from death under CoCl2-mimicked hypoxia, allowing fast adaptation to those conditions, especially in ES2 cells (unpublished data). Evidence suggests that intracellular cysteine directly induces the HIF prolyl-hydroxylases, leading to HIF-1α degradation [42,43]. This suggests that cysteine is able to convert a hypoxic cellular metabolism into a normoxic one. In addition, our data suggest that ES2 ancestral cells present both higher intracellular cysteine and higher GSH degradation levels under CoCl2-mimicked hypoxia supplemented with cysteine compared to CoCl2-mimicked hypoxia without cysteine supplementation (data not shown). These observations could account for the protective role of cysteine under these conditions. Those results also suggest that ES2 cells selected under normoxia (ES2-N) still present metabolic diversity concerning cysteine metabolism under hypoxic conditions (ES2-NH). Interestingly, OVCAR3-N cells showed less plasticity. 
Moreover, ES2-H presented increased survival in non-selective environments compared to cells selected under normoxia (ES2-N), suggesting a more aggressive phenotype in these cells, as they seem to exhibit a generalist, hence more adaptive, phenotype. Remarkably, the results showed that the increased survival was accompanied by lower proliferation rates. Life history theory proposes that cancer cells may be subject to trade-offs between maximizing cell survival and cell growth, and that both strategies can be successful depending on the environmental conditions [44]. We observed that ES2-H proliferated more slowly than ES2-N but nevertheless presented increased survival in the presence of carboplatin, a cytotoxic agent used in conventional ovarian cancer chemotherapy, thus showing again that life-history trade-offs may have clinical implications for cancer patients. Those results are in accordance with the observations that hypoxia promotes tumour progression and resistance to therapy (reviewed by Vaupel and Mayer) [45], having a complex role in the hallmarks of human cancers [13,46,47]. Importantly and surprisingly, hypoxia is known to induce mitochondrial ROS levels [48,49]. ROS levels are widely associated with tumour initiation, progression and chemoresistance [48,50,51]. Our results showed increased ROS levels in ES2-H cells under CoCl2-mimicked hypoxia with cysteine supplementation upon carboplatin exposure. Interestingly, in the same conditions, ES2 cells showed a higher ability to survive upon carboplatin exposure. Nevertheless, it remains unclear whether the increased ROS levels are responsible for carboplatin resistance or whether, on the contrary, the higher adaptability of the cells to this environment leads them to increased metabolic activity, thus increasing ROS levels. Notably, OVCAR3-A and OVCAR3-N cells showed to be less sensitive to CoCl2-mimicked hypoxia than ES2-A and ES2-N cells. This observation would suggest that those cells are more prone to chemoresistance than ES2 cells. However, OVCAR3 cells presented a poorer response capacity to carboplatin, thus suggesting that resistance to hypoxia alone cannot explain the more aggressive phenotypes. OVCAR3 cells also presented decreased cell diversity concerning ROS levels in adverse environments. Our results highlight the role of hypoxia-induced chemoresistance in combination with metabolic diversity in cancer cells coping with adverse conditions. Whereas ES2 cells showed metabolic diversity, thus suggesting metabolic reprogramming in adverse conditions, OVCAR3 cells seemed to be inefficient in this process, thus preventing increased survival upon carboplatin cytotoxicity. We have to highlight that ES2 and OVCAR3 cells were selected for different lengths of time, due to a lower proliferation rate of OVCAR3 cells in CoCl2-mimicked hypoxia than ES2, which could explain, in part, the lower diversity observed in OVCAR3 selected cells, as these cells were selected for a longer time than ES2 cells. However, in what concerns carboplatin resistance, we would expect an association between a longer selection time and higher levels of resistance, whereas, in a general way, OVCAR3 selected cells showed to be less resistant than ES2 selected cells. Moreover, our main purpose was to compare the effect of selection under normoxia and CoCl2-mimicked hypoxia and of cysteine supplementation on the dynamics of adaptation to carboplatin within each cell line (ES2/OVCAR3), and the time of selection was the same in these situations. 
Also, the ancestral OVCAR3 (OVCAR3-A) cells showed dynamics of response to carboplatin similar to those of the selected cells, corroborating the results. The proliferation curve/cell cycle analyses and the cell death analysis/ROS quantification were also performed at different selection times within each cell line, but we did not aim to compare proliferation with cell death. The only speculation made concerned ES2 cells selected under CoCl2-mimicked hypoxia and their increased survival accompanied by lower proliferation rates. However, since the proliferation curves were performed at a longer selection time, it would be expected that the same selection time as for the cell death analysis would lead to a more pronounced effect on decreased cell proliferation, given less time for adaptation. Our second hypothesis, that selection under CoCl2-mimicked hypoxia in ES2 (ES2-H) cells would favour a stronger ability of cells to benefit from cysteine under CoCl2-mimicked hypoxia, proved to be false in a drug-free environment. Strikingly, in the presence of carboplatin, cysteine was especially advantageous to ES2-H, thus suggesting that these cells evolved mechanisms for a better usage of this amino acid in new adverse environments. In this study, we only focused on the role of cysteine supplementation in the response to hypoxia and in the further response to carboplatin cytotoxicity. We did not address other amino acids, since we were interested in cysteine as a sulphur source under CoCl2-mimicked hypoxia and in carboplatin resistance. However, in another study we showed that glutamine also plays a role in GSH synthesis, as glutamine is a source of glutamate and glycine [25], supporting again the role of thiols in chemoresistance. Taken together, the results show that the adaptation to normoxia and to CoCl2-mimicked hypoxia leads cancer cells to display opposite strategies. Whereas cells adapted to CoCl2-mimicked hypoxia tend to proliferate less but present increased survival in adverse environments, cells adapted to normoxia present the opposite strategy, proliferating rapidly but at the cost of increased mortality in adverse environments. Although the number of cell lines may be a limitation, we believe these different evolutionary courses might in the future be taken into account in the clinical context, as therapy protocols could be more effective depending on the evolutionary strategy of cancer cells. Moreover, the results stressed that the ability of cancer cells to use cysteine has an impact on the adaptation of cancer cells to a CoCl2-mimicked hypoxic environment and, ultimately, to platinum-based chemotherapeutic agents, allowing the selection of resistant phenotypes that are more aggressive and able to carry out cancer progression and recurrence (Fig. 8). Finally, our study paves the way to showing that experimental evolution in cancer can be a valuable tool to predict the metabolic courses underlying resistance to drugs, which will ultimately contribute to improved cancer-fighting strategies. Conclusions Despite the limitations of cell line models, our results highlight the role of metabolic evolution driven by selection under CoCl2-mimicked hypoxia and of cysteine availability in the response of ovarian cancer cells to chemotherapy. Moreover, our results highlight cysteine bioavailability as a source of new therapeutic targets in order to reverse resistance both to hypoxia and to carboplatin. Finally, the ability of cancer cells to metabolize and import cysteine could also be used to predict the development of resistance to platinum-based therapy. 
Currently, we are developing a study to disclose the biochemical mechanism underlying the benefits of cysteine in order to find prognostic markers and, ideally, targets to overcome chemoresistance in ovarian cancer. [Fig. 8 legend] ES2 and OVCAR3 cell lines showed different adaptive capacities in a drug-free environment, which influences the response to carboplatin. a Non-selected ancestral and normoxia-selected ES2 cell lines showed an evolutionary trade-off when exposed to CoCl2-mimicked hypoxia; this situation is reverted by the presence of cysteine. CoCl2-mimicked hypoxia-selected cells behave equally in the presence or absence of cysteine. Upon carboplatin exposure, all ES2 cell variants benefit from a protective effect of cysteine, decreasing the cytotoxicity of carboplatin. Thus, ES2 cells exhibited a higher adaptive capacity to CoCl2-mimicked hypoxia and cysteine, which is reflected in a better fitness in the carboplatin-rich non-selective environment, with the CoCl2-mimicked hypoxia-selected cells being the best fitted. b Non-selected ancestral and normoxia-selected OVCAR3 cell lines showed an evolutionary trade-off when exposed to CoCl2-mimicked hypoxia; cysteine only reverts this situation in non-selected ancestral cells. Upon carboplatin exposure, all OVCAR3 cell variants benefit from a protective effect of cysteine, decreasing the cytotoxicity of carboplatin. OVCAR3 cell variants benefit from different degrees of cysteine protection: non-selected ancestral > CoCl2-mimicked hypoxia-selected > normoxia-selected. Thus, OVCAR3 cells exhibited a lower adaptive capacity to CoCl2-mimicked hypoxia and cysteine, which is reflected in a worse fitness in the carboplatin-rich non-selective environment, with the normoxia-selected cells being the worst fitted. Overall, ES2 cells have a higher metabolic plasticity than OVCAR3 cells. This fact can underlie the intrinsic resistance to carboplatin exhibited by clear cell carcinoma in the clinical setting. White ellipses in ES2 cells represent vacuoles characteristic of clear cell carcinoma.
2018-06-19T23:53:50.023Z
2018-06-19T00:00:00.000
{ "year": 2018, "sha1": "597a21389cfcebbb76d99d58f78fcf001e5fa56d", "oa_license": "CCBY", "oa_url": "https://bmcecolevol.biomedcentral.com/track/pdf/10.1186/s12862-018-1214-1", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "597a21389cfcebbb76d99d58f78fcf001e5fa56d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
15279955
pes2o/s2orc
v3-fos-license
Partition properties of the dense local order and a colored version of Milliken's theorem We study the finite dimensional partition properties of the countable homogeneous dense local order. Some of our results use ideas borrowed from the partition calculus of the rationals and are obtained thanks to a strengthening of Milliken's theorem on trees. Introduction The purpose of this paper is the study of the partition properties of a particular oriented graph, called the dense local order. To our knowledge, the dense local order (denoted S(2) in the sequel) appeared first in a work of Woodrow [W76]. The attempt then was to characterize the countable tournaments which are homogeneous, that is, for which any isomorphism between finite subtournaments can be extended to an automorphism of the whole structure. It was shown that up to isomorphism, there are only two countable homogeneous tournaments which do not embed the tournament D shown in Figure 1. Those are 1) the tournament corresponding to the rationals (Q, <), where x ⟵ y in Q iff x < y, and 2) the dense local order S(2). The tournament S(2) is defined as follows: let T denote the unit circle in the complex plane. Define an oriented graph structure on T by declaring that there is an arc from x to y iff 0 < arg(y/x) < π. Call $\overrightarrow{T}$ the resulting oriented graph. The dense local order is then the substructure S(2) of $\overrightarrow{T}$ whose vertices are those points of T with rational argument. A few years later, Lachlan proved in [La84] that any countable homogeneous tournament embedding D also embeds every finite tournament. This completed the classification initiated by Woodrow and showed that up to isomorphism there are only three countable homogeneous tournaments: the rationals, the dense local order and the countable random tournament $\overrightarrow{R}$ (up to isomorphism, the unique countable homogeneous tournament into which every countable tournament embeds). Note that this is in sharp contrast with the more general case of countable homogeneous oriented graphs, as there are continuum many such objects (this latter result is due to Henson [Hen72], while the classification of countable homogeneous graphs is due to Cherlin [Ch98]). In this paper, we will be interested in Ramsey-type questions with the following flavor: given k ∈ N and a finite tournament Y, is there a finite tournament Z such that for every k-coloring of the arcs of Z, there is an induced copy $\widetilde{Y}$ of Y in Z where all the arcs have the same color? For this particular problem, the answer could be negative (depending on which Y we started with) but becomes positive if one is allowed to have at most two colors instead of one single color for the arcs of $\widetilde{Y}$. More generally, Ramsey-theoretic properties of the rationals and of the random tournament are known in the following sense: given tournaments X, Y and Z, we write X ⊂ Z when X is an induced subtournament of Z, and X ≅ Y when there is an isomorphism from X onto Y. We define the set $\binom{Z}{X} = \{\widetilde{X} \subset Z : \widetilde{X} \cong X\}$. For k, l positive elements of N (throughout this article, N = {0, 1, 2, 3, . . .}) and a triple X, Y, Z of tournaments, the symbol $Z \longrightarrow (Y)^{X}_{k,l}$ is an abbreviation for the statement: "For any χ : $\binom{Z}{X} \longrightarrow [k]$ (by [k] we mean the set {0, . . . , k − 1}), there is $\widetilde{Y} \in \binom{Z}{Y}$ such that χ does not take more than l values on $\binom{\widetilde{Y}}{X}$." When l = 1, this is simply written $Z \longrightarrow (Y)^{X}_{k}$. Let Q, T and C denote the class of all finite subtournaments of Q, $\overrightarrow{R}$ and S(2) respectively. 
For K = Q, T or C and X ∈ K, a first problem is to determine the value of the Ramsey degree of X in K, denoted t_K(X), defined as the least l ∈ N ∪ {∞} such that for every Y ∈ K and every k ∈ N there exists Z ∈ K such that $Z \longrightarrow (Y)^{X}_{k,l}$. A second problem is to determine the value of the big Ramsey degree of X in K. This latter quantity is denoted T_K(X) and is defined as follows: let F denote the countable homogeneous structure whose class of finite substructures is exactly K (that is, F = Q, $\overrightarrow{R}$ or S(2) according to whether K = Q, T or C). Then the big Ramsey degree of X in K is the least L ∈ N ∪ {∞} such that for every k ∈ N, $F \longrightarrow (F)^{X}_{k,L}$. For K = Q, the Ramsey degrees and the big Ramsey degrees are always finite and can be computed effectively. More precisely, every X ∈ Q is such that t_Q(X) = 1. This is an easy consequence of the original Ramsey theorem. By contrast, a much more difficult proof due to Devlin in [Dev79] showed that $T_Q(X) = \tan^{(2|X|-1)}(0)$, the (2|X|−1)st derivative of tan evaluated at 0. Recall that $\tan^{(1)}(0) = 1$, $\tan^{(3)}(0) = 2$, $\tan^{(5)}(0) = 16$, $\tan^{(7)}(0) = 272$ and that in general, by the Leibniz rule applied to the identity tan′ = 1 + tan², $\tan^{(2n+1)}(0) = \sum_{k=0}^{n-1} \binom{2n}{2k+1} \tan^{(2k+1)}(0)\, \tan^{(2n-2k-1)}(0)$. For K = T, Ramsey degrees and big Ramsey degrees have never been studied explicitly but can be determined thanks to other known Ramsey-type results. In particular, thanks to a general partition result of Nešetřil and Rödl, it is known that every X ∈ T has a finite Ramsey degree, equal to t_T(X) = |X|!/|Aut(X)|, where Aut(X) denotes the set of all automorphisms of X. On the other hand, T_T(X) is known to be finite, and the work [LSV04] by Laflamme, Sauer and Vuksanovic on the countable random undirected graph actually shows that its value can be interpreted as the number of representations that X admits into a certain well-known finite structure (that is, there is an algorithm for every X determining the value of T_T(X)). However, it is still unclear whether this expression can be simplified so as to give a counterpart to Devlin's formula in the context of T. As for the case K = C, it does not seem to have been studied by anybody so far, and the purpose of the present paper is therefore to fill that gap. We first study the Ramsey degrees in C. Our result here reads as follows: Theorem 1. Every element X of C has a Ramsey degree in C equal to t_C(X) = 2|X|/|Aut(X)|. We then turn to the study of the big Ramsey degrees in C, and prove: Theorem 2. Every element X of C has a big Ramsey degree in C equal to $T_C(X) = t_C(X) \cdot \tan^{(2|X|-1)}(0) = \frac{2|X|}{|\mathrm{Aut}(X)|}\, \tan^{(2|X|-1)}(0)$. As a direct corollary, for every natural k > 0 and every coloring χ : S(2) −→ [k], there is an isomorphic copy of S(2) inside S(2) on which χ takes only 2 colors (this statement is not as obvious as it looks), and 2 is the best possible bound. On the other hand, for every k-coloring of the arcs of S(2), there is an isomorphic copy of S(2) inside S(2) where only 8 colors appear, and 8 is the best possible bound. 
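As a quick sanity check, the tangent numbers $\tan^{(2n-1)}(0) = 1, 2, 16, 272, \dots$ appearing in Devlin's formula and in Theorem 2 can be generated directly from the recursion just recalled; a minimal sketch in Python (standard library only):

```python
from math import comb

def tangent_derivatives(n_max):
    """Odd derivatives of tan at 0, via tan' = 1 + tan^2 and the Leibniz
    rule.  Returns a dict d with d[2n-1] = tan^{(2n-1)}(0); the even
    derivatives of tan at 0 all vanish."""
    d = {1: 1}  # tan'(0) = 1
    for n in range(1, n_max):
        # tan^{(2n+1)}(0) = sum_k C(2n, 2k+1) tan^{(2k+1)}(0) tan^{(2n-2k-1)}(0)
        d[2 * n + 1] = sum(comb(2 * n, 2 * k + 1) * d[2 * k + 1] * d[2 * (n - k) - 1]
                           for k in range(n))
    return d

print(tangent_derivatives(5))  # {1: 1, 3: 2, 5: 16, 7: 272, 9: 7936}
```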
Theorem 1 and Theorem 2 are proved thanks to a connection between the class C and some other classes of finite structures for which several Ramsey properties are already known. Those are the classes P_n of all finite structures of the form A = (A, <^A, P^A_1, . . . , P^A_n), where <^A is a linear ordering on A and {P^A_1, . . . , P^A_n} is a partition of A into disjoint sets. Given two such structures A and B, an isomorphism is an order-preserving bijection f from A to B such that for every i ∈ {1, . . . , n}, f maps P^A_i onto P^B_i. As was the case for the class C, there is a unique countable homogeneous structure whose class of finite substructures is P_n. In this paper, this structure is denoted Q_n. The role that Q_n plays with respect to P_n is exactly the same as the role that S(2) plays for the class C. As for S(2), the structure Q_n can be represented quite simply. Namely, the structure Q_n can be seen as (Q, Q_1, . . . , Q_n, <), where Q denotes the rationals, < denotes the usual ordering on Q, and every Q_i is a dense subset of Q. The notions of Ramsey degrees and big Ramsey degrees in P_n are then defined in exactly the same way as they are for C. The Ramsey degrees in P_n are known: every element in P_n has a Ramsey degree in P_n equal to one. This result, in the case n = 2, is one of the key facts in our proof of Theorem 1. As for the big Ramsey degrees, we are able to prove that: Theorem 3. Let n be a positive natural. Then every element X of P_n has a big Ramsey degree in P_n equal to $\tan^{(2|X|-1)}(0)$. Equivalently, for every element X of P_n, $\tan^{(2|X|-1)}(0)$ is the least possible natural such that for every natural k > 0, $Q_n \longrightarrow (Q_n)^{X}_{k, \tan^{(2|X|-1)}(0)}$. Again, the corresponding result for n = 2 turns out to be crucial for our purposes. Here, it is one of the ingredients of our proof of Theorem 2. Theorem 3 is obtained by following ideas borrowed from Devlin [Dev79] together with a strengthening of a theorem of Milliken [Mi79]: consider a finitely branching tree (in the order-theoretic sense) T of infinite height, a number m, and a subset S ⊂ T. If S satisfies certain properties listed in Section 5, we say that S is a strong subtree of T of height m. According to Milliken's theorem, if we assign a color to each strong subtree of height m out of a finite family of colors, then there exists a strong subtree of infinite height such that all strong subtrees of height m contained in it have the same color. In the version we need in order to prove Theorem 3, each level of the tree is assigned a color (out of a finite set not related to the set of colors of subtrees). We then consider only strong subtrees of height m with some given level-coloring structure, and we look for a strong subtree of infinite height with a level-coloring structure similar to that of the original tree. The paper is organized as follows: in Section 2, we define the notion of extension in P_2 for any element of C and show that the number of nonisomorphic extensions in P_2 of a given element of C can be expressed simply in terms of the size of its automorphism group. In Section 3, we use this result to compute Ramsey degrees in C and to prove Theorem 1. In Section 4, we turn to the study of big Ramsey degrees and show how Theorem 2 follows from Theorem 3. The two remaining sections of the paper are devoted to a proof of Theorem 3. The first step is carried out in Section 5, where we prove a strengthening of Milliken's theorem on trees. Together with Devlin's original ideas from [Dev79], this result is then used to derive Theorem 3. Acknowledgements: C. Laflamme was supported by NSERC of Canada Grant # 690404. L. Nguyen Van Thé would like to thank the support of the Department of Mathematics & Statistics Postdoctoral Program at the University of Calgary. N. W. Sauer was supported by NSERC of Canada Grant # 691325. We would also like to thank the anonymous referee whose numerous and helpful comments improved the paper considerably. Extensions of circular tournaments The purpose of this section is to establish a connection between the elements of C and the elements of P_2. This connection is not new: it already appears in [La84] and in [Ch98], as well as in several other papers. Here, it enables us to deduce most of our results from an analysis of the partition calculus on P_2. This is done by defining a notion of extension for every element of C: given A = (A, <^A, P^A_1, P^A_2) ∈ P_2, write a ∼_A a′ when a and a′ belong to the same part P^A_i. 
Let then p(A) denote the oriented graph based on A and equipped with the arc relation ⟵ obtained as follows: start by setting x ⟵ y iff x <^A y; then reverse all the arcs between the elements of A which are not ∼_A-equivalent. This construction is illustrated in Figure 2. It should be clear that p(A) is a tournament. For a tournament X, any A such that p(A) = X is called an extension of X. Lemma 1. For every A ∈ P_2, the tournament p(A) embeds into S(2). Proof. We construct ϕ(A) ⊂ S(2) isomorphic to p(A) as follows: denote by Im⁺ the complex open upper half plane. The directed graph structure on S(2) induces a linear ordering on S(2) ∩ Im⁺ if we set x < y iff x ⟵ y in S(2). As a linear order, (S(2) ∩ Im⁺, <) is isomorphic to Q, hence without loss of generality we may assume that the linear ordering (A, <^A) is a subset of (S(2) ∩ Im⁺, <). Using the fact that in the complex plane, (−a) is the symmetric of a with respect to the origin, let ϕ : A −→ S(2) be defined by ϕ(a) = a if a ∈ P^A_1 and ϕ(a) = −a if a ∈ P^A_2. Observe that if a, a′ ∈ A belong to the same P^A_i, then ϕ preserves the arc relation ⟵ of S(2) between a and a′, while it reverses it when a and a′ do not belong to the same P^A_i. This fact, together with the construction scheme described previously for p(A) (paragraph preceding Lemma 1), implies that the tournaments ϕ(A) and p(A) are isomorphic. The procedure applied in Lemma 1 (referred to as the projection procedure in the sequel) is illustrated in a simple case in Figure 3. 
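The projection procedure is easy to phrase computationally. The sketch below (the helper name and data layout are our own choices; arcs are written as ordered pairs (from, to)) builds the arc set of p(A) from the linear ordering and the two-part partition:

```python
def project(points, part):
    """Arc set of p(A): points lists A in increasing <^A order and
    part[x] in {1, 2} records the block P_i containing x.  Start from
    x <- y (arc y -> x) whenever x <^A y, then reverse every arc joining
    two points lying in different parts."""
    arcs = set()
    for i, x in enumerate(points):
        for y in points[i + 1:]:                      # x <^A y
            arcs.add((y, x) if part[x] == part[y] else (x, y))
    return arcs

# Two-point example: one point in each part, so the single arc is reversed
print(project(["a", "b"], {"a": 1, "b": 2}))          # {('a', 'b')}
```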
Lemma 2. Every X ∈ C admits exactly 2|X|/|Aut(X)| pairwise non-isomorphic extensions in P_2. Proof. We first show that the projection procedure to obtain p(A) from A can be reversed to an extension procedure in order to construct extensions of X: consider a line L through the complex origin avoiding X, and let H be one of the two open half planes with boundary L. Set P^A_1 := X ∩ H and P^A_2 := {−a : a ∈ X \ H}. That is, P^A_2 is the set obtained from X \ H by symmetry with respect to the origin. As previously, the arc relation on S(2) induces a linear ordering <^A on A := P^A_1 ∪ P^A_2 (all of whose elements lie in H). Then the structure A := (A; P^A_1, P^A_2, <^A) is in P_2 and is an extension of X. A simple application of the extension procedure is illustrated in Figure 4. Note the following essential fact: if A is an extension of X, then applying the projection procedure to A produces a copy $\widetilde{X}$ of X included in S(2), and applying the extension procedure to this same $\widetilde{X}$, where L is the real axis and H is the open upper half plane, produces A itself. It follows that every extension of X can be obtained by applying the extension procedure to X. Hence, to count the number of non-isomorphic extensions of X in P_2, we need to know when different choices of L, H provide non-isomorphic extensions. Observe first that the choice of L determines a linear ordering on X as follows: choose either of the two half planes with boundary L. Using symmetry with respect to the complex origin if necessary, bring all the points of X inside this half plane, where the arc relation on S(2) induces a linear ordering. Then, simply pull this linear ordering back to X. Note that the linear ordering we obtain on X does not depend on the half plane we chose to construct it. Observe that if two lines L, L′ induce linear orderings <, <′ such that (X, <) and (X, <′) are non-isomorphic (when seen as ordered tournaments), then any choice of H, H′ leads to non-isomorphic extensions of X in P_2. Since for each line L there are two choices for H, it follows that the number of non-isomorphic extensions of X in P_2 is twice the number of structures of the form (X, <) where < comes from a line. To compute this number, observe that two lines L, L′ induce the same linear ordering on X when their half planes contain the same vertices of X. Therefore, there are |X| such orderings. Next, consider < and <′. They enumerate X increasingly as $\{x_1, \dots, x_{|X|}\}$ and $\{x'_1, \dots, x'_{|X|}\}$ respectively, and (X, <) and (X, <′) are isomorphic exactly when the map $x_n \mapsto x'_n$ is an automorphism of X. Therefore, there are essentially |X|/|Aut(X)| different ways to order X via a line. The result of Lemma 2 follows. Remark: Observe that since the number |X|/|Aut(X)| represents the number of different ways to order X via a line, it is an integer. Therefore, |Aut(X)| divides |X|. Ramsey degrees in C For X ∈ C, we write t(X) for the number 2|X|/|Aut(X)|. The purpose of this section is to prove Theorem 1, that is: every X ∈ C has a finite Ramsey degree t_C(X) in C and t_C(X) = t(X). Throughout this section, X ∈ C is fixed. We first show that t_C(X) ≤ t(X) and next that t(X) ≤ t_C(X). 3.1. Upper bound for t_C(X): t_C(X) ≤ t(X). This is done thanks to the following partition property of P_2: Theorem 4 (Kechris-Pestov-Todorcevic, [KPT05]). Let n ∈ N, A, B ∈ P_n and k a positive natural. Then there is C ∈ P_n such that $C \longrightarrow (B)^{A}_{k}$. Proof. Cf. [KPT05], Theorem 8.4, p. 158-159. In order to prove that X has a finite Ramsey degree t_C(X) and that t_C(X) ≤ t(X), we apply Theorem 4 t(X) times as follows. For the sake of clarity, we only consider the particular case where t(X) = 2, but it should be clear at the end of the argument how to generalize to any other value. According to Lemma 2, t(X) is equal to the number of nonisomorphic extensions of X in P_2. Let A_0, A_1 denote those extensions. Let also B_0 ∈ P_2 be such that p(B_0) ≅ Y. Using Theorem 4, construct B_1 so that $B_1 \longrightarrow (B_0)^{A_0}_{k}$, and then B_2 so that $B_2 \longrightarrow (B_1)^{A_1}_{k}$. We claim that Z := p(B_2) is as required: a k-coloring χ of $\binom{Z}{X}$ induces colorings of the copies of A_1 and of A_0 inside B_2, which the two applications of Theorem 4 allow to stabilize successively, yielding a copy $\widetilde{Y}$ of Y. Therefore, the map χ takes no more than 2 values (in the general case, t(X) values) on $\binom{\widetilde{Y}}{X}$, as required. Thus, X has a Ramsey degree t_C(X) in C and t_C(X) ≤ t(X). 3.2. Lower bound for t_C(X): t(X) ≤ t_C(X). The main ingredient is the following lemma: Lemma 3. There exists Y ∈ C such that every extension of X embeds into every extension of Y. Proof. Let C_n denote the subtournament of S(2) whose set of vertices is given by $\{e^{2ik\pi/(2n+1)} : k = 0, \dots, 2n\}$. Observe that up to an interchange of the parts, all the extensions of C_n in P_2 are isomorphic. Essentially, this is so because there is only one way to order C_n via a line through the origin as in Lemma 2. Another way to see it is to notice that C_n admits exactly 2n + 1 automorphisms: every rotation whose angle is a multiple of 2π/(2n + 1) provides an automorphism. Furthermore, we saw with the Remark at the end of Section 2 that the number of automorphisms divides the cardinality of the structure. Thus, there cannot be more than 2n + 1 automorphisms, which means in the present case that there are exactly 2n + 1 automorphisms. Therefore, C_n has 2|C_n|/|Aut(C_n)| = 2(2n + 1)/(2n + 1) = 2 extensions in P_2, namely two structures D_n and E_n which differ only by an interchange of the parts. Note that if n is large enough, then X embeds into C_n. Note also that, seeing X as a subtournament of C_n, the extension procedure applied to X with any line L and half plane H also induces an extension of C_n. It follows that any extension of X embeds in D_n and E_n, and we can take Y = C_n. Here is how Lemma 3 leads to the required inequality: let Z ∈ C. We show that there is a map χ on $\binom{Z}{X}$ using t(X) values and taking t(X) values on the set $\binom{\widetilde{Y}}{X}$ whenever $\widetilde{Y} \in \binom{Z}{Y}$. Let C be an extension of Z in P_2. Then given a copy $\widetilde{X}$ of X in Z, the substructure of C supported by $\widetilde{X}$ is an extension of X in P_2 and is isomorphic to a unique element A_j of the family $(A_i)_{i<t(X)}$. Let χ($\widetilde{X}$) = j. Then the map χ is as required. 
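The automorphism count used in the proof of Lemma 3 can be verified by brute force for small n; a short sketch (vertex j stands for $e^{2ij\pi/(2n+1)}$, and the arc condition translates arg(y/x) ∈ (0, π)):

```python
from itertools import permutations

def local_arcs(n):
    """Arc set of C_n on the 2n+1 vertices 0..2n: arc j -> k iff
    0 < (k - j) mod (2n+1) <= n, i.e. 0 < arg(v_k / v_j) < pi."""
    v = 2 * n + 1
    return {(j, k) for j in range(v) for k in range(v)
            if 0 < (k - j) % v <= n}

def automorphisms(arcs, v):
    """All permutations of the vertex set preserving the arc relation."""
    return [p for p in permutations(range(v))
            if all(((p[j], p[k]) in arcs) == ((j, k) in arcs)
                   for j in range(v) for k in range(v) if j != k)]

arcs = local_arcs(2)              # C_2 has 5 vertices
print(len(automorphisms(arcs, 5)))  # 5 (the rotations), so t_C(C_2) = 2*5/5 = 2
```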
3.3. Comments about t_C(X). The effective computation of t_C(X) (or equivalently of |Aut(X)|) in the general case does not seem to be easy. It can be carried out in the most elementary cases, see Figure 3.3. There are also a few particular elements of C for which it can be performed directly. For example, for the oriented graph corresponding to the linear order on n points, the Ramsey degree in C is equal to 2n, as there is only one automorphism. On the other hand, call C_n the subtournament of S(2) whose set of vertices is given by $\{e^{2ik\pi/(2n+1)} : k = 0, \dots, 2n\}$. We saw in the proof of Lemma 3 that C_n only has two non-isomorphic extensions. It follows that the Ramsey degree of C_n in C is equal to 2. Finally, note that given any n ∈ N, there are exactly 2^n nonisomorphic structures in P_2 whose base set has exactly n elements. Note also that any such structure is the extension of a unique X ∈ C such that |X| = n. It follows that $\sum_{|X|=n} t_C(X) = 2^n$. Using the expression of t_C(X), it follows that $\sum_{X \in C,\, |X|=n} n/|\mathrm{Aut}(X)| = 2^{n-1}$. Big Ramsey degrees in C The purpose of this section is to prove Theorem 2 under the assumption that Theorem 3 holds. Denoting by T(X) the number $t_C(X)\,\tan^{(2|X|-1)}(0)$, we need to show that every X has a finite big Ramsey degree T_C(X) in C equal to T(X). Equivalently, we first need to prove that for every k ∈ N, $S(2) \longrightarrow (S(2))^{X}_{k, T(X)}$. Then, when this is done, we need to show that T(X) is the least number with that property. 4.1. Upper bound for T_C(X): T_C(X) ≤ T(X). Recall that given a structure A = (A, <^A, P^A_1, P^A_2), where <^A is a linear ordering on A and (P^A_1, P^A_2) is a partition of A into two disjoint sets, the tournament p(A) is obtained by interpreting <^A as a directed graph relation ⟵ (x ⟵ y iff x <^A y) and reversing all the arcs between the elements of A which are in different parts P^A_i. Lemma 4. Q_2 is an extension of S(2). Proof. Applying the extension procedure (described in the proof of Lemma 2) to the tournament S(2), where L is any line through the origin avoiding S(2) and H is either of the open half planes with boundary L, we get Q_2. Therefore, Q_2 is an extension of a tournament isomorphic to S(2). 4.2. Lower bound for T_C(X): T(X) ≤ T_C(X). We start with an analogue of Lemma 3. Lemma 5. Every extension of X in P_2 embeds into every extension of S(2). Proof. Let B = (B; B_1, B_2, <^B) be an extension of S(2) and consider the following property: (∗) for every i ∈ {1, 2} and every x, y ∈ B such that x <^B y, there is z ∈ B_i such that x <^B z <^B y. Assuming that (∗) holds, B_1 and B_2 are dense in B. It follows that Q_2, and therefore every element of P_2, embeds into B. In particular, every extension of X embeds into B, which finishes the proof of Lemma 5. We consequently turn to the proof of (∗). Without loss of generality, we may assume that i = 1. We have several elementary cases to verify: (1) If x, y ∈ B_1: fix z ∈ S(2) such that x ⟵ z ⟵ y in S(2). Then z ∈ B_1. Indeed, if not, then z ∈ B_2 and so x > z and z > y. Hence x > y, a contradiction. (2) If x, y ∈ B_2, then a symmetric argument applies, and the mixed cases are handled similarly. We can now show T(X) ≤ T_C(X) by producing a map χ on $\binom{S(2)}{X}$ taking T(X) values on the set $\binom{C}{X}$ whenever C ∈ $\binom{S(2)}{S(2)}$. First, for every i < t_C(X), Theorem 3 guarantees the existence of a map λ_i : $\binom{Q_2}{A_i} \longrightarrow [\tan^{(2|X|-1)}(0)]$ witnessing that the big Ramsey degree of A_i in P_2 is equal to $\tan^{(2|X|-1)}(0)$. Next, consider S(2), seen as p(Q_2). Then given a copy $\widetilde{X}$ of X in S(2), the substructure A($\widetilde{X}$) of Q_2 supported by $\widetilde{X}$ is an extension of X in P_2 and is isomorphic to a unique element of the family $(A_i)_{i<t_C(X)}$. Define then the map χ by χ($\widetilde{X}$) = (i, λ_i(A($\widetilde{X}$))), where i < t_C(X) is the unique natural such that A($\widetilde{X}$) ≅ A_i. Then χ is as required: let C ∈ $\binom{S(2)}{S(2)}$. 
The substructure B of Q_2 supported by C is an extension of S(2) and, by Lemma 5, all the extensions of X embed in B. Additionally, Q_2 embeds into B, so λ_i takes $\tan^{(2|X|-1)}(0)$ many values on $\binom{B}{A_i}$ for every i. Thus, χ takes T(X) many values on $\binom{C}{X}$. This shows that T(X) ≤ T_C(X) and finishes the proof of Theorem 2. A colored version of Milliken's theorem In Section 4, we provided a proof of Theorem 2 assuming Theorem 3. The goal of the present section is to make a first step towards a proof of Theorem 3 by proving a strengthening of the so-called Milliken theorem. The motivation behind the strategy here really comes from the proof of Theorem 3 when n = 1. This was completed by Devlin in [Dev79] thanks to two main ingredients. The first one is a detailed analysis of how copies of Q may appear inside Q when Q is identified with the complete binary tree $[2]^{<\infty}$ of all finite sequences of 0's and 1's ordered lexicographically. The second ingredient is a partition result on trees due to Milliken in [Mi79]. In our case, where we are interested in Q_n instead of Q, the relevant objects to study are not trees anymore but what we will call colored trees. In that context, Devlin's ideas can be applied with few modifications to identify how copies of Q_n may appear inside Q_n. Those are presented in Section 6. However, the relevant version of Milliken's theorem requires more work, and the purpose of the present section is to show that it can be obtained. We start with a short reminder about the combinatorial structures lying at the heart of Milliken's theorem: order-theoretic trees. In what follows, a tree is a partially ordered set (T, ≤) such that given any element t ∈ T, the set {s ∈ T : s ≤ t} is finite and linearly ordered by ≤. The number of predecessors of t ∈ T, ht(t) = |{s ∈ T : s < t}|, is the height of t ∈ T. The m-th level of T is T(m) = {t ∈ T : ht(t) = m}. The height of T is the least m such that T(m) = ∅, if such an m exists. When no such m exists, we say that T has infinite height. When |T(0)| = 1, we say that T is rooted and we denote the root of T by root(T). T is finitely branching when every element of T has only finitely many immediate successors. When T is a tree, the tree structure on T induces a tree structure on every subset S ⊂ T. S is then called a subtree of T. Here, all the trees we will consider will be rooted subtrees of the tree $\mathbb{N}^{<\infty}$ of all finite sequences of naturals ordered by initial segment. That is, every element of $\mathbb{N}^{<\infty}$ is a map t : [m] −→ N for some natural m ∈ N. In the sequel, this natural is denoted |t| and is thought of as the length of the sequence t. The ordering ≤ is then defined by: t ≤ s iff |t| ≤ |s| and for all k ∈ [|t|], t(k) = s(k). That is, if we think of t as the sequence of digits t(0)t(1) . . . t(|t| − 1), then t ≤ s simply means that s is obtained from t by adding some extra digits to the right of t, i.e. s = t(0)t(1) . . . t(|t| − 1)s(|t|)s(|t| + 1) . . . s(|s| − 1). The main concept attached to Milliken's theorem is the concept of strong subtree. Fix a downwards closed finitely branching subtree T of $\mathbb{N}^{<\infty}$ with infinite height. Say that a subtree S of T is strong when: (1) S has a smallest element. (2) Every level of S is included in a level of T. (3) For every s ∈ S not maximal in S and every immediate successor t of s in T, there is exactly one immediate successor of s in S extending t. An example of a strong subtree is provided in Figure 5. For a natural m > 0, denote by S_m(T) the set of all strong subtrees of T of height m. 
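The tree order just defined, together with the meet operation ∧ used in Section 6, is straightforward to express on finite sequences; a small sketch (sequences as Python tuples):

```python
def below(t, s):
    """Tree order on finite sequences: t <= s iff s end-extends t."""
    return len(t) <= len(s) and s[: len(t)] == t

def meet(s, t):
    """s ^ t: the longest common initial segment of s and t."""
    i = 0
    while i < min(len(s), len(t)) and s[i] == t[i]:
        i += 1
    return s[:i]

print(below((0, 1), (0, 1, 0)), meet((0, 1, 1), (0, 1, 0)))  # True (0, 1)
```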
Denote also by S_∞(T) the set of all strong subtrees of T of infinite height. [Figure 7: a strong subtree.] Theorem 5 (Milliken [Mi79]). Let T be a nonempty downward closed finitely branching subtree of $\mathbb{N}^{<\infty}$ with infinite height. Let k, m > 0 be naturals. Then for every map χ : S_m(T) −→ [k], there is S ∈ S_∞(T) such that χ is constant on S_m(S). For our purposes, we need a stronger version of Milliken's theorem relative to n-colored trees. Let α ∈ N ∪ {∞} and n > 0 be a natural. An n-colored tree of height α is a tree T of height α together with an n-coloring sequence τ assigning an element of [n] (thought of as a color) to each of the levels of T (τ(i) then corresponds to the color of T(i), the level i of T). If S is a strong subtree of T, τ induces an n-coloring sequence of S, provided by a subsequence of τ. For β ≤ α and σ a sequence of length β with values in [n], let S_σ(T) denote the set of all strong subtrees of T such that the coloring sequence induced by τ is equal to σ. Theorem 6. Let T be a nonempty downward closed finitely branching subtree of $\mathbb{N}^{<\infty}$ with infinite height. Let n > 0 be a natural and Σ an n-coloring sequence of T taking each value i ∈ [n] infinitely many times. Let k > 0 be a natural and σ an n-coloring sequence with finite length. Then for every map χ : S_σ(T) −→ [k], there is S ∈ S_Σ(T) such that χ is constant on S_σ(S). Proof. We proceed by induction on n. The case n = 1 is handled by the original version of Milliken's theorem. We therefore concentrate on the induction step. Assume that Theorem 6 holds for the natural n. We show that it also holds for the natural n + 1. Since Σ takes each value i ∈ [n + 1] infinitely many times, by going to a subtree of T if necessary we may arrange that Σ is the sequence defined by Σ(k) = k mod (n + 1). We may also assume that for every t ∈ T, the set {j ∈ ω : t⌢j ∈ T} is an initial segment of N (here, t⌢j denotes the concatenation of t and j, that is, the sequence obtained from t by extending it with the extra digit j; formally, t⌢j(m) = t(m) for every m < |t| and t⌢j(|t|) = j). Let q : T −→ T be the function mapping the elements of T with color n onto their immediate predecessor in T and leaving the other elements of T fixed. For an (n + 1)-coloring sequence τ, let q(τ) be the n-coloring sequence obtained from τ by replacing every occurrence of n in τ by (n − 1). Say that a strong subtree U of T satisfies (∗) when: for every u ∈ U and every immediate successor u′ of u in U, if u has color (n − 1) and t is the immediate successor of u in T such that t ≤ u′, then t⌢0 ≤ u′. Lemma 6. Let τ be an (n + 1)-coloring sequence and let S be a strong subtree of q(T) whose coloring sequence is q(τ). Then there is a unique strong subtree of T with coloring sequence τ which satisfies (∗) and whose image under q is included in S; we denote it τ ∗ S. Assuming Lemma 6, the induction step can be carried out as follows: let σ be an (n + 1)-coloring sequence with finite length and χ : S_σ(T) −→ [k]. Using Lemma 6, transfer χ to λ : S_{q(σ)}(q(T)) −→ [k] by setting λ(S) = χ(σ ∗ S). Then, using Theorem 6 for the natural n, find a strong subtree U of q(T) with coloring sequence q(Σ) such that S_{q(σ)}(U) is λ-monochromatic with color ε. By refining U if necessary, we may assume that no two consecutive levels of U are consecutive in T. Then Σ ∗ U ∈ S_Σ(T) and satisfies (∗). We claim that χ is constant on S_σ(Σ ∗ U). Indeed, let V ∈ S_σ(Σ ∗ U). Then q(V) ⊂ q(Σ ∗ U) ⊂ U and it has coloring sequence q(σ). Let W ⊂ U be a strong subtree with the same height as q(V) and such that q(V) ⊂ W. Since Σ ∗ U has property (∗), so does V. By Lemma 6, it follows that V = σ ∗ W. Hence χ(V) = χ(σ ∗ W) = λ(W) = ε, as required. 
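The bookkeeping of the induction step — the canonical level coloring Σ, the map q on nodes, and the induced coloring sequence q(τ) — can be made concrete as follows (a sketch; nodes are tuples, so in a downward closed tree the level of t is len(t)):

```python
def Sigma(k, n):
    """Canonical level colouring arranged in the proof: Sigma(k) = k mod (n + 1)."""
    return k % (n + 1)

def q_node(t, n):
    """q maps a node whose level has colour n to its immediate predecessor
    and fixes every other node."""
    return t[:-1] if Sigma(len(t), n) == n else t

def q_colours(tau, n):
    """q(tau): replace every occurrence of colour n in tau by n - 1."""
    return [n - 1 if c == n else c for c in tau]

print(q_node((0,), 1), q_colours([0, 1, 0, 1], 1))  # () [0, 0, 0, 0]
```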
Proof of Lemma 6. For a tree V and an element v ∈ V, let IS_V(v) denote the set of all immediate successors of v in V. We start by proving the existence of a tree U fulfilling the requirements. We proceed inductively and construct U level by level. For U(0), we distinguish two cases. If σ(0) ≠ n, we set U(0) = S(0) (= {root(S)}). If σ(0) = n, we set U(0) = {root(S)⌢0}. Assume that U(0), . . . , U(k) are constructed. Case 1: σ(k) ≠ n − 1. Then for every u ∈ U(k), any element v of IS_T(u) is also in IS_{q(T)}(q(u)). Thus, there is a unique φ(v) ∈ S(k + 1) such that v ≤ φ(v). If σ(k + 1) ≠ n, U(k + 1) is formed by collecting all the φ(v)'s. Otherwise, σ(k + 1) = n and U(k + 1) is formed by collecting all the φ(v)⌢0's. Case 2: σ(k) = n − 1. Then the immediate successors of the elements of U(k) in T have color n and are not in q(T). For u ∈ U(k) and v ∈ IS_T(u), v ∉ IS_{q(T)}(q(u)) and v may be dominated by more than one element in S. However, v⌢0 ∈ IS_{q(T)}(q(u)) is dominated by exactly one element in S. Let φ(v) denote this element. Form U(k + 1) as in Case 1, by collecting all the φ(v)'s if σ(k + 1) ≠ n and all the φ(v)⌢0's otherwise. Repeating this procedure, we end up with a tree U. This tree is as required, as at every step the construction makes sure that it is strong and that the property (∗) is satisfied. We now show that this procedure is actually the only possible one. Assume that U and U′ are as required. We show that U = U′. First of all, it should be clear that U and U′ have the same root. We now show that if u ∈ U ∩ U′, then IS_U(u) = IS_{U′}(u). It suffices to show that IS_U(u) ⊂ IS_{U′}(u). Let w ∈ IS_U(u). We first claim that q(w) ∈ IS_{q(U)}(q(u)): indeed, let v ∈ U be such that q(v) ≤ q(w) and u < v. Since q(v) and q(w) are comparable, v and w are above the same immediate successor of u in U. Hence w ≤ v and q(w) ≤ q(v), so that q(v) = q(w). So, fix t ∈ IS_T(u) and v ∈ IS_{q(T)}(q(u)) such that u ≤ t ≤ w and q(u) ≤ v ≤ q(w). Observe that because q(T) ⊂ T, we have t ≤ v. Let w′ ∈ IS_{U′}(u) be such that u ≤ t ≤ w′. Note that, as for w, we have q(w′) ∈ IS_{q(U′)}(q(u)). We claim that v ≤ q(w′): if t ∈ q(T), then t = v and we are done. Otherwise, t has color n and u ≤ t < v ≤ w. By (∗) for U, t⌢0 ≤ w. Hence t⌢0 = v. Now, by (∗) for U′, we have t⌢0 ≤ w′. Hence, v ≤ w′ and v ≤ q(w′). It follows that q(w) and q(w′) are in S and above v. Since they have the same height, they must be equal. Hence, w = w′. Big Ramsey degrees in P_n In this section, we show how Theorem 3 can be proven thanks to the machinery developed in Section 5. As already mentioned, this is essentially done by using the ideas that were used by Devlin in [Dev79] to study the partition calculus of the rationals. For that reason, several results are stated without proof. Our presentation here, however, follows a different path. Namely, it repeats the exposition of the forthcoming book [To]. All the details of the proofs that we omit here will appear in [To], together with a wealth of other applications of Milliken's theorem. In the sequel, we work with the tree T = $[2]^{<\infty}$ of finite sequences of 0's and 1's colored by the map Σ defined by Σ(i) = (i mod n) + 1 for every i ∈ N. Noticing that (T, <_lex) and (Q, <) are isomorphic linear orderings and that in (T, <_lex) the subset T_i of all the elements with color i is dense whenever i = 1 . . . n, we see that the colored tree T is isomorphic to Q_n. For s, t ∈ T, set s ∧ t to be the largest common initial segment of s and t, and for A ⊂ T set A∧ = {s ∧ t : s, t ∈ A}. Note that A ⊂ A∧ and that A∧ is the minimal rooted subtree of T containing A. Define an equivalence relation Em on the collection of all finite subsets of T as follows: for A, B ⊂ T, set A Em B when there is a bijection f : A∧ −→ B∧ such that: i) f(A) = B; ii) f preserves the tree ordering ≤ and the operation ∧; iii) f preserves the lexicographical ordering <_lex as well as the relative lengths (for all s, t ∈ A∧, |s| < |t| iff |f(s)| < |f(t)|); iv) f(s) has color i whenever s has color i. 
It should be clear that Em is an equivalence relation. Given A ⊂ T, let [A]_Em denote the Em-equivalence class of A. Let also σ_A denote the sequence of colors corresponding to A∧.

Proof. Define λ(V) for every V ∈ S_{σ_A}(T) by the χ-value of its unique subset which belongs to [A]_Em. According to Lemma 7, the map λ is well-defined. By Theorem 6, there is S ∈ S_Σ(T) such that λ is constant on S_{σ_A}(S). It follows that χ is constant on the members of [A]_Em included in S.

As a direct consequence, every element X of P_n has a big Ramsey degree in P_n less than or equal to the number of embedding types of X inside T. It turns out that when reconstituting copies of Q_n inside T, certain embedding types can be avoided. A finite set A ⊂ T realizes a Devlin embedding type when: (1) A is the set of all terminal nodes of A∧; (3) t(|s|) = 0 for all s, t ∈ A∧ such that |s| < |t| and s ⊥ t. Figure 8 represents eight of the sixteen Devlin types that may be realized by a 3-element subset of T in the uncolored case (n = 1) (each picture represents a subset of the binary tree).

Lemma 8. Every S ∈ S_Σ(T) includes an antichain X such that: (1) (X, X ∩ T_1, . . . , X ∩ T_n, <_lex) is isomorphic to Q_n.

Proof. Without loss of generality, we may assume that S = T. Let W ⊂ T be the ∧-closed subtree of T uniquely determined by the following properties (for an attempt to represent the lowest levels of W, see Figure 9): (1) root(W) = ∅. Let f → w_f denote the isomorphism between (T, <_lex) and W. Define then x_f = w_f⌢01⌢0^i (here, 0^i denotes the sequence with i many 0's), where i is such that 0 ≤ i < n and |f| ≡ i (mod n). Then one can check that for every Y ⊂ X isomorphic to Q_n, the embedding types of the finite subsets of Y are exactly the Devlin embedding types.

It follows that every element X of P_n has a big Ramsey degree in P_n equal to the number of Devlin embedding types of X inside T. Proceeding by induction on the size of X, it can be shown that this number of embeddings actually only depends on the size of X and satisfies a recursion formula which allows one to identify it with the number tan^{(2|X|−1)}(0). This finishes the proof of Theorem 3.

Department of Mathematics and Statistics, University of Calgary, 2500 University Drive NW, Calgary, Alberta, Canada, T2N 1N4.
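As a quick numerical check of the last identification (our illustration, not part of the paper), the first values of tan^{(2|X|−1)}(0) for |X| = 1, 2, 3, 4 are 1, 2, 16, 272; in particular, the value 16 for 3-element subsets matches the sixteen Devlin types mentioned above. A few lines of sympy confirm this:

```python
from sympy import symbols, tan, diff

x = symbols('x')
# Big Ramsey degree of a k-element subset in the uncolored case (n = 1):
# the (2k-1)-st derivative of tan at 0 (the odd tangent numbers).
for k in range(1, 5):
    print(k, diff(tan(x), x, 2 * k - 1).subs(x, 0))
# -> 1 1, 2 2, 3 16, 4 272
```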
2008-08-29T18:43:45.000Z
2007-10-15T00:00:00.000
{ "year": 2007, "sha1": "02a06f26c91728a97a676cc7aedf2516c76b07d4", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0710.2885", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "02a06f26c91728a97a676cc7aedf2516c76b07d4", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
254329505
pes2o/s2orc
v3-fos-license
The Application of Deep Learning for the Evaluation of User Interfaces

In this study, we tested the ability of a machine-learning (ML) model to evaluate different user interface designs within the defined boundaries of some given software. Our approach used ML to automatically evaluate existing and new web application designs and provide developers and designers with a benchmark for choosing the most user-friendly and effective design. The model is also useful for any other software in which the user has different options to choose from or where choice depends on user knowledge, such as quizzes in e-learning. The model can rank accessible designs and evaluate the accessibility of new designs. We used an ensemble model with a custom multi-channel convolutional neural network (CNN) and an ensemble model with a standard architecture with multiple versions of down-sampled input images and compared the results. We also describe our data preparation process. The results of our research show that ML algorithms can estimate the future performance of completely new user interfaces within the given elements of user interface design, especially for color/contrast and font/layout.

Introduction

A virtual learning environment is an online software platform that provides students with digital solutions that enrich learning, enable access to learning content regardless of time and location, and facilitate knowledge-sharing through online communication [1]. In addition to traditional and virtual learning environments, a combined (hybrid) form of learning is also described in the literature. The combined form of learning takes the best of both basic forms, e.g., by integrating technologies such as augmented reality into instruction that physically takes place in the classroom [2]. In all forms of learning, students should learn together, regardless of their difficulties and differences. Therefore, it is very important to apply the principles of inclusion in education. Inclusion is based on the social model of disability, which emphasizes how people with disabilities can be included in all aspects of life, including education, employment, etc., by adapting the environment and providing support [3]. Teaching students with learning disabilities presents unique challenges. Children with disabilities often have limited attention spans, making it difficult for them to stay engaged with a task for an extended period of time.

One of the most important features of the learning experience in traditional and virtual learning environments is the ability to interact with software solutions [4]. Traditional two-dimensional user interfaces of computer systems are familiar to most users. Interaction with these interfaces mainly occurs through input devices such as a mouse or keyboard, using a screen as an output device. Touchscreens can be used as both an input and an output device. The advancement of computer hardware and software has led to the development of user interfaces and software solutions for which traditional input devices such as a keyboard and mouse often cannot be used [5]. In addition to the ability to interact, one of the most important aspects of educational applications is accessibility. If we consider user interfaces as a spectrum of customization possibilities, on one side of the spectrum there are interfaces that can be customized by the user to increase the usability and efficiency of user interaction. On the other side of the spectrum are intelligent user interfaces.
According to [6], intelligent user interfaces (IUIs) aim to improve human-computer interaction, especially the user experience and/or the usability of user interfaces, using artificial intelligence (AI). The paper also contains a thorough survey of efforts to evaluate user experience (UX) and the usability of IUIs over the last decade. It identifies research gaps in IUI evaluation and examines IUI research, systematic literature reviews, and systematic mapping studies covering, for example, intelligent, context-sensitive, and multimodal user interfaces, adaptive user interfaces, intelligent human-computer interaction, and adaptable and adaptive user interfaces. In the context of this spectrum of adaptation possibilities, our work lies somewhere in the middle.

The goal of this study was to test the hypothesis that a machine-learning (ML) model can evaluate the future performance of a user interface in terms of user response and that such a model is capable of evaluating different user interface designs within the defined boundaries of some given software. We investigated whether there is a way to automatically evaluate existing and new web application designs and give developers and designers a benchmark for choosing the most user-friendly and effective designs for their software. This principle can apply to both 2D and 3D contexts, with the 3D context to be verified in ongoing research. We found no comparable method for ranking potential application designs in the available literature.

In the user interaction format used to test our hypotheses, different user options are available for selection, which further depend on the user's knowledge. In other words, the selection of some correct system parameter is a result of a particular decision made by the user based on their cognitive reasoning. This interaction format is very common in quizzes used specifically in virtual learning environments. The quiz is a tool for independent learning; one study has shown that engineering students find quizzes motivating and that quizzes encourage regular learning [7], while voluntary use of online quizzes, as well as the results obtained, is a useful general indicator of student performance in the medical field [8]. The authors of another paper presented the results of a pilot project using adaptive quizzes in a fully online unit delivered by an Australian higher education provider [9]. The project results suggest that adaptive quizzes contribute to student motivation and engagement and that students believe that adaptive quizzes support their learning. Therefore, the quiz case study offers a useful example of an application with diverse and wide-ranging uses both in virtual learning environments and beyond.

The results of the research presented in this paper show that our ML model can analyze an interface design in its entirety (in our specific example, this includes contrast, colors, arrangement of elements, font, and text size). A peculiarity of the method also lies in the preparation of the data for the learning of the neural network. To avoid bias, learning data were prepared by removing extremes and responses from real users that did not make sense in terms of user interaction with the application. Accessibility is an additional practical application of the results we obtained. The model can learn to rank accessible designs and evaluate the accessibility of new designs in terms of preferred relationships between interface elements, layouts, and colors.
In this case, the learning data should be different, but the proposed approach based on ML can still be applied.

Related Work

In addition to rapid incremental progress in web-based applications, end-user satisfaction is critical to successful adoption [10]. Ease of use, perceived usefulness, and appropriateness of user interface adaptation are the three most frequently rated variables. Questionnaires appear to be the most popular method, followed by interviews and data log analysis. Van Velsen et al. [11] noted that the quality of most questionnaires is questionable, and reporting on interviews and think-aloud protocols is perceived to be superficial. The reports that were found lacked empirical value. Therefore, the authors proposed an iterative design process for adaptive and adaptable systems.

Miraz et al. [12] presented a review of research on universal usability, plasticity of user interface design, and the development of interfaces with universal usability, focusing on the fundamentals of adaptive user interfaces (AUIs) or intelligent user interfaces (IUIs) in terms of three core areas: artificial intelligence (AI), user modeling (UM), and human-computer interaction (HCI). The paper emphasizes that more research is needed to determine the benefits and effectiveness of IUIs compared to AUIs. It also discusses the question of placing adaptive control of the interface under the system or the user, with application to e-learning being a priority: the use of machine intelligence to achieve appropriate learning, ideally reinforced by "game-like interaction", was considered desirable. Performance evaluations of user interface plasticity have shown that the use of dynamic techniques can improve the user experience to a much greater extent than simpler approaches, although optimizing the tradeoffs between usability parameters requires further attention.

In one study, AlRawi [13] used usability metrics to evaluate the relationship between web application usability and end-user performance. This relationship was investigated using observations and user feedback sessions. The results suggest a possible relationship between system usability and end-user performance in terms of effectiveness and satisfaction. There are several approaches to designing a user-friendly interactive website. One approach comprises standard evaluation methods, such as the method presented in [14]. In this paper, Kaur and Sharma investigated the usability problems of selected popular web applications based on various parameters, using the traditional observational methods of usability testing. Wang [15] analyzed the priorities in interface design that are important for elderly people. Using a semi-structured questionnaire, they surveyed the needs of elderly internet users and obtained several indicators that describe their specific needs from web interfaces. Using hierarchical analysis, they calculated the weights of those indicators. Based on the results, they made suggestions to improve the accessibility of the interface for older people. Malik et al. concluded that most researchers are interested in introducing a variety of UI-based models to improve the UI designs of web-based applications [10]. In one example, user classification and modeling are presented for improving the design flow of websites [16], while another describes standardized user interfaces for RIAs (rich internet applications) that can improve usability [17].
There are also more innovative approaches, such as an approach that uses an evolutionary algorithm for automatically generating website designs by treating parameters of functionality, layout, and visual appearance as variables [18]. A chromosome structure has been developed that allows for representing website characteristics in terms of the three aspects mentioned above and facilitates the application of genetic operators [19]. Real-time usage mining (RUM) exploits the rich information provided by client logs to support the construction of adaptive web applications. Rich information about the behavior of users browsing a web application can be used to adapt the user interface in real time to improve the user experience. This approach also offers support for detecting problematic users and profiling users based on the detection of behavioral patterns. In this way, the research problems in this area of interactive software system development relate not only to the evaluation of user satisfaction, but also to the measurement of system responsiveness, efficiency, and accessibility [10].

One of the most coveted and valuable applications of ML in UX design is its ability to provide users with a new level of personalization [20]. ML algorithms that learn from usability data sources can improve the user experience [21], such as by implementing and testing a system for designing creative web elements using an interactive genetic algorithm in which voting-based feedback from the learning mechanism enables the system to adopt quality measures for visual aesthetics [22]. One systematic review of the literature that was conducted to identify the challenges UX designers face when incorporating ML into their design process contains recommendations based on its findings [20]. In one study, ML-based design tools for UX could use formal models to optimize graphical user interface layouts to meet objective performance criteria [23], while another used ML to automatically vectorize existing digital GUI designs (using computer vision) to quickly apply them to new projects [24]. ML can also facilitate the quantifiable evaluation of given GUIs by using a set of user perception and attention models [25]. In one paper, a thorough review of the last decade's efforts in IUIs, UX, and usability evaluation is presented [6], identifying research gaps in IUI evaluation. In existing IUI-related research, systematic literature reviews and systematic mapping studies have investigated the following user interfaces: (i) intelligent, context-sensitive, and multimodal user interfaces, (ii) adaptive user interfaces, (iii) intelligent human-computer interaction, and (iv) adaptive and adaptable user interfaces. The authors concluded that the most used AI methods are deep-learning algorithms (widely used in various types of recognition) and instance-based algorithms, commonly used for human/body motion, human activity, gesture, depression, and behavior recognition. The use of artificial neural networks was also identified, as well as their successful use in gesture and emotion recognition.

Data Collection and Preparation Phase

The goal of this study was to determine whether deep-learning methods are able to evaluate the future performance of a user interface in which respondents solve simple mathematical tasks. This was accomplished by recording user accuracy and solution time as they used a web application that was developed for this research. The technologies chosen were HTML, SCSS, JavaScript, and PHP 7.0.
Our idea was to develop an application that randomly generates its layout, background and font colors, font family, and font size from predefined classes described in SCSS. We planned for at least 300 respondents completing the questionnaire to obtain a sufficient amount of data for machine-learning needs. Undergraduate and graduate students from the University of Zagreb and the University of Dubrovnik were selected as the main target group. Survey data were collected via a web application, optimized for use on mobile devices, that respondents accessed. The application consists of 15 randomly generated questions with four offered answers, only one of which is correct. The 15 questions are divided into three cycles of five questions each. At the beginning of each cycle, a new design is presented to the user. The questions were elementary mathematical equations to avoid a possible bias due to the knowledge of the participants. To encourage participants to read the entire question text, new question text was generated from a predefined set of questions. For processing purposes, the application records information about the user interface that was randomly assigned to the participant (layout, combination of colors, contrast, and type and size of the font), the question, the answers, the respondent's recorded answer, and the time the respondent spent answering. Response time was measured from the moment at which the interface of the specific question was fully loaded and displayed to the respondent until the moment at which the respondent answered the question and the application began loading the next question. Based on the data collected on the appearance of the layout and the question asked, images of the user interface shown to the respondent were created and stored for deep-learning purposes. The application was designed to resemble the classic applications for quizzes that are used on online platforms for e-learning.

There are six different layouts of elements. Their HTML classes, with descriptions, can be found in Appendix A, Table A1. A total of 18 different combinations of font and background colors were used; their RGB hex codes and contrast ratios can be found in Appendix A, Table A2. The background and font colors were chosen according to the methodology for the development of an accessible website presented in [26], which states that the preferred contrast between background and text is 7:1, and the minimum contrast is 4.5:1. This methodology provides a recommendation of eight color combinations. The font types used can be found in Appendix A, Table A3. In addition to sans-serif and serif fonts, the dyslexic-friendly font OpenDyslexic (OpenDyslexic font, https://opendyslexic.org/, accessed on 20 October 2022) was also used. The chosen font sizes were 16 px, 18 px, 27 px, and 36 px. Some combinations of layouts, colors, and fonts can be seen in Figure 1. Combinations of the mentioned layouts, colors, fonts, and font sizes were used for the training and validation datasets.
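As an aside, the 7:1 and 4.5:1 thresholds cited above follow the standard WCAG contrast-ratio definition; a minimal sketch of that computation (our illustration, not code from the study):

```python
def relative_luminance(rgb_hex: str) -> float:
    """WCAG 2.x relative luminance of an sRGB color given as 'RRGGBB'."""
    channels = []
    for i in (0, 2, 4):
        c = int(rgb_hex[i:i + 2], 16) / 255.0
        # Linearize the sRGB channel value.
        channels.append(c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4)
    r, g, b = channels
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg_hex: str, bg_hex: str) -> float:
    """Contrast ratio (L_lighter + 0.05) / (L_darker + 0.05)."""
    l1, l2 = sorted((relative_luminance(fg_hex), relative_luminance(bg_hex)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio("000000", "FFFFFF"), 1))  # black on white -> 21.0
```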
As usual, part of the basic dataset was used during the initial testing phase. However, to test the real capabilities of the models, an additional test dataset (Figure 2) was prepared with previously unseen combinations of elements, including four new layouts (Table A4). The colors used and their contrast ratios in the generated test dataset are presented in Table 1. Font families used for testing were Lora as the serif font, Open Sans as the sans-serif font, and Omotype (Omotype font, https://omotype.com/, accessed on 20 October 2022) as the dyslexic-friendly font.

To process the images of the interface, the text of the question was replaced by the letter "a" to prevent the deep-learning algorithm from basing its inference on the specific text of the question. The letter "a" was chosen because it is highly expressive and carries substantial font-family character, as discussed in [27].

Materials and Methods

To prove the hypothesis that CNNs can evaluate the effectiveness of a user interface, we tested a number of diverse architectures: vanilla CNNs, general-purpose networks modified for regression tasks such as VGG19 (Visual Geometry Group) [28], Inception-ResNetV2 [29], Xception [30], and ResNet50 [31], and deep ensemble models for regression. CNN training was implemented with the Keras [32] and TensorFlow [33] deep-learning frameworks. We used a workstation equipped with an AMD Ryzen Threadripper 3960X CPU and an NVIDIA GeForce RTX 3090 with 24 GB memory, running the Linux Ubuntu 20.04 OS.
Early stopping and a model checkpoint were used as callback functions. Early stopping interrupts the training process if there is no improvement in the validation loss after a defined number of epochs. The model checkpoint is used to save the best model if and once the validation loss decreases.

During the experiments, it was necessary to pay attention to the following important facts:
• Input data are non-square images.
• The possibilities of using augmentation are very limited, since any mirroring or rotation of the image, or change in the brightness and contrast, significantly changes the appearance and efficiency of the interface.
• Input data carry important information at different levels of detail. This means that attention should be paid to details captured by both high and low spatial frequencies. For example, the size or shape of the letters of the used font can be equally important information, as can the position of the question in relation to the position of the offered answers.

Two solutions have been proposed for the high and low spatial frequency problem. The first solution is an ensemble of multiple custom CNNs that use different Conv2D kernel size and stride values. The second solution is based on an ensemble that uses a standard architecture and multiple versions of down-sampled input images. Ensemble methods can improve the predictive and generalization performance of a single model by mixing predictions from several models [34]. Deep ensemble learning models [35] combine the advantages of both deep-learning models and ensemble learning, so that the final model has better generalization performance.

Ensemble of Custom CNNs

We designed an ensemble model involving a multichannel custom CNN (Figure 3). Each channel consists of an input layer that defines the various sizes of input images, focusing on a particular scale. All channels share the standard CNN architecture in the transfer mode with the same set of filter parameters. The outputs from the three channels are concatenated and processed by dropout and dense layers. Each channel was inspired by the VGG architecture and consists of a combination of Conv2D, BatchNormalization, MaxPooling2D, and Dropout layers of different depth. The first channel uses kernel sizes of (3, 3) and strides of (1, 1) for all convolutional layers. The second channel uses kernel sizes of (7, 7) and strides of (2, 2) for the initial three convolutional layers; kernel sizes of (5, 5) and strides of (1, 1) are used for the remaining convolutional layers of the second channel. Finally, the third channel uses kernel sizes of (15, 15) and strides of (3, 3) for the initial three convolutional layers; kernel sizes of (7, 7) and strides of (1, 1) were used for the remaining convolutional layers of the third channel. A minimal sketch of this architecture is given below.
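The following Keras sketch reconstructs the three-channel ensemble and the callbacks mentioned at the start of this section. The kernel/stride schedules follow the description above, while filter counts, block depths, dropout rates, and a shared input resolution are our assumptions, not values from the paper:

```python
from tensorflow import keras
from tensorflow.keras import layers

def conv_block(x, filters, kernel, stride):
    """Conv2D + BatchNormalization + MaxPooling2D + Dropout, as described above."""
    x = layers.Conv2D(filters, kernel, strides=stride, padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)
    x = layers.MaxPooling2D(pool_size=2)(x)
    return layers.Dropout(0.25)(x)

def channel(input_shape, specs):
    """One channel: a stack of conv blocks with channel-specific kernels/strides."""
    inp = keras.Input(shape=input_shape)
    x = inp
    for filters, kernel, stride in specs:
        x = conv_block(x, filters, kernel, stride)
    return inp, layers.GlobalAveragePooling2D()(x)

in1, out1 = channel((640, 360, 3), [(32, 3, 1), (64, 3, 1), (128, 3, 1), (128, 3, 1)])
in2, out2 = channel((640, 360, 3), [(32, 7, 2), (64, 7, 2), (128, 7, 2), (128, 5, 1)])
in3, out3 = channel((640, 360, 3), [(32, 15, 3), (64, 15, 3), (128, 15, 3), (128, 7, 1)])

x = layers.Concatenate()([out1, out2, out3])
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
out = layers.Dense(1)(x)  # regression target: normalized response time

model = keras.Model([in1, in2, in3], out)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
callbacks = [
    keras.callbacks.EarlyStopping(monitor="val_loss", patience=10),
    keras.callbacks.ModelCheckpoint("best.keras", monitor="val_loss", save_best_only=True),
]
```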
The mentioned kernel and stride values were reached after numerous experiments. As shown in Figure 3, channels that use larger values for kernel sizes and strides have fewer layers. Outputs from all three channels are concatenated into a single vector and processed by a Dense-Dropout-Dense combination of layers.

Xception-Based Ensemble

We also designed an ensemble model that uses a standard architecture and multiple versions of down-sampled input images. Several standard architectures were tested; the best results were achieved using the Xception model (Figure 4). Each channel has an input layer that defines the various sizes of input images (360 × 640, 180 × 320, and 90 × 160 pixels, respectively). We replaced the standard Xception top layer with a Dense-Dropout-Dense combination of layers. Outputs from all three channels are concatenated into a single vector and processed by a second Dense-Dropout-Dense combination of layers. Both the transfer-learning and learning-from-scratch approaches were analyzed. However, in this case, ImageNet pre-trained features do not contribute to the learning process as in some other experiments, due to the large differences between the source and target task/domain, as well as the importance of the spatial arrangement of elements (an illustrative sketch is given below). Data-cleaning and the preparation process are presented in the next section.

Statistical Analysis of Participant Responses

In total, 338 participants (mostly students of the University of Zagreb and the University of Dubrovnik) took part in the research. The gender and age of participants were not systematically assessed, as participants were selected based on their matriculation in Bachelor- and Master-level degree programs. As some participants did not answer all questions, we collected 4448 answers in total. First, we analyzed all participant responses (correct and incorrect). The distributions of response times of correct and incorrect answers are shown in Figure 5. The results show that both distributions are positively skewed, and a disproportionate number of incorrect answers were answered in a time of less than 0.5 s.
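For completeness, here is a minimal sketch of the Xception-based ensemble from the previous subsection. The input resolutions follow the text; the dense widths, dropout rates, and training from scratch with weights=None are our assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

def xception_channel(height, width):
    """One channel: an Xception backbone followed by a Dense-Dropout-Dense top."""
    inp = keras.Input(shape=(height, width, 3))
    base = keras.applications.Xception(include_top=False, weights=None,
                                       input_shape=(height, width, 3), pooling="avg")
    x = base(inp)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    return inp, layers.Dense(128, activation="relu")(x)

# Three down-sampled versions of the same interface image.
in1, out1 = xception_channel(360, 640)
in2, out2 = xception_channel(180, 320)
in3, out3 = xception_channel(90, 160)

x = layers.Concatenate()([out1, out2, out3])
x = layers.Dense(128, activation="relu")(x)
x = layers.Dropout(0.5)(x)
out = layers.Dense(1)(x)  # predicted mean normalized response time

model = keras.Model([in1, in2, in3], out)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
```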
Since the goal of the study was to investigate the impact of different user interfaces in tasks where accuracy is important, we excluded from the set all data where respondents did not choose the correct answer (17.18% of answers), whereupon 3684 correct answers remained in the set. In the remainder of the research, we processed only the data where respondents had answered correctly.

In the quiz, respondents were shown a new interface appearance with the first question of each series (1st, 6th, and 11th questions). The appearance and settings were different from the interfaces the respondent had seen before in the application. While answering the questions in a series, the respondent became accustomed to the new look of the interface. As a result, the average response time to the first question in each series is significantly longer than the average response time to the other questions in that series, as shown in Figure 6. This phenomenon can be explained by the theory of universal design [36]. Namely, the principle of universal design which refers to simplicity and intuitiveness posits that a design should be stable and predictable. This means that once a user gets used to a certain layout and interaction flow when working with the software, they should not experience unexpected design changes, as this leads to confusion. If changes are unavoidable, as in the case of online stores when the user is redirected to the payment pages, these changes should be announced in advance. To eliminate the effects of the respondent's adaptation to the new user interface, these questions were excluded from the training and testing set.
Since the questions are simple and the questionnaire was designed so that there is only one correct answer, when the respondent recognizes the correct answer, they do not have to read the other answers that are below it. In this environment, the response time when the answer is in the first position could be much shorter than the response time when the correct answer is in the later positions; e.g., if the correct answer is in the last position, the respondent must read the question and all four answers to get to it. To eliminate the influence of the position of the correct answer on the response time, the response time was normalized using Equation (1):

t_norm = t / (k + 1), (1)

where t is the measured response time and k is the position (1-4) of the correct answer.

The graph in Figure 7 shows mean response times by position of the correct answer after normalization. During the normalization process, the real response time was divided by the answer position increased by 1 (the extra 1 accounting for the time to read the question). The assumption behind this normalization was the fact that most people read text intensively when they need to answer a question [37]. For example, to answer a question with the correct answer in position 2, they must read at least the question and two answers. This normalization did not completely eliminate the influence of the position of the correct answer on the response time, but the influence was significantly reduced (the difference between the largest and smallest average time before normalization was 3.39 s and after normalization was 1.04 s). Part of the difference that occurred when answering the last question could be due to the fact that some of the respondents who had not found an answer in the three previous positions chose the last answer without reading the text of that answer.

Further analysis revealed two problematic groups of response times.
The first group included very short times, by which the respondent would not have been able to read the question and at least one answer. The second group included outliers in the form of very long times, for which we assumed that something prevented the respondent from answering or that the application or mobile device had performance problems while answering. Such problematic responses accounted for about 2% of all responses. Since they could have a negative impact on the research, we decided to exclude from the set all responses for which the normalized times were shorter than 0.5 s or longer than 5 s. After this exclusion, 98% of the correct answers remained in the set, or 2632 answers in total.

Figure 8 shows mean response times by the three different criteria that formed different user interfaces in the application: layout, font, and color combination.

Figure 8a shows how efficient participants were in solving tasks using different layouts. From this graph, we can see that participants performed best with the myStyle1 layout, in which the question is at the top of the screen and the answers are arranged in a column below the question [38]. The myStyle5 layout, in which the question is in the same place but the answers are arranged in two rows (zigzag layout), was second in terms of efficiency. We assume that in this layout, changing the reading direction of the answers from horizontal to vertical saved time in retrieving the information.
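The cleaning steps described in this and the preceding paragraphs amount to a short filtering pipeline; a hedged sketch follows (the column names are hypothetical, not the study's actual schema):

```python
import pandas as pd

# Hypothetical schema: one row per answer, with columns
# 'correct' (bool), 'question_idx' (1-15), 'answer_pos' (1-4), 'time_s' (float).
df = pd.read_csv("quiz_responses.csv")

df = df[df["correct"]]                                   # keep correct answers only
df = df[~df["question_idx"].isin([1, 6, 11])]            # drop first question of each series
df["time_norm"] = df["time_s"] / (df["answer_pos"] + 1)  # Equation (1)
df = df[df["time_norm"].between(0.5, 5.0)]               # drop implausibly short/long times
```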
Furthermore, based on the effect of font on the task-solving efficiency, as shown in Figure 8b, the dyslexic-friendly font had a positive effect. The average reaction time of the participants by different color combinations is shown in Figure 8c. The best results were obtained with high-contrast combinations (yellow-black, color01; black-white, color05; black-yellow, color09; blue-yellow, color10; green-black, color11; white-black, color13). Most of the efficient color combinations (color01, color09, color10, and color11) are combinations from the methodology created in a previous study on the use of efficient color and contrast combinations on the web [39]. Apart from that, good results were obtained when using interfaces with high-contrast monochrome black and white (color05) and white and black (color13) combinations.

Evaluation of Effectiveness Using CNN Models

As described in the previous section, the process of data-cleaning and preparation resulted in the elimination of wrong answers, outliers, and answers resulting from user adaptation to the new interface. Normalization was also conducted to reduce the influence of the position (1-4) in which the correct answer is found. Ultimately, the corrected dataset contained 2632 samples. The available data were pseudo-randomly divided into three datasets: 263 images (10%) were set aside as the test dataset, while the rest was divided into a training dataset of 2106 images (80%) and a validation dataset of 263 images (10%). The division was made in such a way that the exact same interface (taking all elements into account) was not represented in multiple datasets.

The performance of the proposed models was evaluated using the mean absolute error (MAE) and root mean square error (RMSE) metrics, expressed by Equations (2) and (3) (Table 2):

MAE = (1/N) Σ_{i=1}^{N} |y_i − ŷ_i|, (2)

RMSE = sqrt( (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)² ), (3)

where y_i is the ground-truth value, ŷ_i is the predicted value, and N is the number of testing samples. It should be noted that in this case, some standard metrics were not suitable for the analysis of predictions. For example, the coefficient of determination R² does not provide a comparison of different algorithms. The reason lies in the fact that for one interface, the entire response-time range will be obtained (distributed mostly according to the normal distribution), and the prediction will actually be reduced to the mean value.

The results show that the two proposed ensemble models achieved better performance than the individual models. For the user interfaces represented in the test dataset, with the best model applied, the range of predicted user response time values was between 1366 and 2011 ms. It is important to note that the expected response time for a specific user interface, for example, of 1550 ms, does not mean that all users will achieve the same or a similar time. The actual response time will depend on many additional parameters, including the user's cognitive abilities or their current mood. However, if the experiment is repeated a sufficient number of times, a mean response time close to the predicted value can be expected for a particular interface. Thus, perhaps the main benefit of the proposed approach is the possibility of ranking interface proposals.

An additional experiment was conducted in which additional interfaces were made with elements that were not used before. This refers to the arrangement of objects, used colors, fonts, etc. The best model was applied to the additional test data to rank the interfaces according to the expected mean response time.
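A sketch of a leakage-free, interface-grouped split together with the two metrics (our illustration; the paper does not publish its splitting code):

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# 'groups' identifies the exact interface (layout + colors + font + size) of each
# sample, so that no interface appears in more than one of train/val/test.
# Apply twice (first to carve out the test set, then the validation set).
def group_split(X, y, groups, test_frac=0.1, seed=0):
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_frac, random_state=seed)
    train_idx, test_idx = next(splitter.split(X, y, groups))
    return train_idx, test_idx

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))          # Equation (2)

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))  # Equation (3)
```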
Examples of the best and worst-ranked interfaces are shown in Figures 9 and 10. In Figure 9, the images are ordered starting from the best response time, while in Figure 10, the images are ordered starting from the worst response time. Analysis of the ranked interfaces reveals that deep-learning models can recognize the essential attributes of an interface and their influence on its future efficiency.

Conclusions

In this study, we successfully tested our hypothesis that our ML model can evaluate the future performance of a completely new UI in terms of user response and that it is able to evaluate different designs of UI within the defined boundaries of some given software.
In the user interaction format used to test the research hypotheses, different user options are available for selection that depend on user knowledge, which is common in software environments such as e-learning quizzes, among others. Combinations of design layouts, colors, fonts, and font sizes were used in the training dataset. Model evaluation was performed by combining subject metrics from 300 research participants and objective metrics related to user response times and answer correctness. A multi-channel ensemble model for CNN was proposed and used, and our results suggest that this approach can be applied to the classification of various UI designs. To confirm our initial hypothesis, an additional dataset with entirely new and previously unseen combinations of elements, colors, and fonts was constructed for an additional testing phase.

Our plan for further research includes extending the ML model with UI designs of different sizes/resolutions and with different interface elements and their layout, with all interfaces having previously known usability ratings. Based on this, we will test the hypothesis that ML models can evaluate a completely unknown interface. Such capability would be useful so that design is not just left to the creativity and good practices of designers and developers, but also to the formal definition and practical application of objective knowledge about UI usability, accessibility, and/or performance, thereby increasing user satisfaction and software efficiency.

Table A3. Used font types.
Font Name      Font Family
OpenDyslexic   Sans serif, dyslexic-friendly
Roboto         Sans serif
Roboto Slab    Serif

Table A4. HTML classes used for layout, with descriptions of the positions of the question and answers, used for the purpose of the test.
Class      Question position    Answers
myStyle1   Top of the page      Below question, 4 answers in diagonal form
myStyle2   Bottom of the page   Above question, 4 answers in diagonal form
myStyle3   Top of the page      Below question, 4 answers in rhombus form
myStyle4   Bottom of the page   Above question, 4 answers in rhombus form
2022-12-07T19:31:39.658Z
2022-11-30T00:00:00.000
{ "year": 2022, "sha1": "cb4b335014654b67cbbe82420a80822725cd69ca", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/22/23/9336/pdf?version=1669813636", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cf20979005d4264d9507ca2287d86e6ed31759d2", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
9151100
pes2o/s2orc
v3-fos-license
A combinatorial approach to quantification of Lie algebras

We propose a notion of a quantum universal enveloping algebra for an arbitrary Lie algebra defined by generators and relations which is based on the quantum Lie operation concept. This enveloping algebra has a PBW basis that admits the Kashiwara crystallization. We describe all skew primitive elements of the quantum universal enveloping algebra for the classical nilpotent algebras of the infinite series defined by the Serre relations and prove that the set of PBW-generators for each of these enveloping algebras coincides with the Lalonde-Ram basis of the ground Lie algebra with a skew commutator in place of the Lie operation. A similar statement is valid for the Hall-Shirshov basis of any Lie algebra defined by one relation, but it is not so in the general case.

1. Introduction

Quantum universal enveloping algebras appeared in the famous papers by Drinfeld [14] and Jimbo [17]. Since then a great deal of articles and a number of monographs have been devoted to their investigation. All of these researches are mainly concerned with a particular quantification of Lie algebras of the classical series. This is accounted for first by the fact that these Lie algebras have applications and visual interpretations in physical speculations, and then by the fact that a general, and commonly accepted as standard, notion of a quantum universal enveloping algebra is not elaborated yet (see a detailed discussion in [1,31]). In the present paper we propose a combinatorial solution of this problem by means of the quantum (Lie) operation concept [21,23,24]. In line with the main idea of our approach, the skew primitive elements must play the same role in quantum enveloping algebras as the primitive elements do in the classical case. By the Friedrichs criteria [12,15,30,32,33], the primitive elements form the ground Lie algebra in the classical case. For this reason we consider the space spanned by the skew primitive elements and equipped with the quantum operations as a quantum analogue of a Lie algebra.

In the second section we adduce the main notions and consider some examples. These examples, in particular, show that the Drinfeld-Jimbo enveloping algebra as well as its modifications are quantum enveloping algebras in our sense.

(Research at MSRI is supported in part by NSF grant DMS-9701755. Supported in part by CONACyT México, grant 32130-E.)

In the third section, with the help of the Heyneman-Radford theorem, we introduce a notion of a combinatorial rank of a Hopf algebra generated by skew primitive semiinvariants. Then we define the quantum enveloping algebra of an arbitrary rank, which slightly generalises the definitions given in the preceding section.

The basis construction problem for the quantum enveloping algebras is considered in the fourth section. We indicate two main methods for the construction of PBW-generators. One of them modifies the Hall-Shirshov basis construction process by means of replacing the Lie operation with a skew commutator. The set of the PBW-generators defined in this way, the values of hard super-letters, plays the same role as the basis of the ground Lie algebra does in the PBW theorem. At first glance it would seem reasonable to consider the k[G]-module generated by the values of hard super-letters as a quantum Lie algebra. However, this extremely important module falls far short of being uniquely defined. It essentially depends on the ordering of the main generators and their degrees, and it is almost never antipode stable.
Also we have to note the following important fact. Our definition of the hard super-letter is not constructive and, of course, it cannot be constructive in general. The basis construction problem includes the word problem for Lie algebras defined by generators and relations, while the latter has no general algorithmic solution (see [4,7]).

The second method is connected with the Kashiwara crystallization idea [19,20] (see also a development in [11,25]). M. Kashiwara has considered the main parameter q of the Drinfeld-Jimbo enveloping algebra as a temperature of some physical medium. When the temperature tends to zero, the medium crystallizes. The PBW-generators must crystallize as well. In our case, under this process no limit quantum enveloping algebra appears, since the existence conditions normally include equalities of the form p_ij = 1 (see [23]). Nevertheless, if we equate all quantification parameters to zero, the hard super-letters form a new set of PBW-generators for the given quantum universal enveloping algebra. To put this another way, the PBW-basis defined by the super-letters admits the Kashiwara crystallization.

In the fifth section we present a way to construct a Groebner-Shirshov relations system for a quantum enveloping algebra. This system is related to the main skew primitive generators and, according to the Diamond Lemma (see [3,5,37]), it determines the crystal basis. The usefulness of the Groebner-Shirshov systems depends upon the fact that such a system not only defines a basis of an associative algebra, but also provides a simple diminishing algorithm for the expansion of elements in this basis (see, for example, [2]).

In the sixth section we adapt the well-known method of triangular splitting to the quantification with constants. The original method appeared in studies of simple finite-dimensional Lie algebras. Then it was extended into the field of quantum algebra in a lot of publications (see, for example, [8,29,38]). By means of this method the investigation of the Drinfeld-Jimbo enveloping algebra amounts to a consideration of its positive and negative homogeneous components, the quantum Borel sub-algebras.

In the seventh section we consider more thoroughly the quantum universal enveloping algebras of the nilpotent algebras of the series A_n, B_n, C_n, D_n defined by the Serre relations. We adduce first the lists of all hard super-letters in explicit form, then the Groebner-Shirshov relations systems, and next the spaces L(U_P(g)) spanned by the skew primitive elements (i.e., the Lie algebra quantifications g_P proper). In all cases the lists of hard super-letters (though not the hard super-letters themselves) turn out to be independent of the quantification parameters. This means that the PBW-generators result from the Hall-Shirshov basis of the ground Lie algebra by replacing the Lie operation with the skew commutator. The same is valid for the Groebner-Shirshov relations systems. Note that the Hall-Shirshov bases, under the name standard Lyndon bases, for the classical Lie series were constructed by P. Lalonde and A. Ram [26], while the Groebner-Shirshov systems of Lie relations were found by L.A. Bokut' and A.A. Klein [6].

Furthermore, in all cases g_P as a quantum Lie algebra (in our sense) proves to be very simple in structure. Either it is a coloured Lie super-algebra (provided that the main parameter p_11 equals 1), or the values of all non-unary quantum operations equal zero on g_P.
In particular, if char(k) = 0 and $p_{11}^t = 1$ then the partial quantum operations may be defined on $g_P$, but all of them have zero values. Thus, in this case we have a reason to consider $U_P(g)$ as an algebra of 'commutative' quantum polynomials, since the universal enveloping algebra of a Lie algebra with zero bracket is the algebra of ordinary commutative polynomials. From this standpoint the Drinfeld-Jimbo enveloping algebra is a 'quantum' Weyl algebra of (skew) differential operators. Immediately a number of interesting questions appear. What is the structure of other algebras of 'commutative' quantum polynomials? Under what conditions are the quantum universal enveloping algebras of homogeneous components of other Kac-Moody algebras defined by the Gabber-Kac relations [16] algebras of 'commutative' quantum polynomials? When do the PBW-generators result from a basis of the ground Lie algebra by means of replacing the Lie operation with the skew commutator? These and other questions we briefly discuss in the last section. It is as well to bear in mind that the combinatorial approach is not free from flaws: the quantum universal enveloping algebra depends essentially on the combinatorial representation of the ground Lie algebra, i.e., a close connection with the abstract category of Lie algebras is lost.

2. Quantum enveloping algebras

Recall that a variable x is called a quantum variable if an element $g_x$ of a fixed Abelian group G and a character $\chi^x \in G^*$ are associated with it. A noncommutative polynomial in quantum variables is called a quantum operation if all of its values in all Hopf algebras are skew primitive, provided that every variable x has a value x = a such that
$$\Delta(a) = a \otimes 1 + g_x \otimes a, \qquad g^{-1}ag = \chi^x(g)\,a, \quad g \in G. \qquad (1)$$
Let $x_1, \dots, x_n$ be a set of quantum variables. For each word u in $x_1, \dots, x_n$ we denote by $g_u$ the element of G obtained from u by replacing every $x_i$ with $g_{x_i}$. In the same way we denote by $\chi^u$ the character obtained from u by replacing every $x_i$ with $\chi^{x_i}$. Thus a grading by the group $G \times G^*$ is defined on the free algebra $k\langle x_1, \dots, x_n\rangle$. For each pair of homogeneous elements u, v we fix the notation $p_{uv} = \chi^u(g_v)$. A quantum operation can be defined equivalently as a $G \times 1$-homogeneous polynomial that has only primitive values in all braided bigraded Hopf algebras, provided that all quantum variables have primitive homogeneous values, $g_a = g_x$, $\chi^a = \chi^x$ (see [21]). Recall that the constitution of a word u is a sequence of non-negative integers $(m_1, m_2, \dots, m_n)$ such that u is of degree $m_1$ in $x_1$, $\deg_1(u) = m_1$; of degree $m_2$ in $x_2$, $\deg_2(u) = m_2$; and so on. Since the group G is Abelian, all constitution-homogeneous polynomials are homogeneous with respect to the grading. Let us define a bilinear skew commutator on the set of graded homogeneous noncommutative polynomials by the formula
$$[u, v] = uv - p_{uv}\,vu. \qquad (2)$$
These brackets satisfy the following Jacobi and skew differential identities, where by the dot we denote the usual multiplication:
$$[[u, v], w] = [u, [v, w]] + p_{wv}^{-1}[[u, w], v] + (p_{vw} - p_{wv}^{-1})\,[u, w]\cdot v,$$
$$[u, v\cdot w] = [u, v]\cdot w + p_{uv}\,v\cdot[u, w], \qquad [u\cdot v, w] = p_{vw}\,[u, w]\cdot v + u\cdot[v, w].$$
It is easy to see that certain conditional restricted identities are valid as well, provided that $p_{vv}$ is a primitive t-th root of unity, and n = t or $n = tl^k$ in the case of characteristic l > 0.
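As a quick check, which we add for orientation rather than quote from the original, the first differential identity follows in one line from the bicharacter property $p_{u,vw} = p_{uv}p_{uw}$:

$$[u, v\cdot w] = uvw - p_{uv}p_{uw}\,vwu = (uv - p_{uv}vu)\cdot w + p_{uv}\,v\cdot(uw - p_{uw}wu) = [u, v]\cdot w + p_{uv}\,v\cdot[u, w].$$

The second differential identity is verified in the same way, and the Jacobi identity reduces, when $p_{vw}p_{wv} = 1$, to the more familiar form $[[u, v], w] = [u, [v, w]] + p_{vw}[[u, w], v]$.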
Suppose that a Lie algebra g is defined by the generators $x_1, \dots, x_n$ and the relations $f_i = 0$. Let us convert the generators into quantum variables. For this, associate to them elements of $G \times G^*$ in an arbitrary way. Let $P = \|p_{ij}\|$, $p_{ij} = \chi^{x_i}(g_{x_j})$, be the quantification matrix.

Definition 2.1. A braided quantum enveloping algebra is a braided bigraded Hopf algebra $U_P^b(g)$ defined by the variables $x_1, \dots, x_n$ and the relations $f_i = 0$, where the Lie operation is replaced with (2), provided that in this way the $f_i$ are converted into quantum operations $f_i^*$. The coproduct and the commutation relations in the tensor product are defined by (7) and (8).

Definition 2.2. A simple quantification, or a quantum universal enveloping algebra of g, is an algebra $U_P(g)$ that is isomorphic to the skew group algebra $U_P^b(g) \star k[G]$, where the group acts according to (1) and the coproduct is defined by (10).

Definition 2.3. A quantification with constants is a simple quantification where, additionally, some generators $x_i$ associated to the trivial character are replaced with the constants $\alpha_i(1 - g_{x_i})$.

The formulae (10) and (7) correctly define the coproduct since, by the definition of a quantum operation, $\Delta(f_i^*) = f_i^* \otimes 1 + g \otimes f_i^*$ in the case of ordinary Hopf algebras and $\Delta(f_i^*) = f_i^* \otimes 1 + 1 \otimes f_i^*$ in the braided case. We have to note that the defined quantifications depend essentially on the combinatorial representation of the Lie algebra. For example, an additional relation $[x_1, x_1] = 0$ does not change the Lie algebra. At the same time, if $\chi^{x_1}(g_1) = -1$ then this relation admits the quantification and yields a nontrivial relation for the quantum enveloping algebra, $2x_1^2 = 0$.

Example 1. Suppose that the Lie algebra is defined by a system of constitution-homogeneous relations. If the characters are such that $p_{ij}p_{ji} = 1$ for all i, j, then the skew commutator itself is a quantum operation. Therefore, on replacing the Lie operation, all relations become quantum operations as well. This means that the braided enveloping algebra is the universal enveloping algebra $U(g_{col})$ of the coloured Lie super-algebra defined by the same relations as the given Lie algebra. The simple quantification appears as the Radford biproduct $U(g_{col}) \star k[G]$ or, equivalently, as the universal G-enveloping algebra of the coloured Lie super-algebra $g_{col}$ (see [35] or [21, Example 1.9]).

Example 2. Suppose that the Lie algebra g is defined by the generators $x_1, \dots, x_n$ and the system of nil relations
$$x_j(\mathrm{ad}\,x_i)^{n_{ij}} = 0, \qquad i \neq j. \qquad (11)$$
Usually, instead of the matrix of degrees $\|n_{ij}\|$ (without the main diagonal), one considers the matrix $A = \|a_{ij}\|$, $a_{ij} = 1 - n_{ij}$. The Coxeter graph $\Gamma(A)$ is associated to every such matrix. This graph has the vertices $1, \dots, n$, where the vertex i is connected by $a_{ij}a_{ji}$ edges with the vertex j. If $a_{ij} = 0$ then the relation $x_j\,\mathrm{ad}\,x_i = 0$ is in the list (11), and the relation $x_i(\mathrm{ad}\,x_j)^{n_{ji}} = 0$ is a consequence of it. The skew commutator $[x_j, x_i]$ is a quantum operation if and only if $p_{ij}p_{ji} = 1$. Under this condition we have $[x_i, x_j] = -p_{ij}[x_j, x_i]$. Therefore, both in the given Lie algebra and in its quantification, one may replace the relation $x_i(\mathrm{ad}\,x_j)^{n_{ji}} = 0$ with $x_i\,\mathrm{ad}\,x_j = 0$. In other words, without loss of generality we may suppose that $a_{ij} = 0 \leftrightarrow a_{ji} = 0$. By the Gabber-Kac theorem [16] we get that the algebra g is the positive homogeneous component $g_1^+$ of a Kac-Moody algebra $g_1$. Theorem 6.1 of [21] describes the conditions for a homogeneous polynomial in two variables which is linear in one of them to be a quantum operation. From this theorem we have the following corollary.

Corollary 2.4. If $n_{ij}$ is a prime number or unity, and in the former case $p_{ii}$ is not a primitive $n_{ij}$-th root of unity, then the relation (11) admits a quantification if and only if $p_{ij}p_{ji} = p_{ii}^{a_{ij}}$.
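To see Corollary 2.4 at work, take the smallest non-trivial case $n_{ij} = 2$, i.e., $a_{ij} = -1$; the expansion below is our own illustration, computed directly from (2):

$$[[x_j, x_i], x_i] = x_jx_i^2 - p_{ji}(1 + p_{ii})\,x_ix_jx_i + p_{ji}^2\,p_{ii}\,x_i^2x_j.$$

By the corollary, this trinomial is a quantum operation precisely when $p_{ij}p_{ji} = p_{ii}^{-1}$, assuming $p_{ii}$ is not a primitive square root of unity, i.e., $p_{ii} \neq -1$.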
Theorem 6.1 of [21] provides no essential restrictions on the non-diagonal parameters $p_{ij}$: if the matrix P correctly defines a quantification of (11), then for every set $Z = \{z_{ij} \mid z_{ij}z_{ji} = z_{ii} = 1\}$ the deformed matrix
$$P^Z = \|\,p_{ij}z_{ij}\,\| \qquad (12)$$
does as well.

Example 3. Let G be freely generated by $g_1, \dots, g_n$ and let A be a generalised Cartan matrix symmetrised by $d_1, \dots, d_n$, while the characters are defined by $p_{ij} = q^{-d_ia_{ij}}$. In this case the simple quantification is the positive component of the Drinfeld-Jimbo enveloping algebra together with the group-like elements, $U_P(g) = U_q^+(g) * G$. By means of an arbitrary deformation (12) one may define a 'colouring' of $U_q^+(g) * G$. The braided enveloping algebra equals $U_q^+(g)$, where the coproduct and braiding are defined by (7) and (8) with the coefficient $q^{d_ka_{kj}}$. The formula (12) correctly defines its 'colouring' as well.

Example 4. If in the example above we complete the set of quantum variables with the new ones $x_1^-, \dots, x_n^-$; $z_1, \dots, z_n$, subject to the appropriate conditions, we obtain a quantification of the Kac-Moody algebra itself. (Informally, we may consider the obtained quantification as one of the Kac-Moody algebra, identifying $g_i$ with $q^{h_i}$, the rest of the Kac-Moody algebra relations being taken into account.) This quantification coincides with the Drinfeld-Jimbo one under a suitable choice of $x_i$, $x_i^-$, and $\varepsilon_i$, depending on the particular definition of $U_q(g)$ [28]. The corresponding elements are quantum operations only if $p_{ij} = p_{ji}$, so in this case the 'colourings' (12) may be only black-white, $z_{ij} = \pm 1$. In perfect analogy, the Kang quantification [18] of the generalised Kac-Moody algebras [9] is a quantification in our sense as well.

3. Combinatorial rank

By the above definitions, the quantum enveloping algebras (with or without constants) are character Hopf algebras (see [21, Definition 1.2]). In this section, by means of the combinatorial rank notion, we identify the quantum enveloping algebras in the class of character Hopf algebras. Let H be a character Hopf algebra generated by $a_1, \dots, a_n$ subject to the relations (14). Let us associate to $a_i$ a quantum variable $x_i$ with the parameters $(\chi^{a_i}, g_{a_i})$. Denote by $G\langle X\rangle$ the free enveloping algebra defined by the quantum variables $x_1, \dots, x_n$ (see [21, Sec. 3], where it appears under the denotation $H\langle X\rangle$). The map $x_i \to a_i$ has an extension to a homomorphism of Hopf algebras $\varphi: G\langle X\rangle \to H$. Denote by I the kernel of this homomorphism. If $I \neq 0$ then, by the Heyneman-Radford theorem (see, for example, [34, pages 65-71]), the Hopf ideal I has a non-zero skew primitive element. Let $I_1$ be the ideal generated by all skew primitive elements of I. Clearly $I_1$ is a Hopf ideal as well. Now consider the Hopf ideal $I/I_1$ of the quotient Hopf algebra $G\langle X\rangle/I_1$. This ideal also has non-zero skew primitive elements (provided $I_1 \neq I$). Denote by $I_2/I_1$ the ideal generated by all skew primitive elements of $I/I_1$, where $I_2$ is its preimage with respect to the projection $G\langle X\rangle \to G\langle X\rangle/I_1$. Continuing the process, we find a strictly increasing, finite or infinite, chain of Hopf ideals of $G\langle X\rangle$:
$$I_1 \subset I_2 \subset \cdots \subset I_m \subset \cdots \subseteq I. \qquad (15)$$

Definition 3.1. The length of (15) is called the combinatorial rank of H.

By definition, the combinatorial rank of any quantum enveloping algebra (with constants) equals one. In the case of zero characteristic the converse statement is valid as well.

Theorem 3.2. Each character Hopf algebra of combinatorial rank 1 over a field of zero characteristic is isomorphic to a quantum enveloping algebra with constants of a Lie algebra.

Proof. By definition, I is generated by skew primitive elements.
These elements, as noncommutative polynomials, are quantum operations. Consider one of them, say f. Let us decompose f into a sum of homogeneous components, $f = \sum f_i$. All positive components belong to $k\langle X\rangle$ and are quantum operations themselves, while the constant component has the form $\alpha(1 - g)$, $g \in G$ (see [21, Sec. 3 and Prop. 3.3]). If $\alpha \neq 0$ then we introduce a new quantum variable $z_f$ with the parameters (id, g) and consider the Lie algebra g defined by the relations $\sum f_i + z_f = 0$, with the Lie multiplication in place of the skew commutator. It is clear that H is the quantification with constants of g. ✷

In the same way one may introduce the notion of combinatorial rank for braided bigraded Hopf algebras. In this case all braided quantum enveloping algebras are of rank 1, and all braided bigraded algebras of rank 1 are braided quantifications of some Lie algebras. Now we are ready to define a quantification of arbitrary rank. For this, in the definitions of the above section it is necessary to replace the requirement that all $f_i^*$ be quantum operations with the following condition. The set F splits into a union $F = \cup_{j=1}^n F_j$ such that $F_1^*$ consists of quantum operations; the set $F_2^*$ consists of skew primitive elements of $G\langle X \| F_1^*\rangle$; the set $F_3^*$ consists of skew primitive elements of $G\langle X \| F_1^*, F_2^*\rangle$; and so on. The quantum enveloping algebras of arbitrary rank are character Hopf algebras as well. But it is not clear whether every character Hopf algebra is a quantification of some rank of a suitable Lie algebra. It is so if the Hopf algebra is homogeneous and the ground field has zero characteristic (to appear). Also it is not clear whether there exist character Hopf algebras, or braided bigraded Hopf algebras, of infinite combinatorial rank, while it is easy to see that $\cup I_n = I$. Also it is possible to show that $F_1$ always contains all relations of minimal constitution in F. For example, each relation in (11) is of minimal constitution in (11). Therefore the quantification of arbitrary rank, with the identification $g_i = \exp(h_i)$, of any (generalised) Kac-Moody algebra g, or of its nilpotent component $g^+$, is always a quantification in the sense of the above section.

4. PBW-generators and crystallisation

The next result yields a PBW basis for the quantum enveloping algebras.

Theorem 4.1. Every character Hopf algebra H has a linearly ordered set of constitution-homogeneous elements $U = \{u_i \mid i \in I\}$ such that the set of all products
$$g\,u_1^{n_1}u_2^{n_2}\cdots u_m^{n_m}, \qquad g \in G,\ u_1 < u_2 < \dots < u_m,\ 0 \le n_i < h(i),$$
forms a basis of H. Here, if $p_{ii} \overset{df}{=} p_{u_iu_i}$ is not a root of unity then $h(i) = \infty$; if $p_{ii} = 1$ then either $h(i) = \infty$ or $h(i) = l$ is the characteristic of the ground field; if $p_{ii}$ is a primitive t-th root of unity, $t \neq 1$, then $h(i) = t$.

The set U is referred to as a set of PBW-generators of H. This theorem follows easily from [22, Theorem 2]. Let us recall the necessary notions. Let $a_1, \dots, a_n$ be a set of skew primitive generators of H, and let the $x_i$ be the associated quantum variables. Consider the lexicographical ordering of all words in $x_1 > x_2 > \dots > x_n$. A non-empty word u is called standard if $vw > wv$ for each decomposition $u = vw$ with non-empty v, w. The following properties are well known (see, for example, [10, 13, 27, 36, 37]). 1s. A word u is standard if and only if it is greater than each of its ends. 2s. Every standard word starts with the maximal letter that it has.
3s. Each word c has a unique representation $c = u_1^{n_1}u_2^{n_2}\cdots u_k^{n_k}$, where $u_1 < u_2 < \cdots < u_k$ are standard words (the Lyndon theorem). 4s. If u, v are different standard words and $u^n$ contains $v^k$ as a sub-word, $u^n = cv^kd$, then u itself contains $v^k$ as a sub-word, $u = bv^ke$.
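As a small illustration of 1s-3s (our example, not the paper's): over two letters with $x_1 > x_2$, the standard words of length at most three are

$$x_1,\quad x_2,\quad x_1x_2,\quad x_1x_1x_2,\quad x_1x_2x_2.$$

For instance, $x_2x_1$ is not standard, since for the decomposition $v = x_2$, $w = x_1$ one has $vw = x_2x_1 < x_1x_2 = wv$; and the factorization promised by 3s for $c = x_2x_2x_1x_2x_1x_2$ reads $c = x_2^2\,(x_1x_2)^2$ with $x_2 < x_1x_2$.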
Let D be a linearly ordered Abelian additive group. Suppose that positive D-degrees $d_1, \dots, d_n \in D$ are associated to $x_1, \dots, x_n$. We define the degree of a word to be $m_1d_1 + \dots + m_nd_n$, where $(m_1, \dots, m_n)$ is the constitution of the word. The order and the degree on the super-letters are defined accordingly. Clearly, if the algebra H is D-homogeneous then one may omit the underlined parts of the above definitions. In order to find the set of PBW-generators it is necessary first to include in U the values of all hard super-letters, then for each hard super-letter [u] of finite height h to add the value of $[u]^h$, and next for each hard super-letter of infinite height such that $p_{uu}$ is a primitive t-th root of unity to add the value of $[u]^t$. Obviously the set of PBW-generators plays the same role as the basis of the Lie algebra does in the PBW theorem. Nevertheless, the k[G]-bimodule generated by the PBW-generators is not uniquely defined. It depends on the ordering of the main generators and on the D-degree, and under the action of the antipode it transforms into a different bimodule of PBW-generators, k[G]S(U). Another way to construct PBW-generators is connected with the M. Kashiwara crystallisation idea [19, 20]. M. Kashiwara considered the main parameter of the Drinfeld-Jimbo enveloping algebra as the temperature of some physical medium. When the temperature tends to zero, the medium crystallises. By this means the 'crystal' bases must appear. If we replace $p_{ij}$ with zero then [u, v] turns into uv, while [u] turns into u. Here, if $p_{uu}$ is not a root of unity then h = 1; if $p_{uu}$ is a primitive t-th root of unity then h = 1, or h = t, or $h = tl^k$, where l is the characteristic.

Proof. Consider an expansion of T in terms of the basis (16), where gU, $g_iW_i$ are different basis elements of maximal degree, and U is one of the biggest words among U, $W_i$ with respect to the lexicographic ordering of words in the super-letters. On the basis expansion of tensors, the element $\Delta(T) - T \otimes 1 - g_t \otimes T$ has only one tensor of the form $gU \otimes \dots$, and this tensor equals $gU \otimes \alpha(g - 1)$. Therefore g = 1 and one may apply [22, Lemma 13]. ✷

5. Groebner-Shirshov relations systems

Let $x_1, \dots, x_n$ be variables that have positive degrees $d_1, \dots, d_n \in D$. Recall that the Hall ordering of words in $x_1, \dots, x_n$ is the order in which words are compared first by degree, and then words of the same degree are compared by means of the lexicographic ordering. Consider a set of relations
$$w_i = f_i, \qquad i \in I, \qquad (18)$$
where $w_i$ is a word and $f_i$ is a linear combination of Hall lesser words. The system (18) is said to be closed under compositions, or a Groebner-Shirshov relations system, if, first, none of the $w_i$ contains $w_j$, $i \neq j \in I$, as a sub-word, and, second, for each pair of words $w_k$, $w_j$ such that some non-empty terminal of $w_k$ coincides with an onset of $w_j$, the corresponding composition can be reduced to zero by means of the substitutions $w_i \to f_i$ (see [3, 5, 37]). If the system (18) is closed under compositions then the words that have none of the $w_i$ as sub-words form a basis of the algebra H defined by (18). If none of the words $w_i$ has sub-words $w_j$, $j \neq i$, then the converse statement is valid as well. Indeed, any composition can be reduced by means of the substitutions $w_i \to f_i$ to a linear combination of words that have no sub-words $w_i$; being an element of the ideal of relations, this linear combination equals zero in H. Therefore all the coefficients have to be zero. Since the Bases Crystallisation Lemma provides a basis that consists of words, the above note gives a way to construct the Groebner-Shirshov relations system for any quantum enveloping algebra. Let H be a character Hopf algebra generated by skew primitive semi-invariants $a_1, \dots, a_n$ (or a braided bigraded Hopf algebra generated by grading-homogeneous primitive elements $a_1, \dots, a_n$), and let $x_1, \dots, x_n$ be the related quantum variables. A non-hard in H super-letter [w] is referred to as minimal if, first, w has no proper standard sub-words that define non-hard super-letters, and, second, w has no sub-words $u^h$, where [u] is a hard super-letter of height h. By the Super-letters Crystallisation Lemma, for every minimal non-hard in H super-letter [w] we may write a relation in H,
$$w = \sum_i \alpha_iw_i + \sum_j \beta_jw_j, \qquad (19)$$
where $w_j, w_i < w$ in the Hall sense, $D(w_i) = D(w)$, $D(w_j) < D(w)$. In the same way, if [u] is a hard in H super-letter of finite height h then
$$u^h = \sum_i \alpha_iu_i + \sum_j \beta_ju_j, \qquad (20)$$
where $u_j, u_i < u^h$ in the Hall sense, $D(u_i) = hD(u)$, $D(u_j) < hD(u)$. The relations (14) and the group operation provide the relations
$$x_ig = \chi^{x_i}(g)\,gx_i, \qquad g \in G. \qquad (21)$$

Theorem 5.2. The set of relations (19), (20), and (21) forms a Groebner-Shirshov system that defines H. The basis determined by this system in the Diamond Lemma coincides with the crystal basis.

Proof. The property 4s implies that none of the left-hand sides of (19), (20), (21) contains another one as a sub-word. Therefore, by the Bases Crystallisation Lemma, it is sufficient to show that the set of all words c determined in the Diamond Lemma coincides with the crystal basis. By 3s we have $c = u_1^{n_1}u_2^{n_2}\cdots u_k^{n_k}$, where $u_1 < \dots < u_k$ is a sequence of standard words. Every word $u_i$ defines a hard super-letter $[u_i]$, since in the opposite case $u_i$, and therefore c, contains a sub-word w that defines a minimal non-hard super-letter [w]. ✷

6. Quantification with constants

By means of the Diamond Lemma, in some instances the investigation of a quantification with constants can be reduced to that of a simple quantification. Let $H_1 = \langle x_1, \dots, x_k \| F_1\rangle$ be a character Hopf algebra defined by the quantum variables $x_1, \dots, x_k$ and the grading-homogeneous relations $\{f = 0 : f \in F_1\}$, while $H_2 = \langle x_{k+1}, \dots, x_n \| F_2\rangle$ is a character Hopf algebra defined by the quantum variables $x_{k+1}, \dots, x_n$ and the grading-homogeneous relations $\{h = 0 : h \in F_2\}$. Consider the algebra $H = \langle x_1, \dots, x_n \| F_1, F_2, F_3\rangle$, where $F_3$ is the following system of relations with constants:
$$[x_i, x_j] = \alpha_{ij}(1 - g_{x_i}g_{x_j}), \qquad i \le k < j. \qquad (22)$$
If the conditions below are met, then the character Hopf algebra structure on H is uniquely determined. Indeed, in this case the difference $w_{ij}$ between the left- and right-hand sides of (22) is a skew primitive semi-invariant of the free enveloping algebra $G\langle x_1, \dots, x_n\rangle$. Consider the ideals of relations $I_1 = \mathrm{id}(F_1)$ and $I_2 = \mathrm{id}(F_2)$ of $H_1$ and $H_2$, respectively. They are, in the present context, Hopf ideals of $G\langle x_1, \dots, x_k\rangle$ and $G\langle x_{k+1}, \dots, x_n\rangle$, respectively. Therefore $V = I_1 + I_2 + \sum kw_{ij}$ is an antipode-stable coideal of $G\langle X\rangle$. Consequently the ideal generated by V is a Hopf ideal. It remains to note that this ideal is generated in $G\langle X\rangle$ by the $w_{ij}$ and $F_1$, $F_2$.

Proof. If a standard word contains at least one of the letters $x_i$, $i \le k$, then it has to start with one of them (see 2s). If this word contains a letter $x_j$, j > k, then it has a sub-word of the form $x_ix_j$, $i \le k < j$.
Therefore, by Lemma 4.7 and the relations (22), this word defines a non-hard super-letter. ✷

The converse statement is not universally true. In order to formulate necessary and sufficient conditions, let us define the partial skew derivatives (24). The criterion is that all super-letters hard in $H_1$ or in $H_2$ remain hard in H; in this case $H \cong H_2 \otimes_{k[G]} H_1$ as k[G]-bimodules, and the space generated by the skew primitive elements of H equals the sum of these spaces for $H_1$ and $H_2$.

Proof. By (5) and (24) the equalities (26) are valid in H. If all super-letters hard in $H_1$ or $H_2$ are hard in H, then $H_1$, $H_2$ are sub-algebras of H. So (26) proves the necessity of the lemma conditions. Conversely, let us consider the algebra R defined by the generators $g \in G$, $x_1, \dots, x_n$ and the relations (21), (22). Evidently this system is closed under compositions. Therefore, by the Diamond Lemma, the set of words gvw forms a basis of R, where $g \in G$, v is a word in the $x_j$, j > k, and w is a word in the $x_i$, $i \le k$. In other words, R as a bimodule over k[G] decomposes into the span of these words. Let us show that the two-sided ideal of R generated by $F_2$ coincides with the right ideal $I_2R$: in a typical generating product, one term belongs to $I_2R$ directly, while the other can be rewritten by means of (5). Furthermore, consider the quotient algebra $R_1 = R/I_2R$, with its natural decomposition understood as an isomorphism of k[G]-bimodules. Along similar lines, the left ideal $R_1I_1 = H_2 \otimes_{k[G]} I_1$ of this quotient algebra coincides with the two-sided ideal generated by $F_1$. Therefore $H = R_1/R_1I_1 \cong H_2 \otimes_{k[G]} H_1$. Thus the monotonous restricted G-words in the super-letters hard in $H_1$ or $H_2$ form a basis of H. This, in particular, proves the first statement. Now let $T = \sum_t \alpha_tg_tV_tW_t$ be the basis decomposition of a skew primitive element, $g_t \in G$, $V_t \in H_2$, $W_t \in H_1$, $\alpha_t \neq 0$. We have to show that for each t one of the super-words $V_t$ or $W_t$ is empty. Suppose that this is not so. Among the addends with non-empty $V_t$, $W_t$ we choose the largest one in the Hall sense, say $g_sV_sW_s$. Under the basis decomposition of $\Delta(T) - T \otimes 1 - g(T) \otimes T$ the term $\alpha_sg_sg(V_s)W_s \otimes g_sV_s$ appears and cannot be cancelled with others. Indeed, since the coproduct is homogeneous (see [22, Lemma 9]) and since under the basis decomposition the super-words decrease (see [22, Lemma 7]), the product $\alpha_s(g_s \otimes g_s)\Delta(V_s)\Delta(W_s)$ has only one term of the above type. For the same reasons, $\alpha_t(g_t \otimes g_t)\Delta(V_t)\Delta(W_t)$ has a term of the above type only if $V_t \ge V_s$ and $W_t \ge W_s$ with respect to the Hall ordering of the set of all super-words. However, by the choice of s, this is impossible. ✷

7. Quantification of the classical series

In this section we apply the above general results to the infinite series $A_n$, $B_n$, $C_n$, $D_n$ of nilpotent Lie algebras defined by the Serre relations (11). Let g be any such Lie algebra.

Lemma 7.1. If a standard word u has no sub-words of the type (28), then the super-letter [u] is hard in $U_P(g)$.

Proof. Let R be defined by the generators $x_1, \dots, x_n$ and the system (29) of monomial relations. Clearly (29) implies (11) with the skew commutator in place of the Lie operation. Therefore R is a homomorphic image of $U_P(g)$. The system (29) is closed under compositions, since a composition of monomial relations always has the form 0 = 0. Let u have no sub-words (28). If [u] is not hard then, by the Super-letters Crystallisation Lemma, u is a linear combination of lesser words in $U_P(g)$. Therefore u is a linear combination of lesser words in R as well. This contradicts the fact that u belongs to the Groebner-Shirshov basis of R, since every word either belongs to this basis or equals zero in R. ✷

Theorem $A_n$. Suppose that g is of the type $A_n$ and $p_{ii} \neq -1$.
Denote by B the set of the super-letters given below: The following statements are valid. 1. The values of [u km ] in U P (g) form a PBW-generators set. 2. Each of the super-letters (30) has infinite height in U P (g). 3. The values of all non-hard in U P (g) super-letters equal zero. 4. The following relations with (21) form the Groebner-Shirshov relations system that determines the crystal basis of U P (g) : 5. If p 11 = 1 then the generators x i , the constants 1−g, g ∈ G, and, in the case that p 11 is a primitive t-th root of 1, the elements x t i , x tl k i form a basis of g P = L(U P (g)). Here l is the characteristic of the ground field. 6. If p 11 = 1 then the elements (30) and, in the case l > 0, their l k -th powers, together with 1 − g, g ∈ G form a basis of g P . By Corollary 2.4 the relations (11) with a Cartan matrix A of type A n admit a quantification if and only if In this case the quantified relations (11) take up the form Let us introduce a congruence u ≡ k v on G X . This congruence means that the value of u − v in U b P (g) belongs to the subspace generated by values of all words with the initial letters x i , i ≥ k. Clearly, this congruence admits right multiplication by arbitrary polynomials as well as left multiplication by the independent of x k−1 ones (see (35)). For example, by (33) and (34) we have Proof. Let y = x 2 m+1 , m + 1 > k. By (36) and (35) we have that u km y = u k m−1 x m x 2 m+1 ≡ m+1 0. If y = x i and m+1 = i > k then we get u km y = αu k i−1 Proof. The inequalities at the last column of the following tableaux are valid for all [u] ∈ B that are less than the super-letters located on the same row, where as above deg i (u) means the degree of u in x i . If all super-letters of a super-word U satisfy one of these inequalities then U does as well. Clearly, no one of the super-letters in the first column satisfies the degree inequality on the same row. Proof. The sub-algebra generated by x 2 , . . . x n is defined by the Cartan matrix of the type A n−1 . This allows us to use induction on n. If n = 1 then the lemma is correct in the sense that [u km ] h = x h 1 = 0. Let n > 1. If k > 1 then we may use the inductive supposition directly. Consider the decomposition ∆([u 1m ]) = u (1) ⊗ u (2) . Since Therefore the sum of all tensors u (1) ⊗ u (2) with deg 1 (u (2) ) = 1, deg k (u (2) ) = 0, k > 1 has the form εg ]. By (32) we have p ij p ji = 1 for i − 1 > j. Therefore ε = 1 − p 12 p 21 = 1 − p −1 If k = m, r > k + 1 then the word x k u rs can be diminished by (34) or (35). If k = m then by Lemma 7.4 the word u km u rs has a sub-word of the type u 1 or u 2 . Thus we need show only that the values in U P (g) of u 1 and u 2 are linear combinations of lesser words. The word u 1 has such a representation by Lemma 7.2. Consider the word u 2 . Let us show by downward induction on k that If k = m then one may use (34) with i = k. Let k < m. Let us transpose the second letter x k of u 2 as far to the left as possible by (35). We get By (34) we have x m x k+1 · · · x m+1 ), β = 0. Let us apply the inductive supposition to the word in the parentheses. Since x i , i > k + 1 commutes with x 2 k according to the formulae (35), we get u 2 ≡ k+1 γx 2 k x k+1 x k+2 · · · x m+1 x k+1 · · · x m . Now it remains to replace the underlined sub-word according to (34) and then to transpose the second letter x k to its former position by (35). 
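For orientation, here is a reconstruction we infer for the list (30) from the recursions used in the proofs above (for instance $u_{km}\,x_{m+1}^2 = u_{k\,m-1}x_mx_{m+1}^2$, so that $u_{km} = u_{k\,m-1}x_m$): the super-letters are the bracketings of the words

$$u_{km} = x_kx_{k+1}\cdots x_m, \qquad 1 \le k \le m \le n,$$

one for each positive root of $A_n$, in agreement with the Lalonde-Ram list of standard Lyndon words; treat this display as our assumption rather than a quotation.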
Note that for the diminishing of u 1 , u 2 we did not use, and we could not use, the relation [x n−1 x 2 n ] = 0 since deg n (u 1 ) ≤ 1, deg n (u 2 ) ≤ 1. If p 11 = 1 then p ij p ji = p ii = 1 for all i, j. So we are under the conditions of Example 1, that is U b P (g) is the universal enveloping algebra of the colour Lie algebra g col . Further, [u km ] ∈ g col and [u km ] are linearly independent in g col since they are hard super-letters and no one of them can be a linear combination of the lesser ones. Let us complete B to a homogeneous basis B ′ of g col . Then by the PBW theorem for the colour Lie algebras the products b n 1 1 · · · b n k k , b 1 < . . . < b k form a basis of U(g col ) = U b P (g). However, the monotonous restricted words in B form a basis of U b P (g) also. Thus B ′ = B and all hard super-letters have the infinite height. In particular, we get that the second statement is valid in complete extent. Moreover, if p 11 = 1 then p(u km , u km ) = 1, thus for l = 0 all homogeneous skew primitive elements became exhausted by [u km ], while for l > 0 the powers [u km ] l k are added to them (of course, here l = 2 since −1 = p ii = 1). So we have proved all statements, but the third and fourth ones. These statements will follow Theorem 5. ✷ Theorem B n . Let g be of the type B n , and p ii = −1, 1 ≤ i < n, p [3] nn = 0. Denote by B the set of the super-letters given below: The following statements are valid. 1. The values of (42) in U P (g) form the PBW-generators set. Every super-letter [u] ∈ B has infinite height in U P (g). 3. The relations (21) with the following ones form a Groebner-Shirshov system that determines the crystal basis of U P (g). 4. If p 11 = 1 then the generators x i and their powers x t i , x tl k i , such that p ii is a primitive t-th root of 1, together with the constants 1 − g, g ∈ G form a basis of g P = L(U P (g)). Here l is the characteristic of the ground field. 5. If p nn = p 11 = 1 then the elements (42) and, for l > 0, their l k -th powers, together with 1 − g, g ∈ G form a basis of g P . If p nn = −p 11 = −1 then [u kn ] 2 , [u kn ] 2l k are added to them. Recall that in the case B n the algebra U b P (g) is defined by (33), (34), (35) where in (33) the last relation, i = n − 1, is replaced with By Corollary 2.4 we get the existence conditions The relations (33) and (44) show that while the relations (34) imply By means of these relations and (35), (44) we have Lemma 7.7. The brackets in [w km ] are set by the recurrence formulae: Here by the definition w k n+1 = u kn . Proof. It is enough to use the property 6s and then 1s and 2s. ✷ Proof. If [[w km ][w rs ]] is standard then w km > w rs and by (49) either w k+1 ≤ w rs , or m = k + 1 and x k+1 ≤ w rs . The inequality w km > w rs is correct only in two cases: k < r or k = r, m > s. We get four possibilities: 1) k < r, k < m − 1, w k+1m ≤ w rs ; 2) k < r, m = k + 1, x k+1 ≤ w rs ; 3) k = r, m > s, k < m − 1, w k+1m ≤ w rs ; 4) k = r, m > s, m = k + 1, x k+1 ≤ w rs . Only the first and third ones are consistent since in the second case x k+1 ≤ w rs implies k + 1 > r, while in the fourth case r < s and k = r < s < m = k + 1. If now we decode w k+1m ≤ w rs in the first and third cases, we get the two possibilities mentioned in the lemma. Proof. The inequality w km > u rs implies r > k. If k < m − 1 then by the first formula (49) we have w k+1m ≤ u rs that is equivalent to k + 1 ≥ r. Therefore r = k + 1 < m. If k = m − 1 then by the second formula (49) we get x k+1 ≤ u rs , i.e. 
either k + 1 > r or k + 1 = r = s. The former case contradicts r > k while the latter one is mentioned in the lemma. ✷ Proof. The proof is akin to Lemma 7.5 with the following tableaux: Proof. If i < m − 1 then by means of (35) it is possible to permute y to the left beyond x 2 n and use Lemma 7.2 with m ′ = n − 1. If y = x 2 i , m − 1 = i > k then by the above case, i < m − 1, we get where for m = n by definition w k n+1 = u kn , and u kn x n−1 ≡ n−1 0. If y = x i , i = m > k then for m = n one may use the second equality (46). For m < n we have w km y = w k m+1 y 1 where y 1 = x 2 m . Therefore for k < n − 1 we may use (52) with m + 1 in place of m. For k = n − 1 we have w km x n = x n−1 x 3 n ≡ n 0. Finally, if y = x i , i > m > k then by (35) we have w km y = αw ki+1 x i x i−1 x i · v. For i = n one may use (48), while for i < n, changing the underlined word according to (33), we may use the above considered cases: ✷ Another interesting relation appears if we multiply (44) by x n−1 from the left and subtract (34) with i = n − 1 multiplied from the right by x 2 n : x n−1 x n x n−1 x 2 n ≡ n αx n−1 x 2 n x n−1 x n , in which case α = p n−1n p [3] nn = 0. Lemma 7.14. For k < s < m ≤ n the following relation is valid. w km w ks ≡ k+1 εw ks w km , ε = 0. (54) Proof. Let us use downward induction on k. For this we first transpose the second letter x k of w km w ks as far to the left as possible by means of (35), and then change the onset x k x k+1 x k according to (47). We get For k + 1 < s we apply the inductive supposition to the word in the parentheses and then by (47) and (35) transpose x k to its former position. The case k+1 = s, the basis of the induction on k, we prove by downward induction on s. Let k + 1 = s = n − 1. Then m = n. Let us show firstly that For this in the left hand side we transpose the first letter x n by means of (53) to the penultimate position, and then replace the ending x 3 n x n−1 by (44). We get a linear combination of three words. One of them equals the second word of (56), while two other have the following forms. x n−1 x n x n−1 x n x n−1 x 2 n , x n−1 x n x n−1 x 2 n x n−1 x n . The former word by (34) transforms into the form (56). The latter one, after the application of (53) and the replacing of x n−1 x n x n−1 by (34), will have an additional term x n−1 x 3 n x 2 n−1 x n to which it is possible to apply (46). The direct calculation of the coefficients shows that α = p n−1n p nn = 0. Now let us multiply (56) by x 2 n−2 from the left and use (34) with i = n − 2. We get that w n−2 n w n−2 n−1 with respect to ≡ n−1 equals γx n−2 x n−1 x 2 n x n−2 x 2 n−1 x 2 n + δx n−2 x n−1 x n x n−2 x 2 n−1 x 3 n , γ = 0. Let us apply (46) and then (47) and (46) to the second word. We get that this word with respect to ≡ n−1 equals zero. The first word after application of (34) takes up the form εw n−2 n−1 w n−2 n + ε ′ w n−2 n x 2 n−1 x n−2 x 2 n , ε = 0. Thus, by Lemma 7.13, the basis of the induction on s is proved. Let us carry out the inductive step. Let k + 1 = s < n − 1. If m > s + 1 = k + 2 then by the inductive supposition on s we may write Taking into account (51) we may neglect the words starting with x 2 k+1 , x k+2 while transforming the underlined part: In this way (58) is transformed into (54). If m = s + 1 = k + 2 < n then the relation (55) takes up the form w km w ks ≡ k+1 αx 2 k (w k+1k+2 w k+1k+3 )x k+2 x k+1 . Let us apply the inductive supposition with k ′ = k + 1, s ′ = k + 2, m ′ = k + 3 to the word in the parentheses. 
We get w km w ks ≡ k+1 αε −1 x 2 k w k+1k+3 w k+1k+3 x 2 k+2 x k+1 , or after an evident replacement . In both terms we may transpose one letter x k to its former position by means of (47) and (35). We get It is possible to apply (54) with m ′ = k + 3, s ′ = k + 1 to the first term since the case m > s + 1 is completely considered. Therefore it is enough to show that the second term equals zero with respect to ≡ k+1 . When we transpose the third letter x k+1 as far to the left as possible we get the word Taking into account (51) we may neglect the words starting with x k+1 while transforming the underlined part: Therefore the word (61) equals w kk+1 w kk+3 x 2 k+2 with respect to ≡ k+1 and it remains only to apply Lemma 7.13 twice. ✷ Lemma 7.15. The set B satisfies the conditions of Lemma 4.8. Proof. By Lemmas 7.11 and 4.7 it is sufficient to show that in U b P (g) all words of the form u 0 , . . . , u 6 are linear combinations of lesser ones. The words u 0 are diminished by (35). The words u 1 , u 2 have been presented in this way, without using [x n−1 x 2 n ] = 0, in the proof of the above theorem. The relation (51) shows that u 3 ≡ k+1 0, u 4 ≡ k+1 0. Lemma 7.14 with s = m − 1 yields the necessary representation for u 5 . Let us prove by downward induction on k that u 6 df = u 2 kn x n ≡ k+1 εu kn x n u kn , ε = 0. For k = n − 1 this equality takes up the form (53). Let k < n − 1. Let us transpose the second letter x k of u 2 kn x n as far to the left as possible by means of (35) and then apply (33). We get u 2 kn x n ≡ k+1 αx 2 k (u 2 k+1n x n ), α = 0. We may apply the inductive supposition to the term in the parentheses and then by (33), (35) Proof. Note that for n > 2 the sub-algebra generated by x 2 , . . . x n is defined by the Cartan matrix of the type B n−1 . This allows us to carry out the induction on n with additional supposition that the statements 1 and 2 of Theorem B n are valid for lesser values of n. It is convenient formally consider the sub-algebras x i as algebras of the type B 1 . In this case for n = 1 the lemma and the statements 1 and 2 are correct in the evident way. If v starts with x k = x 1 then we may directly use the inductive supposition. If v = u 1m , one may literally repeat the arguments of Lemma 7.6 starting at the formula (39). Let v = w 1m . If m > 2 then by Lemma 7.7 we have ]. This provides a possibility to repeat the same arguments of Lemma 7.6 with w in place of u. Consider the last case v = w 12 . By Lemma 7.7 we have Applying the coproduct first to (64) then to (63) we may find the sum Σ of all tensors w (1) ⊗ w (2) of ∆([w 12 ]) with deg 1 (w (2) ) = 1, deg k (w (2) ) = 0, k > 1 (in much the same way as (40)): Consider the left hand side of this tensor on applying the inductive supposition. Note that x 2 w 23 is a standard word and [x 2 w 23 ] equals [x 2 [w 23 ]]. This super-letter is non-hard in U P (g) since x 2 w 23 contains the sub-word x 2 2 x 3 . Thus [x 2 w 23 ] is a linear combination of monotonous non-decreasing super-words in lesser super-letters. Among these super-words there is no [w 23 ] · x 2 since x 2 > x 2 w 23 . On the other hand, [w 23 ] · x 2 is a monotonous non-decreasing super-word and hence its value in U P (g) is a basis element. Therefore for n > 2 the left hand side W of Σ is non-zero. It remains to note that for n > 1 the sum of all tensors w (1) ⊗ w (2) of ∆([w 12 ] h ) such that deg 1 (w (2) ) = h, deg k (w (2) ) = 0, k > 1 equals Σ h , hence [w 12 ] h can not be skew-primitive. 
Along similar lines, by Lemma 4.9, every skew primitive homogeneous element has the form [v] h . This, together with Lemma 7.16, proves the fourth statement and, for p 11 = 1, the second one too. If p 11 = 1 then by (45) we have p 2 nn = 1, p ii = 1, i < n. Besides, p ij p ji = 1 for all i, j. This means that the skew commutator is a quantum operation. Hence all elements of B are skew primitive. In the case p nn = 1 these elements span a colour Lie algebra, while in the case p nn = −1 they span a colour Lie super-algebra. Now as in Theorem A n , we may use the P BW -theorem for the colour Lie super-algebras. The third statement will follow Theorem 5.2 and Lemmas 5.3, 7.11 if we prove that all super-letters (43) are zero in U P (g). We have already proved that these super-letters are non-hard. Therefore it remains to use the homogeneous version of Definition 4.3 and Lemma 7.12. ✷ Theorem C n . Suppose that g is of the type C n , and p ii = −1, 1 ≤ i ≤ n, p [3] n−1n−1 = 0. Denote by B the set of the following super-letters: The statements given below are valid. 1. The values of the super-letters (67) in U P (g) form the PBW-generators set. 2. Each of these super-letters has the infinite height in U P (g). 3. The following relations with (21) form a Groebner-Shirshov system that determines the crystal basis of U P (g). 4. If p 11 = 1 then the generators x i and their powers x t i , x tl k i , such that p ii is a primitive t-th root of 1 together with the constants 1 − g, g ∈ G form a basis of g P = L(U P (g)). Here l is the characteristic of the ground field. 5. If p 11 = 1 then the elements (67) and in the case of prime characteristic l theirs l k -th powers, together with the constants 1 − g, g ∈ G form a basis of g P . By Corollary 2.4 we get the existence conditions Therefore the following relations are correct The left multiplication by x n−2 of the last relation implies [ Proof. It is enough to use the properties 6s, 1s and 2s. ✷ Proof. The first two formulae (75) coincide with (49) up to replacement of v with w provided k + 1 = n > m. Obviously for m < n the inequality v km > v rs is equivalent to w km > w rs , while v km > u rs is equivalent to w km > w rs . Hence Lemmas (7.8), (7.9), (7.10) are still valid under the replacement of w with v : Further, v k > v r if and only if k < r, and under this condition In a similar manner v k > u rm is equivalent to k < r, while v k > v rm is equivalent to k ≤ r. Therefore none of the words is standard since u kn > u rm and u kn > v rm , respectively. For the remaining two cases we have only two possibilities The treatment in turn of the eight possibilities (76), (77) proves the lemma. ✷ Proof. The proof is akin to Lemma 7.5 with the following tableaux: Proof. For i < m − 1, we may transpose y by means of (35) to the left across x 2 n and then use Lemma 7.2 with m ′ = n − 1. where by definition v kn = u kn and u kn x n−2 ≡ n−2 0, while n − 2 = i > k. If y = x i , i = m > k then for m = n − 1 we may use the inequality (74), while for m < n − 1 we have v km y = v km+1 y 1 where y 1 = x 2 m . Hence we may use (80) replacing m by m + 1. If y = x i , i > m > k then by (35) we get v km y = αv ki+1 x i x i−1 x i · w. 
Changing the underlined by (33), we may apply the previously considered cases: If we multiply (69) by x n from the right and subtract (33) with i = n−1 multiplied from the left by x 2 n−1 , then by means of p −2 n−1n−1 = p nn−1 p n−1n = p −1 nn we get x 2 n−1 x n x n−1 x n ≡ n p n−1n (p [3] n−1n−1 x n−1 x n x 2 n−1 x n − p n−1n−1 x 2 n−1 x 2 n x n−1 ). Let us first multiply this relation by x 2 n−2 from the left and then apply (33) to the underlined sub-word. Taking into account the relation x 2 n−2 x 3 n−1 ≡ n−1 0, we get that the left hand side of the multiplied (81) equals p n−1n p nn (1 + p nn ) −1 x 2 n−2 x 2 n−1 x 2 n x n−1 up to ≡ n−1 , i.e. it is proportional to the second term of the right hand side. As a result the relation below with α = p −1 n−1n−1 (1 + p nn ) = 0 is correct. Lemma 7.21. If k < s < m ≤ n and as above v kn = u kn then v km v ks ≡ k+1 εv ks v km , ε = 0, Proof. Let us use downward induction on k. For this we first transpose the second letter x k of v km v ks as far to the left as possible by means of (35), and then change the onset x k x k+1 x k according to (72). We get For k+1 < s we may apply the inductive supposition to the word in the parentheses, and then transpose x k to its former position by (72), (35). For k + 1 = s we will use downward induction on s. Let k + 1 = s = n − 1. In this case m = n and (84) becomes: v n−2 n v n−2 n−1 ≡ n−1 βx 2 n−2 (x n−1 x n x n−1 x n x n−1 ). Let us replace the underlined part according to (33). Since x 2 n−2 x n−1 x 2 n ≡ n 0, we may continue by (82): Proof. According to the Super-letter Crystallisation Lemma and Lemma 7.11 it is sufficient to show that words of the form u 0 , u 1 , u 2 , w 3 , w 4 , w 5 , w 6 are linear combinations of lesser words in U P (g). The words u 0 are diminished by (35). The words u 1 , u 2 have been diminished in Theorem A n since in the case C n the words u 2 are independent of x n , while u 1 depends on x n only if u 1 = x n−1 x 2 n . The relation (79) shows that w 3 ≡ k+1 0, w 4 ≡ k+1 0. Lemma 7.21 with s = m − 1 gives the required representation for u 5 . Consider the words w 6 . For k = n − 1 the relation (69) defines the required decomposition. Let k < n − 1. Since x 1 , . . . , x n−1 generate a sub-algebra of the type A n−1 , the crystal decomposition of u 3 kn−2 x n−1 has the form where In particular, if m 1 = k then m 2 = . . . = m t = k and, due to the homogeneity, t = 3, s 1 = n − 1, s 2 = s 3 = n − 2. Therefore Along similar lines, the following relations are valid as well Now let us multiply (33) with i = n − 2 by x n−1 from the right, and then add to the result the same relation multiplied by p n−2 n−1 (1 + p n−1n−1 )x n−1 from the left. We get the following relation with α = p 2 n−2 n−1 p [3] n−1n−1 = 0. Further, we may write where for k = n − 2 the term u k n−3 is absent. Let us apply (33) with i = n − 2 to the underlined word. Since u k n−2 u k n−3 x 2 n−1 ≡ n−1 0, we have got Let us apply (88). Taking into account the second of (87) we get Let us multiply this relation from the right by x n . By (69) we have By means of (86) and (87) we have got Proof. Note that for n > 3 the algebra generated by x 2 , . . . x n is a sub-algebra of the type C n−1 . Therefore we may use induction on n with additional supposition that the theorem statements 1 and 2 are valid for the lesser values of n. We will formally consider the sub-algebra generated by x n−1 , x n as an algebra of the type C 2 , and the sub-algebra generated by x n as an algebra of type C 1 . 
In this case for n = 1 the present lemma and the statements 1 and 2 are valid in obvious way. If the first letter x k of v is less than x 1 then we may use the inductive supposition directly. If v = u 1m then one may literally repeat arguments of Lemma 7.6 starting at (39). If v = v 1m and n > 3 then we may repeat arguments of Lemma 7.16 starting at (63) up to replacing w with v. For n = 3 in these arguments the formula (66) assumes the form Therefore the left component of the tensor Σ is a non-zero linear combination of the basis elements. For n = 2 the set B has no elements v 1m at all. Consider the last case, v = v 1 = [u 2 1n−1 x n ]. Let S k be the sum of all tensors of ∆([u kn ]) = u (1) ⊗ u (2) with deg n (w (1) ) = 1, deg k (w (1) ) = 0, k < n. Evidently S n = x n ⊗1. Let us show by downward induction on k that S k = (1−p −1 11 )g(u kn−1 )x n ⊗ [u kn−1 ] at k < n. We have Consequently, This implies the required formula since by (70) at k < n − 1 we have . In a similar manner, consider the sum S of all tensors of ∆([u 2 kn x n ]) = w (1) ⊗w (2) with deg n (w (1) ) = 1, deg i (w (1) ) = 0, at i < n. Since we now S 1 , we may calculate S : By (70), using the bicharacter property of p, we have Proof of Theorem C n . For the first statement it will suffice to prove that all superletters (67) are hard in U P (g). Since none of u km , v km contains a sub-word (28) ] = 0. Since deg n (v k ) = 1 and deg n−1 (v k ) = 2, the equality [v k ] = 0 is valid in the algebra C ′ which is defined by all relations of U P (g), but ones of degree greater than 1 in x n and ones of degree greater than 2 in x n−1 , that is in the algebra defined by (33), (34) with i < n − 1, and (35). These relations do not reverse the order of x n−1 and x n in monomials since none of them has both x n−1 and x n . This implies that the sum of all monomials of in which x n is prefixed to x n−1 equals zero in C ′ , that is [u kn ] · [u kn−1 ] = 0. Especially, this equality is valid in U P (g). Since, by Theorem 4.5, the super-word [u kn ] · [u kn−1 ] is a basis element, the first statement is proved. [v] h . This, together with Lemma 7.23, proves the fourth statement and, for p 11 = 1, the second one too. If p 11 = 1 then according to (70) we have p ii = p ij p ji = 1 at all i, j. In particular, the skew commutator is a quantum operation. Hence all elements of B are skew primitive. These elements span a colour Lie algebra. Now, as in Theorem A n , we may use the coloured PBW theorem. The third statement will follow from Theorem 5.2 and Lemmas 5.3, 7.18 provided we note that all super-letters (68) are zero in U P (g). We have proved already that these super-letters are non-hard. So it remains to use first the homogeneous version of Definition 4.3 and then Lemma 7.26. ✷ Theorem D n . Let g be of the type D n , and p ii = −1, 1 ≤ i ≤ n. Denote by B the set of the following super-letters: The statements given below are valid. 1. The values of (98) in U P (g) form the PBW-generators set. 2. Each of the super-letters (98) has infinite height in U P (g). 3. The relations (21) together with the following ones form a Groebner-Shirshov system that determines the crystal basis of U P (g). 4. If p 11 = 1, then the generators x i , their powers x t i , x tl k i , such that p ii is a primitive t-th root of 1, together with the constants 1 − g, g ∈ G form a basis of g P = L(U P (g)). 5. If p 11 = 1, then the elements of B and, for l > 0, their l k -th powers together with the constants 1 − g, g ∈ G form a basis of g P . 
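The construction that follows glues two quantum enveloping algebras of type $A_{n-1}$ along a single extra relation. Judging from the proof of Theorem $D_n$ below, which singles out $[x_{n-1}x_n] = 0$ among the defining relations, the additional relation (100) is presumably the q-commutation of the two end generators:

$$[x_{n-1}, x_n] = x_{n-1}x_n - p_{n-1\,n}\,x_nx_{n-1} = 0,$$

reflecting that the vertices n-1 and n are not adjacent in the Coxeter graph of $D_n$.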
In the case D n the algebra U b P (g) can be defined by the condition that the subalgebras U n−1 and U n generated, respectively, by x 1 , . . . , x n−1 and x 1 , . . . , x n−2 , x ′ n−1 = x n are quantum universal enveloping algebras of the type A n−1 , and by the only additional relation The existence conditions take up the form Proof. The formulae (102) coincides with (49) at k = n − 1 up to replacing e by w. The inequality e km > e rs is set up by the same conditions, k < r ∨ (k = r&m < s), as the inequality w km > w rs does. Likewise u km > e rs is set up by the same condition, k ≤ r, as u km > w rs does. Therefore Lemmas 7.8, 7.9, 7.10 remain valid with e in place of w : By looking over all of these possibilities we get the lemma statement. ✷ Proof. The proof is similar to the one of Lemma 7.5 with the tableaux Proof. If i < m − 1, m = n, or m = n, i < n − 2, then with the help of (35) and (100) it is possible to permute y to the left beyond x n and then to use Lemma 7.2 for U n−1 . If m = n, i = n − 2 then we may use Lemma 7.2 for U n . If y = x 2 i , m − 1 = i > k then for m < n by the above case we get For m = n we have e kn x 2 n−1 = αu k n−2 x 2 n−1 x n ≡ n−1 0 since the underlined part belongs to U n−1 . If y = x i , i = m > k then for m = n we may use Lemma 7.2 applied to U n ; for m = n − 1 we may use the same lemma applied to U n−1 provided that beforehand we permute x n with y by (100); for m < n − 1 we may first rewrite e km y = e k m+1 y 1 , where y 1 = x 2 m , and then use (106) with m + 1 in place of m. If y = x i , i > m > k then for i < n we have e km y = αe ki+1 x i x i−1 x i · v. Replacing the underlined word by (33) in U n−1 , we may use the previously considered cases: For i = n, and m = n − 1 we have e k n−1 x n = αu k n−2 x 2 n x n−1 and one may apply Lemma 7.2 to U n . Finally, for i = n and m < n − 1 we get e km x n = β 1 u k n−2 x n x n−1 x n−2 x n · v = β 2 u k n−2 x n−1 x n x n−1 x n · v = β 3 u k n−2 x n−1 x n−2 x 2 n · v + β 4 u k n−2 x n−1 x 2 n x n−2 · v. One may apply first Lemma 7.2 for U n−1 to the underlined sub-word of the first term, and then, after (100), Lemma 7.2 for U n to the second term. ✷ Lemma 7.28. If k < s < m ≤ n then e km e ks ≡ k+1 εe ks e km , ε = 0. Proof. Let us carry out downward induction on k. The largest value of k equals n − 2. In this case s = n − 1, m = n and we have Let us first transpose the second letter x k of e km e ks as far to the left as possible by (35), and then replace the onset x k x k+1 x k by (36). We get e km e ks ≡ k+1 αx 2 k (e k+1m e k+1s ), α = 0. For k + 1 < s it suffices to apply the inductive supposition to the word in the parentheses and then by (36) and (35) to put x k to the proper place. For k + 1 = s one may use downward induction on s. The basis of this induction, s = n − 1, has been proved, see (107). For k < n − 3 the inductive step on s coincides with the one of Lemma 7.14 with e in place of w since in this case the active variables x k , x k+1 q-commute with x n . If k = n − 3 then in consideration of Lemma 7.14 the variable x k+1 = x n−2 is transposed across x n twice: in (58) and in the second word of (60). In (58) with k = n − 3 we have s = n − 2, m = n; and (58) becomes e n−3n e n−3n−2 ≡ n−2 βe n−3n−1 x n−3 x n−2 x n x n−2 . (109) In view of Lemma 7.27, we may transform the underlined part in U n neglecting the words starting with x 2 n−2 and x n in much the same way as in (59), with x n in place of x k+1 . So (109) reduces to the required form. 
The second word of (60) with k = n − 3 assumes the form e 2 n−3n x n−2 x 2 n−1 = e n−3n x n−3 x n−2 x n x n−2 x 2 n−1 . By Lemma 7.2 applied to U n , the underlined word is a linear combination of words starting with x n−2 and x n . However, by Lemma 7.27 both e n−3n x n−2 and e n−3n x n equal zero up to ≡ n−2 . ✷ Lemma 7.29. The set B satisfies the conditions of Lemma 4.8. Proof. By Lemmas 7.25 and 4.7 one need show only that in U b P (g) the words (99) are linear combinations of lesser ones. The words v 6 with m = n − 2, and u 0 , u 1 , u ′ 1 , u 2 have the required decomposition since they belong either to U n−1 or to U n . Lemma 7.27 shows that v 3 ≡ k+1 0, v 4 ≡ k+1 0, v ′ 4 ≡ k+1 0. Lemma 7.28 with s = m − 1 yields the required representation for v 5 . Consider v 6 with m = n − 1. Let us prove by downward induction on k that u k n−1 e kn ≡ k+1 εe kn u k n−1 , ε = 0. For k = n−1 this equality assumes the form (100). Let k < n−1. Let us transpose the second letter x k of u k n−1 e kn as far to the left as possible in U n−1 . After an application of (33) we get u k n−1 e kn ≡ k+1 αx 2 k (u k+1n−1 e k+1n ), α = 0. It suffices to apply the inductive supposition to the term in the parentheses, and then by (33) and (35) for U n to move x k to the proper place. Proof. One need consider only super-letters that belong neither to U n−1 nor to U n . That is [e km ] with m < n. We use induction on n. For n = 3 the algebra of the type D 3 reduces to the algebra of the type A 3 with a new ordering of variables x 2 > x 1 > x 3 . Therefore we may use Theorem A n , after the decomposition below of e 12 in the PBW-basis: Let n > 3. If k > 1 then the inductive supposition works. For k = 1, m > 2 we have e 1m = [x 1 [e 2m ]], and one may repeat the arguments of Lemma 7.6 with e in place of u starting at (39). If m = 2 then we may repeat the arguments of Lemma 7.16 with e on place of w starting at (63). ✷ Proof of Theorem D n . For the first statement it will suffice to prove that all superletters (98) are hard in U b P (g). Since none of u km contains sub-words (28), [u km ] are hard. Suppose [e km ] is non-hard. By Lemmas 7.29 and 4.8 all hard super-letters belong to B. Thus, by Lemma 7.26, we get [e km ] = 0. Since deg n (e km ) =deg n−1 (e km ) = 1, the equality [e km ] = 0 is also valid in the algebra D ′ defined by the same relations as U b P (g) is, but [x n−2 x 2 n ] = 0 and [x n−2 x 2 n−1 ] = 0. Let us equate to zero all monomials in all the defining relations of D ′ , but [x n−1 x n ] = 0. Consider the algebra R ′ defined by (100) and by the resulting system of monomial relations. It is easy to verify that the mentioned relations system Σ of R ′ is closed under the compositions. Since e km contains none of leading words of Σ, the super-letter [e km ] is non-zero in R ′ , and so in D ′ too. This contradiction proves the first statement. If If p 11 = 1 then by (101) we have p ii = p ij p ji = 1 for all i, j. This means that the skew commutator itself is a quantum operation. Hence all elements of B are skewprimitive. These elements span a colour Lie super-algebra. Now, as in Theorem A n , one may use the PBW theorem for colour Lie super-algebras. For the third statement it will suffice to show that all super-letters (99) are zero in U P (g). We have proved already that they are non-hard. Therefore it remains to use the homogeneous version of Definition 4.3 and Lemma 7.26. ✷ Conclusion We see that in all Theorems A n -D n the lists of hard super-letters are independent of the parameters p ij . 
This fact signifies that the Lalonde-Ram basis of the ground Lie algebra (see [26, Figure 1]), with the skew commutator in place of the Lie operation, coincides with the set of all hard super-letters. It is very interesting to clarify how general this statement is. On the one hand, it does not hold without exception for all quantum enveloping algebras, since in Theorems $A_n$-$D_n$ a restriction does exist: if $p_{ii} = -1$, $1 \le i < n$, n > 2, then it is easy to see by means of the Diamond Lemma that the sets of hard super-letters are infinite. On the other hand, this is not a specific property of Lie algebras defined by the Serre relations. By the Shirshov theorem [36], any relation can be reduced to a linear combination of standard nonassociative words; in particular, for a Lie algebra defined by a single such relation the set of hard super-letters does not depend on the quantification parameters.

Proof. The only relation $f^* = 0$ forms a Groebner-Shirshov system since, according to 1s, none of the onsets of its leading word, say w, coincides with a proper terminal of w. Consequently, a super-letter [u] is hard if and only if u does not contain w as a sub-word. We see that this criterion is independent of the $p_{ij}$ as well. ✷

Furthermore, the third statement of Theorem $A_n$ shows that $U_P^b(g)$ can be defined by relations in the PBW-generators $X_u = [u]$ alone. This is an argument in favour of considering the k[G]-module of super-letter PBW-generators as a quantum analogue of a Lie algebra. However, in the cases $B_n$, $C_n$, $D_n$ the defining relations become more complicated. It is far more interesting that for $p_{11} \neq 1$ the algebra $g_P$ turns out to be very simple in structure. Only the unary quantum operations can be non-zero. Other ones may be defined, but due to homogeneity their values equal zero. In particular, if $p_{11}^{[t]} \neq 0$ then without exception all quantum operations have zero values. This provides reason enough to consider $U_P(g) = U(g_P)$ as an algebra of 'commutative' quantum polynomials. Certainly it is very interesting to elucidate to what extent this statement remains valid for the quantum universal enveloping algebras of homogeneous components of other Kac-Moody algebras defined by the Gabber-Kac relations (11). Also it is interesting to investigate the structure of other 'commutative' quantum polynomial algebras. For example, one may note that if the semi-group generated by the $p_{ij}p_{ji}$ does not contain 1, then $G\langle x_1, \dots, x_n\rangle$ itself is a 'commutative' quantum polynomial algebra, simply because in this case there exist no non-zero quantum operations at all. In the other extreme case, when $p_{ij}p_{ji} = 1$ for all i, j, the 'commutative' quantum variables commute by $x_ix_j = p_{ij}x_jx_i$. In a similar manner, the Drinfeld-Jimbo enveloping algebra can be considered as a 'quantum' Weyl algebra of (skew) differential operators (see Sec. 6). The resulting 'quantum' Weyl algebra is simple in the following sense.

Corollary 8.2. Let g be a simple finite-dimensional Lie algebra of the infinite series. If $q^{[m]} \neq 0$, $m \ge 2$, then every non-zero Hopf ideal I of the Drinfeld-Jimbo enveloping algebra contains all the generators $x_i$, $x_i^-$.

Proof. By the Heyneman-Radford theorem, the ideal I has a non-zero skew primitive element, say a. According to Lemma 6.2 and Theorems $A_n$-$C_n$, the element a is either a constant, $\alpha(1 - g)$, or proportional to one of the elements $x_i$, $x_i^-$. In the former case I contains all $x_i$ with $\chi_i(g) \neq 1$, since $x_ia - \chi_i(g)ax_i = \alpha(1 - \chi_i(g))x_i$.
Here the equality $\chi_i(g) = 1$ cannot be valid for all $i$, since $\chi_i(g_j) = q^{-d_ia_{ij}}$ (see Example 4 of Section 2) and the columns of the Cartan matrix are linearly independent. In the latter case (and now in the former one as well) we get $[x_i, x_i^-] = \varepsilon_i(1 - g_i^2) \in I$; i.e., as above, $I$ contains all elements $y = x_j^{\pm}$ with $1 \ne \chi_y(g_i^2) = q^{\pm 2d_ja_{ij}}$. Since the Coxeter graph is connected, $I$ contains all $x_i$, $x_i^-$. ✷

facilities for the research, and also to Dr. L. A. Bokut' and Dr. R. Bautista for helpful comments on the subject matter.
Brucella Outer Membrane Lipoproteins Share Antigenic Determinants with Bacteria of the Family Rhizobiaceae

ABSTRACT Brucellae have been reported to be phylogenetically related to bacteria of the family Rhizobiaceae. In the present study, we used a panel of monoclonal antibodies (MAbs) to Brucella outer membrane proteins (OMPs) to determine the presence of common OMP epitopes in some representative bacteria of this family, i.e., Ochrobactrum anthropi, Phyllobacterium rubiacearum, Rhizobium leguminosarum, and Agrobacterium tumefaciens, and also in bacteria reported to serologically cross-react with brucellae, i.e., Yersinia enterocolitica O:9, Escherichia coli O:157, and Salmonella urbana. In particular, most MAbs to the Brucella outer membrane lipoproteins Omp10, Omp16, and Omp19 cross-reacted with O. anthropi and P. rubiacearum, which are actually the closest relatives of brucellae. Some of them also cross-reacted, but to a lower extent, with R. leguminosarum and A. tumefaciens. The putative Omp16 and Omp19 homologs in these bacteria showed the same apparent molecular masses as their Brucella counterparts. None of the antilipoprotein MAbs cross-reacted with Y. enterocolitica O:9, E. coli O:157, or S. urbana.

Brucellae are gram-negative, facultative, intracellular bacteria that can infect humans and many species of animals. Six species are recognized within the genus Brucella: B. abortus, B. melitensis, B. suis, B. ovis, B. canis, and B. neotomae (7). This classification is based mainly on differences in pathogenicity and host preference (7). The Brucella species constitute a very homogeneous group, as shown by their antigenic relatedness and by DNA-DNA hybridization studies (>90% DNA homology for all species) (8, 9, 25). On the basis of the 16S rRNA sequence, brucellae have been shown to belong to the family Rhizobiaceae (27). This family includes plant and animal pathogens, such as Agrobacterium, Bartonella, and Brucella, that are characteristically associated pericellularly or intracellularly with eukaryotic cells; plant endosymbionts, such as Rhizobium and Phyllobacterium; soil inhabitants, such as Mycoplana; and isolates from soil and from human clinical specimens, such as Ochrobactrum (14, 18, 19). Among all these bacteria, Ochrobactrum anthropi is the closest known relative of brucellae (14, 24, 27). This bacterium has gained interest in the past few years because of its isolation from immunocompromised hosts (1, 11-13). Recent reports have also described immunological cross-reactions between Brucella spp. and O. anthropi (23, 24). The antigens containing common epitopes were described as rough lipopolysaccharide and soluble and membrane proteins of unknown nature (23, 24). Since O. anthropi constitutes a heterogeneous group of bacteria on the basis of classical phenotypical characterization and DNA-DNA hybridization studies, further subdivision of the genus into two species, O. anthropi and O. intermedium, has recently been proposed (24). The latter, new species name has been suggested because of a closer genetic and antigenic relationship with brucellae than with O. anthropi (24). Additionally, brucellae also share epitopes, mainly on the smooth lipopolysaccharide (S-LPS), with bacteria reported earlier to serologically cross-react with Brucella, of which the most important is Yersinia enterocolitica O:9 (7).

The Brucella outer membrane contains three major proteins with molecular masses ranging from 25 to 27, 31 to 34, and 36 to 38 kDa (2, 6).
The largest protein has been identified and characterized as a porin (10, 17). The genes coding for these proteins have been cloned and sequenced, and the current names for these outer membrane proteins (OMPs) are Omp25, Omp31, and Omp2b, respectively (4, 5, 17). The other OMPs identified so far by use of monoclonal antibodies (MAbs) are less abundant (minor) proteins with molecular masses of 10, 16.5, 19, and 89 kDa (2). Gene cloning, the predicted amino acid sequences, and the presence of particular protein motifs have identified the 10-, 16.5-, and 19-kDa OMPs as outer membrane lipoproteins (21, 22). The current names for these OMPs are Omp10, Omp16, and Omp19, respectively (21, 22). Omp16 actually belongs to the peptidoglycan-associated lipoprotein family of proteins found in many gram-negative bacteria (22). Homologs of Omp10 and Omp19 have not yet been reported for other bacteria. All of these proteins have been found to be immunogenic in infected cattle, sheep, and goats (3, 15, 16, 21, 28).

In the present study, we used MAbs to analyze the occurrence of epitopes common to Brucella OMPs in phylogenetically related bacteria of the family Rhizobiaceae, as well as in reported S-LPS-cross-reacting bacteria. The importance of the epitopes recognized by the MAbs in the antibody responses of Brucella-infected cattle and sheep has been previously shown by competitive enzyme-linked immunosorbent assay (ELISA) with these MAbs (3, 28). The occurrence of common epitopes could explain some of the serologic protein cross-reactivities reported between Brucella and Ochrobactrum (23, 24). In addition, the present study also led to the identification of new homologous proteins within the family Rhizobiaceae.

The occurrence of cross-reacting epitopes was first screened by ELISA, performed as described previously (2, 5, 28). Microtiter plates were coated with bacterial suspensions in phosphate-buffered saline at an absorbance (600 nm) of 1.0. To improve accessibility of the OMPs, bacteria were sonicated prior to coating (5). MAbs were used at a dilution of 1/2. Positive control MAbs were 3D6, specific for peptidoglycan (6), and A53/09G03/D02, specific for DnaK, previously shown to cross-react with O. anthropi and P. rubiacearum (26). In particular, most MAbs to the outer membrane lipoproteins Omp10, Omp16, and Omp19 cross-reacted in ELISA with both O. anthropi 3301 and 3331 and with P. rubiacearum (Table 1).

In immunoblotting after sodium dodecyl sulfate-polyacrylamide gel electrophoresis, performed as described previously (2, 28), the anti-Omp16 MAbs reacted strongly with O. anthropi 3301 and 3331, P. rubiacearum, and R. leguminosarum, and weakly with A. tumefaciens, thus confirming the ELISA results (Fig. 1). The anti-Omp19 MAbs reacted strongly only with O. anthropi and P. rubiacearum, which is also in accordance with the ELISA results. The putative Omp16 and Omp19 homologs detected by the MAbs in these bacteria showed the same apparent molecular masses as their Brucella counterparts. The anti-Omp10 MAbs gave no positive reactions in immunoblotting and reacted only weakly with B. abortus, which was used as the control (Fig. 1).

In conclusion, the present study showed the presence of epitopes cross-reactive with Brucella outer membrane lipoproteins on genetically related bacteria, of which the most important is O. anthropi. Of particular interest are the lipoproteins Omp10 and Omp19, not yet reported for other bacteria. Thus, these proteins could constitute a new family of OMPs specifically encountered in the Rhizobiaceae.
As suggested by Velasco et al. (23), the immune response of Brucella-infected hosts to protein antigens may not necessarily be specific for brucellae, and the presence of O. anthropi or related bacteria may explain previously described reactivities to OMPs in healthy animals (16). The outer membrane lipoproteins Omp10, Omp16, and Omp19 are the first identified among these OMPs.

We thank J. M. Verger and M. Grayon for supplying the strains. We also thank S. Baucheron for technical support.
Inter-observer agreement and diagnostic accuracy of myocardial perfusion reserve quantification by cardiovascular magnetic resonance at 3 Tesla in comparison to quantitative coronary angiography

Background
Quantification of cardiovascular magnetic resonance (CMR) myocardial perfusion reserve (MPR) at 1.5 Tesla has been shown to correlate with invasive evaluation of coronary artery disease (CAD) and to yield good inter-observer agreement. However, little is known about quantitative adenosine-perfusion CMR at 3 Tesla, and no data about inter-observer agreement are available. The aim of our study was to evaluate inter-observer agreement and to assess diagnostic accuracy in comparison to quantitative coronary angiography (QCA).

Methods
Fifty-three patients referred for coronary x-ray angiography were previously examined in a 3 Tesla whole-body scanner. Adenosine and rest perfusion CMR were acquired for the quantification of MPR in all segments. Two blinded and independent readers analyzed all images. QCA was performed in case of coronary stenosis. QCA data were used to assess the diagnostic accuracy of the MPR measurements.

Results
Inter-observer agreement was high for all myocardial perfusion territories (ρ = 0.92 for LAD, ρ = 0.93 for CX and RCA perfused segments). Compared to QCA, receiver-operating characteristics yielded an area under the curve of 0.78 and 0.73 for RCA, 0.66 and 0.69 for LAD, and 0.52 and 0.53 for LCX perfused territories.

Conclusions
Inter-observer agreement of MPR quantification at 3 Tesla CMR is very high for all myocardial segments. Diagnostic accuracy in comparison to QCA yields good values for the RCA and LAD perfused territories, but moderate values for the posterior LCX perfused myocardial segments.

Background
Visual assessment of perfusion cardiovascular magnetic resonance (CMR) at 1.5 Tesla has been shown to yield high diagnostic accuracy in comparison to coronary x-ray angiography for the detection of coronary artery disease (CAD) [1]. Its sensitivity and specificity are superior to single-photon emission computed tomography for the detection of CAD [2]. Perfusion CMR at 3 Tesla has increased signal-to-noise and contrast-to-noise ratios in comparison to 1.5 Tesla [3-6]. Moreover, the maximum upslope for quantitative perfusion analysis has been proven to be increased at 3 Tesla [6]. These potential benefits at 3 Tesla have recently been shown to yield higher diagnostic accuracy in comparison to 1.5 Tesla for adenosine-perfusion CMR to detect CAD [7,8]. Intra- and inter-observer agreement for the visual assessment of adenosine-perfusion CMR, as well as inter-study reproducibility of quantitative assessment at 1.5 Tesla, have been proven to be very high [9,10]. Moreover, quantitative analysis of adenosine-perfusion CMR at 3 Tesla exhibits a high correlation with invasively measured fractional flow reserve [11], which is regarded by many investigators as the standard diagnostic tool to evaluate the hemodynamic significance of CAD. However, little is known about the inter-observer correlation, and thus the reliability, of the quantitative analysis approach to 3 Tesla perfusion imaging. The aim of our study was to evaluate the inter-observer agreement of quantitative myocardial perfusion analysis at 3 Tesla and to assess its diagnostic accuracy in comparison to quantitative coronary angiography (QCA).

Study population
Sixty-three consecutive patients suspected of CAD or progression of known CAD, who were referred for diagnostic coronary angiography, were prospectively recruited.
Patients were excluded if they had a recent history of myocardial infarction (within 30 days), had previously undergone coronary artery bypass or prosthetic valve surgery, were medically unstable, or had contraindications for gadolinium-based contrast agents, adenosine infusion, or CMR. Study patients were asked to avoid caffeine or other methylxanthines for at least 24 hours before CMR. All patients underwent CMR within 72 hours before coronary catheterization. The study was approved by the ethics committee of the institution. All participants gave written informed consent.

Figure 1: Example of inducible ischemia during adenosine in segments supplied by the LAD (Ia) without corresponding perfusion deficit at rest (IIa). Segmental upslope curves during adenosine and rest for both readers are shown in IIa and IIb. The coronary angiogram of the LAD stenosis and the corresponding QCA are provided in IIIa and IIIb, respectively.

CMR examination
All patients underwent CMR in a 3 Tesla whole-body system (Achieva, Philips Medical Systems, Best, Netherlands) using a 32-channel phased-array cardiac surface coil (Philips Medical Systems). Heart rate and blood pressure were monitored non-invasively during adenosine infusion. The CMR protocol used has been previously described in detail [8]. For functional analysis of the left and right ventricle, a balanced steady-state free precession sequence was acquired in contiguous short-axis views covering the entire left and right ventricle from apex to base (repetition time 3.4 ms, echo time 1.7 ms, acquired resolution 1.9 × 1.9 mm, flip angle α = 40°, slice thickness 8 mm, no interslice gap; acquisition in end-expiratory breath-hold). For perfusion imaging, a spoiled gradient-echo sequence (repetition time 2.6 ms, echo time 1.3 ms, saturation prepulse with 100 ms delay, flip angle α = 18°, acquired resolution 2.5 × 2.5 mm, slice thickness 8 mm; acquisition in end-expiratory breath-hold) was acquired in three short-axis slices (apical, midventricular, and basal). After three minutes of adenosine infusion at a constant rate of 140 μg/kg/min, or earlier if angina pectoris was provoked, a bolus of 0.075 mmol/kg contrast agent (Dotarem, Guerbet, Villepinte, France) followed by a 20 ml saline flush was administered at an injection rate of 5 ml/s. The sequence was repeated at rest ten minutes later using a second bolus of 0.075 mmol/kg contrast agent. A 3D inversion-recovery gradient-echo sequence in short-axis views for late gadolinium enhancement (LGE) visualization was acquired ten minutes after the second bolus of contrast agent (repetition time 7.1 ms, echo time 3.2 ms, flip angle α = 15°, acquired resolution 1.6 × 1.6 mm, slice thickness 8 mm; navigator-based acquisition). The inversion time was individually adjusted for complete nulling of the myocardium.

CMR analysis
Two experienced readers, blinded to patients' data and angiographic results, analyzed the anonymized DICOM files. All images were analyzed on a separate workstation (ViewForum, Philips Medical Systems). Functional images were analyzed for end-diastolic and end-systolic volumes, and the respective ventricular ejection fractions were calculated. For evaluation of myocardial perfusion reserve, each reader drew the endo- and epicardial left ventricular contours manually in each adenosine and rest perfusion image after correction for motion, using the same software (ViewForum, Philips Medical Systems), independently from the other reader.
The myocardium was divided into 16 segments according to the recommendations of the American Heart Association [12]. The resulting signal intensity-time curves were adjusted for left ventricular signal intensity and baseline signal intensity as previously reported [11,13]. Myocardial perfusion reserve was then calculated by dividing the segmental upslope during adenosine by that at rest [11,13]. Figure 1 provides an example of inducible ischemia during adenosine, the corresponding segmental upslope curves during adenosine and rest for both readers, and the respective angiogram including QCA of the LAD stenosis. The myocardial segments were assigned to the respective supplying coronary artery [14]. The mean myocardial perfusion reserve for all segments supplied by one coronary artery was calculated for further analysis. As previously validated against fractional flow reserve, we defined a myocardial perfusion reserve cut-off value of ≤1.3 as consistent with relevant myocardial ischemia [11].

Quantitative coronary angiography
Coronary angiography was performed within 48 hours after CMR in accordance with the ACC/AHA guidelines [15]. In case of stenosis in a coronary artery with a diameter ≥2 mm, quantitative analysis was performed by an experienced reader blinded to patients' data, clinical symptoms, and CMR results, using commercially available standard software (CAAS 5.9, Pie Medical Imaging, Maastricht, Netherlands). A threshold of ≥70% luminal narrowing was used to identify significant coronary artery stenosis [16].

Statistical analysis
Continuous variables were tested by the two-tailed t test after being tested for normal distribution by the D'Agostino-Pearson test. They are reported as mean value ± standard deviation. Categorical data are presented as number (%) and compared using Fisher's exact test. Diagnostic accuracies of both readers in comparison to the quantitative coronary angiographic results were tested using receiver-operating characteristic curve analyses. Inter-observer agreement was tested using Spearman's coefficient of correlation (ρ). Additionally, the correlation coefficient r was calculated. A p value <0.05 was considered significant for all tests.

Study population
Six patients had to be excluded due to obesity (N = 3), previously unknown coronary artery bypass surgery (N = 1), and incomplete CMR exams due to technical issues (N = 2). Thus, our study population consisted of 53 patients. Mean age was 63.0 ± 9.3 years; 68% of our patients were male. Patients' characteristics, including cardiovascular risk factors, Framingham 10-year risk for cardiovascular events, history of CAD, and medication, are provided in Table 1.

CMR
No major complications were observed during CMR. The results of the left and right ventricular volumetric analysis are provided in Table 1. During adenosine, a significant decrease in systolic and diastolic blood pressure was observed in our patient cohort (Table 2). Furthermore, heart rate and rate-pressure product increased significantly. In four patients the image quality was insufficient for quantitative perfusion analysis; these patients were excluded from further analysis. In two patients a total of four segments had to be excluded from quantitative perfusion analysis due to interference of the left ventricular outflow tract. Mean myocardial perfusion reserve indices of both readers per perfusion territory are provided in Table 3.
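The agreement and accuracy analyses reported below can be illustrated with a short, self-contained Python sketch. It computes segmental MPR as the stress/rest upslope ratio, applies the ≤1.3 ischemia cut-off from the Methods, and evaluates Spearman's ρ between two readers and a ROC AUC against a QCA-derived reference. All numeric arrays are synthetic placeholders; the study itself used ViewForum and dedicated statistical software, not this code.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

def perfusion_reserve(upslope_stress, upslope_rest):
    """Segmental MPR = upslope during adenosine stress divided by upslope at rest."""
    return np.asarray(upslope_stress) / np.asarray(upslope_rest)

rng = np.random.default_rng(0)

# Synthetic per-territory mean MPR values for two independent readers (illustrative only).
mpr_reader1 = rng.uniform(0.8, 3.0, size=50)
mpr_reader2 = mpr_reader1 + rng.normal(0, 0.15, size=50)  # reader 2 agrees up to noise

rho, pval = spearmanr(mpr_reader1, mpr_reader2)
print(f"Inter-observer agreement: Spearman rho = {rho:.2f} (p = {pval:.2g})")

# Ischemia call: MPR <= 1.3 is consistent with relevant ischemia (cut-off from the text).
ischemia_reader1 = mpr_reader1 <= 1.3
print(f"Reader 1 ischemic territories: {ischemia_reader1.sum()} of 50")

# Reference standard: QCA stenosis >= 70% (synthetic labels here).
stenosis_qca = (mpr_reader1 + rng.normal(0, 0.5, size=50)) <= 1.3

# ROC analysis: lower MPR should indicate stenosis, so score with -MPR.
auc = roc_auc_score(stenosis_qca, -mpr_reader1)
print(f"ROC AUC of MPR vs QCA: {auc:.2f}")
```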
Spearman's correlation coefficient yielded ρ = 0.92 for the RCA and LAD perfused territories and 0.93 for the territories supplied by the LCX. All Spearman's correlation coefficients achieved statistical significance. Figure 2 shows scattergrams of myocardial perfusion reserve indices in all perfusion territories, with calculated correlation coefficients of r = 0.91 (p < 0.0001) in the RCA, r = 0.91 (p < 0.0001) in the LAD, and r = 0.90 (p < 0.0001) in the LCX perfused territories.

QCA and CMR diagnostic accuracy
Coronary angiography was performed in all patients without major complications. QCA revealed a coronary stenosis ≥70% in 25 (47.2%) patients (see Table 1). The RCA was affected in 13, the LAD in 11, and the LCX in 11 patients, resulting in 15 cases of one-vessel and 10 cases of multivessel disease. Receiver operating characteristic (ROC) analysis of myocardial perfusion reserve quantification for both readers yielded an area under the curve of 0.78 and 0.73 for RCA, 0.66 and 0.69 for LAD, and 0.52 and 0.53 for LCX perfused territories, respectively. There were no significant differences between the two readers. ROC curves for all perfusion territories are shown in Figure 3. Sensitivity, specificity, and overall accuracy for both readers are provided in Table 4.

Discussion
There is little data about inter-observer agreement of quantitative perfusion assessment at 1.5 Tesla, and this is, to the best of our knowledge, the first study to report inter-observer agreement of quantitative myocardial perfusion analysis at 3 Tesla. We were able to demonstrate a high inter-observer agreement of quantitative myocardial perfusion reserve assessment performed at 3 Tesla. The available studies at 1.5 Tesla report an inter-observer agreement of kappa = 0.66 [17]. Other studies yielded inter-observer agreements of 0.73 [9] and r = 0.93 [18]. Due to the large number of artifacts that can occur in 3 Tesla CMR [19], the question arises whether inter-observer reproducibility of quantitative perfusion analysis could equal that observed at 1.5 Tesla. Our data prove a high inter-observer agreement for all coronary perfusion territories.

The diagnostic accuracy observed in our study showed good values for RCA, reduced accuracy for LAD, and poorer accuracy for LCX perfused myocardial territories. This is in concordance with similar observations reporting the best values for RCA and moderate values for LCX supplied segments [20]. The latter study, however, evaluated qualitative myocardial perfusion assessment in comparison to QCA. The observation of reduced diagnostic accuracy in the posterior regions is probably caused by a poorer signal-to-noise ratio in those segments due to the distance to the surface coil [9,21]. A recent study showed a poor correlation of r = 0.58 between QCA and fractional flow reserve [20]. However, quantitative analysis of myocardial perfusion reserve at 1.5 [21] and 3 Tesla [11] has been shown to yield very high diagnostic accuracy in comparison to fractional flow reserve, which is regarded by many investigators as a very sensitive diagnostic tool to measure the functional significance of coronary artery stenosis, whereas QCA provides only anatomical, but no functional, information about stenosis severity. This opinion is reasonable, since it has lately been shown that fractional flow reserve-guided coronary intervention is superior to QCA-driven coronary intervention in preventing myocardial infarction, revascularization, or death [22,23].
Myocardial perfusion reserve is, like fractional flow reserve, a diagnostic tool to measure the functional significance of CAD. Hence, the poor correlation to QCA is understandable. Quantitative CMR myocardial perfusion assessment could thus serve as a non-invasive surrogate for fractional flow reserve to measure the functional significance of CAD, as it has already been shown to yield high diagnostic accuracy in comparison to fractional flow reserve [11].

Limitations
Patients who had previously undergone coronary artery bypass or prosthetic valve surgery were excluded from the study. This might be a limitation, because the results of the present study might not be translatable to this population of patients. Moreover, we had to exclude four patients from quantitative perfusion analysis because of poor image quality. In two other patients, a total of four segments were excluded due to interference of the left ventricular outflow tract. This was done to allow for good and reliable quantitative perfusion analysis. However, this is another possible limitation of our study in terms of selection bias.

Conclusions
Quantification of myocardial perfusion reserve at 3 Tesla yields very high inter-observer agreement, as shown in the present study. Diagnostic accuracy in comparison to quantitative coronary angiography is good for the LAD and RCA perfused myocardial territories and moderate for the LCX perfused territories.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
KI: analysis and interpretation of data, drafting the manuscript. TW: analysis and interpretation of data. LS: analysis and interpretation of data. DB: analysis and interpretation of data, drafting the manuscript. WR: final approval of the manuscript, revising the manuscript critically for important intellectual content. PB: conception and design, analysis and interpretation of data, drafting the manuscript. All authors read and approved the final manuscript.
Less intensive antileukemic therapies (monotherapy and/or combination) for older adults with acute myeloid leukemia who are not candidates for intensive antileukemic therapy: A systematic review and meta-analysis

Introduction
Elderly patients with acute myeloid leukemia not eligible for intensive antileukemic therapy are treated with less intensive therapies; uncertainty remains regarding their relative merits.

Objectives
To compare the effectiveness and safety of less intensive antileukemic therapies for older adults with newly diagnosed AML who are not candidates for intensive therapies.

Methods
We included randomized controlled trials (RCTs) and non-randomized studies (NRS) comparing less intensive therapies in adults over 55 years with newly diagnosed AML. We searched MEDLINE and EMBASE from inception to August 2021. We assessed the risk of bias of RCTs with a modified Cochrane Risk of Bias tool, and of NRS with the Risk of Bias in Non-Randomized Studies of Interventions (ROBINS-I) tool. We calculated pooled hazard ratios (HRs), risk ratios (RRs), mean differences (MDs), and their 95% confidence intervals (CIs) using random-effects pairwise meta-analyses, and assessed the certainty of evidence using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach.

Results
We included 27 studies (17 RCTs, 10 NRS; n = 5,698), which reported 9 comparisons. Patients were treated with azacitidine, decitabine, and low-dose cytarabine (LDAC), as monotherapies or in combination with other agents. Moderate-certainty evidence suggests no convincing difference in overall survival between patients who receive azacitidine monotherapy and those who receive LDAC monotherapy (HR 0.69; 95% CI 0.31–1.53); fewer febrile neutropenia events occurred with azacitidine monotherapy than with azacitidine combination therapy (RR 0.45; 95% CI 0.31–0.65); and fewer neutropenia events occurred with LDAC monotherapy than with decitabine monotherapy (RR 0.62; 95% CI 0.44–0.86). All other comparisons and outcomes had low or very low certainty of evidence.

Conclusion
There is no convincing superiority in OS when comparing less intensive therapies. Azacitidine monotherapy is likely to have fewer adverse events than azacitidine combination therapy (febrile neutropenia), and LDAC monotherapy is likely to have fewer adverse events than decitabine monotherapy (neutropenia).
Introduction
Acute myeloid leukemia (AML) is a heterogeneous hematopoietic stem cell cancer with incomplete maturation of blood cells and a reduced production of normal hematopoietic elements [1]. AML is more common in older adults, with a median age at diagnosis of 67 years; one-third of cases occur in patients older than 75 years [2]. Overall survival (OS) is strongly linked to clinical and biologic characteristics: age, performance status (PS), karyotype, mutational status, and response to induction therapy [3]. For example, younger patients (2 to 30 years) have a much better 5-year OS than older patients (65 to >85 years) (57% to 42%, compared to 6.8% to 1.2%) [4,5]. Some older patients diagnosed with AML are not eligible for intensive treatment, limiting their therapeutic options [6]. Less intensive therapy with hypomethylating agents or low-dose cytarabine, as examples, has been used to treat older AML patients who are not candidates for intensive therapy [7].

In their 2020 guidelines, the American Society of Hematology (ASH) provided recommendations for the treatment of older adults with newly diagnosed AML who are considered appropriate for antileukemic therapy, but not intensive antileukemic therapy [8]. When choosing between monotherapies, the guideline panel conditionally recommended the use of either hypomethylating agents (azacitidine or decitabine) or low-dose cytarabine, and, when choosing between monotherapies and combinations, the panel conditionally recommended using monotherapy [8].

To inform the recommendations provided by the ASH 2020 guideline for treating newly diagnosed acute myeloid leukemia in older adults [8], we conducted a systematic review comparing the effectiveness and safety of low-intensity antileukemic therapies (monotherapy and/or combination) in older adults with newly diagnosed AML who are not candidates for intensive therapy.

Eligibility criteria
We included randomized clinical trials (RCTs) and comparative non-randomized studies (NRS), published in any language, of adults 55 years or older with newly diagnosed AML, comparing the following less intensive therapies against each other, either as monotherapy or in combination with any secondary agent: gemtuzumab ozogamicin, low-dose cytarabine (LDAC), azacitidine (AZA), and decitabine (DEC). Outcomes of interest were mortality, quality of life, functional status, recurrence, morphologic complete remission, severe toxicity (CTC adverse effects grade 3 or higher), and burden on caregivers, measured in any way. We excluded studies that enrolled patients with acute promyelocytic leukemia or myeloid proliferations related to Down syndrome, and those in which researchers combined any of the interventions of interest with any agent considered a component of intensive antileukemic therapy regimens. A detailed description of the eligibility criteria (type of studies, participants, interventions, and outcomes) is reported in S1 Appendix.

Information sources and search
We searched MEDLINE and EMBASE from inception to August 2021 without restrictions on language of publication. To inform the ASH recommendations, we searched for studies published through July 2019. We conducted an umbrella search that encompassed all the questions addressed in the guideline [8]. The supporting information file describes the search strategies (S2 Appendix). We checked the reference lists of reviewed studies and contacted clinical experts for additional references.
Study selection and data collection process
Pairs of reviewers screened titles and abstracts obtained through the electronic searches and identified those potentially eligible. We then grouped studies according to the question they addressed and conducted full-text screening specifically for our question. Four reviewers, working independently in pairs (BPR, NKF, AA, LECL), made eligibility decisions. If reviewers could not resolve disagreement through discussion, a third reviewer adjudicated (RBP). Pairs of reviewers independently abstracted data on a standardized form. We extracted the following information: type of study, recruitment time frame, follow-up (months), sample size, participant characteristics such as age (years), gender, cytogenetics (intermediate or poor), performance status (ECOG or WHO classification), white cell count, AML diagnosis criteria, trial location, source of funding, trial registry, interventions (main agent, dose, and second agent for combination therapy groups), comparisons (main agent, dose, and second agent for combination therapy groups), and outcomes (mortality, quality of life, functional status, recurrence, morphologic complete remission, severe toxicity (CTC adverse effects grade 3 or higher), or burden on caregivers, at any time point). If reviewers could not resolve disagreement through discussion, a third reviewer adjudicated (RBP).

Risk of bias in individual studies
Pairs of reviewers (BPR, NKF, AA, LECL) independently assessed the risk of bias for each randomized controlled trial using a modified version of the Cochrane risk of bias tool for randomized trials [12] and, for non-randomized studies, the Risk of Bias in Non-Randomized Studies of Interventions (ROBINS-I) tool [13].

Data analysis
We calculated the relative effect of less intensive therapies using hazard ratios (HRs) for time-to-event data, relative risks (RRs) for dichotomous outcomes, and mean differences for continuous outcomes, with their 95% confidence intervals (CIs). We used random-effects models with the DerSimonian-Laird estimate of heterogeneity to pool data across studies reporting the same comparison and outcome [10]. We used forest plots to display comparisons with two or more pooled studies. We carried out all statistical analyses using Review Manager 5.3 [14]. We planned to conduct a network meta-analysis to compare all interventions against each other, but there were insufficient data to conduct such an analysis (data not shown). We analyzed data from RCTs and NRS separately.

Dealing with missing data
When details about study design or descriptive statistics for outcomes were not presented in the original publications, we did not impute data but rather contacted authors for additional information.

Assessment of the certainty of evidence by outcome
We used the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) methodology to rate the certainty of evidence (also known as quality of evidence) for each outcome as high, moderate, low, or very low [15]. The assessment included judgments addressing risk of bias, imprecision, inconsistency, indirectness, and publication bias [15]. In addition, we assessed the magnitude of the effect, the presence of dose-response relationships, and residual confounding when rating the certainty of evidence from NRS [16]. We estimated absolute effect measures to facilitate the decision-making process [17].
Using absolute effects that we calculated based on the baseline risk of the comparator arms in the included studies, we rated the certainty that there was any benefit or any harm using a minimally contextualized approach [18]. We rated down due to imprecision if the confidence intervals crossed the null effect, or if the effect estimate was obtained from a small number of participants or events [19]. We assessed inconsistency between studies by visual inspection of forest plots, in particular the extent of overlap of confidence intervals (CIs), the Q statistic (with a p value ≤ 0.1 suggesting important statistical heterogeneity), and the I2 value [20]. We planned, if ten or more studies were available for a particular outcome, to create a funnel plot to assess publication bias by visual inspection [21]. Because we had multiple comparisons, we created Summary of Findings tables for each comparison and outcome [22] using GRADEpro GDT (www.gradepro.org) [23].

Subgroup and sensitivity analysis
We pooled and reported results from RCTs and NRS separately. We planned to conduct sensitivity analyses to explore the impact of the risk of bias on the effect estimates. We performed a subgroup analysis to explore the impact of the secondary agent (when comparing a combination therapy group) on the effect estimates, when there were sufficient studies. The number of studies per comparison did not allow us to explore subgroup analyses based on patients' characteristics (e.g., gender).

Risk of bias of the included studies
We provide a detailed description of the risk of bias assessment per study and domain in S4 Appendix. All NRS had serious risk of bias due to confounding, because patient baseline characteristics were different between the treatment groups [36, 38-42, 44, 45, 50, 51]; two of the 10 studies had bias in the selection of participants into the study (serious [36] and moderate [51]); three of the studies had moderate risk of bias in the classification of the interventions [38, 42, 45]; and seven of the studies had bias due to deviations from the intended interventions (serious [42] and moderate [38, 39, 41, 43-45]). None of the studies had risk of bias due to missing data, outcome measurement, or selective reporting (S4 Appendix). All RCTs had low or probably low risk of bias in the sequence generation domain [24-35, 37, 46-49]; three of the 17 studies had high risk of bias in the allocation concealment domain [26-28]; all the studies had low or probably low risk of bias in the blinding domains (performance and outcome measurement), missing data, and selective reporting (S4 Appendix).

Effects of the interventions
We summarize the effects of the interventions and their associated certainty of the evidence in one table per outcome. Table 2 summarizes the effect of the interventions on the overall survival of the participants, Table 3 summarizes the effect of the interventions on infectious severe adverse events (CTC adverse effects grade 3 or higher), and Table 4 summarizes the effect of the interventions on non-infectious severe adverse events (CTC adverse effects grade 3 or higher). S1 Table summarizes the effect of the interventions on 1-year mortality, 30-day mortality, complete remission, and length of hospital stay, and S2 Table summarizes the certainty of evidence from the subgroup analyses.
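Before turning to the individual comparisons, the pooling step described under Data analysis can be made concrete. The following self-contained Python sketch performs a DerSimonian-Laird random-effects meta-analysis of log hazard ratios; the input HRs and confidence intervals are placeholders for illustration, not data extracted from the included studies (the actual analyses were run in Review Manager 5.3).

```python
import numpy as np

def dersimonian_laird(log_effects, variances):
    """Random-effects pooling of log effect sizes with the DL tau^2 estimator."""
    y, v = np.asarray(log_effects), np.asarray(variances)
    w = 1 / v                                    # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                # DL between-study variance
    w_re = 1 / (v + tau2)                        # random-effects weights
    y_pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    return y_pooled, se, tau2, i2

# Placeholder per-study hazard ratios and 95% CIs (illustrative, not extracted data).
hr = np.array([0.69, 0.90, 1.10])
lo = np.array([0.40, 0.60, 0.75])
hi = np.array([1.20, 1.35, 1.60])

log_hr = np.log(hr)
var = ((np.log(hi) - np.log(lo)) / (2 * 1.96)) ** 2   # SE estimated from CI width

pooled, se, tau2, i2 = dersimonian_laird(log_hr, var)
ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
print(f"Pooled HR {np.exp(pooled):.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), "
      f"tau^2 = {tau2:.3f}, I^2 = {i2:.0f}%")
```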
DECC compared to AZAC may have little or no effect on mortality; however, we are very uncertain about this effect. Baseline risk information came from the control groups of the included studies. HR, hazard ratio; RR, relative risk; RCT, randomized controlled trial; NRS, non-randomized study.

([42,45]). However, the certainty of the evidence was low and very low, respectively, which means that we are not certain about the true effect of the interventions (S1 Table).

DECC compared to AZAC may have little or no effect on sepsis; however, we are very uncertain about this effect. Baseline risk was obtained from the control groups of the included studies.
1. We decided to rate down two levels due to imprecision: the effect estimate is not consistent with benefit or harm, and the effect estimate comes from a single study.
2. We decided to rate down two levels due to risk of bias and imprecision: allocation concealment was not described; adaptive randomization based on results increases the likelihood of prediction; and the effect estimate comes from a single study and is not consistent with benefit or harm.
3. We decided to rate down one level due to imprecision: the effect estimate is not consistent with benefit or harm.
4. We decided to rate down one level due to imprecision: the effect estimate comes from a single study.
5. We decided to rate down two levels due to inconsistency and imprecision: I2 62% (p value 0.05), and the effect estimate is not consistent with benefit or harm.
6. We decided to rate down two levels due to risk of bias and imprecision: some covariates were not equally distributed among the participants (e.g., hydroxyurea before study initiation); the interventions related to the second agent might influence the treatment comparisons; different proportions of patients in each group received granulocyte colony-stimulating factor or prophylactic non-azole antifungal agents; the venetoclax dose could be modified according to toxicity; and the effect estimate is not consistent with benefit or harm.
7. We decided to rate down two levels due to risk of bias and imprecision: performance status differed between the treatments under comparison (ECOG 3: 35.8% vs 0%); intervention status is well defined, but some aspects of the assignment of intervention status were determined retrospectively; it is not clear whether switches in treatment or co-interventions occurred, nor whether this was adjusted for in the analysis; and the effect estimate comes from a single study.

LDACM compared to LDACC may have little or no effect on hypoxia/respiratory failure. Baseline risk was obtained from the control groups of the included studies.
1. We decided to rate down one level due to imprecision: the effect estimate is not consistent with benefit or harm.
2. We decided to rate down two levels due to imprecision: the effect estimate comes from a single study and is not consistent with benefit or harm.
3. We decided to rate down two levels due to inconsistency and imprecision: I2 41% (p value 0.18), and the effect estimate is not consistent with benefit or harm.
4. We decided to rate down two levels due to serious inconsistency and imprecision: the effect estimate is not consistent with benefit or harm, and I2 was 45%.
5. We decided to rate down two levels due to risk of bias and imprecision: some covariates were not equally distributed among the participants (e.g.,
hydroxyurea before study initiation); the interventions related to the second agent might influence the treatment comparisons; different proportions of patients in each group received granulocyte colony-stimulating factor or prophylactic non-azole antifungal agents; the venetoclax dose could be modified according to toxicity; and the effect estimate is not consistent with benefit or harm.
6. We decided to rate down one level due to imprecision: the effect estimate comes from a single study.
7. We decided to rate down two levels due to inconsistency and imprecision: I2 84% (p value 0.01), and the effect estimate is not consistent with benefit or harm.
8. We decided to rate down two levels due to risk of bias and imprecision: confounding is expected due to imbalance in the compared groups.

([49]). The comparisons suggested little or no difference in patient mortality at 30 days. However, the certainty of the evidence was low (S1 Table).

Infectious adverse events (AEs)
Septic shock. Two RCTs (421 patients) addressing two comparisons reported septic shock (AZA monotherapy vs LDAC monotherapy [24], and AZA monotherapy vs AZA plus vorinostat [33]). The comparisons suggested little or no difference in the development of septic shock. However, the certainty of the evidence was low and very low, respectively (Table 3).

Hospitalization and hypoxia. One NRS (478 patients) [40] and one RCT (87 patients) [30] addressing two comparisons reported on hospitalization (very low certainty evidence) and hypoxia/respiratory failure (low certainty evidence). When comparing LDAC monotherapy vs LDAC combination, no difference was found in the development of hypoxia/respiratory failure. When comparing AZA monotherapy vs DEC monotherapy, fewer hospitalizations occurred with AZA monotherapy. However, we are very uncertain about this effect (Table 4).

Subgroup and sensitivity analysis
The included studies did not provide sufficient information to perform a sensitivity analysis based on the risk of bias. We observed important inconsistency in two comparisons from two outcomes. Overall survival (DEC monotherapy vs DEC combination [37, 46, 47]): one secondary agent [46] has little or no effect on the overall survival of participants compared to DEC monotherapy. When comparing all-trans retinoic acid (HR 0.58, 95% CI 0.37-0.91, N = 1 RCT arm) and all-trans retinoic acid plus valproate (HR 0.62, 95% CI 0.40-0.96, N = 1 RCT arm) against DEC monotherapy, patients treated with the combination therapy showed higher overall survival (S1 Fig) [46]. However, we are uncertain about the true effect of these comparisons (S2 Table).

Complete remission. We identified four secondary agents from four RCTs (843 patients) reporting the 12-month relapse-free survival. All the comparisons provide low-certainty evidence. Gemtuzumab ozogamicin plus LDAC against LDAC monotherapy (HR 1.11, 95% CI 0.73-1.69, N = 1 RCT, 494 participants) has little or no effect on the 12-month relapse-free survival. In [49] we found an improvement in the 12-month relapse-free survival in patients treated with LDAC combination therapy (S3 Fig). However, we are uncertain about the true effect of these comparisons (S3 Table).

Discussion
The elderly population diagnosed with AML who are not candidates for intensive antileukemic therapy poses an important challenge. In the last two decades, new therapeutic options have become available with reasonable effectiveness and excellent toxicity profiles.
However, uncertainty remains about the comparative effectiveness and safety of the different available options. In order to help clinicians and patients during the decision-making process, we summarized the best available evidence by conducting a systematic review with several meta-analyses.

Summary of the evidence
Our systematic review identified three main drugs (azacitidine, decitabine, and low-dose cytarabine), as monotherapies or in combination, addressing nine comparisons. We found information on patients' OS, 1-year mortality, 30-day mortality, infectious and non-infectious AEs, complete remission, and length of hospital stay. We found no evidence regarding quality of life, functional status, or burden on caregivers for any comparison. Most of the evidence comes from RCTs (3,902 patients). However, due to the small number of patients per comparison (imprecision) and the inconsistency between the treatment effects reported by different studies, most of the evidence was judged to be of low or very low certainty.

Evidence about the effects on OS was available for all nine comparisons, with no compelling evidence in favor of any of the available options. There is moderate certainty for one of the comparisons (AZA monotherapy vs LDAC monotherapy), showing little or no difference in OS between patients treated with these drugs. We performed two subgroup analyses for this outcome (DEC monotherapy vs DEC combination, and LDAC monotherapy vs LDAC combination). Also, we performed another subgroup analysis for the complete remission outcome (LDAC monotherapy and LDAC combination). Overall, we found single studies with favorable effects in combination therapy groups (LDAC combination and DEC combination). However, due to the number of studies, the sample size, and the inconsistency between the pooled estimates, we classified the evidence as low certainty (Table 2). The evidence for other outcomes and comparisons was scarce, and we could not conduct more of these analyses.

Toxicity is a very important consideration during the decision-making process. We observed a similar prevalence of severe adverse events (CTC grade 3 or higher) across comparisons, with two exceptions: AZA combination therapy (venetoclax) had more febrile neutropenia events when compared against AZA monotherapy (Table 3), and DEC monotherapy had more neutropenia events when compared against LDAC monotherapy (Table 4).

Strengths and limitations
No prior systematic reviews addressed alternative chemotherapy for older patients with AML in whom intensive therapy was not an option. We conducted a comprehensive database search; specified explicit eligibility criteria; and conducted duplicate, independent study selection, data extraction, and risk of bias assessment, with resolution of disagreement through discussion and third-party adjudication where necessary. We used the GRADE approach to assess the quality of the evidence from NRS and RCTs and, where informative, included both relative and absolute effects. We included all the relevant options that either RCTs or NRS had addressed.

We faced an important challenge when conducting our meta-analyses: the secondary agents varied across the studies within each comparison, and for most of the comparisons the type of secondary agent was not the same. We decided to pool studies within the comparisons regardless of the secondary agent, and to explore whether the secondary agent was associated with the treatment effect when comparing monotherapies vs combination therapies.
During the clinical practice guideline development, we planned additional analyses based on the input from the panel members. Unfortunately, the number of studies within comparisons and outcomes was insufficient to conduct such analyses. With the evidence available when developing the recommendations, the panel believed that any extra analyses, including sensitivity analyses that would exclude specific studies (e.g., based on the diagnostic criteria for AML), were unlikely to change its conclusions. Also, we planned to perform a network meta-analysis (NMA) to compare all interventions against each other. At the end of data extraction, we identified insufficient evidence to do so (data not shown). This decision created the challenge of summarizing all the useful evidence across the nine comparisons; we provide a summary in the main text and extensive supplementary information in the appendices.

Implications
Treating older AML patients can be challenging, as clinicians and patients must balance the goal of increasing longevity with the risk that more aggressive treatment may increase adverse events and hospitalization. During the recommendation formulation process, with the evidence available at that time, the guideline panel found no compelling evidence of additional benefit from more aggressive treatment with more than one agent, and instances in which such therapy did increase adverse events. After the meeting, however, some new studies (RCTs and NRS) reported benefits of combinations over monotherapy; for example, DEC combined with ATRA, and with VPA plus ATRA, may result in better survival than DEC monotherapy (Lubbert 2020 [46]), and AZA combined with venetoclax may also result in better survival than AZA monotherapy (DiNardo 2020 [48]). Because these results were inconsistent with the previously identified studies, the certainty of the overall evidence decreased when these new studies were included in the meta-analyses. It is important to note, however, that the certainty of evidence for each of these specific comparisons is low.

Therapy selection for older adults with AML who are not candidates for intensive antileukemic therapy is based on patient fitness, patient characteristics (cytogenetic and molecular profiles), the trade-off between drug safety and toxicity, and patients' values and preferences [52]. The scientific community agrees on offering therapies based on HMA agents (e.g., azacitidine, decitabine), with some exceptions: severe liver and kidney disease, prior HMA therapy, and the presence of an actionable mutation [52,53]. For these populations other options are available (e.g., low-dose cytarabine). Currently, combination therapy has become the standard of care for unfit older AML patients. However, the choice of secondary agent depends on availability in each setting and the presence of specific genetic mutations. Venetoclax (a BCL2 inhibitor) is the preferred secondary agent to add to HMA therapies; this is based on promising results from NRS and RCTs (mentioned previously). In our review, we identified benefits from combination therapy with venetoclax. However, the certainty of the effect was judged to be low after creating a pooled estimate (imprecision and inconsistency). The same situation was identified with other secondary agents. We are aware that creating pooled estimates without stratifying by the second agent may impact the effect estimate of a specific agent (e.g., venetoclax).
In the comparisons with enough studies, we undertook subgroup analyses to explore their effect. However, the AZA monotherapy vs AZA combination comparison did not have sufficient studies to explore this. Our evidence suggests that HMA therapies are acceptable options, with efficacy and safety similar to other less intensive treatment options. The certainty of the evidence was, however, low for most comparisons and outcomes, and there was no published evidence for several outcomes considered critical for decision-making. The limitations of the evidence also highlight the need for additional randomized trials, including a wider range of patient-important outcomes (in particular quality of life), to definitively establish the relative merits of alternative regimens in older patients with AML in whom more aggressive therapy is not an option.
Practical Tools to Implement Massive Parallel Pyrosequencing of PCR Products in Next Generation Molecular Diagnostics

Despite improvements in terms of sequence quality and price per base pair, Sanger sequencing remains restricted to the screening of individual disease genes. The development of massively parallel sequencing (MPS) technologies heralded an era in which molecular diagnostics for multigenic disorders becomes reality. Here, we outline different PCR amplification-based strategies for the screening of a multitude of genes in a patient cohort. We performed a thorough evaluation in terms of set-up, coverage, and sequencing variants on the data of 10 GS-FLX experiments (over 200 patients). Crucially, we determined the actual coverage that is required for reliable diagnostic results using MPS, and provide a tool to calculate the number of patients that can be screened in a single run. Finally, we provide an overview of factors contributing to false negative or false positive mutation calls and suggest ways to maximize sensitivity and specificity, both important in a routine setting. By describing practical strategies for the screening of multigenic disorders in a multitude of samples, and by providing answers to questions about the minimum required coverage, the number of patients that can be screened in a single run, and the factors that may affect sensitivity and specificity, we hope to facilitate the implementation of MPS technology in molecular diagnostics.

Introduction
A multitude of laboratory technologies for the detection of DNA mutations have been developed over the last decades. In current diagnostic settings, most frequently a combination of a mutation scanning technique followed by Sanger sequencing of the abnormal DNA fragments is used. Well-known examples of widely used methods to identify the aberrant fragments are single-strand conformation polymorphism (SSCP), conformation-sensitive gel electrophoresis (CSGE), high-performance liquid chromatography (HPLC), and, more recently, high-resolution melting curve analysis (HRMCA) [1,2,3,4]. Despite its higher cost, Sanger sequencing [5] of DNA fragments remains the preferred method for mutation analysis because of its superior sensitivity and specificity and the detailed sequence information that can be obtained in a single-step approach. Improvements in sequencing chemistries, instruments, and data analysis software, as well as increases in throughput and reductions in cost, resulted in the adoption of this technology for routine mutation analysis for monogenic diseases. However, expansion of molecular diagnostics to the realm of multigenic disorders requires the implementation of new methods with increased mutation detection efficiency but without a decrease in cost efficiency.

Massively parallel sequencing (MPS) technologies (see [6,7] for an overview) are an interesting alternative because of their higher throughput and lower cost per base as compared to Sanger sequencing. In addition, throughput and cost per base for MPS technologies are rapidly evolving (from 0.1 Gb per run for the Roche Genome Sequencer at the end of 2006 to 150-300 Gb per run for Illumina's HiSeq2000 and ABI's 5500XL platform in 2011), at a speed vastly surpassing the evolution rate seen in the semiconductor industry (Moore's law). In order for MPS to take over the role of Sanger sequencing and to evolve into the method of choice for next generation molecular diagnostics (NGMD), a number of hurdles need to be cleared and questions answered.
The goal of this paper is to remove a number of these obstructions by describing strategies which enable mutation analysis through MPS, by presenting tools for determination of the required coverage and the number of patients who can be screened in a single run, and by listing possible sources of false negative or false positive mutation calls along with possible solutions. The guidelines and tools provided in this study were formulated or calculated based on pyrosequencing data obtained on the GS-FLX instrument (454-Roche), but may provide useful insights for applications with other MPS chemistries as well.

Materials and Methods

Generation of sequencing data

Sample preparation. The data presented in this article are derived from 10 GS-FLX sequencing runs (using both Standard and Titanium chemistries) on samples prepared with different approaches. In total, over 200 patient samples were evaluated in these 10 experiments. To pool different patients in a single experiment, multiplex identifier (MID) tags were attached to all patients' samples. Different approaches were evaluated to attach these tags.

Approach 1: the samples investigated for recessive congenital deafness (15 genes: GJB2, SLC26A4, MYO15A, OTOF, CDH23, TMC1, TMPRSS3, TECTA, TRIOBP, TMIE, PJVK, ESPN, PCDH15, ESRRB, MYO7A - 643 amplicons) were prepared with PCR (Kapa Taq kit (Sopachem)) followed by an adapter ligation approach. All PCR products for a given sample are pooled, thereby reducing the number of parallel reactions in the next step from the number of sample-amplicon combinations (SAC) to the number of samples. The next step involves ligation of adapters containing the sequencing recognition sites (A & B) followed by a sample-specific barcode (ligation was performed according to the GS FLX Shotgun DNA library preparation quick guide). Once MID-containing adapters are ligated, samples can be pooled into a single tube for MPS (see below).

Approach 2: during the first PCR, gene-specific amplicons are generated using primers modified at their 5' end with a universal M13 linker sequence. In the first experiments (2 out of 10 experiments), we equimolarly pooled singleplex reactions. In further experiments the first amplification step was replaced by a multiplex PCR in which several amplicons of the same patient are combined (we typically aimed for 10-plex PCR reactions) to reduce the workload and consumable cost. After 1/1000 dilution of the PCR products, a second round of PCR is performed. In the second PCR, primers containing the common A or B sequence, a patient-specific barcode sequence (MID) and a universal linker sequence (M13) were used to amplify the initial PCR products, thereby extending them with the sequences that are required to initiate sequencing and to distinguish reads from the different patients. Primer sequences, reaction conditions and the constitution of the multiplex reactions are described by De Leeneer et al. [8] and Baetens et al. [9].

Pooling prior to sequencing. PCRs prior to pooling were performed in the presence of a saturating dye (LCgreen+, Idaho Technology Inc) on a real-time PCR instrument (CFX384, Bio-Rad). PCRs were normalized and equimolarly pooled in relation to the RFU data (endpoint fluorescence). This pool was purified with a High Pure PCR Cleanup Micro kit (Roche). During optimization of the multiplex reactions, FAM-labeled MID primers were used to evaluate equimolarity between amplicons within one reaction, and fluorescent peaks were separated on an ABI3730 capillary system.
Sequencing reaction and data analysis

Emulsion PCR and sequencing reactions on the GS-FLX (454-Roche) were performed according to the manufacturer's instructions. On average, 380,000 (range: 290,000-520,000) reads were obtained in a standard GS-FLX run and 1,000,000 when the Titanium chemistry was used (range: 800,000-1,200,000). In each experiment, a minimum of 90% of all reads mapped to the reference sequence. FASTA files were analyzed with the in-house developed variant interpretation pipeline (VIP) software (version 1.3) [10]. Distribution plots and log-normal curve fitting were performed using the GraphPad PRISM 5 software. Statistical analysis of the potential bias introduced during emulsion PCR and pyrosequencing was performed using the R package. The mean of both relative coverages (obtained after sequencing on the GS-FLX) and relative fluorescent signals (obtained on capillary electrophoresis on an ABI3730) was used to center both data sets for each multiplex prior to principal component analysis, to remove the effect of the different multiplex sizes.

Calculation of coverage depth as a function of sensitivity

With Sanger sequencing, a two-fold (forward and reverse) coverage is considered to be sufficient for molecular diagnostics, provided that sequences are of high quality. At this moment there is no clear consensus on the required minimum coverage (MC) to reliably detect heterozygous variations using MPS technologies. Current guidelines typically suggest a 20-fold coverage [11], with little justification of the proposed value or of how it should be adjusted depending on sequencing and analysis procedures or context. Because MPS is based on the sequencing of single, clonally amplified molecules, sampling effects need to be taken into account at low coverage. At one-fold coverage there is a 50% chance to detect a heterozygous variant and a 50% ((1/2)^1) chance to miss it. At two-fold coverage, there is a 25% chance to detect only the mutant allele, a 50% chance to detect both, and a 25% ((1/2)^2) chance to detect only the wild-type allele. Even at 10-fold coverage there is a chance of about 1/1000 ((1/2)^10) to miss the variant allele completely. Since data analysis usually involves filtering out low-frequency variants to reduce false positives resulting from sequencing errors (see below), the minimal number of reads for detection of heterozygous variants depends on the applied filter settings. Table 1 shows an overview of the theoretically required minimal coverage (MC) to reliably detect heterozygous variants at varying minimum allele frequencies with a given power. Calculations were based on the following: the interpretation of a specific base has only two possible outcomes (equal to or different from the reference sequence). Theoretically, the probability of observing a variant in a specific number of reads (#Rv) out of all reads for a SAC (total coverage) can be derived from a binomial distribution with success probability equal to the expected mutant variant frequency in the total number of reads (50% for heterozygous variants without variant-related alignment errors). The binomial distribution can also be used to tabulate the cumulative probabilities as a function of the total coverage and the relative variant frequency that is deemed sufficient to indicate a real variant, i.e. above the filter level below which variants are thought to be sequencing errors (#Rv/total coverage).
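As an illustration of this binomial calculation, the following short Python sketch (our own, not part of the paper's spreadsheet tools; scipy assumed available) tabulates the minimum coverage. We adopt the convention that a variant passes the filter when its read frequency is at least the filter level, and define MC as the lowest coverage from which the detection power stays at or above the target; with these assumptions the sketch reproduces values quoted in the text, such as 38-fold coverage for a 25% filter at P = 99.90%.

import math
from scipy.stats import binom

def minimum_coverage(filter_freq, power, het_freq=0.5, n_max=500):
    # Lowest coverage from which a heterozygous variant (expected read
    # fraction het_freq) reaches the error filter with probability >= power.
    mc = None
    for n in range(n_max, 0, -1):  # scan downwards so the answer is stable
        k = math.ceil(filter_freq * n - 1e-9)  # variant reads needed to pass the filter
        if binom.sf(k - 1, n, het_freq) >= power:  # P(#variant reads >= k)
            mc = n
        else:
            break  # power dips below the target at this coverage
    return mc

print(minimum_coverage(0.25, 0.999))  # 38, as quoted in the text
print(minimum_coverage(0.20, 0.999))  # 27
print(minimum_coverage(0.30, 0.999))  # the text reports 61 for this setting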
Hence, one can simply look up the coverage that is required for detecting a heterozygous variant at a minimum defined variant frequency with a predefined power. This coverage is referred to as the minimum coverage (MC) for a given SAC. To facilitate interpretation, power values (P) were converted into scores (Q) (similar to the calculation of PHRED scores [12]): Q = -10*log10(1-P). Not surprisingly, MC values increase as the required power to detect heterozygous variants increases. There is also a strong dependency on the sequencing error filter level: if only variants present in 30% of the reads are considered as true variants, a 61-fold MC is required, while a coverage depth of only 27 is needed if the filter threshold is lowered to 20% (both for P = 99.90%, corresponding to a Phred score of 30, required for standard molecular diagnostics). When plotting obtained variant frequencies vs. coverage of unfiltered data, the largest deviations from the binomial distribution are observed at the lower allele frequencies. Because the majority of such data points are sequencing errors, especially related to homopolymers (see below), dispersion can best be evaluated at frequencies above 50%. Allele-specific amplification biases during sample preparation or emulsion PCR are the most likely cause of any remaining dispersion. A stepwise analysis starting from unfiltered variant data in one experiment (9721 variants) to determine the dispersion is shown in Supporting Information Tool S2. We calculated the overall fraction of heterozygous variants with a frequency deviating from the expected 50% ratio, and this was estimated to be 10%, after correcting for sequencing errors being interpreted as heterozygous variants.

Number of samples per run as a function of MC

Determination of the required minimum coverage is not sufficient to calculate the number of SAC that can be analyzed with a given number of reads, because the coverage may differ between SAC. In an ideal experiment, all SAC have exactly the same coverage, matching the theoretically determined required MC. In practice, some SAC will display a lower coverage than others. Since these require at least the MC as well, other SAC will have a higher coverage than absolutely required, wasting sequencing capacity. The correction factor to convert the minimum coverage into the required average coverage can be derived from an evaluation of the distribution of the coverage. Figure 1A plots the number of SAC as a function of coverage and shows that the variation in coverage depth is log-normally distributed. Coverage data of 3300 SAC were used to generate this plot (3 genes: FBN1, TGFBR1, TGFBR2 for 30 patients). Only at low coverage (<40) does the distribution deviate from its Gaussian fit. This reflects a low number of reactions that failed to give a normal coverage. By quantifying this variation in coverage depth, one can determine how many extra reads are needed to cover all sequences at the required level. By plotting the cumulative distribution of the fold difference of the mean coverage to the SAC coverage, one can determine the correction factor by which the mean coverage needs to be multiplied in order to have a given fraction of SAC with at least the minimum coverage (Figure 1B). The value on the X-axis at which the histogram passes the 90% threshold is defined as the correction factor (F90). More stringent correction can be obtained by calculating a correction factor at higher thresholds (e.g. F95).
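The same correction factor can be computed directly from pilot coverage data. The following minimal Python sketch (our illustration, not the authors' spreadsheet) implements that calculation:

import numpy as np

def spread_correction_factor(coverages, fraction=0.90):
    # Fold difference of the mean coverage to each SAC's coverage; the
    # `fraction` quantile of these fold differences is F90 (or F95, ...).
    cov = np.asarray(coverages, dtype=float)
    return float(np.quantile(cov.mean() / cov, fraction))

# Required average coverage so that, e.g., 90% of SACs reach the minimum
# coverage MC (assuming coverage scales proportionally with sequencing depth):
# required_mean = MC * spread_correction_factor(pilot_coverages, 0.90)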
Supplemental Tool S2 provides an easy-to-use calculation template (MS Excel). Based on the coverage obtained in a proof-of-principle experiment, one can simply calculate the spread correction factor and the number of patients that can be screened in a run, ensuring sufficient power to detect heterozygous variants. The 'spread correction factor' sheet calculates the obtained spread correction factor (see also Figure 1B) based on the coverage of different SAC in an experiment. In the 'samples per screening' sheet, additional requirements like the predefined power, the threshold for sequencing error filtering, instrument specifications and the number of amplicons can be filled in. For example, for a BRCA1/BRCA2 screening of 111 amplicons using P = 99.90%, threshold = 25% and spread correction factor 2.5, the tool determines that 83 samples can be screened in a single GS-FLX (Titanium chemistry) run with 90% of sequences covered sufficiently to provide a minimum power of 99.9%. This number decreases to 65 samples if 95% of the sequences need to be covered sufficiently.

Emulsion PCR

We assumed that a narrower spread in coverage would be obtained by sequencing an equimolar pool of fragments or amplicons. To test the assumption that the emulsion PCR does not introduce a substantial bias, we compared the relative peak intensities (determined by fragment analysis on an ABI3730xl) of 9 different fluorescently labeled multiplex PCRs (6- to 11-plexes), amplified on 5 different samples (total of 360 SACs), with the corresponding relative coverage after sequencing. Overall there seems to be a good 1:1 relationship between the relative fluorescence and the relative coverage, indicating that a certain increase in relative fluorescence on average induces an equal increase in relative coverage (Figure 2). In contrast to the findings obtained for shotgun sequencing [13], our data indicate that sequencing bias is limited and that sequencing cost efficiency can be improved by generating more equimolar input pools. Equimolarity can be achieved by optimizing amplification conditions or by normalizing PCR product concentrations. Although normalization can potentially increase sequencing efficiency, one may lose overall processing efficiency due to the effort required to normalize the SAC. With good primer design tools one should be able to obtain similar DNA quantities (as measured by end-point fluorescence in a qPCR reaction with a saturating DNA-binding dye) for the 90% best assays. For such screenings, the majority of amplicons do not require any normalization, and a significant portion of the remaining amplicons can be made equimolar by a simple normalization. Figure 3 shows the distribution of the relative end-point fluorescence intensities (RFU, relative to the maximum fluorescence) across 627 different qPCR reactions on a single sample amplified for 15 genes associated with hearing loss. It is important to note that comparison of end-point fluorescence values is only valid for singleplex PCR products of comparable length.

Sequence quality analysis

Sequence quality was determined using the GS-FLX basecaller. Quality scores per base were averaged across all reads within a single run (~700,000 reads of 1 GS-FLX Titanium experiment for BRCA1/2 and FBN1, TGFBR1, TGFBR2 amplicons), and plotted as a function of the position of the sequenced base (Figure 4A). Because of the setup of this amplicon sequencing run, the number of reads longer than 400 bp was too low to provide accurate quality estimations in that range.
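For readers without access to the spreadsheet, the core of the 'samples per screening' calculation can be expressed in a few lines. The sketch below is our own illustration; the usable read count per run is an assumption (actual numbers depend on run performance, e.g., the fraction of reads mapping to the reference), so it only approximates the worked example above.

def samples_per_run(usable_reads, n_amplicons, min_coverage, correction_factor):
    # Each sample needs n_amplicons * min_coverage reads, inflated by the
    # spread correction factor so that enough SACs reach the MC.
    reads_per_sample = n_amplicons * min_coverage * correction_factor
    return int(usable_reads // reads_per_sample)

# Illustrative call with an assumed usable read count for a Titanium run:
print(samples_per_run(usable_reads=900_000, n_amplicons=111,
                      min_coverage=38, correction_factor=2.5))  # ~85 samples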
Quality scores (Q) were converted into probabilities of erroneous basecalls (P) as follows: P = 10^(-Q/10), corresponding to the better-known Phred scores. Pyrosequencing reactions are characterized by a low false call rate for substitutions, but also by a higher error rate for insertions and deletions, especially in homopolymeric regions [14]. A combination of quality and allele frequency filters may eliminate most errors, but fails to distinguish real insertions/deletions from sequencing errors in the case of longer homopolymers (7 or more repeats) (Figure 4B).

Discussion

As massively parallel sequencing has the ability to become the standard for next generation molecular diagnostics, more insight is urgently needed into the limitations of the technology, and tools are required to standardize the quality of the diagnostic tests offered in various laboratories. In this study, we thoroughly evaluated data obtained with 10 GS-FLX experiments, allowing us to shed light on a number of important issues and provide workarounds. Current massively parallel sequencers offer a throughput per run that is insufficient for complete genome sequencing at affordable cost in a diagnostic setting, but that mostly exceeds the requirements for targeted resequencing of single DNA samples. Strategies for next generation molecular diagnostics will therefore have to deal both with the selection of regions of interest and with sample multiplexing. Regions can be selected by either hybridization-based enrichment or PCR amplification. Enrichment by capturing DNA fragments on oligonucleotides - on array (e.g. NimbleGen, Febit) or in solution (e.g. Agilent, Illumina) - has the advantage that many regions can be targeted in parallel (target multiplexing). While this allows enrichment of a high number of regions of interest (up to an entire human exome), it is well known to introduce large variations in coverage [15,16]. In addition, enrichment is rarely complete: some regions are not captured whereas other, unwanted regions may be co-purified. The main drawbacks of this technology for molecular diagnostics are its high cost and the large quantities of high-quality DNA that are required. Commercially available sample preparation approaches like RainDance, the Fluidigm Access Array or, more recently, HaloPlex PCR can increase throughput tremendously, but are less cost efficient for smaller experiments. Using the more classical approach of small-scale, self-designed PCR assays has the advantage that the same set-up as for Sanger sequencing can be maintained, facilitating confirmation of the detected mutations afterwards. For these reasons, we evaluated the latter for NGMD. Sample multiplexing can be achieved by physically separating samples in the sequencing reaction or by tagging the amplicons with different sample-specific sequences during library preparation. Physical separation on current MPS instruments offers limited flexibility in the number of samples to be multiplexed (up to 16 on the GS-FLX) and may reduce the available sequencing capacity by blocking parts of the available sequencing space. Therefore, a sample tagging approach is preferred. For applications where different samples are analyzed for different genes, no special multiplexing modifications are needed when sequences can easily be attributed to the different samples based on correct alignment to the gene of interest.
Four major amplification-based approaches for NGMD are currently used worldwide: 1) PCR with fusion primers (GS-FLX), 2) PCR followed by adapter ligation (GS-FLX), 3) two consecutive rounds of PCR (GS-FLX), and 4) shearing of concatenated PCR products followed by adapter ligation (various MPS platforms). It must be noted that other approaches or variations on the methods described may be used as well. In this study, we evaluated approaches 2 and 3.

[Figure 3 caption, continued: This fraction of amplicons can be increased to 96% by using a double volume for the PCRs in the 0.5-0.25 RFU range, and to 97% by using a quadruple volume for the PCRs in the 0.25-0.125 RFU range. The concentration of the remaining 3% of PCR reactions is too low to be efficiently used.]

The main advantages of approach 2 are its simplicity and ease of set-up. The drawback is the large number of individual PCR reactions that need to be performed. Hence, we concluded that this approach is best suited if a screening only needs to be performed a few times or when results are quickly required and one cannot afford time for optimization. As soon as a few hundred samples need to be screened, approach 3 may be the preferred alternative. By multiplexing PCR reactions in approach 3, one can reduce the workload and consumable cost for sample preparation. Although optimization of multiplex PCR may be challenging, there is a good return in increased efficiency (in terms of cost and workload to prepare samples) for tests that will be run many times - as is the case in diagnostic sequencing. Further optimization may be achieved if the first and second rounds of PCR can be combined into a single PCR containing the two types of primers (inner target-specific and outer sample-specific primers). Because of fundamental differences between the traditional and the so-called next-generation sequencing methods, people are uncertain how to deal with coverage and how to interpret variants, errors and quality scores. Despite the availability of some guidelines on required coverage provided by sequencing instrument suppliers, there was no theoretical framework to actually calculate the required minimum coverage. We here provide such a framework and implement it in a spreadsheet template that can be used to determine the required coverage and the number of patients that can be screened in a single run. A number of sources of false positives and false negatives are identical for both Sanger and massively parallel sequencing and hence independent of the fold coverage. However, because MPS is based on the sequencing of single, clonally amplified molecules and uses a completely different sequencing chemistry, new types of error sources must be taken into account. Knowing the possible sources of error, one may optimize sample preparation and sequencing protocols, and take measures to adjust the data analysis pipeline for these new types of errors. Table 1 shows an overview of the theoretically required MC to reliably detect heterozygous variants at varying minimum allele frequencies with a given power. Note that this theoretical MC value only accounts for allelic drop-out due to sampling effects and that it should be treated as a lower limit for the actual MC, which may be larger because of additional variation affecting allele frequencies.
Because of inter-lab variation we cannot propose a single value for the required minimum coverage, but labs can determine their own MC value based on their sequencing error rate (filter setting) and the required power to detect variants (Table 1, Supplemental Tool S1). When new to NGMD, filtering at 25% and aiming for 99.9% power (resulting in an MC of 38) may be a good starting point. A 5-fold coverage is expected to be sufficient to tolerate occasional sequencing errors when screening for homozygous variations only. Based on the strategies and methods described in this paper we successfully developed and validated the screening of the complete coding region of the BRCA1 and BRCA2 genes in a diagnostic setting [8], demonstrating the feasibility of performing more efficient molecular diagnostics using massively parallel sequencing. The application of massively parallel sequencing for clinical sequencing of BRCA1/2 on the Illumina GAII has recently been described [17,18]. We agree with Morgan et al. that the major remaining hurdle is the availability of data analysis tools that provide the required high quality for in-vitro diagnostics and that are really tailored towards a routine diagnostic setting. The availability of commercial software packages, the advent of smaller-scale MPS instruments such as the GS Junior and Illumina MiSeq, and the development of the so-called third-generation sequencers like Ion Torrent are expected to push this new sequencing technology into the field of diagnostics, starting with the multigenic disorders for which there are no good alternatives available at this moment. However, because of its proven track record, its superior flexibility and its large installed base, Sanger sequencing is unlikely to be replaced in the near future for smaller screening projects, and it will remain a valuable technology for confirmation of mutations observed by other technologies.

[Figure 4 caption: GS-FLX sequence quality analysis. a) Average quality score as a function of the position within the reads for a representative dataset (full Titanium run with amplicons for breast cancer and familial aortic aneurysm screenings). Across the first 400 bp there is an average quality of 35.3, corresponding to a predicted error rate of 0.029%. b) Comparison of the observed homopolymer length in a series of sequencing runs to the expected length based on the reference sequence. Results are plotted as the fraction of reads having a correct homopolymer length estimation (n), an underestimation of the homopolymer length (n-1, n-2, n-3) or an overestimation (n+1, n+2, n+3). The vast majority of reads for homopolymers of up to 6 repeats have correct length estimation; less than 2% are overcalls and less than 10% are undercalls. For homopolymers of 7 repeats, three quarters of the reads are correctly called and over 20% of the reads are interpreted to be missing one repeat. Only by filtering for low allele frequencies can these repeats be analyzed. At 8 repeats only about half of the reads are correctly called; at even larger homopolymer lengths only a minority of reads have a correct basecalling.]
2016-05-12T22:15:10.714Z
2011-09-30T00:00:00.000
{ "year": 2011, "sha1": "4942283bf1e9517e71f5cca8c77fc8c16182f192", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0025531&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "90195e0748733260ae5771b2ca3cc2b17e5402f3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
16459885
pes2o/s2orc
v3-fos-license
Non-Enhanced MR Imaging of Cerebral Aneurysms: 7 Tesla versus 1.5 Tesla

Purpose: To prospectively evaluate 7 Tesla time-of-flight (TOF) magnetic resonance angiography (MRA) in comparison to 1.5 Tesla TOF MRA and 7 Tesla non-contrast-enhanced magnetization-prepared rapid acquisition gradient-echo (MPRAGE) imaging for the delineation of unruptured intracranial aneurysms (UIA). Material and Methods: Sixteen neurosurgical patients (male n = 5, female n = 11) with single or multiple UIA were enrolled in this trial. All patients were accordingly examined at 7 Tesla and 1.5 Tesla MRI utilizing dedicated head coils. The following sequences were obtained: 7 Tesla TOF MRA, 1.5 Tesla TOF MRA and 7 Tesla non-contrast-enhanced MPRAGE. Image analysis was performed by two radiologists with regard to delineation of aneurysm features (dome, neck, parent vessel), presence of artifacts, vessel-tissue contrast and overall image quality. Interobserver accordance and intermethod comparisons were calculated by the kappa coefficient and Lin's concordance correlation coefficient. Results: A total of 20 intracranial aneurysms were detected in 16 patients, with two patients showing multiple aneurysms (n = 2, n = 4). Out of 20 intracranial aneurysms, 14 were located in the anterior circulation and 6 in the posterior circulation. 7 Tesla MPRAGE imaging was superior to 1.5 and 7 Tesla TOF MRA in the assessment of all considered aneurysm and image quality features (e.g. image quality: mean MPRAGE7T: 5.0; mean TOF7T: 4.3; mean TOF1.5T: 4.3). Ratings for 7 Tesla TOF MRA were equal to or higher than those for 1.5 Tesla TOF MRA for all assessed features except artifact delineation (mean TOF7T: 4.3; mean TOF1.5T: 4.4). Interobserver accordance was good to excellent for most ratings. Conclusion: 7 Tesla MPRAGE imaging demonstrated its superiority in the detection and assessment of UIA as well as in overall imaging features, offering excellent interobserver accordance and the highest scores for all ratings. Hence, it may have the potential to serve as a high-quality diagnostic tool for pretherapeutic assessment and follow-up of untreated UIA.

Introduction

Rupture of an intracranial aneurysm is associated with high morbidity and mortality rates, as it is known to account for 80% of all subarachnoid hemorrhages (SAH), causing 25% of all cerebrovascular-related deaths [1]. Size and shape of unruptured intracranial aneurysms (UIA) are known to be significantly associated with rupture rates; hence, high-quality assessment of UIA and their related features plays an important role in potential aneurysm treatment [2][3][4]. Digital subtraction angiography (DSA) is considered the gold standard for the detection of UIA. Nevertheless, due to the application of ionizing radiation and iodinated contrast agent as well as the general risk associated with invasive interventional procedures, DSA carries a 0.2%-0.5% risk of severe permanent neurological complications [5,6]. Within the past 15 years, 1.5 Tesla magnetic resonance angiography (MRA) has evolved to become an excellent noninvasive diagnostic alternative to DSA, yielding sensitivity rates of 79-97% for the detection of small UIA [7][8][9][10]. With the successful introduction of (ultra-)high-field non-enhanced MRA of the intracranial vasculature, recent studies performed at 3 and 7 Tesla reported improved depiction of UIA with sensitivity rates comparable to the gold standard DSA [11][12][13][14].
The increase of the magnetic field strength from 1.5 to 3 Tesla, and further to 7 Tesla, allowed for a successful translation of the increased signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) into improvements in spatial resolution and vessel contrast. The purpose of this prospective study was to evaluate the image quality and diagnostic ability of 1.5 Tesla TOF MRA in the assessment of UIA in comparison to ultra-high-field TOF MRA and non-enhanced MPRAGE imaging.

Ethics Statement

The study was conducted according to the principles expressed in the Declaration of Helsinki and was approved by the authorized ethical review board of the University Duisburg-Essen. Written informed consent was obtained before each examination.

Study Design and Population

This prospective study evaluates the diagnostic ability of 7 Tesla TOF MRA in comparison to 1.5 Tesla TOF MRA and 7 Tesla non-contrast-enhanced MPRAGE for the delineation of UIA. The study group comprised 16 neurosurgical patients (male n = 5, female n = 11, average age 53.38 years; range 38-70 years). Inclusion criteria were: 1) single or multiple UIA, 2) age 18-80 years, 3) ability to give informed consent and 4) legal …

Scanners and Coil Systems

Ultra-high-field examinations were acquired on a 7 Tesla whole-body MRI system (Magnetom 7T, Siemens Healthcare, Erlangen, Germany) utilizing a 32-channel Tx/Rx head coil (Nova Medical, Wilmington, USA). The scanner is equipped with a gradient system of 45 mT/m maximum amplitude and a slew rate of 200 mT/m/ms. Concomitant 1.5 Tesla examinations were acquired on a whole-body MRI system (Espree, Siemens Healthcare) equipped with a 12-channel Rx head coil (Siemens Healthcare, Erlangen, Germany). The scanner is equipped with a gradient system of 33 mT/m maximum amplitude and a slew rate of 200 mT/m/ms.

Examination at 7 Tesla

Prior to the acquisition of the diagnostic sequences, B0 shimming was performed using a vendor-provided gradient echo sequence and algorithm based on the work of Schar [15]. For B1 field mapping and local flip angle optimization a vendor-provided spin-echo-type sequence was used. After a slice-selective excitation, two refocusing pulses generate a spin echo and a stimulated echo, respectively. The algorithm is mainly based on the work of Hoult [16].

TOF MRA sequence at 7 Tesla

The TOF MRA sequence is based on a 3D FLASH sequence with flow compensation and tilt-optimized non-saturated excitation (TONE) across the slab [17]. Datasets were acquired with an excitation flip angle of 18°, TE = 4.34 ms, TR = 20 ms, FOV 200 mm × 169 mm × 46 mm, 112 slices per slab (oversampling 14%), GRAPPA acceleration factor R = 4 (phase direction), partial Fourier 6/8 in both slice and phase directions, a matrix of 896 × 756 (non-interpolated), and a voxel size of 0.22 × 0.22 × 0.41 mm^3, in a total acquisition time of 6 min 22 s. The variable-rate selective excitation (VERSE) algorithm [Conolly et al., 1988] was used to reduce the SAR contribution of the excitation and venous saturation RF pulses [Schmitter et al., 2011]. The flip angle of the saturation RF pulses was additionally reduced (35° instead of the 90° which is normally used) to further ameliorate SAR constraints [17].

MPRAGE sequence at 7 Tesla

MPRAGE imaging was obtained with the following sequence parameters: TR = 2500 ms, TE = 1.54 ms, TI = 1100 ms, …

[Table 3 caption: Ratings for dome, neck and parent vessel delineation (mean ratings from both readers).]
TOF MRA sequence at 1.5 Tesla

The TOF MRA sequence was based on a clinically used standard 3D gradient echo sequence. Datasets were acquired with an excitation flip angle of 25°, TE = 7 ms, TR = 26 ms, a matrix of 512 × 448 (interpolated), FOV 180 mm × 157 mm, 3 slabs with 44 slices per slab (oversampling 18.2%) and a voxel size of 0.35 × 0.35 × 0.7 mm^3, in a total acquisition time of 4 min 3 s.

Image Evaluation

Image evaluation was performed separately and independently by two experienced radiologists on standard post-processing Picture Archiving and Communication System (PACS) workstations (Centricity RIS 4.0i, GE Healthcare, USA). Both radiologists were blinded to the image acquisition methods and intracranial pathologies. Visual evaluation was performed using 3D image reconstructions; all measurements were performed on 2D multiplanar reconstructions of the 3D datasets. The total number of aneurysms, the maximal diameter as well as the diameter of the neck and dome of each aneurysm were assessed. For qualitative analysis the following features were evaluated utilizing a five-point scale: delineation of the aneurysm dome, neck and parent vessel, presence of artifacts, vessel-tissue contrast and overall image quality. The vessel-tissue contrast ratio, VTCR = (Signal_MCA - Signal_GM) / (Signal_MCA + Signal_GM), of the middle cerebral artery (MCA) was assessed in relation to the surrounding gray matter (GM) for the 7 Tesla MPRAGE, 7 Tesla TOF and 1.5 Tesla TOF sequences. Therefore, regions of interest (ROI) were placed in the largest diameter of the proximal left M1 segment (Signal_MCA) and in adjacent gray matter (Signal_GM). The average diameter of the vessel ROI was 3-5 mm; the ROI for brain parenchyma amounted to approximately 10 mm.

Results

All 1.5 Tesla and 7 Tesla scans were performed successfully without any relevant side effects. Both readers identified twenty intracranial aneurysms in 1.5 Tesla and 7 Tesla TOF MRA and 7 Tesla MPRAGE imaging. Fourteen of the twenty intracranial aneurysms were located in the anterior circulation: middle cerebral artery (n = 7), anterior cerebral artery (n = 2), internal carotid artery (n = 4) and posterior communicating artery (n = 1). Six aneurysms were detected in the posterior circulation: basilar tip (n = 2), posterior cerebral artery (n = 2), posterior inferior cerebellar artery (n = 1) and superior cerebellar artery (n = 1). Two patients had multiple intracranial aneurysms (2 and 4 aneurysms, respectively). Ten of the twenty identified aneurysms were defined as small (3-5 mm), five as medium-sized (6-10 mm), three as large (11-25 mm) and one was rated a giant cerebral aneurysm (>25 mm). Quantitative measurements for dome and neck diameter showed larger dome/neck ratios for 7 Tesla MPRAGE (p = 0.0597) and 7 Tesla TOF MRA (p = 0.0305) compared to 1.5 Tesla TOF MRA. There was no significant difference between the dome/neck ratios in 7 Tesla MPRAGE and 7 Tesla TOF MRA (p = 0.5586). In accordance with the delineation of the aneurysm dome and neck, 7 Tesla MPRAGE also offered the best assessment of the parent vessel, with a mean rating of 4.8 (excellent) (SE 0.128). TOF MRA yielded good assessment of the parent vessel at both field strengths (1.5 Tesla TOF MRA mean: 4.4 (SE 0.129); 7 Tesla TOF MRA mean: 3.9 (SE 0.146)). Interobserver accordance was substantial to almost perfect for most readings, with slightly lower accordance (fair) for delineation of the dome for 1.5 Tesla TOF MRA and 7 Tesla TOF MRA. Details are shown in Table 2. Table 3 shows the mean ratings for delineation of the dome, neck and parent vessel from both readers.
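As a small worked illustration of the contrast measurement just described, here is a minimal Python sketch (our own; the ROI intensity arrays in the example call are hypothetical) computing the VTCR from mean ROI intensities:

import numpy as np

def vessel_tissue_contrast_ratio(roi_vessel, roi_gm):
    # VTCR = (S_MCA - S_GM) / (S_MCA + S_GM), from mean ROI intensities.
    s_mca = float(np.mean(roi_vessel))  # mean signal in the left M1 vessel ROI
    s_gm = float(np.mean(roi_gm))       # mean signal in adjacent gray matter
    return (s_mca - s_gm) / (s_mca + s_gm)

# Hypothetical example: bright vessel against suppressed background tissue
print(vessel_tissue_contrast_ratio([420.0, 435.0, 428.0], [150.0, 162.0, 158.0]))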
Table 4 lists the combined readings of both raters for dome and neck diameter in mm and the calculated dome/neck ratio for all aneurysms. Examples of aneurysm dome, neck and parent vessel delineation in all three MRI sequences are shown in Figures 1 and 2.

Artifacts, vessel-tissue contrast and overall image quality

Seven Tesla MPRAGE imaging was the sequence least impaired by artifacts (excellent), with a mean value of 4.9 (SE 0.000). 7 Tesla and 1.5 Tesla TOF MRA showed equivalent artifact impairment (good), with mean values of 4.3 (SE 0.147) for 7 Tesla and 4.4 (SE 0.124) for 1.5 Tesla imaging. The Wilcoxon matched-pairs two-sided signed-ranks test showed significant differences between the 7 Tesla MPRAGE and 7 Tesla TOF MRA ratings (p = 0.0002) and between the 7 Tesla MPRAGE and 1.5 Tesla TOF MRA ratings (p = 0.0005). No significant differences were detected between the 7 Tesla TOF MRA and 1.5 Tesla TOF MRA ratings (p = 0.3438). Figure 3 shows examples of pulsation artifacts for the 1.5 Tesla and 7 Tesla TOF MRA sequences. Interobserver accordance was almost perfect (kappa coefficient) for most readings, with slightly lower accordance (substantial) for artifact and overall image quality assessment in 1.5 Tesla TOF MRA. Details are shown in Table 2. Table 5 shows the mean ratings for artifact delineation, vessel-tissue contrast and overall image quality. Table 6 lists the combined readings of both raters for the signal intensities of the left middle cerebral artery and adjacent gray matter, and the calculated vessel-tissue contrast ratio, for all subjects.

Discussion

With DSA remaining the gold standard, 1.5 Tesla TOF MRA has evolved to become a reliable and equivalent noninvasive technique for the detection and follow-up of UIA larger than 3 mm [1,[23][24][25][26][27]. The increase in SNR and CNR associated with the increase in magnetic field strength has been shown to result in superior vessel (disease) assessment at 3 Tesla compared to 1.5 Tesla [26]. With a further increase of the field strength to 7 Tesla, the combination of the associated increase in SNR (up to 4-5 fold higher than at 1.5 Tesla) and longer T1 relaxation times [28] is known to offer improved vessel-tissue contrast based on more efficient background tissue suppression [29]. Studies in healthy volunteers at 7 Tesla have shown superior vessel delineation compared to 1.5 Tesla [30,31]. Furthermore, patient studies demonstrated the high diagnostic ability and superiority of 7 Tesla non-enhanced TOF MRA for evaluation of the intracranial vasculature and aneurysm detection compared to DSA and/or 1.5 Tesla TOF MRA [13,32]. Initial studies of 7 Tesla non-enhanced T1-weighted brain imaging revealed an incidental finding in the form of a homogeneously hyperintense signal of the arterial vasculature. This incidental finding bears a strong diagnostic potential for non-enhanced high-quality vessel imaging at 7 Tesla, as demonstrated in numerous studies [18,[32][33][34][35][36]. In a study presented at the ISMRM 2010, Grinstead et al. analyzed the primary source of the hyperintense vessel signal. Their investigations showed an association of the high vessel signal with the lack of body RF transmit coils at 7 Tesla, resulting in the utilization of head coils for both transmit and receive. Hence, non-selective inversion pulses effectively become slab-selective inversion pulses. Furthermore, a combination of steady-state and inflow effects seems to be accountable. To investigate the diagnostic ability of T1-weighted non-enhanced 7 Tesla MRI, Maderwald et al.
[32] published an intra-individual comparison trial of 7 Tesla TOF MRA, VIBE imaging (three-dimensional volume-interpolated breath-hold examination) and MPRAGE imaging of the intracranial vasculature in 25 subjects. Their results demonstrated the superiority of MPRAGE imaging in the assessment of the non-enhanced vasculature, providing high-quality delineation of all vessel segments and the least impairment due to intraluminal signal variations. Furthermore, MPRAGE and VIBE imaging offered full brain coverage, in contrast to TOF MRA. Zwanenburg et al. [37] confirmed the high diagnostic potential of non-enhanced MPRAGE MRI at 7 Tesla, yielding excellent assessment of cerebral perforating arteries and related anatomical parenchymal structures. In another recent study, the potential diagnostic benefit of the application of contrast agent in 7 Tesla MPRAGE MRI was investigated. The study results revealed only a minor, non-significant improvement from the administration of contrast agent, underlining the high diagnostic potential of non-enhanced 7 Tesla MPRAGE MRI [36]. Based on these previous study results, we decided to include non-enhanced MPRAGE imaging in our 7 Tesla protocol and compare its diagnostic ability to 7 Tesla and 1.5 Tesla TOF MRA regarding the assessment of intracranial aneurysms and their related features. Our study results are in line with previous publications regarding the superiority of 7 Tesla TOF MRA over 1.5 Tesla TOF MRA, as well as of 7 Tesla MPRAGE over 7 Tesla and 1.5 Tesla TOF MRA. However, while previous studies mainly focused on the evaluation of the overall image quality and overall delineation of the aneurysms, our study results deepen the assessment of the diagnostic ability based on a dedicated analysis of numerous aneurysm features and image quality parameters. Due to its high spatial resolution and excellent vessel-to-tissue contrast, MPRAGE MRI offered the best delineation of all assessed aneurysm features. It also yielded the highest scores in overall image quality and the least artifact impairment, with a significant difference to TOF MRA at 7 Tesla and 1.5 Tesla. Furthermore, aside from excellent vessel delineation, it also offers the potential for simultaneous high-quality assessment of related anatomical parenchymal structures and full brain coverage. While 7 Tesla TOF MRA yielded superior diagnostics of the aneurysm dome and neck over 1.5 Tesla, it was slightly inferior in the assessment of the parent vessel. This inferiority was mainly due to amplified intraluminal signal variations at 7 Tesla, resulting in impaired parent vessel delineation. Clearly, our study is not free of limitations. The study group comprised a rather small population of 16 patients with a total of 20 untreated intracranial aneurysms. Nevertheless, to our knowledge this is one of the largest neurosurgical patient cohorts with UIA scanned at both 7 Tesla and 1.5 Tesla MRI published in the literature. Further investigations with larger patient cohorts, also including patients with clipped or coiled intracranial aneurysms, should be the focus of future studies. Ultra-high-field imaging in patients with treated aneurysms has been restricted so far, as neither Guglielmi detachable coils nor aneurysm clips are certified for 7 Tesla MR imaging. Nevertheless, first promising preliminary results on implant safety in cerebral 7 Tesla MRI have recently been demonstrated [Kraff et al., 2013; Noureddine et al., 2012; Noureddine et al., 2013].
Future studies on the diagnostic potential of 7 Tesla MRI for the follow-up of coiled aneurysms would be of high scientific and clinical interest, with a special focus on aneurysm recanalization and its precise detection, comparing MRA at different magnetic field strengths to DSA. Furthermore, another limitation is posed by the lack of a comparison to the diagnostic gold standard, digital subtraction angiography. However, with MRA offering non-invasive vessel diagnostics equivalent to DSA, particularly as applied for aneurysm monitoring (as in our patient cohort), the main focus of this trial was set on a direct comparison of the diagnostic ability at different magnetic field strengths. Finally, as known from previous studies, 3 Tesla MRA is considered to provide improved vessel delineation over 1.5 Tesla. Hence, a comparison of 7 Tesla MRA to 3 Tesla MRA instead of 1.5 Tesla imaging would have been desirable. Unfortunately, this was not feasible for availability reasons in this clinical setting. Nevertheless, 1.5 Tesla MRI is still considered the worldwide clinical standard. Hence, in order to investigate the diagnostic ability of 7 Tesla MRA, a comparison to the worldwide clinical standard (1.5 Tesla) may be a fair comparison after all. In conclusion, we believe our study demonstrates the superiority of 7 Tesla MRA over 1.5 Tesla MRA and underlines the high diagnostic potential of 7 Tesla non-enhanced MPRAGE imaging for the assessment, screening and follow-up of UIA.
2016-05-12T22:15:10.714Z
2014-01-06T00:00:00.000
{ "year": 2014, "sha1": "3a16d69114ddfce04b2af1e26df2203850f37f54", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0084562&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3a16d69114ddfce04b2af1e26df2203850f37f54", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
226217325
pes2o/s2orc
v3-fos-license
Low Cost MR Compatible Haptic Stimulation with Application to fMRI Neurofeedback

The most common feedback displays in the fMRI environment are visual, e.g., displays in which participants try to increase or decrease the level of a thermometer. However, haptic feedback is increasingly valued in computer interaction tasks, particularly for real-time fMRI feedback. fMRI neurofeedback is a clinical intervention that has not yet taken advantage of this trend. Here we describe a low-cost, user-friendly, MR-compatible system that can provide graded haptic vibrotactile stimulation, in an initial application to fMRI neurofeedback. We also present a feasibility demonstration showing that we could successfully set up the system and obtain data in the context of a neurofeedback paradigm. We conclude that vibrotactile stimulation using this low-cost system is a viable method of feedback presentation, and encourage neurofeedback researchers to incorporate this type of feedback into their studies.

Introduction

Haptic, or tactile, feedback is an increasingly common modality for interacting with computer systems given its potential to increase learning [1,2], particularly in virtual reality clinical contexts [3], to provide analogs of real-world experiences [4], and to provide physiologically reactive stimulation [5]. For all of these reasons, haptic feedback systems have been implemented for mechanistic studies using neuroimaging [6][7][8][9], particularly devices that provide vibrotactile stimulation [9][10][11][12][13]. Neurofeedback, in which individuals learn to manipulate brain function, has specifically been shown to benefit from such vibrotactile haptic feedback [14][15][16][17]; biofeedback studies more generally have also shown the benefits of this vibratory modality [18]. There are many neuroimaging methods for which haptic feedback could be applicable [19]; here we focus on fMRI, where, despite a number of proofs-of-concept, vibrotactile stimulation is not yet common [20]. Here, we consider the application of vibrotactile stimulation specifically for fMRI neurofeedback as an example domain in which to overcome the commonly perceived obstacles to its implementation. Real-time functional magnetic resonance imaging neurofeedback (rtfMRI-nf) is becoming a commonly used tool to manipulate hemodynamic activity, with the goals both of better understanding brain-behavior/cognition relationships and of creating new interventions for clinical illnesses [20][21][22]. In a typical neurofeedback setup, participants are instructed to increase or decrease a feedback signal presented to them. This signal is generated by extracting images from the MR scanner, analyzing activity or connectivity in specific brain regions online, and quantifying this activity into a simple feedback signal (most commonly a single value). The most commonly used feedback modality is a visual display [22]. While a large variety of displays have been used (including computer games, brain activation maps and social reinforcement; see [22]), the most commonly used visual display is a thermometer. We have conducted several studies with the goal of training participants to increase hemodynamic activity in their amygdala while recalling positive autobiographical memories [23][24][25]. These studies have demonstrated that both healthy and depressed participants can increase their amygdala hemodynamic response during positive memory recall, and that this has large effects on depressive symptoms and processing biases [25,26].
While participants are generally successful in the neurofeedback task, many comment that they wish they could close their eyes during the feedback so as to more fully immerse themselves in the memories. Haptic feedback would solve this issue, and indeed would be especially useful for paradigms that involve savoring or rumination. Haptic feedback also offers a method for probing the sense of touch, e.g., C-afferent fibers (rather than just vision), and offers an alternative for those who are visually impaired. Haptic feedback may remain rare in fMRI studies, particularly fMRI neurofeedback, because fMRI-compatible haptic systems are often high-cost or are perceived to be too niche or complicated for easy implementation by non-engineers. For example, many of the primary publications cited above use custom systems and are published in engineering journals. The publicly available systems tend to be much more complex and sensitive than what is needed for simple vibrotactile stimulation or neurofeedback, wherein the primary requirement is to be able to sense different amplitudes or frequencies of stimulation. Here, our goals are to describe how a non-engineer can build a simple fMRI-compatible haptic feedback system for under USD 150 in approximately 2 h, and to show the feasibility of using this system in the context of a real-time fMRI neurofeedback protocol in a case series with N = 3. Our specific questions included (1) whether, in a small sample, neurofeedback effects could be detected with haptic as well as visual stimulation (i.e., non-inferiority), (2) whether the effect of the haptic stimulation, in the absence of neurofeedback, would be likely to occlude other signals in areas of interest (here, the amygdala and intraparietal sulcus (IPS)), and (3) whether the effect of haptic stimulation would be detectable in regions associated with responses to somatic stimuli (insula and somatosensory cortex), to validate the interpretation of question 2. These questions are important to answer given the wealth of data suggesting that haptic neurofeedback could offer advantages that visual feedback cannot, e.g., allowing participants to close their eyes in the scanner, and given its inherent primary hedonic features.

Haptic Setup for MRI

The goal for the MRI haptic setup is to stay within the budget and capabilities of a technically interested undergraduate who has no special electrical skills. Typical MRI software, whether geared to provoke different levels of reactivity or to provide neurofeedback, produces a numeric output based on some neural activity that is commonly used to create visual feedback. This value can also be scaled to operate haptic feedback, representing the strength of the desired haptic stimulation (e.g., higher when activity is higher). Specifically, that number can be translated into vibration. In addition, metal components cannot safely be placed in the scanner environment. To address this constraint, the vibration is generated outside the scanner, and a rigid form transmits the vibration into the scanner bore. The system thus has three parts: a controller (Arduino microcontroller), a vibrating element (vibrating motor), and a way to get the vibration from outside the scanner to inside the scanner (PVC tubing). Each of these elements is described below. Supplementary Materials Figure S1 contains our full parts list, with information about at least one source from which each piece can be ordered.
Controller (USD 50): Arduino Uno (~USD 25; Arduino LLC, Boston, MA, USA) and an Adafruit motor shield (~USD 20) (Figure 1, which shows our vibration-controller setup as we assembled it). We have used both the original Arduino Motor Shield (which one can buy pre-assembled) and the Adafruit Motor Shield v2. The Adafruit may be supported for longer, but requires some soldering for the connections to be robust. In addition, wires will need to be screwed into the Arduino, and these could go right to whatever you use to generate the vibration signal. Alternatively, and easier to take apart, the wires from the Adafruit can be converted to end with a Radio Corporation of America (RCA) jack of the type commonly included in consumer audio equipment. Such pre-assembled RCA-to-screw-terminal adapters are inexpensive (~USD 7) and require no soldering (Figure 2a). The motor shield also requires a power source. We use an external 5 volt 2 Amp direct current power supply (~USD 8). These standard plugs can be purchased, then the ends cut off, the wires stripped, and the wires run into the motor shield's power input receptacles (Figure 2b). To preserve robustness in the scanning environment, we hot-glued over all of the screw terminals once the relevant wires were connected.

[Figure 1 caption: The computer is connected, via USB, to an Arduino Uno with an Adafruit motor shield (bottom). The motor outputs are connected to an RCA jack for ease of disassembly (center), which is connected, via another RCA jack (center), to a motor, which is inserted and glued into a PVC tube (top left) in the control room. This setup is sufficient to produce vibrations through the length of the PVC tube when they are triggered by the computer.]

[Figure 2 caption, partial: (b) ... 1 A AC-to-DC power source. We added RCA jacks to the power source and Arduino (in a) for easy assembly/disassembly. (c) The motor outputs of the Arduino are connected, via RCA jacks, to a 3 V motor, which is inserted and glued into a PVC tube (side and top views shown).]
Software: The Arduino is a microcontroller, which must be loaded with a program that allows it to be controlled by a computer. Software that uploads firmware from any computer to the Arduino is freely available (Arduino 1.8.13, Arduino, Boston, MA, USA). Examples of motor control programs to upload are numerous and freely available. We have provided Matlab software (Matlab R2018b, Mathworks, Natick, MA, USA) for neurofeedback control, which interfaces with the Turbo-BrainVoyager™ neurofeedback module (Turbo Brain Voyager 4.0, Brain Innovations, Maastricht, The Netherlands) and uses the default Arduino control software for Matlab, at https://www.mathworks.com/matlabcentral/fileexchange/74339-haptic-feedback-for-turbo-brainvoyager. This software assumes that Turbo-BrainVoyager™ creates a file representing the level of activity in a region of interest at each repetition time (TR), within a known range. The software continuously polls for the existence of such a file and, when a new file is found, generates a vibration of corresponding intensity. Open-source options for neurofeedback software, for example OpenNFT, could also be used. GNU Octave (GNU 5.2.0, GNU Operating System, Boston, MA, USA) could be used as an open-source alternative to Matlab.

Vibrating Element (USD 30): Any 3-5 volt DC vibrating motor should yield sufficient vibration to be felt at the scanner bore. We used a uxcell 5 volt DC, 3200 revolutions per minute motor (USD 6). We connected this motor's wires (solder or hot glue can connect these) to the RCA female plug (see above). We mounted the motor and wires inside a piece of 1-inch PVC through which we drilled a hole for the wires, and hot-glued the motor in place, leaving sufficient unglued area for air circulation to account for heating in the motor (Figure 2c).

Tube to transfer vibration to the subject (USD 30): To get from a typical MRI control room, through the waveguide, to our scanner bore, we use approximately 30 feet of 1-inch PVC tube (USD 15). We included angle couplings to navigate around the room's objects as necessary. The length and topology for a given scanner will depend on measurements specific to the user's scanning environment. If the PVC touches the waveguide, some of the vibration being transmitted will be lost/dampened. To reduce the loss of vibratory energy, we suspend the PVC through the waveguide from the scanner room ceiling (Figure 3a) using string, paracord, or medical tape (USD < 10) (Figure 3b). The participant may receive the vibration by adding any number of coverings or endpoints to the PVC tubing, e.g., allowing a participant to hold a tennis ball with a hole into which the PVC tube is inserted (Figure 3c).

Design alternatives: We have explored many design alternatives to the current system. Prause et al. (2012) [13] used a system with an air compressor, tubing and an air-powered imbalanced turbine in lieu of the vibrating motor and PVC [12]. This system worked well for us, but was much louder and had less power. For applications which do not require graded vibration, but rather can use a simple on-off switch, a simpler approach can achieve even stronger haptic feedback.
For applications that do not require graded vibration, but can instead use a simple on-off switch, a simpler approach can achieve even stronger haptic feedback. In lieu of the motor shield and motor, the Arduino's digital outputs and ground can be connected to a controllable outlet power relay (IoT Power Relay, available from Adafruit for USD 25), into which many commercially available vibrating devices can be plugged (e.g., a personal massager with a reducing coupler for the head, or a sheet pad sander, for which copper brackets can be screwed into the sander plate and around the PVC to connect tightly and stably to the PVC tube (Figure S2)). Arduino code for these applications is available from GJS by request.
Procedure
To assess the feasibility of the system for fMRI neurofeedback, three medically healthy individuals performed neurofeedback with alternating runs of vibration off and on (Subject 1, F, age 36; Subject 2, F, age 23; Subject 3, M, age 34). Written informed consent was obtained from the participants. The study was approved by the University of Pittsburgh Institutional Review Board (Identification Code STUDY19050176) and carried out in accordance with the Declaration of Helsinki for experiments involving humans. We used the commercially available Turbo-BrainVoyager™ software for real-time imaging and processing. The rtfMRI-nf procedure consisted of five fMRI runs each lasting 8 min and 40 s: a baseline run in which no neurofeedback information was provided, and four training runs. During training runs 1 and 3, no vibration was provided and the standard thermometer was used, while in runs 2 and 4 the thermometer was visible and vibration feedback was also provided. This design has been published previously and fully described elsewhere [22,23]. Briefly, all runs consisted of alternating blocks of Rest (five 40 s blocks), Count Backwards (four 40 s blocks of counting backwards from 300 by an integer), and Happy/Regulate (four 40 s blocks). During the Happy condition, participants were instructed to silently recall and contemplate positive autobiographical memories while also attempting to increase the level of the thermometer and/or the strength of the vibration felt. An empty thermometer was displayed during the Count and Rest conditions and no vibration was felt. The neurofeedback signal for each Happy block was computed as the fMRI percent signal change relative to the average fMRI signal for the preceding Rest block. This was provided as output over every 2 s window during happy recall and presented to the participant both visually (thermometer) and haptically (vibration to the right hand). To reduce fluctuations due to noise in the fMRI signal, the thermometer level and strength of vibration were computed at every time point as a moving average of the current and two preceding values. These percent signal change values obtained during neurofeedback were averaged over each run and used as a performance measure (the signal the participants received). These values were used to compare amygdala activity during vibration on vs. off, as we were interested in how vibration affected the signal being trained.
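The feedback computation just described, polling for each new per-TR file and smoothing over the current and two preceding values, amounts to a loop along the following lines. This is a hedged sketch rather than our released package: the export directory, file-name pattern, signal range, and mapping to motor speed are hypothetical placeholders, and `motor` is the dcmotor object from the earlier sketch, assumed already started.

```matlab
% Illustrative feedback loop: poll for per-TR region-of-interest files written
% by Turbo-BrainVoyager, apply a 3-point moving average, and map the smoothed
% value to vibration intensity. Paths, file names, and scaling are placeholders.
feedbackDir = 'C:\tbv_feedback';   % hypothetical export directory
sigMin = -2; sigMax = 2;           % assumed percent-signal-change range
recent = zeros(1, 3);              % holds the current and two preceding values
tr = 0;
while true
    f = fullfile(feedbackDir, sprintf('roi_%04d.txt', tr + 1));  % hypothetical name
    if isfile(f)
        tr = tr + 1;
        psc = dlmread(f);                        % percent signal change this TR
        recent = [recent(2:end), psc];           % keep the last three values
        smoothed = mean(recent);                 % 3-point moving average
        level = (smoothed - sigMin) / (sigMax - sigMin);  % normalize to [0, 1]
        motor.Speed = min(max(level, 0), 1);     % clamp and set felt intensity
    end
    pause(0.1);                                  % poll for new files at 10 Hz
end
```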
To examine differences between the vibration on and off conditions, we performed an area under the curve (AUC) test of the mean of the vibration-on condition (minus the preceding rest run) versus the no-vibration condition (minus the preceding rest run). The amygdala region of interest was defined as a sphere of 7 mm radius centered at (−21, −5, −16) in the stereotaxic array of Talairach and Tournoux, and was transformed to the EPI image space using each subject's high-resolution MPRAGE structural data. The resulting region of interest in the EPI space contained approximately 140 voxels. We performed a visual inspection of the regions of interest prior to the start of neurofeedback. No adjustments were made as a result of the visual inspection.
After the feedback task was complete, the participants received variable vibration that was not associated with their own amygdala activity. Specifically, each received the neurofeedback vibration of another participant during an 8 min 40 s resting-state run, during which the instructions were to simply relax and not think of anything in particular. This was done to examine whether the amygdala was activated by vibration in the absence of a task. This yoked sham neurofeedback signal was created from the first training run of a female participant with depression from another study who had completed our standard fMRI-neurofeedback paradigm. The first training run was selected so as to have the most variance in the vibration, as this was the run in that study during which participants were just beginning to learn how to control the signal effectively.
fMRI analysis for the resting-state data was performed using AFNI (http://afni.nimh.nih.gov/afni). The single-subject analysis steps consisted of slice-timing correction, within-subject realignment, coregistration between anatomical and functional images, spatial normalization to the stereotaxic array of Talairach and Tournoux, and spatial smoothing (Gaussian kernel, 4 mm full width at half maximum); finally, the voxel time series were low-pass filtered (cutoff 0.10 Hz). A standard general linear model (GLM) analysis was applied with the following regressors included in the model: two block stimulus conditions for the vibration analysis (on and off), six motion parameters as nuisance covariates to account for possible artifacts caused by head motion, and five polynomial terms for modeling the baseline. The regressors were convolved with the canonical hemodynamic response function provided with Analysis of Functional NeuroImages (AFNI, Washington, DC, USA). The hemodynamic response estimates (GLM β coefficients) were computed for each voxel within the amygdala, intraparietal sulcus, insula, and postcentral gyrus regions of interest (ROIs) using the 3dDeconvolve AFNI program and then converted to percent signal changes for vibration on versus off. The voxel-wise percent signal change data were averaged within each ROI.
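For reference, the voxelwise model just described has the standard GLM form; the notation below is generic textbook notation rather than anything AFNI-specific:

$$y(t) \;=\; \sum_{k=1}^{2} \beta_k \,(s_k * h)(t) \;+\; \sum_{j=1}^{6} \gamma_j\, m_j(t) \;+\; \sum_{p=0}^{4} c_p\, t^{p} \;+\; \varepsilon(t),$$

where $y$ is the voxel time series, the $s_k$ are the two block regressors (vibration on and off) convolved with the canonical hemodynamic response $h$, the $m_j$ are the six motion covariates, the polynomial terms model the baseline, and the fitted $\beta_k$ are what get converted to percent signal change.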
Table 1 shows the average amygdala values calculated online by Turbo-BrainVoyager™ for the Happy-Rest condition during each run, and Figure 4 shows the AUC for each subject. For each participant, the observed amygdala feedback signal was higher with either type of feedback compared to the baseline (visual vs. baseline: t(2) = 3.62, p = 0.06, d = 2.69; haptic vs. baseline: t(2) = 5.91, p = 0.03, d = 3.82). There was a moderate effect size for the effect of vibration, compared to visual stimulation only, across participants (d = 0.44) that was non-significant due to the small sample (t(2) = 0.54, p = 0.62).
Q2: Does Haptic Stimulation Occlude Effects of Interest? Resting BOLD Response with vs. without Haptic Stimulation in Neurofeedback Regions for Which Detection of Haptic Stimulation Would Be Problematic (Amygdala, Intraparietal Sulcus)
Amygdala Activity: Table 2 shows the average amygdala values for haptic on minus haptic off during an 8 min 40 s eyes-open resting-state run. The percent signal change between the two conditions was very small (one-sample t-test comparing the mean to 0 change; mean = 0.005; t(2) = 0.31, p = 0.78), was not in a consistent direction, and the difference between on and off showed a very small effect size (d = 0.01).
Intraparietal Sulcus Activity: We also examined the BOLD response in the control region used in our other neurofeedback experiments, the left horizontal segment of the intraparietal sulcus (defined as a 7 mm sphere centered at (−42, −48, 48) in the stereotaxic array of Talairach and Tournoux). As can be seen in Table 2, the difference between the two conditions was very small (one-sample t-test comparing the mean to 0 change; mean = −0.0004; t = 0.22, p = 0.98), not in a consistent direction, and had a very small effect size (d = 0.001).
Q3: Are Effects of Haptic Stimulation Detectable? Resting BOLD Response with vs. without Haptic Stimulation in Regions Where Detection of Haptic Stimulation is Expected (Insula, Somatosensory Cortex)
Insula and Somatosensory Activity: We examined the BOLD response in two regions in which we expected greater activity when vibration was on versus off: the insula and the somatosensory cortex (postcentral gyrus). As can be seen in Table 2, in both regions bilaterally there was increased activity when vibration was on relative to when it was off (ts > 9.47, ps < 0.001, ds > 6.86).
Discussion
Responding to multiple theoretical papers suggesting that haptic fMRI neurofeedback could be of interest, we have constructed a low-cost, reproducible, portable system for providing haptic feedback during rtfMRI-nf training. We made use of relatively inexpensive components that are available to consumers. The haptic stimulation produced (here, vibration) has sufficient displacement and magnitude to be strongly felt while allowing for gradations indicating the amount of hemodynamic activity. The total cost was USD 130. These feasibility data indicate that participants can increase their amygdala signal during positive autobiographical memory recall, relative to a rest baseline, to a similar extent when vibratory feedback is provided as when visual feedback alone is provided. Future studies should examine whether neurofeedback performance is superior with haptic relative to visual feedback. Furthermore, vibration alone during rest did not change the activity in the regions of interest for our neurofeedback studies, but did change activity in the regions we would expect to be responsive to vibration (insula and somatosensory cortex). This suggests that vibratory feedback is appropriate for neurofeedback studies targeting regions involved in emotion regulation, but caution should be used when the target is a region that is also sensitive to interoceptive signals, such as the insula, as it is possible that the effects of vibration could interfere with neurofeedback learning.
This work is a first step in bringing haptic feedback to rt-fMRI. Haptic feedback is likely to vary in its effect on results. In particular, different patterns and locations of vibration are well known to be experienced as emotionally positive or negative [27], to have different neural correlates, e.g., [9], and to have different physiological effects (e.g., whereas body vibration in the 6-10 Hz range is associated with increased indicators of sympathetic tone [28], vibration in the 89 Hz range on the face is associated with increased parasympathetic tone [29]). Thus, research establishing parameters for how haptic neurofeedback is used is a prudent next step and a future direction of our research. Of course, larger studies in unbiased samples with a randomized block design are needed. The importance of the current work is to establish that these next steps are worth taking.
Conclusions
In conclusion, we have demonstrated that vibration during fMRI neurofeedback is well motivated, affordable, easy to implement, and feasible to use, and that it is likely to yield interpretable results that are at least comparable to current neurofeedback methods, allow participants to close their eyes, and are not compromised by spurious brain activity. In our future rtfMRI neurofeedback studies, we plan to incorporate this haptic feedback.
Supplementary Materials: The following are available online at http://www.mdpi.com/2076-3425/10/11/790/s1, Figure S1: Haptic Part List: full parts list with information about at least one source from which each piece can be ordered; Figure S2
2020-10-29T09:02:26.373Z
2020-10-28T00:00:00.000
{ "year": 2020, "sha1": "99228833b881900d65208e9e9cfc1f249eaf5a6e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3425/10/11/790/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e9a6387a2bbd62719f42bfc8c2e95ec51c95657d", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
15625892
pes2o/s2orc
v3-fos-license
Survival results of a multicentre phase II study to evaluate D2 gastrectomy for gastric cancer Curative resection is the treatment of choice for potentially curable gastric cancer. Two major Western studies in the 1990s failed to show a benefit from D2 dissection. They reported extremely high postoperative mortality after D2 dissection, and were criticised for the potential inadequacy of the pretrial training in the new technique prior to the phase III studies being initiated. The inclusion of pancreatectomy and splenectomy in D2 dissection was associated with increased morbidity and mortality. Following these results, we started a phase II trial to evaluate the safety and efficacy of pancreas-preserving D2 dissection. The results of this trial regarding the safety of pancreas-preserving D2 dissection were published in 1998. In this paper, we present the survival results of this phase II trial to confirm the rationale for carrying out a phase III study comparing D1 vs D2 dissection for curable gastric cancer. Italian patients with histologically proven gastric adenocarcinoma were registered in the Italian Gastric Cancer Study Group multicentre trial. The study was carried out based on the General Rules of the Japanese Research Society for Gastric Cancer. A strict quality control system was achieved by a supervising surgeon of the reference centre who had stayed at the National Cancer Center Hospital, Tokyo, to learn the standard D2 gastrectomy and the postoperative management. The standard procedure entailed removal of the first and second tier lymph nodes. During total gastrectomy, the pancreas was preserved according to the Maruyama technique. Complete follow-up was available to death or 5 years in 100% of patients and the median follow-up time was 4.38 years. Out of 297 consecutive patients registered, 191 patients were enrolled in the study between May 1994 and December 1996. The overall morbidity rate was 20.9%. The postoperative in-hospital mortality was 3.1%. The overall 5-year survival rate among all eligible patients was 55%. Survival was strictly related to stage, depth of wall invasion, lymph node involvement and type of gastrectomy (distal vs total). Our results suggest a survival benefit for pancreas-preserving D2 dissection in Italian patients with gastric cancer if performed in experienced centres. A phase III trial among exclusively experienced centres is urgently needed.
Gastric cancer, which is the commonest cancer in Japan, remains a major cause of death also in Western countries. In Italy, it represents the third most frequent cause of death from cancer in both male and female patients (Decarli et al, 1998). Data from Italian Cancer Registries show a 27% 5-year survival rate (Rosso et al, 2001). This is consistent with other survival rates reported in Western countries. On the contrary, large retrospective Japanese series have shown significantly higher 5-year survival rates after radical gastrectomy. This impressive difference is largely related to earlier diagnosis, but it is possible that the more extensive lymph node dissection performed in Japan, where the stomach is usually removed along with the first and second tier nodal stations (D2 gastrectomy) (Sasako et al, 1997), also contributes. Favourable patient survival after D2 gastrectomy has also been reported by some other non-Japanese retrospective nonrandomised trials (Pacelli et al, 1993; Siewert et al, 1993).
Nevertheless, the two large prospective randomised trials recently performed in the West (the MRC and the Dutch randomised surgical trials) failed to demonstrate a survival benefit for D2 gastrectomy as compared to D1 resection (Bonenkamp et al, 1999; Cuschieri et al, 1999). Furthermore, these trials showed a significant increase in postoperative morbidity and mortality after extended dissection. These unfavourable results have been attributed mainly to the en bloc removal of the spleen and the tail of the pancreas for middle and upper third tumours in the D2 arms of both trials. Furthermore, the lack of experience in this technique of dissection and in postoperative care on the part of the surgeons participating in these trials has been claimed as one of the reasons for the results (Bonenkamp et al, 1995; Cuschieri et al, 1996). Both studies were carried out without pretrial training and without preliminary studies to confirm the safety of the procedure locally, and were concluded before many surgeons would have reached the plateau of the learning curve. The Italian Gastric Cancer Study Group (IGCSG) was set up in 1994 to confirm the safety and efficacy in survival of D2 resection with pancreas preservation, and a strict quality control system was implemented in a prospective one-arm phase II study. In 1998, we showed postoperative morbidity and mortality rates comparable with those reported after the standard resection, and documented that D2 resection with preservation of the pancreas could be offered as a safe radical treatment of gastric cancer for Western patients in experienced centres (Degiuli et al, 1998). We now report the survival data of the patients of the same trial.
Eligibility and assessment of curability
Patients eligible for participation in this study were to have histologically proven and preoperatively potentially curable adenocarcinoma of the stomach. Patients who required emergency procedures, who harboured a coexisting cancer, who were over 80 years old or who had a comorbid cardiorespiratory dysfunction that would preclude more extensive dissection were excluded. After preoperative staging to exclude clinical evidence of distant metastasis, all patients were registered and underwent staging laparotomy. Eligible cases were those without any evidence of peritoneal and/or liver metastasis, involvement of the oesophagus, cardia or duodenum, or biopsy-proven metastasis in para-aortic and/or retropancreatic nodes.
Treatment
The surgical protocol was based on the general rules of the Japanese Research Society for Gastric Cancer (JRSGC, 1981a, b). The D2 dissection entailed removal of the first and second tier nodes along with the lymph nodes of the left side of the hepatoduodenal ligament. During total gastrectomy, the spleen was removed while the tail of the pancreas was preserved according to the technique described by Maruyama et al (1995), unless it was suspected to be invaded by the tumour. In the case of a clinical T1 tumour, splenectomy was not carried out. Distal gastrectomy was performed in cases of early gastric cancer (EGC) or well-demarcated advanced gastric cancer (AGC), such as Borrmann type 1 or 2, with a tumour-free margin of at least 2 cm, or in cases of infiltrative AGC, type 3 or 4, with a tumour-free margin of at least 5 cm to the proximal resection line. A total gastrectomy was performed in all other cases. For all enrolled patients, chemotherapy was not given until recurrence was diagnosed.
Pathological classification
As compared with our previous papers, tumours were restaged according to the fifth edition of the UICC TNM Classification of Malignant Tumours and the Japanese Classification of Gastric Carcinoma, 2nd English edition (UICC, 1997; JGCA, 1998).
Quality control
A surgeon from the reference centre (MD) stayed at the National Cancer Center Hospital, Tokyo, to learn the D2 dissection from a specialist Japanese surgeon (MS). He was given didactic videos, papers and explanatory booklets edited by Japanese authors. MD became the supervisor of the trial. The IGCSG was set up in April 1994 and nine institutions participated. Each centre had two surgeons attending all the operations. Before starting the trial, several meetings were organised among participating centres to explain the terminology, to debate the proper indications and to demonstrate the surgical technique. At least one of the two surgeons of each participating institution observed the first 10 procedures in this trial, which were performed at the reference centre. Afterwards, MD attended the first three operations performed at each institution.
Registration
The study was organised and directed from a central office at the reference centre (Department of Oncology, Division of Surgery, Turin, Italy). Data on enrolment, surgical procedures, histopathologic findings, postoperative course and patient follow-up evaluation were collected by the surgeon at each institution and posted to the data centre at the central office. Patients were followed up at regular intervals: every 3 months during the first 2 years and every 6 months thereafter. In addition, an enquiry on vital status and cause of death was collected for all patients at the municipal roster office. The final follow-up date was 31 December 2002. Complete follow-up was available in 100% of patients; the median follow-up time for those alive at the end of the study was 7.4 years.
Statistical methods
Sample size calculations assumed a 5-year overall survival of 50%, intermediate between Western and Japanese series. The required number for enrolment was then set at about 200 patients, based on the desired precision in estimating this parameter (95% confidence interval: 42.9-57.1%; power 80%). Confidence intervals are based on exact binomial probabilities. Overall survival was computed by the Kaplan-Meier method using the BMDP statistical package for all eligible subjects and for subpopulations grouped on the basis of selected variables. Both deaths due to the disease and deaths without evidence of recurrence were counted as events in the analysis of overall survival. The gastric cancer-specific survival curve was also calculated, with deaths due to other causes being censored.
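For reference, the Kaplan-Meier estimator used here is the standard textbook form (not specific to the BMDP package): the estimated survival probability at time $t$ is the product over observed event times $t_i \le t$,

$$\hat{S}(t) = \prod_{t_i \le t} \left( 1 - \frac{d_i}{n_i} \right),$$

where $d_i$ is the number of deaths at $t_i$ and $n_i$ is the number of patients still at risk just before $t_i$. In the overall-survival analysis all deaths contribute to $d_i$; in the disease-specific analysis, deaths from other causes are censored, so they reduce $n_i$ at later times without counting as events.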
RESULTS
In total, 297 patients with histologically proven adenocarcinoma of the stomach were registered from the nine institutions over 2.5 years (May 1994-December 1996). Of these, 106 patients were found ineligible for the study, mostly because more advanced disease was identified at laparotomy, as outlined in the protocol. In all, 191 patients fulfilled the criteria of eligibility and were entered into the study. Table 1 briefly summarises the characteristics of the eligible patients (including median age), the procedures performed, the pathologic stage of the disease and the early outcome. No patients were lost to follow-up. The median follow-up time of all patients alive at the end of the study was 7.4 years (range 6-8.7 years). All patients were followed up till death or for at least 6 years. Of the 191 resected patients, 96 (50.3%) died. Six of these 96 patients died with early postoperative complications (3.1%). During the follow-up, 26 patients (13.6%) died without recurrence of gastric cancer. Death with recurrence of gastric cancer occurred in 70 patients (36.7%).
Overall survival
For calculating the incidence of deaths due to the disease (n = 70), the cause of death according to clinical records was used. In those few records where the cause was missing, the cause of death listed in the Piemonte Cancer Registry (from the municipal roster office) was used.
Survival by TNM stages
The 5-year survival rate was significantly dependent upon the stage of the disease (P < 0.001). It was 95, 87.5, 57.5, 42.5, 22.5 and 2.5% in patients with TNM stage IA, IB, II, IIIA, IIIB and IV, respectively (Figure 2). To allow comparison of these results with other reports, the results using the previous TNM classification are also shown in Table 2.
Survival by nodal involvement
We analysed patient survival according to the two nodal staging systems: the 1997 TNM and the 1998 JGCA classification.
Survival by type of gastrectomy
Patients who underwent distal gastrectomy showed a higher 5-year survival rate (70%) as compared with those who received total resection (40%) (P < 0.001).
DISCUSSION
The role of extended lymph node dissection in improving long-term survival after gastrectomy for gastric cancer is still not proven by RCTs. Moreover, the Dutch and British trials have shown increased morbidity and mortality figures after D2 gastrectomy (Bonenkamp et al, 1995; Cuschieri et al, 1996). Potential reasons for this unfavourable outcome include the lack of surgical skilfulness/training and poor quality control, and the routine removal of the spleen and tail of the pancreas in total gastrectomy (Cuschieri et al, 1996). In our previous paper, we showed that it is possible to achieve low morbidity and mortality after extended lymph node dissection if the operation is performed in specialised centres with a strict quality control system, and without removing the pancreas during total gastrectomy unless it is suspected to be involved by the tumour (Degiuli et al, 1998). The present study has also shown good survival data. The overall 5-year survival rate was 55%. Moreover, the disease-specific 5-year survival was 65%. Our results are almost equivalent to those reported by Sasako after 2541 extended gastrectomies performed at the National Cancer Center Hospital, Tokyo, during the period 1982-1991 (66%) (Sasako et al, 1997, pp 223-248). Not only the overall survival rate but also the stage-specific survival rates after D2 dissection were much better in this study than those of the D2 arm of the Dutch and MRC trials (Table 2). The discrepancy between our data and data from other Western series could be explained by differences in the patient population or by differences in surgical technique. Regarding the patient populations, the eligibility criteria of the two large prospective randomised series are totally comparable to those adopted in our trial. With respect to the clinical and pathological stages, no major differences appear in the reported series apart from a clear prevalence of early gastric cancer in the Japanese series.
The prevalence of early tumours (stage I disease) is close to 50% in the Japanese series, while it is 35.6% in our population, 36% in the MRC series, 26% in the Dutch trial and 19.6% in an American patient care study (16). Siewert gives the figures for IA and IB stages, which are, respectively, 13.8 and 13.4% (3). In the present series, the number of patients with TNM stage less than III is substantial (106 patients, 55.4%) and might be partly responsible for our good survival data. To avoid the confounding effect of stage migration, we should compare the results of series reporting D2 dissection with each other. Our results are similar to those previously reported by Pacelli et al (1993) in their retrospective trial and by Siewert et al (1993) in their prospective nonrandomised trial. The main criticism directed towards the recent prospective randomised European trials has been the lack of experience of the surgeons participating in the studies. The contrast in postoperative mortality between the Dutch and British trials and our own study clearly demonstrates the danger of carrying out this procedure, let alone an RCT, without sufficient pretrial training. Clearly a one-arm study, equivalent to the phase II study in medical treatment, is an appropriate preliminary to a phase III trial of complex and potentially hazardous surgery. MS, who was supervisor of both the Dutch and the Italian study, believes that the Dutch study was flawed by early randomisation of patients and the inclusion of many small-volume hospitals. It is suggested that a new surgical technique requiring not only surgical skills but also good experience in postoperative care should only be tested in an RCT after completion of sufficient training to carry it out safely. In fact, the reported perioperative mortalities in these two major RCTs on D2 dissection were over 10%. Pancreaticoduodenectomy for pancreatic cancer and radical oesophagectomy for oesophageal cancer are more surgically aggressive procedures than D2 gastrectomy and are recommended to be performed exclusively in specialised centres; they do not carry a risk of hospital mortality of over 10% in such centres (Altorki and Skinner, 1997; Gordon et al, 1998; Bottger and Junginger, 1999; Lerut et al, 1999; Tsiotos et al, 1999; Gouma et al, 2000; Karl et al, 2000). Postoperative mortality of over 10% is no longer acceptable in any kind of cancer surgery. Our own experience correlates well with the data given by Parikh et al (1996) on the duration of the learning curve for D2 dissection, which should be more than 15 procedures. Each participating centre treated 15 to more than 25 patients (seven procedures per year on average) (Table 3), and in every centre each patient was always treated by the same two surgeons. Therefore, each centre and each surgeon should have reached an optimal experience level, acquiring sufficient technical skills regarding intra- and postoperative care during this trial. Our results support the argument for training the surgeons prior to the initiation of a clinical trial, although, at a practical level, a study target of 700-1000 patients would be very difficult to meet, and it might take more than 10 years to recruit all the patients. We observed an overall postoperative in-hospital mortality of 3.1%; this rate decreased from 5.2% in 1994 to 2.11% in 1995 and finally to 1.7% in 1996. While not statistically significant, this trend supports the concept of a learning curve.
As already indicated, subset analysis of the Dutch and MRC trials documented that the higher morbidity in the D2 arm is mostly due to pancreas and spleen removal (Cuschieri et al, 1996). Hence, pancreas preservation was adopted as the standard procedure in D2 dissection in the present trial, and the pancreas was removed only when it was suspected to be involved by the tumour (T4). Furthermore, during total gastrectomy, splenectomy was not carried out in patients with a clinical T1 tumour (Table 4). After confirming the low mortality and acceptable morbidity of pancreas-preserving D2 dissection, we started a phase III trial comparing D1 vs D2 dissection in 1998. The survival results shown in this paper suggest the benefits of D2 dissection, although a statistically significant survival advantage needs to be confirmed through this new randomised phase III trial. The aim of this phase III trial is to document an increase in survival in the D2 arm with an acceptable increase in morbidity and no increase in mortality.
2014-10-01T00:00:00.000Z
2004-04-06T00:00:00.000
{ "year": 2004, "sha1": "5000bc566a6072e975c70f42c00a374f5323c0e1", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1038/sj.bjc.6601761", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "5000bc566a6072e975c70f42c00a374f5323c0e1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
10400261
pes2o/s2orc
v3-fos-license
Measurement Based Quantum Computation on Fractal Lattices In this article we extend work which establishes an analogy between one-way quantum computation and thermodynamics to see how the former can be performed on fractal lattices. We find fractal lattices of arbitrary dimension greater than one which all act as good resources for one-way quantum computation, and sets of fractal lattices with dimension greater than one, all of which do not. The difference is put down to other topological factors such as ramification and connectivity. This work adds confidence to the analogy and highlights new features of what we require for universal resources for one-way quantum computation.
Introduction
Drawing analogies is often a very powerful tool in science. It can allow not only deepened understanding through the new perspectives opened up, but also the application of technical tools from one discipline to another, often very fruitfully. In [1] an analogy between measurement based quantum computation and thermodynamics is made, viewing the computation itself as a phase transition. This is, in spirit, the reverse of the thought-provoking analogy made by Toffoli [13], where physics is viewed as a computation; rather, [1] tries to understand computation itself as a physical process. In doing so, key features of useful resources for one-way quantum computers were identified, in direct analogy to the identification of critical systems in thermodynamics. In particular, the rather elegant and simple methods first developed by Peierls [11] to show that one-dimensional spin chains are not critical, whereas two-dimensional lattices are, were translated into arguments for the dependence on dimension of universal resources for one-way quantum computation. In this work we extend this theme to look for other important features of universal resources, following the work on fractal lattices by Gefen et al. [5]. There it is shown that critical behaviour in spin systems relies not just on the dimension of the lattice, but also on other features such as the order of ramification and connectivity. We again see an exact mirroring of results, highlighting these also as features crucial for universal resources for one-way quantum computation. As examples we will see that there exists a set of fractal lattices (Sierpinski carpets) for which any dimension greater than one guarantees that the lattice can act as a universal resource. On the other hand we will also see examples of dimension greater than one which are not universal, highlighting the importance of the other topological features (ramification, connectivity and lacunarity).
The analogy: Phase Transition and Measurement Based Quantum Computation
We start by reviewing the problems addressed in this analogy. In the case of thermodynamics and many-body physics, the problem of interest is the existence, or not, of some critical phenomenon or phase transition. Simply put, a phase transition occurs when a small change in some parameters of a given system gives rise to a large macroscopic change of state, or phase. For example, at just below zero degrees water becomes ice, and just above it becomes water again. These two phases of matter are clearly very different. In spin systems the macroscopic property of interest is whether the system is magnetised or not. This happens when sufficiently many spins point in the same direction; we call this the 'ordered state'. The effect is witnessed by the amount of magnetisation M present, which is called an 'order parameter'.
In the Ising model the ground state (corresponding to zero temperature) is ordered, while at high temperatures the orientation of the spins becomes random and the system is not ordered: its magnetisation is zero. The question is then whether or not there is a finite, non-zero temperature $T_{crit}$ below which the system is ordered. If this is the case we say there can be a phase transition from non-magnetised to magnetised at temperature $T_{crit}$. It is known that for one-dimensional spin chains with nearest-neighbour interactions only, there are no phase transitions, whereas for two-dimensional lattices there are. This will be explained via the Peierls argument below. In the case of measurement based quantum computation, the problem of interest is the ability, or not, to perform universal quantum computation. In one-way quantum computation (1-way QC; in this work we take it to be synonymous with measurement based quantum computation) [12], a computation is carried out first by preparing a highly entangled multipartite quantum state (which we call a 'resource state', and which is independent of the actual computation to be performed) and then performing local measurements and local corrections on individual sites. The choice of measurements, and how they depend on each other, determines the computation which is performed. During the process of measurement, entanglement is destroyed, and in this sense consumed by the computation. At the end of the computation, the classical information I of the solution is obtained as the measurement outcomes of the last few measurements. Since its invention a large amount of effort has gone into finding out what constitutes a good initial resource state (see e.g. [9,7]). Given a particular set of states, the question is whether or not it can act as a universal resource for quantum computation. The analogy we will now draw goes towards answering this question. Note that we will always consider resource states as graph states in this work. A list of the analogous quantities between thermodynamics on the one hand, and one-way quantum computation on the other, is given in Fig. 1.
Figure 1: (Borrowed from [1]) On the left, a set of quantities from thermodynamics and on the right their analogous counterparts for one-way quantum computation.
The second law of thermodynamics states that systems interacting with a thermal bath will always conspire to minimise the free energy F, given by
$$F = U - TS. \qquad (1)$$
Intuitively we can understand the second law as saying that, by the process of thermalisation, nature insists that at a given temperature T the energy U is spread out as much as possible, by maximising the entropy S. When making our analogy, the quality we are insisting upon for our 1-way QC computation is that it should be universal. To this end, we postulate a kind of 'law of 1-way QC' whereby we insist (as 'mother nature' of the 1-way QC; it is, after all, we who design and control it) that the quantum computer be as universal at each step as possible. That is, we insist that at each time the computation is carried out in such a way as to maximise the potential. In terms of quantities, we say that for a given amount of entanglement E, we insist that any computation at any time t maximises the number of ways it can be used, which we call the computational capacity C. We thus phrase our 'second law of universal 1-way QC' as requiring that at each time t the potential P, given by
$$P = E - tC, \qquad (2)$$
must be minimised (i.e. the potential should be consumed as fully as possible).
Following an argument first put forward by Peierls [11], and cleaned up by Griffiths [6], which shows that one-dimensional spin chains are not critical but two-dimensional lattices are, an intuitive argument was given in [1] as to why a one-dimensional cluster state is not a universal resource for 1-way QC, whereas a two-dimensional cluster state is. Peierls' argument goes as follows. If we want to test whether an 'ordered state' (i.e. one with a large number of spins pointing in the same direction, such that there is overall a positive magnetisation) is possible at some nonzero temperature T, we simply check whether small perturbations to this state will raise or lower its free energy (the very physicsy 'shake it and see' approach). Any such perturbation will change the free energy by
$$\Delta F = \Delta U - T \Delta S. \qquad (3)$$
If by perturbing it we can reduce the free energy, the state clearly is not a valid thermal state, by the second law. In terms of equation (3), this is then a question of balance between the change in energy and the change in entropy. If perturbing the system increases the entropy more than the energy, the state before perturbation was not a valid thermal state. In the case of a one-dimensional spin chain, the cost of any perturbation in terms of entropy is much greater (it scales with the number of spins n) than the cost in energy (which is fixed). In the case of a two-dimensional spin lattice, they scale in the same way, hence a balance can be found. By finding the fixed point of the free energy (the point where the perturbation makes no change, found by setting (3) to zero), a critical temperature can be found above which the system is not ordered. Remarkably, given the simplicity of this approach, this is very close to the actual critical temperature, below which it can also be shown that the system is ordered. When testing whether a system can be used for 1-way QC, the ordered state is the 'solution state' (the state after all measurements have been made in the 1-way QC), and the test is whether it is possible for some finite time t. Again, we test this by perturbing it and seeing if it violates our second law of 1-way QC. If it does, it is definitely not a valid state according to our second law; that is, no computation satisfying our 'law of 1-way QC' can find such a state at time t. Any perturbation results in a change in computational potential
$$\Delta P = \Delta E - t \Delta C. \qquad (4)$$
This then bears out as a balance between the entanglement E and the number of ways of using it C, for a given t. As above, in the case of a one-dimensional cluster state this balance cannot be met: the number of ways of using the entanglement is larger than the entanglement available, hence some choice must be made about its use, sacrificing universality. Alternatively, it says that there is no finite time length at which universality could be achieved, so if it were possible, it would take an infinite amount of time. On the other hand, as in the spin case, for a two-dimensional lattice these quantities do balance. Again as above, it is possible to approximate a critical time $t_{crit}$, below which the computation cannot be completed in a universal fashion, by setting (4) to zero. In fact, by seeing how both of these factors scale with dimension D, it is possible to arrive at a formula for the critical time which agrees with both our intuition and known examples: higher-dimensional states can allow for greater speed in computation.
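To make the balance concrete, it helps to write out the textbook version of the Peierls estimate; the numbers below are the standard Ising ones, given here for illustration, and are not figures taken from [1] or [11]. In one dimension, flipping a block of spins creates two domain walls at fixed energy cost $\Delta U = 4J$, while the walls can be placed in of order $n$ ways, giving $\Delta S \approx k_B \ln n$, so

$$\Delta F \approx 4J - k_B T \ln n,$$

which is negative for large $n$ at any $T > 0$: the chain never orders. In two dimensions, a flipped island with boundary length $L$ costs $\Delta U = 2JL$, while the number of boundary contours of length $L$ grows roughly as $3^L$, giving

$$\Delta F \approx L\,(2J - k_B T \ln 3),$$

which stays positive for islands of any size whenever $T < T_c \approx 2J/(k_B \ln 3)$, remarkably close to the exact Onsager value $T_c \approx 2.27\,J/k_B$. The 1-way QC version of the argument replaces $(U, S, T)$ with $(E, C, t)$ in exactly the same bookkeeping.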
Computing on Fractal Lattices
We can now extend this analogy to cover another interesting set of examples from many-body physics, where it is shown that not only does dimension play a role in spin criticality, but also other topological features. In [5], techniques similar to those of Peierls and Griffiths described above are used to test the criticality of spin systems, this time on several self-similar fractal lattices. Again the arguments test the ability of a lattice to balance the change in energy and entropy for small perturbations. Examples are presented which do and which do not allow criticality for all (fractal) dimensions greater than one. The additional features which capture the existence of criticality are shown to be topological, including ramification, connectivity and lacunarity. We follow the same analogy as before to show exact mirrors of these results in 1-way QC. We see that graph states of the fractal lattices of Koch curves and Sierpinski gaskets are not universal resources for 1-way QC, whereas Sierpinski carpets are, independent of dimension. As in [5] we can interpret this as the role of other topological factors, including the ramification. The Koch curve is illustrated in Fig. 2. For our purposes this behaves exactly as the 1D case in the previous section, where we argued it is indeed not a good universal resource [1]. Proofs to this effect are also known in the literature ([9]). The Sierpinski gasket is shown in Fig. 3. Again, Peierls-like arguments analogous to those made above show it is not a valid candidate for a universal resource. That is, the balance between the entanglement present and the number of ways to use the entanglement cannot be found for any finite t. In analogy to the spin case [5], a significant perturbation of the solution state by adding entanglement can be done in many more ways than the amount of entanglement that is added, causing a negative change in P (equation (4)). This is unfortunately not a rigorous proof of non-universality, since our analogy (and in particular our 'law of 1-way QC') is not proven, but rather justified. We can, however, prove that the Sierpinski gasket is not a universal resource by methods introduced in [10]. There it is shown that if the entanglement does not scale with a family of resource states (such as our lattices), then the family cannot be a universal resource for 1-way QC [10,9]. The entanglement measure they use is the entanglement width $E_{wd}$, defined as
$$E_{wd}(|\psi\rangle) = \min_{T} \max_{e \in T} E^{bi}_{T,e}(|\psi\rangle),$$
where $E^{bi}_{T,e}(|\psi\rangle)$ is the bipartite entropy of entanglement across the bipartite cut defined by T and e. Here T is a subcubic graph with n leaves (edges not leading to a vertex at one end) and e is an edge of T. Each leaf corresponds to a qubit. The bipartite cut is defined by removing edge e to give two separate trees; the leaves of one tree correspond to one side of the cut, and those of the other tree to the other side. It can easily be seen that for the Sierpinski gasket $|\psi_{SG}\rangle$ a tree can be defined with the same self-similar properties, such that the best cut e also has self-similar properties and gives entanglement $E^{bi}_{T,e}(|\psi_{SG}\rangle) = 3$, which does not grow. Hence the entanglement width is bounded, $E_{wd}(|\psi_{SG}\rangle) \le 3$, for any lattice size. Since it does not scale with the size of the lattice, the Sierpinski gasket cannot be a universal resource. On the other hand, Sierpinski carpets (see Fig. 4) are universal for all dimensions greater than one.
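For concreteness, the fractal dimensions at stake are the standard similarity dimensions, $d = \ln m / \ln s$ for a set made of $m$ copies of itself scaled down by a factor $s$; these are textbook values, stated here for orientation:

$$d_{\mathrm{Koch}} = \frac{\ln 4}{\ln 3} \approx 1.26, \qquad d_{\mathrm{gasket}} = \frac{\ln 3}{\ln 2} \approx 1.58, \qquad d_{\mathrm{carpet}}(b, l) = \frac{\ln(b^2 - l^2)}{\ln b},$$

so the $b = 3$, $l = 1$ carpet of Fig. 4 has $d = \ln 8 / \ln 3 \approx 1.89$. All three families sit strictly between dimension one and two, which is precisely what lets the carpet-versus-gasket contrast separate the role of dimension from that of the other topological features.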
The arguments to show that a carpet is a valid candidate for a universal resource follow along the same lines as the Peierls-like argument made above. That is, the balance between the entanglement present and the number of ways to use the entanglement can always be found for some finite t. This is of course not a proof that it is a universal resource, since, even if we assume our 'law of 1-way QC', it only shows that it is a possible resource, i.e. that it does not violate the law of 1-way QC. We can, however, construct exact proofs for all cases. To show explicitly that these are universal, we adopt a technique similar to that used in [2], which is to actively construct a standard 2D lattice, itself known to be a universal resource [12], by taking out vertices using local Z and local X measurements. The idea is that, given an arbitrary lattice (which may even be irregular, as in the case of [2]), if we can draw a standard 2D grid over this lattice, we can measure away the extra qubits to leave only the ideal 2D lattice. This is possible because of the way X and Z measurements convert one graph state to another (see e.g. [8]). It is easy to see, by looking at the Sierpinski carpet in Fig. 4, that it is always possible to draw a 2D grid which grows with the size of the carpet. Thus we always have a way to get a known universal resource for any dimension greater than one.
Figure 4: A Sierpinski carpet with b = 3, l = 1. The ramification is infinite.
We thus see that the ability of a lattice to act as a universal resource for 1-way QC does not depend just on dimension. In particular, for any dimension between 1 and 2 we can find a Sierpinski carpet which is a universal resource, whereas we have seen two examples with dimension in this range which are not universal. As in [5] we can then infer that this is down to other topological properties. One such property that resonates in the case of 1-way QC is ramification. The ramification R is the minimal number of edges that must be removed to separate a part of the lattice of arbitrary size. It tells us something about how globally connected the lattice is, or how easily the lattice can be separated into chunks: the lower the ramification, the easier it is to separate parts of the lattice off and the less globally connected it is. Among our examples, the lattices with finite ramification are not universal resources, whereas those with infinite ramification are (the 2D lattice also has infinite ramification). This is very similar in flavour to the idea behind the entanglement width introduced in [10] to check for universality of resource states (and used above). In a sense this also looks for some global connectedness, by the nature of the min-max definition above. Here too an infinite scaling is required for universality. We may imagine there could be a connection between the two. We may also wonder whether an alternative entanglement measure can be defined with respect to ramification (or indeed other topological properties), which could be used to assess usefulness as a resource for 1-way QC. This is beyond the scope of the current manuscript, but poses interesting possibilities.
Conclusions
We have seen that the analogy developed in [1] can be used to argue that fractal lattices can also act as universal resources for 1-way QC, and that not only is dimensionality important, but also other topological features such as ramification. By providing further examples where the analogy succeeds, we have strengthened its validity.
It also highlights new features that we can expect good resources for 1-way QC to possess. We can also ask how this corresponds to known conditions for good resources, such as the entanglement conditions in [9]. In this context, it is perhaps possible that the important topological features could also correspond to particular entanglement features. Another possible connection to existing conditions would be to the existence of flow on these lattices. Flow (and gFlow) are known sufficient conditions for a lattice, or graph, to allow 1-way QC [4,3]. The fact that, for example, the Sierpinski carpets can always be reduced to a 2D lattice implies that we can always extract a flow in some sense. Perhaps the topological features presented here are also important for the existence of flow. We also note that the techniques, and indeed the lattices, used here are very similar to those in [2], which arise in the context of 2D lattices with noise. In a sense this is no surprise, since similar situations may give rise to fractal lattices in many-body physics also. But it may also indicate that the analogy used could be useful in treating noise over fixed lattices. On a foundational level, this analogy, and its reinforcement by this work, opens up many interesting questions and possibilities. For example, can these analogies be made more solid by a kind of path-integral approach to 1-way QC? How deep can we take these analogies beyond 1-way QC; can they work, for example, for other models of computation? We hope this work will stimulate further research in these areas.
2010-06-07T18:11:50.000Z
2010-06-07T00:00:00.000
{ "year": 2010, "sha1": "28c6410f7522e184e71e2eb619fd5b02a2a15218", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.4204/eptcs.26.10", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "28c6410f7522e184e71e2eb619fd5b02a2a15218", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
205608497
pes2o/s2orc
v3-fos-license
Nkx2.5+ Cardiomyoblasts Contribute to Cardiomyogenesis in the Neonatal Heart During normal lifespan, the mammalian heart undergoes limited renewal of cardiomyocytes. While the exact mechanism for this renewal remains unclear, two possibilities have been proposed: differentiated myocyte replication and progenitor/immature cell differentiation. This study aimed to characterize a population of cardiomyocyte precursors in the neonatal heart and to determine their requirement for cardiac development. By tracking the expression of an embryonic Nkx2.5 cardiac enhancer, we identified cardiomyoblasts capable of differentiation into striated cardiomyocytes in vitro. Genome-wide expression profiling of neonatal Nkx2.5+ cardiomyoblasts showed the absence of sarcomeric genes and the presence of cardiac transcription factors. To determine the lineage contribution of the Nkx2.5+ cardiomyoblasts, we generated a doxycycline-suppressible Cre transgenic mouse under the regulation of the Nkx2.5 enhancer and showed that neonatal Nkx2.5+ cardiomyoblasts mature into cardiomyocytes in vivo. Ablation of neonatal cardiomyoblasts resulted in ventricular hypertrophy and dilation, supporting a functional requirement for the Nkx2.5+ cardiomyoblasts. This study provides direct lineage-tracing evidence that a cardiomyoblast population contributes to cardiogenesis in the neonatal heart. The cell population identified here may serve as a promising therapeutic for pediatric cardiac regeneration.
they possess. Hence, this lack of a defined precursor cell population in the neonatal heart that can mediate de novo cardiomyogenesis has limited current efforts in cardiac regenerative therapy [15][16][17][18]. To identify a population of CM precursors that might be present in the neonatal heart, we utilize a previously generated transgenic mouse model that expresses an eGFP reporter under the regulatory control of a 2.1 kb cardiac-specific enhancer of Nkx2.5, a key transcription factor in early cardiac development 19. Distinct from the endogenous expression of Nkx2.5, which is initiated in cardiac progenitor cells and sustained throughout CM maturation, the eGFP expression in Nkx2.5 cardiac enhancer-eGFP transgenic mice (hereafter referred to as Nkx2.5 enh-eGFP) is restricted to cardiac progenitor cells and early immature CMs 19,20. Consequently, Nkx2.5 enh-eGFP+ cells represent cardiac progenitor cells in the early fetal heart, and we postulate that the transgene may also label a population of cardiomyogenic precursors in the postnatal heart. Cardiac progenitor cells, such as the Islet-1 (Isl-1)-positive cell population, have been described in the neonatal heart 21. However, the direct contribution of Isl-1+ cells to cardiomyogenesis in the postnatal heart in vivo has not been demonstrated 22,23. Given the cardiomyoblast-restricted expression of the Nkx2.5 enh-eGFP transgene in the fetal heart, we explored whether a small number of these cells may be present in the neonatal heart and contribute to normal development of the myocardium. In this study we identified a neonatal Nkx2.5 enh-eGFP+ cardiomyoblast population and demonstrated their phenotypic and functional contribution to making new CMs. We further showed, by prospective lineage tracing using a doxycycline-suppressible Nkx2.5 enhancer-Cre transgenic mouse line, that Nkx2.5 enh-eGFP+ cardiomyoblasts reside in the subepicardium and contribute directly to cardiomyogenesis in vivo.
Furthermore, the ablation of neonatal Nkx2.5 enh-eGFP+ cardiomyoblasts led to early heart-failure phenotypes, including ventricular dilation and hypertrophy, consistent with a requirement for these cells in normal neonatal heart development.
Results
Isolation and in vitro characterization of a putative cardiomyoblast population in the neonatal heart. To determine the growth rate of the neonatal heart and its relationship to the growth of overall body weight, we measured heart weight and body weight in neonatal mice from birth to 21 days of life. We found a rapid rise in heart weight during this time period, while the ratio of heart weight to body weight remained stable during this developmental time frame (Fig. 1A-C). This finding demonstrated that rapid growth occurs in the developing heart after birth. We hypothesized that postnatal cardiomyoblasts may contribute to the proliferating cells in the neonatal heart. The previously described Nkx2.5 enh-eGFP transgenic mice were used to isolate and characterize these cells 19,20. The expression of eGFP in Nkx2.5 enh-eGFP mice labels cardiac precursor cells in the developing embryo and wanes when these cells mature into striated CMs 20. Interestingly, by flow cytometric analysis of neonatal hearts from Nkx2.5 enh-eGFP mice, we found a resurgence of the eGFP+ cell population during the first three weeks after birth (Fig. 1D,E). qPCR analysis of sorted eGFP+ and eGFP− cells at postnatal day 6 demonstrated significant differences in the gene expression profiles of these cells (Fig. 1F). The proliferation capacity of the eGFP+ cells was quantified by determining the proportion of these cells in the S or G2 phase of the cell cycle, as measured by their incorporation of the DNA analog BrdU versus their DNA content (7-AAD) (Fig. 1G,H). We found that the proportion of eGFP+ cells in S phase declined from 8.4% at postnatal day 6 to 0.75% at day 22, while the proportion in G0/1 phase increased from 83% at day 6 to 89% at day 22. P6 eGFP+ cells were also isolated, cultured in vitro, and immunostained for proliferation markers, including Ki67 and pH3, at days 1 and 5 in culture (Supplementary Figure S1). Confocal microscopy demonstrated that a notable percentage of eGFP+ cells were Ki67+ (~30% at day 1 and 20% at day 5) and pH3+ (~18% at day 1 and 12% at day 5). We further characterized these (total) neonatal eGFP+ cells by flow cytometric analysis to determine their surface marker expression. About 33.7% of these cells specifically expressed PDGF receptor alpha (Pdgfrα), ~76.8% non-specifically expressed integrin beta-1 (Intgβ1), and ~22.0% expressed stem cell antigen-1 (Sca-1) (Fig. 2A,B). Moreover, these cells did not express CD45, a pan-hematopoietic marker; Thy1.1, a mesenchyme/fibroblast marker; or hematopoietic stem/progenitor cell markers such as CD41 or c-Kit. Given that Pdgfrα has previously been described as a fibroblast or mesenchymal stem cell marker in the adult heart 24,25, we compared the genome-wide transcriptional profiles of eGFP+ cells isolated at embryonic days 13.5 (e13.5 GFP+) and 16.5 (e16.5 GFP+) of development and from the neonatal heart (neo P7 GFP+) with those of control neonatal CMs (neo CM) and cardiac fibroblasts from the adult heart (adult cardiac fib.) (Fig. 2C). Neonatal P7 eGFP+ cells expressed a transcription profile distinct from embryonic eGFP+ cells, neonatal CMs, and cardiac fibroblasts.
To further probe the identity of these neonatal eGFP+ cells, we compared directly the genome-wide expression profile of embryonic day 10.5 (e10.5) CMs with P7 eGFP+ cells (Fig. 2D). The expression profile of P7 eGFP+ cells appeared quite distinct from that of e10.5 CMs. This was further supported by quantitative RT-PCR analysis showing that P7 eGFP+ cells express a number of cardiac transcription factors (e.g. MEF2C, GATA4, and GATA6) without a matching level of sarcomeric gene expression (e.g. troponin C1, T2, and I3, cardiac actin, myl2, and myh6 and 7) (Fig. 2E,F).

Neonatal Nkx2.5 enh-eGFP+ cells possess the functional characteristics of neonatal cardiomyoblasts. To address whether these neonatal Nkx2.5 enh-eGFP+ cells harbor a capacity for cardiovascular lineage differentiation, eGFP+ cells were isolated from 6-7-day-old hearts, FACS-purified, and subjected to either spontaneous differentiation or differentiation in coculture with embryonic CMs (eCMs), smooth muscle cells (SMCs), mouse embryonic fibroblasts (MEFs), or endothelial cells (ECs) (Fig. 3 and Supplementary Figure S2). eGFP+ cells cultured alone expressed little to no cardiac troponin T, whereas eGFP+ cells cocultured with eCMs for 8 days expressed both cardiac troponin T (~51% of cells, Supplementary Figure S2B,C, top row) and sarcomeric actinin (~28% of cells) and adopted a striated CM phenotype (Fig. 3B-F). Single-cell electrophysiological assessment of an eCM-cocultured and subsequently dispersed eGFP+ cell revealed its ability to generate action potentials.

Differentiation of Nkx2.5 enh-eGFP+ cells into CMs in vivo. To address whether neonatal eGFP+ cells are able to expand and differentiate into mature CMs in vivo, we engineered a transgenic mouse line that expresses a Cre-eGFP fusion protein under the control of both the Nkx2.5 cardiac enhancer and the reverse tetracycline transactivator (hereafter referred to as the Nkx2.5 enh-Cre mouse) (Fig. 4A). By oral administration of doxycycline, the Cre-eGFP fusion protein expression can be silenced, thus providing temporal control of gene regulation. Similar to the Nkx2.5 enh-eGFP transgenic embryos at the same stages of development, the expression of eGFP in the Nkx2.5 enh-Cre transgenic embryos was restricted to the developing heart (Fig. 4B). When the Nkx2.5 enh-Cre mice were mated with the ROSA26 FS LacZ reporter mice, the double transgenic embryos exhibited cardiac-specific LacZ labeling (Fig. 4Ca-c). With gestational administration of doxycycline, the embryonic LacZ expression was completely abolished (Fig. 4Cd-f). The ability to completely silence Nkx2.5 enh-Cre expression during embryonic development allowed us to determine whether the neonatal eGFP+ cardiomyoblasts were able to contribute to CM formation in the neonatal heart. To address this, we treated pregnant Nkx2.5 enh-Cre females that were mated with ROSA26 FS LacZ males with doxycycline from conception until birth to suppress the embryonic expression of Cre-eGFP (Fig. 5A), so that Nkx2.5 enh-eGFP+ cells would be fully labeled from P4 onwards, when the doxycycline suppression of Cre expression is completely lost. We then assayed for the presence of LacZ+ cells at postnatal days 7 and 21 to determine whether neonatal Nkx2.5 enh-eGFP+ cardiomyoblasts had given rise to new CMs in the neonatal heart. We found that the Cre+ cardiomyoblasts and their descendant CMs were located in the subepicardial region in the neonatal heart at day 7 after birth (Fig. 5B).
By day 21, many of these cells had migrated and differentiated into mature CMs and could be identified in the right and left ventricles (RV, LV) as well as in the interventricular septum (IVS) (Fig. 5C, c-h). Interestingly, sparse LacZ+ coronary SMCs could also be found within the vessel walls (Fig. 5C, g,h).

Embryonic origin of Nkx2.5 enh-eGFP+ cardiomyoblasts. The subepicardial localization of new CMs from neonatal Nkx2.5 enh-eGFP+ cardiomyoblasts raises the possibility that the Nkx2.5 enh-eGFP+ cardiomyoblasts might have originated from the developing epicardium. This would be consistent with recent studies showing the ability of developing and postnatal epicardial cells to differentiate into CMs [26-28]. We investigated whether subepicardial cardiomyoblasts in the neonatal heart originated from embryonic epicardial cells using previously described inducible WT1-CreERT2 mice 28 and found no evidence that these cells came from the developing epicardium (Supplementary Figure S3). This is consistent with results from a recent lineage tracing study of postnatal cardiac regeneration in the zebrafish heart 6. We further examined whether neonatal cardiomyoblasts descended from other precursor populations such as endothelial/endocardial (Tie2-Cre) or mature myocardial (alpha-myosin heavy chain-Cre, αMHC-Cre) cell populations. No lineage relationship was found between developing endothelial/endocardial cells or mature CMs and the neonatal eGFP+ cardiomyoblasts. Instead, we show that neonatal eGFP+ cells are descendants of embryonic Nkx2.5 enh-eGFP+ cells in the fetal heart (Supplementary Figure S3).

Nkx2.5 enh-eGFP+ cardiomyoblast-mediated cardiomyogenesis in the neonatal heart is developmentally significant. To determine the consequences of the loss of function of Nkx2.5 enh-eGFP+ cardiomyoblasts during neonatal heart formation, we treated compound heterozygous Nkx2.5 enh-Cre;ROSA26 FS DTA mouse embryos with doxycycline from conception until birth to suppress embryonic Cre expression (Fig. 6A). The ROSA26 FS DTA mouse expresses diphtheria toxin upon Cre-mediated excision of the LoxP-flanked stopper cassette 29. The expression of Cre upon the cessation of doxycycline administration at birth results in excision of the stopper cassette in Cre+ cells, the production of DTA, and, shortly after, the death of Nkx2.5 enh-Cre+ cells. As shown in Fig. 6B, mice with ablation of neonatal cardiomyoblasts (Cre+/DTA+ mice, black bar) exhibited increased heart weight at 3, 6, and 9 weeks after birth compared with their littermate controls (Cre−/DTA+ mice, white bar), without significant difference in their body weight (Fig. 6B). This suggests that ablation of neonatal Nkx2.5+ cardiomyoblasts leads to early remodeling changes including ventricular hypertrophy. With further maturation (at 9 weeks), the ablated hearts exhibited mild ventricular enlargement as well (Fig. 6C). Ablation of the Nkx2.5 enh-eGFP+ cell population did not compromise the viability or overall health of the mice.

Discussion

The neonatal heart grows rapidly in both size and weight in order to meet the metabolic demands of the newborn. Beyond the first one or two weeks after birth, these increases in heart size and weight are thought to be mediated entirely by myocyte hypertrophy rather than proliferation [30-32]. In this study, we found a population of Nkx2.5 enhancer+ cardiomyoblasts in the neonatal heart that can differentiate into striated CMs upon coculture with embryonic CMs.
These cells are found initially in the subepicardial region and contribute progressively to new CMs in the right and left ventricles as well as the interventricular septum. Genetic ablation of these cells using a conditional diphtheria toxin A-expressing mouse line results in an early heart failure phenotype. These data support the contribution of cardiomyoblasts to a defined proportion of the proliferating CMs in the neonatal heart and the requirement for CM proliferation to support normal neonatal heart development 33,34. The reappearance of eGFP+ cells in the neonatal heart of Nkx2.5 enh-eGFP mice suggested a potential contribution of cardiomyoblasts to the postnatal proliferative activity (Fig. 1G). qPCR analysis of isolated GFP+ and GFP− cells from the P6 neonatal mouse heart (Fig. 1F) demonstrated a significantly lower Nkx2.5 expression level in the GFP+ cells, suggesting that the presence of fetal enhancer activity in GFP+ cells does not precisely correlate with higher endogenous Nkx2.5 gene expression in postnatal cells. This can be explained by the fact that multiple enhancers control the activity of endogenous Nkx2.5 expression 35 and that Nkx2.5 transcription itself is not specific to progenitors. The low expression levels of cardiac and endothelial genes in eGFP+ cells are in line with our finding that these eGFP+ cells are mostly undifferentiated multipotent progenitors. BrdU pulse-labeling of isolated Nkx2.5 enh-eGFP+ cells confirmed their residual proliferative activity, which continued for a period (until P21) longer than previously described 36,37. These data indicated that the proliferative capacity of eGFP+ cells was greatest shortly after birth and declined over the first 3 weeks of life (Fig. 1J). The in vitro culture of P6-isolated eGFP+ cells confirmed their proliferative capacity (Ki67 and pH3 immunostaining), which declined from day 1 to day 5, while the GFP− cells maintained their expression levels (Supplementary Figure S1). Phenotypic characterization of neonatal Nkx2.5 enh-eGFP+ cells, via both surface marker and genome-wide expression analysis, demonstrated a remarkably distinct profile of these eGFP+ cells. Our detailed characterization of the cellular phenotype of neonatal Nkx2.5 enhancer+ cardiomyoblasts revealed properties of these cells distinct from those described previously. Their expression of Pdgfrα (Fig. 2A,B) is consistent with their embryonic heart field origin, given the previously reported labeling of embryonic 38, postnatal 25, as well as embryonic stem cell-derived 39 cardiac precursors with this marker. However, our genome-wide expression analysis revealed their characteristics to be distinct from either cardiac fibroblasts or embryonic CMs (Fig. 2C-F). It is worth noting that these eGFP+ cells exhibit a high level of expression of signaling molecules (e.g. Fgf8, Tgf-β2, Tgf-βR1, Gab1, Sema3C, Ednra) and transcription factors (e.g. Zfpm2, Nfatc4, Gli2, Pbrm1/BAF180, Osr1) but not sarcomeric genes in comparison to fetal CMs (Fig. 2B,F). Altogether, the cell marker and genome-wide expression profiles of these neonatal Nkx2.5 enhancer+ cells are consistent with a cell population that is distinct from cardiac fibroblasts or mature CMs and suggest their role as a cardiomyoblast population.
Through a series of coculture experiments, we demonstrated the ability of neonatal Nkx2.5 enh-eGFP+ cells, cocultured with eCMs, to differentiate into striated CMs expressing CM-specific troponin T and sarcomeric actinin (Fig. 3 and Supplementary Figure S2). In the ROSA-LacZ heart, it appeared that the expression of β-gal in each cell may be variable, as a few (<10%) of the cells were only weakly β-gal positive (Supplementary Figure S2). This could be due to the fact that β-gal expression from the ROSA26 locus is not uniformly strong in all cells in postnatal tissue. This differentiation of eGFP+ cells into CMs depends on paracrine and/or contact factors, since few to no cardiac troponin T+ cells were generated from spontaneous differentiation of eGFP+ cells. The cardiomyogenic phenotype of eCM-cocultured eGFP+ cells was not due to cell-cell fusion, since the majority of sarcomeric actinin/eGFP double-positive cells exhibited only a single nucleus (Fig. 2B-D). Consistent with our previous reports 20, the Nkx2.5 enh-eGFP+ cells demonstrated a remarkable capacity to differentiate into SMCs both in vitro (Fig. 3 and Supplementary Figure S2) and in vivo (Fig. 5). These findings provide strong support that neonatal Nkx2.5 enh-eGFP+ cells represent a population of cardiomyoblasts in the neonatal heart. For future studies, it would be of high interest to unravel the molecular pathways driving these cell-fate decisions to become either myocardium, smooth muscle, or endothelium. The capacity of Nkx2.5+ cardiomyoblasts in the neonatal heart to expand, differentiate, and mature into CMs in vivo raises an interesting question regarding their importance during neonatal cardiac development. Using the newly engineered Nkx2.5 enh-Cre mouse model, we found that Nkx2.5+ cardiomyoblasts gave rise to new CMs in the subepicardium that progressively migrated inward to contribute to new CMs in the right and left ventricles and interventricular septum (Fig. 5). Meanwhile, a small fraction of LacZ+ cardiomyoblasts differentiated into SMCs residing within the vessel walls (CM-to-SMC percentage ratio of ~87:13) (Fig. 5C, g,h). FACS analysis of P0-isolated eGFP+ cells co-stained with cardiac troponin T (cTnT) demonstrated a negligible fraction of eGFP+ cells expressing cTnT (~0.01% of total cells) (Supplementary Figure S4). In support of the importance of these cells to normal cardiac development, we found that their ablation led to enlarged heart size, elevated heart/body weight ratio, and left ventricular hypertrophy that eventually dilates over time (Fig. 6C). This pattern is consistent with the progression of many pediatric cardiomyopathies and suggests that modulation of cardiomyoblast proliferation and differentiation may be therapeutically relevant in this patient population 40. Further analyses (e.g., echocardiography or immunohistochemistry of cardiac disease markers) would be required to achieve a greater understanding of the role of Nkx2.5 enh-eGFP+ cardiomyoblasts in maintaining the function of the developing heart. Moreover, we are currently investigating the potential role of Nkx2.5 enh-eGFP+ cells in the heart regenerative response in a myocardial infarction model in the postnatal regenerative window (P0-P7), juvenile (P21), and adult (7-12 weeks old) mice. Taken together, these results support the requirement of neonatal Nkx2.5 enh-eGFP+ cardiomyoblasts to generate functional CMs during normal cardiac development.

Methods

Mice.
Newborn wild-type C57BL/6 mice (Jackson Laboratory, Bar Harbor, ME) were sacrificed at days 1-21 and their body and heart weights were measured. Euthanasia was performed by first sedating the mice via isoflurane (inhalant, 2% in 100% oxygen, with the neonate placed on a warm pad), followed by secondary cervical dislocation 41. Death was verified after euthanasia and prior to disposal. Nkx2.5 cardiac enhancer-eGFP transgenic mice (Nkx2.5 enh-eGFP) were previously described 20. Doxycycline-regulated Nkx2.5 enhancer-Cre-eGFP transgenic mice (Nkx2.5 enh-Cre) were made by pronuclear injection of one-cell C57BL/6 mouse embryos, which were transferred to CD1 pseudopregnant foster females. From four original transgene-carrying founders, the line with the most robust expression of Cre-eGFP in the developing heart was further studied. ROSA26-flox-stop-flox-LacZ reporter mice (ROSA26 FS LacZ) were obtained commercially from Jackson Laboratory (Bar Harbor, ME). CM-specific alpha-myosin heavy chain-Cre (α-MHC-Cre), endothelial Tie2-Cre, and ROSA26-flox-eGFP-flox-diphtheria toxin A (ROSA26 FS DTA) mice were described previously 29,42,43. All animal experiments were approved by the Subcommittee on Research Animal Care at Massachusetts General Hospital and by the Animal Care and Use Committee (APLAC) at Stanford University. All experiments were performed in accordance with relevant guidelines and regulations of Massachusetts General Hospital.

Body and heart weight measurements. To determine the heart weight and body weight of neonatal mice, we sacrificed mice at the indicated age (by day) of development. Their overall body weight was measured, and the hearts were dissected, rinsed with deionized water, and weighed from day 1 to day 21 after birth. In experiments involving transgenic mice that had undergone Cre-mediated ablation of neonatal Nkx2.5+ cardiomyoblasts, the body and heart weights were measured at 0.5, 3, 6, and 9 weeks after birth and compared with those of littermate controls without the Cre transgene.

Histology, immunohistochemistry, and immunofluorescence. Freshly isolated adult and embryonic mouse hearts were dissected from the chest cavity and washed in PBS to remove excess blood. For Nkx2.5 enh-eGFP and Nkx2.5 enh-Cre embryos at days 8.0 and 9.5 post coitum, the heart tube was dissected away from the body and imaged immediately by whole-mount fluorescence microscopy. Late fetal and postnatal hearts were incubated in 30% sucrose in PBS overnight, followed by step-wise incubation with a graded concentration of OCT in PBS for cryosectioning. Following cryopreservation, hearts were cut into 10 μm sections and lightly fixed in 4% paraformaldehyde in PBS prior to immunostaining. For detection of CM differentiation, antibodies against α-sarcomeric actinin (1:200; Sigma-Aldrich, St. Louis, MO) and cardiac troponin T (1:200; polyclonal, Chemicon) were used. For visualization, fluorescence detection with Alexa Fluor® secondary antibodies (Invitrogen, Carlsbad, CA) against the appropriate primary antibodies was used. For β-galactosidase staining, freshly dissected mouse hearts were prepared as described above and incubated at 37 °C in 1 mg/ml X-Gal substrate (Fisher Scientific). The X-Gal-stained sections were then counterstained with Nuclear Fast Red and/or co-stained with antibodies for co-immunofluorescence studies. Hematoxylin and eosin staining of histological sections was performed according to the manufacturer's suggested protocol.
All quantitative analyses of the histological sections were performed on numerically coded animals in an observer-blinded fashion to prevent subjective bias in data analysis.

Preparation of neonatal CMs and adult cardiac fibroblasts. Hearts were extracted from neonatal mice (P7), immediately transferred into a dish containing 1× PBS on ice, and washed twice. Subsequently, hearts were transferred into the isolation medium (20 mM BDM, 0.0125% trypsin, in HBSS) and minced into small pieces (on ice). Minced hearts were transferred into a tube containing isolation medium and incubated with gentle agitation at 4 °C overnight. Predigested hearts containing tissue fragments were further digested using collagenase (15 mg in 10 mL of L15 medium) at 37 °C for 20 minutes. Neonatal CMs were then strained (40 μm), centrifuged (300 rpm, 5 min), resuspended, and plated onto collagen-coated cell culture plates (Sigma C-8919) using plating medium (65% DMEM, 19% M-199, 15% fetal calf serum, 1% penicillin/streptomycin) 44. To prepare cardiac fibroblasts, adult mouse hearts were extracted, washed twice with ice-cold PBS, and minced on ice (~1 mm). Minced tissue was digested using collagenase (1% v/v collagenase II in HBSS buffer) under constant stirring at 37 °C for 20-30 min. Once fully digested, the supernatant was transferred to a tube (on ice) containing 1 ml fibroblast medium (DMEM/F12 with 10% FBS, 100 U/ml Pen/Strep, 1× L-glutamine, and 100 µM ascorbic acid), centrifuged (300 g, 5 min), and resuspended in fibroblast medium. Cardiac fibroblasts were plated into 10-cm cell culture dishes and incubated at 37 °C in a cell culture incubator with 5% CO2 for 2 hrs. Once the fibroblasts had adhered to the dish, we discarded the supernatant, rinsed the cells with PBS (3×), and added fresh fibroblast medium 45.

Microarray analysis. The oligonucleotide microarrays were performed by the WELGENE Microarray Service (Taiwan). 0.2 μg of total RNA was amplified with a Low Input Quick-Amp Labeling kit (Agilent Technologies, USA) and labeled with Cy3 (CyDye, Agilent Technologies, USA) during the in vitro transcription process. 0.6 μg of Cy3-labeled cRNA was fragmented to an average size of about 50-100 nucleotides by incubation with fragmentation buffer at 60 °C for 30 minutes. The fragmented, labeled cRNA was then pooled and hybridized to an Agilent SurePrint G3 Mouse GE 8 × 60 K Microarray (Agilent Technologies, USA) at 65 °C for 17 h. After washing and drying with a nitrogen gun, microarrays were scanned with an Agilent microarray scanner (Agilent Technologies, USA) at 535 nm for Cy3. Scanned images were analyzed with Feature Extraction 10.5.1.1 software (Agilent Technologies, USA), an image analysis and normalization package used to quantify signal and background intensity for each feature.

PCR and quantitative PCR analysis of gene expression. To determine the Cre excision status of the LoxP-flanked stopper sequence in the ROSA26 FS LacZ allele of Nkx2.5 enh-eGFP+ cells, eGFP+ cells from the digested hearts of α-MHC-Cre, doxycycline-regulated Nkx2.5 enh-Cre, WT1-CreERT2, and Tie2-Cre transgenic mice were purified by FACS and cultured briefly before their cellular genomic DNA was isolated using the Gentra Puregene kit (Qiagen). These purified DNA samples were PCR-amplified with ROSA26 locus-specific primers for the presence of Cre-mediated excision (i.e. 1Lox). The primer sequences are TGG CTT ATC CAA CCC CTA GA (forward) and GTT TTC CCA GTC ACG ACG TT (reverse).
Amplification of the HPRT locus was used as an internal PCR control. For quantitative analysis of gene expression, FACS-purified eGFP+ cells from freshly isolated and collagenase-digested hearts were lysed with Trizol (Invitrogen) and stored at −80 °C. Total RNA from each sample was purified from the cell lysate using the SV Total RNA kit (Promega). cDNA was made using the iScript cDNA synthesis kit (BioRad). Quantitative PCR was performed using the Mastercycler EP Realplex system (Eppendorf) with SYBR Green substrate (BioRad) for 40 cycles.

Data analysis. Numerical data are presented as mean ± SEM. Statistical significance was assessed using a two-tailed paired t-test with equal variance. Correlation between groups was assessed with Pearson correlation coefficients (R). Values of p < 0.05 were considered statistically significant.
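As a minimal illustration of the tests named in the Data analysis section, the sketch below runs a two-tailed paired t-test and a Pearson correlation with SciPy; all arrays are invented placeholder values, not study data.

```python
import numpy as np
from scipy import stats

# Placeholder measurements (not study data): e.g. heart weight in two
# matched conditions, and heart weight vs. body weight across animals.
rng = np.random.default_rng(1)
group_a = rng.normal(loc=60.0, scale=5.0, size=12)           # mg
group_b = group_a + rng.normal(loc=4.0, scale=3.0, size=12)  # mg

# Two-tailed paired t-test, as described in the Data analysis section
t_stat, p_val = stats.ttest_rel(group_a, group_b)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_val:.4f}")

# Pearson correlation coefficient (R) between two measures
body_weight = rng.normal(loc=10.0, scale=1.0, size=12)  # g
r, p_r = stats.pearsonr(body_weight, group_a)
print(f"Pearson R = {r:.2f}, p = {p_r:.4f}")

# Mean +/- SEM, the summary format used throughout
mean, sem = group_a.mean(), stats.sem(group_a)
print(f"mean +/- SEM: {mean:.1f} +/- {sem:.1f} mg")
```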
2018-04-03T06:13:39.763Z
2017-10-03T00:00:00.000
{ "year": 2017, "sha1": "352c0467c99825b03c1064ffb1ef6897ba4a0c44", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1038/s41598-017-12869-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "352c0467c99825b03c1064ffb1ef6897ba4a0c44", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
247981470
pes2o/s2orc
v3-fos-license
Interaction between ω-6 fatty acids intake and blood cadmium on the risk of low cognitive performance in older adults from National Health and Nutrition Examination Survey (NHANES) 2011–2014

Background
Identifying preventable dietary and environmental exposures is essential to ensuring the health of the aging population. This study evaluated the interaction effect between blood cadmium and ω-6 fatty acids intake on low cognitive performance in Americans.

Method
The data of this cross-sectional study were obtained from the 2011–2012 and 2013–2014 National Health and Nutrition Examination Survey (NHANES). Cognitive performance was measured by the Consortium to Establish a Registry for Alzheimer's Disease test, the Animal Fluency Test, and the Digit Symbol Substitution Test. Multivariate logistic regression models were used.

Results
A total of 1,918 individuals were included, 467 (24.35%) of whom had low cognitive performance. Compared with participants with normal-level blood cadmium, those with high-level blood cadmium had a higher risk of low cognitive performance [odds ratio (OR) 1.558, 95% confidence interval (CI): 1.144–2.123]. Low-level ω-6 fatty acids intake was positively associated with low cognitive performance [OR = 1.633 (95%CI: 1.094–2.436)] compared with normal-level intake. Moreover, there was a significant interaction between low-level ω-6 fatty acids intake and high-level blood cadmium on the risk of low cognitive performance (relative excess risk due to interaction: 0.570, 95%CI: 0.208–0.932; attributable proportion of interaction: 0.219, 95%CI: 0.102–0.336; synergy index: 1.552, 95%CI: 1.189–2.027).

Conclusions
There was a synergistic interaction between low-level ω-6 fatty acids intake and high-level blood cadmium on low cognitive performance. Low-level ω-6 fatty acids intake may amplify the adverse effects of long-term exposure to cadmium on cognitive performance. This may have a certain significance for the prevention of cognitive decline in the elderly.

Supplementary information
The online version contains supplementary material available at 10.1186/s12877-022-02988-7.

by 2050 [2]. The situation is more serious in the United States, where about 18% of Americans were 65 years of age or older in 2019 [3]. The elderly face many health threats. Among them, cognitive decline is a major threat to the elderly, second only to chronic diseases such as cardiovascular disease [1,4]. Cognitive impairments associated with old age are expected to impose heavy social and economic burdens [5,6]. The total cost associated with individuals with low cognitive performance in the United States in 2020 was estimated at 305 billion dollars [5]. Therefore, detecting the preclinical manifestations of low cognitive performance as early as possible is important. In addition to good living habits and proper physical exercise, a reasonable diet with adequate nutrition is among the most important factors for preventing cognitive decline. Studies have reported that a Mediterranean diet could effectively reduce the risk of cognitive decline, which may be related to its high content of unsaturated fatty acids [7]. Evidence showed that a high ω-3:ω-6 dietary intake ratio had a wide range of positive effects on health, especially the improvement of cognitive function [8]. Studies have pointed out that the intake of ω-6 fatty acids (important unsaturated fatty acids in the human brain) may be related to cognitive decline [9,10].
In addition, the severe environmental situation has led to increased exposure of people to heavy metals [11]. Cadmium, a heavy metal from the Earth's crust, can cause cognitive dysfunction because it has long-term effects on the brain [12,13]. Cadmium ion poisoning can cause hippocampal damage and cognitive impairment [14]. However, the interaction between ω-6 fatty acids intake and blood cadmium on low cognitive performance has not been widely reported. Therefore, this cross-sectional study intended to identify the determinants of low cognitive performance and to explore the interaction between ω-6 fatty acids intake and blood cadmium on the risk of low cognitive performance, based on the National Health and Nutrition Examination Survey (NHANES) database in the United States.

Study population

We analyzed the data from the 2011-2012 and 2013-2014 NHANES, a representative cross-sectional survey of the non-institutionalized civilian population in the United States. The NHANES is a major project of the National Center for Health Statistics (NCHS), a part of the Centers for Disease Control and Prevention (CDC), and is responsible for compiling life and health statistics. The NHANES includes interviews, physical examinations, and laboratory assessments. The NCHS Ethics Review Committee granted ethical approval. All individuals provided written informed consent before participating in the study. Cognitive data on 2,934 adults aged 60 years or older were extracted from the NHANES database. We excluded those with missing blood cadmium (n = 866) or ω-6 fatty acids intake (n = 150) information. Finally, a total of 1,918 participants were included in this study.

Outcome variable

The word learning and recall modules from the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) test, the Animal Fluency Test, and the Digit Symbol Substitution Test (DSST) were applied to assess cognitive performance [15,16]. The CERAD Word Learning Subtest (CERAD W-L) was used to evaluate the immediate and delayed learning ability for new language information (memory subdomain) [17]. The CERAD test consists of three consecutive learning trials and one delayed recall. After the learning trials, participants were asked to recall as many words from the learning experiment as possible. The score for each test ranged from 0 to 10 points, with 1 point for each correct answer. The total score of the four tests was the CERAD score. The Animal Fluency Test was used to measure absolute language fluency. Participants were asked to name as many animals as possible within a minute, and each answer was scored one point. The DSST tested sustained attention and working memory [18]. Participants were asked to match the numbers in the 133 boxes to the corresponding symbols within 120 s according to the example given. The score was the sum of the number of correct matches, and the maximum score was 133 points. There is no gold standard for judging low cognitive performance by the CERAD, Animal Fluency, and DSST tests, so we used the lowest quartile (the 25th percentile) of the combined scores of the three tests in the study group as the cut-off point, consistent with the methods used in the published literature [19].

Explanatory variables

Diet recall interviews were conducted at the Mobile Examination Center (MEC) by trained interviewers using an automated data collection system to obtain ω-6 fatty acids [linoleic (18:2) and arachidonic (20:4)] intake through two 24-hour dietary recall interviews.
At the end of the MEC diet interviews, the interviewers arranged for the subjects to have a telephone follow-up interview 3-10 days later. Average ω-6 fatty acids intake was calculated based on the U.S. Department of Agriculture's Dietary Study Food and Nutrition Database [20]. In the NHANES 2011-2014, ω-6 fatty acids intake was calculated only from dietary intake, and supplement usage was not collected. Blood samples were collected by a phlebotomist at the MEC and processed into vials, which were then refrigerated or frozen for storage and transported to laboratories across the United States, where blood cadmium was measured. The concentration of blood cadmium was determined by quadrupole inductively coupled plasma mass spectrometry (ICP-MS) technology. Please refer to the NHANES laboratory manual for the specific method of blood cadmium content detection [21]. ω-6 fatty acids intake and blood cadmium were treated as categorical variables: ω-6 fatty acids intake below the 25th percentile was defined as low-level intake, and blood cadmium above the 75th percentile as high-level blood cadmium. The cut points of ω-6 fatty acids and blood cadmium were consistent with the methods used in the published literature [22,23].

Covariates

Sociodemographic information, lifestyle factors, medical history, and laboratory parameters were collected. Sociodemographic information included age, gender, race, marital status (married/widowed or divorced or separated/never married/living with a partner), educational level [below high school/high school graduate or General Educational Development (GED)/above high school], and annual household income (< 20,000 dollars/≥ 20,000 dollars). Lifestyle factors included trouble sleeping, sleeping time, smoking, drinking, work activity, and recreational activity. Trouble sleeping was assessed by asking whether a doctor or other health professional had ever told the participant that he/she had trouble sleeping. Smoking was defined as having smoked at least 100 cigarettes in one's entire life, and drinking as having had at least 12 drinks of any type of alcoholic beverage in any one year (a drink means a 12 oz. beer, a 5 oz. glass of wine, or one and a half ounces of liquor). Work activity was divided into three categories: vigorous work activity, moderate work activity, and others [15,16]. Recreational activity was divided into three categories: vigorous recreational activity, moderate recreational activity, and others [15,16]. Depression, hypertension, diabetes, stroke, congestive heart failure (CHF), coronary heart disease (CHD), and heart attack were assessed by asking participants, "Have you ever been told by a doctor or health professional that you have __?" Total cholesterol (TC), high-density lipoprotein (HDL), glycated hemoglobin (GHb), and 25-hydroxyvitamin D [25(OH)D] were all obtained by remote laboratory testing of participants' blood. In addition, body mass index (BMI) was calculated by dividing the weight of the participant by the square of the height (kg/m²).

Statistical analysis

WTMEC2YR, SDMVPSU, and SDMVSTRA from the NHANES database were used as weighting variables to perform weighted analysis on all data. WTMEC2YR was the two-year sample weight. SDMVPSU was the masked variance unit pseudo-PSU variable for variance estimation. SDMVSTRA was the masked variance unit pseudo-stratum variable for variance estimation.
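Both the outcome and the exposures are defined by sample quartiles: low cognitive performance as the lowest quartile of the combined score of the three tests, low-level ω-6 intake as below the 25th percentile, and high-level blood cadmium as above the 75th percentile. The pandas sketch below illustrates that derivation on toy data; apart from WTMEC2YR, the column names and all values are assumptions for illustration, not actual NHANES variables or measurements.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "cerad": rng.integers(10, 40, n),        # CERAD W-L total (0-40)
    "fluency": rng.integers(5, 30, n),       # Animal Fluency score
    "dsst": rng.integers(20, 110, n),        # DSST score (0-133)
    "omega6_g": rng.gamma(4.0, 4.0, n),      # mean omega-6 intake, g/day
    "blood_cd": rng.lognormal(-1.0, 0.7, n), # blood cadmium, ug/L
    "WTMEC2YR": rng.uniform(5_000, 60_000, n),  # NHANES 2-yr MEC exam weight
})

# Outcome: lowest quartile of the combined cognitive score
df["combined"] = df[["cerad", "fluency", "dsst"]].sum(axis=1)
df["low_cognition"] = df["combined"] <= df["combined"].quantile(0.25)

# Exposures: omega-6 below Q1 = low intake; cadmium above Q3 = high level
df["low_omega6"] = df["omega6_g"] < df["omega6_g"].quantile(0.25)
df["high_cd"] = df["blood_cd"] > df["blood_cd"].quantile(0.75)

# Weighted prevalence of the outcome, using the examination weight
w = df["WTMEC2YR"]
print("weighted prevalence:", round(w[df["low_cognition"]].sum() / w.sum(), 3))
```

A full design-based analysis would additionally use SDMVPSU and SDMVSTRA for variance estimation (as the SAS survey procedures used in the study do), which this sketch omits.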
Measurement data were all normally distributed after weighting, and normality was assessed by the Kolmogorov-Smirnov test [24]. Normally distributed data were described as mean [standard error (S.E.)], and the t-test was used for comparison between groups. Count data were shown as the number of cases and the composition ratio [n (%)], and comparison between groups was performed by the χ² test or Fisher's exact test. First, we conducted univariate analysis, and then multivariate logistic regression analysis was performed including the statistically different variables to explore whether low-level ω-6 fatty acids intake and high-level blood cadmium were associated with low cognitive performance. Model 1 was not adjusted for any confounders. Age, gender, and BMI were adjusted in Model 2, and in addition to the variables adjusted in Model 2, variables that were statistically significant in the univariate analysis [race, marital status, educational level, annual household income, drinking, work activity, recreational activity, depression, hypertension, diabetes, stroke, CHF, heart attack, TC, GHb, and 25(OH)D] were adjusted in Model 3. Last, an interaction model was constructed to study whether an interaction existed. The synergistic interaction between low-level ω-6 fatty acids intake and high-level blood cadmium in association with low cognitive performance was measured by whether the estimated joint effect of the two factors was greater than the sum of the independent effects of low-level ω-6 fatty acids intake and high-level blood cadmium. Relative excess risk due to interaction (RERI), the attributable proportion of interaction (AP), and the synergy index (S) were utilized to assess synergistic interaction. When the confidence intervals of RERI and AP contained 0 and the confidence interval of S contained 1, there was no synergistic interaction. All statistical tests were two-sided and completed using SAS v. 9.4 (SAS Institute, Cary, North Carolina) statistical analysis software. P < 0.05 was considered statistically significant.

Characteristics of the study population

A total of 1,918 samples were finally included in the study. The gender distribution was relatively equal, with 954 (46.45%) males and 964 (53.55%) females. Of these participants, 946 (81.49%) were non-Hispanic white, followed by other races (n = 365, 8.12%), non-Hispanic black (n = 455, 7.46%), and Mexican Americans (n = 152, 2.92%). Most participants were married [n = 1067 (63.92%)], and the highest degree of education was above high school [n = 976 (61.56%)]. Moreover, 495 (22.23%) of the participants had high-level blood cadmium, and the mean (S.E.) of blood cadmium was 0.49 (0.02) µg/L. Participants with low-level ω-6 fatty acids intake numbered 480 (22.22%). The descriptive characteristics of the included individuals are shown in Table 1. We compared the characteristics of the population included in this analysis with those of the population excluded due to missing data. We observed that the characteristics of the samples included in this analysis were similar to those of individuals with missing data (P > 0.05), indicating that our subsequent conclusions were reliable and stable (Supplementary Table 1).
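Before turning to the group comparisons, note that the additive-interaction measures reported in Table 3 below have standard closed forms. Writing OR10 for the odds ratio with high-level blood cadmium alone, OR01 for low-level ω-6 intake alone, and OR11 for joint exposure, RERI = OR11 − OR10 − OR01 + 1, AP = RERI / OR11, and S = (OR11 − 1) / [(OR10 − 1) + (OR01 − 1)]. A worked sketch with hypothetical odds ratios (not the fitted values behind Table 3):

```python
def additive_interaction(or10: float, or01: float, or11: float) -> dict:
    """Additive interaction measures on the odds-ratio scale.

    or10: OR for one exposure alone (e.g. high-level blood cadmium)
    or01: OR for the other exposure alone (e.g. low-level omega-6 intake)
    or11: OR for joint exposure
    Confidence intervals would come from the delta method or a
    bootstrap, which this sketch omits."""
    reri = or11 - or10 - or01 + 1                 # relative excess risk due to interaction
    ap = reri / or11                              # attributable proportion of interaction
    s = (or11 - 1) / ((or10 - 1) + (or01 - 1))    # synergy index
    return {"RERI": reri, "AP": ap, "S": s}

# Hypothetical odds ratios for illustration only
print(additive_interaction(or10=1.5, or01=1.6, or11=2.6))
# -> RERI 0.50, AP 0.19, S 1.45: positive RERI/AP and S > 1 indicate synergy
```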
Comparison of the normal and low cognitive performance groups

As shown in Table 1, the distributions of age, race, marital status, educational level, annual household income, drinking, work activity, recreational activity, depression, hypertension, diabetes, stroke, CHF, and heart attack, and the levels of GHb, 25(OH)D, blood cadmium, and ω-6 fatty acids intake in the low cognitive performance group were different from those in the normal cognitive performance group, and the differences were statistically significant (P < 0.05).

Independent association of ω-6 fatty acids intake and blood cadmium with low cognitive performance

Compared with participants who had normal-level blood cadmium, those who had high-level blood cadmium had a greater risk of low cognitive performance [odds ratio (OR) with 95% confidence interval (CI) of 1.558 (1.144-2.123)], after adjusting for age, gender, BMI, race, marital status, educational level, annual household income, drinking, work activity, recreational activity, depression, hypertension, diabetes, stroke, CHF, heart attack, TC, GHb, and 25(OH)D. The risk of low cognitive performance in participants with low-level ω-6 fatty acids intake was 1.633 times that in those with normal-level ω-6 fatty acids intake (OR = 1.633, 95%CI: 1.094-2.436) in Model 3. The detailed results of the multivariable logistic regression models of the independent association of ω-6 fatty acids intake and blood cadmium with low cognitive performance are shown in Table 2.

Interaction between ω-6 fatty acids intake and blood cadmium on low cognitive performance

Table 3 shows that the interaction indicators were RERI = 0.570 (95%CI: 0.208-0.932), AP = 0.219 (95%CI: 0.102-0.336), and S = 1.552 (95%CI: 1.189-2.027), indicating that the interaction of low-level ω-6 fatty acids intake and high-level blood cadmium on low cognitive performance was statistically significant and synergistic. After adjusting for the variables with differences in the univariate analysis, AP was 0.219, indicating that 21.9% of the low cognitive performance cases in this study's sample were attributable to the interaction between low-level ω-6 fatty acids intake and high-level blood cadmium. Figure 1 provides a visual comparison, by OR value, of the interaction effect of low-level ω-6 fatty acids intake and high-level blood cadmium on low cognitive performance.

Discussion

In this large, nationally representative sample of adults aged 60 years or older among the U.S. population, we found that there was a synergistic interaction between low-level ω-6 fatty acids intake and high-level blood cadmium on low cognitive performance. The results suggest that improving the diet of the elderly may enhance their mental fitness, and that attention should be paid to reducing cadmium exposure. Cognitive decline is a complex and gradual process that goes through different stages of evolution: normal cognition, memory impairment, mild cognitive impairment, and dementia [25]. Onset usually takes a long time, so prevention is particularly important [25,26]. Improving the diet is an effective way to improve the physical condition of the elderly and mitigate cognitive decline [27].
In the present study, we found that low-level ω-6 fatty acids intake might be associated with low cognitive performance after controlling for confounders, including age, gender, BMI, race, marital status, educational level, income, drinking, work activity, recreational activity, depression, hypertension, diabetes, stroke, CHF, heart attack, TC, GHb, and 25(OH)D, which was consistent with a study conducted by Xue Dong et al. [8]. The authors also used the data from the NHANES 2011-2014, but the adjusted confounders were slightly different from ours; their conclusions were drawn after adjusting for age, gender, race, educational level, marital status, income, BMI, recreational activity, work activity, drinking status, hypertension, diabetes, and stroke [8]. We also adjusted for some additional laboratory indicators. Moreover, in other study populations from the U.S., results identified ω-6 fatty acids as nutrient biomarkers associated with more favorable functional efficiency in the aging brain [28]. Increasing evidence indicated that ω-6 fatty acids would be of benefit to the brain in aging adults [29]. Although the intake of ω-6 fatty acids in the diet has a positive impact on cognitive outcomes by supporting the nervous system, the ω-6:ω-3 intake ratio is also important [8,28,30]. Further research on this ratio is needed to further promote brain and cognitive health. The evidence for the relationship between cadmium and cognitive performance in the elderly population is limited. In this study, we found a significantly positive association between high-level blood cadmium and low cognitive performance in U.S. adults aged over 60 years from the NHANES 2011-2014, and the association did not change after controlling for potential confounders. Similarly, a study using the same database and time span as ours suggested that increased blood cadmium was significantly associated with worse cognitive performance in adults aged 60 years or older in the U.S. [13]. However, another study did not find an association between cadmium and cognitive functioning using the data from the NHANES 1999-2002 after adjustment for race, age, sex, poverty income ratio, education, and smoking status [30]. The main reason may be the different study populations: that study examined the elderly in the NHANES 1999-2002, whereas ours examined the elderly in the NHANES 2011-2014, and the average level of blood cadmium increased over that period, which may lead to different results. The discrepant results may also be caused by the control of different confounders, the different study designs used, or the different effects of cadmium in the research designs. In addition, the findings of a prospective cohort study in southwestern and eastern China suggested that higher cadmium exposure was associated with greater cognitive decline in Chinese adults aged 65 years or older [31]. The negative correlation between blood cadmium and cognitive performance is of great significance for proposing strategies to delay the decline of cognitive performance in the elderly. A healthy diet and behavior could change exposure to cadmium, as cadmium is a cumulative poison derived mainly from food and tobacco smoke [32]. Such changes may improve the cognitive performance of adults over the age of 60 years.
The mechanism underlying our finding of a synergistic interaction between lower ω-6 fatty acids intake and higher blood cadmium on greater cognitive decline may be supported by studies of acetylcholine release or inflammation. Arachidonic acid, a member of the ω-6 series, could enhance acetylcholine release in the brain, which may be beneficial to cognitive performance [29,33]. Studies have illustrated that cadmium exposure could increase the activity of acetylcholinesterase, causing acetylcholine to be hydrolyzed and reducing its concentration [34]. Decreased acetylcholine release is related to low cognitive performance [35]. In addition, studies have shown that cadmium induces the formation of reactive oxygen species (ROS) [32]. Excessive ROS may cause inflammation and ultimately lead to neuronal damage and death [32,36]. ω-6 and ω-3 fatty acids exert anti-inflammatory properties through a competitive relationship [37]. Studies on healthy adults have found that increasing the intake of ω-6 fatty acids did not increase the concentration of inflammatory markers [38]. Also, studies indicated that arachidonic and linoleic acids may be related to inflammation reduction [38,39]. The mechanism of ω-6 fatty acids intake or blood cadmium on low cognitive performance is still unclear; we have only presented a possible mechanism for the interaction between ω-6 fatty acids intake and blood cadmium. Therefore, further research is required to explore the relationship between blood cadmium and ω-6 fatty acids intake on cognitive performance. Nuts and seeds (sunflower seeds, pumpkin seeds, walnuts) and vegetable oils (corn, sunflower, and soybean) are rich in ω-6 fatty acids [40]. Shellfish (oysters, bivalve mollusks, etc.) and offal products contain high concentrations of cadmium [32]. The cadmium content of plant foods depends on the degree of soil contamination and is generally of a higher concentration than that of meat, eggs, milk, and dairy products [32]. Vegetarians and shellfish consumers may therefore have higher cadmium intakes than omnivores. The elderly should pay attention to dietary diversity, which may have certain benefits for cognitive function. The strengths of this study were as follows. Firstly, our research sample was representative, including a relatively large sample of senior citizens across the four main races. Secondly, there were no previous studies on the interaction between ω-6 fatty acids intake and blood cadmium on low cognitive performance, and our research showed that there may be a synergistic effect between low-level ω-6 fatty acids intake and high-level blood cadmium. Thirdly, the database we used evaluated the cognitive performance of the elderly through three objective cognitive assessment methods, the CERAD, Animal Fluency, and DSST tests, which were carried out in a private and standardized environment that was more similar to a clinical than a household setting. However, a few limitations should be noted in our study. First, due to the cross-sectional nature of the NHANES, unmeasured confounding could not be ruled out, although confounders were adjusted for as much as possible in this study. Besides, there was no way to determine whether the measured low cognitive performance represented a change in an individual's cognitive performance. Second, the cognitive performance tests did not cover all domains of cognition. Adults who perform well in one cognitive test may not perform well in another. However, these three cognitive tests were chosen for their ease of administration and their use in other surveys.
Third, the dietary data in the NHANES were obtained from two 24-hour recall interviews, which may be subject to information bias and may not accurately reflect an individual's habitual daily intake.
2022-04-07T13:42:31.802Z
2022-04-07T00:00:00.000
{ "year": 2022, "sha1": "a1bc0997b87916fee65a1d034ddf1293bb198dec", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "a1bc0997b87916fee65a1d034ddf1293bb198dec", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
89426145
pes2o/s2orc
v3-fos-license
Wood properties related to pulp and paper quality in two Macaranga species naturally regenerated in secondary forests, Central Kalimantan, Indonesia

The utilization of wood resources from unutilized fast-growing tree species found in secondary forests was investigated by studying the wood properties, including anatomical characteristics, of two Macaranga species—M. bancana and M. pearsonii—growing naturally in secondary forests in Central Kalimantan, Indonesia. Several wood properties related to pulp and paper quality were also evaluated, including the Runkel ratio, Luce's shape factor, flexibility coefficient, slenderness ratio, solids factor, and wall coverage ratio. The mean basic density of these two species ranged from 0.23 to 0.31 g cm⁻³, while the mean values of vessel diameter, vessel element length, fiber diameter, fiber wall thickness, and fiber length ranged from 126 to 192 μm, 0.88 to 1.19 mm, 24.5 to 29.8 μm, 0.99 to 1.14 μm, and 1.42 to 1.69 mm, respectively. The lignin content of M. bancana and M. pearsonii wood was 27.2 and 28.0%, respectively. Almost all wood properties related to pulp quality showed better values than those reported for Acacia and Eucalyptus species, although the sheet density of the paper might be lower due to the higher solids factor, and the possibility of vessel picking is probably higher due to the longer vessel elements and larger vessel diameters. Based on these results, the wood from these two Macaranga species can be used as pulpwood.

INTRODUCTION

The demand for wood and wood-based products, including pulp and paper, has increased in Indonesia. To supply this demand, fast-growing tree species, including Acacia and Falcataria, among others, have been planted in Indonesia for the production of materials such as pulpwood and plywood (Wahyudi et al. 1999, Ishiguri et al. 2007). Other previously unutilized fast-growing tree species have also been planted to increase wood resources. The Southeast Asian region is home to many fast-growing tree species, typically found in secondary forests after shifting cultivation (Suzuki 1999a,b, Adi et al. 2014, Istikowati et al. 2014). However, utilization of wood resources from these fast-growing tree species is limited because little information is available regarding the properties and anatomical characteristics of the wood (Adi et al. 2014, Istikowati et al. 2014). These wood characteristics and the corresponding pulp properties have recently been investigated for three unutilized fast-growing tree species, terap (Artocarpus elasticus), medang (Neolitsea latifolia), and balik angin (Alphitonia excelsa), which grow naturally in secondary forests in South Kalimantan, to exploit these potential wood resources as new alternative raw materials for pulp production (Istikowati et al. 2016). However, further research is needed to characterize the potential wood resources from other unutilized fast-growing tree species. The pulp and paper qualities can be evaluated from wood properties including anatomical characteristics (Amidon 1981, Ona et al. 2001, Ohshima et al. 2005, Ashori and Nourbakhsh 2009, Yahya et al. 2010, Dutt and Tyagi 2011, Pirralho et al. 2014, Istikowati et al. 2016), although these qualities are also closely related to the chemical characteristics of wood.
The pulp and paper quality, based on wood properties such as anatomical characteristics, can be estimated using the following indices: Runkel ratio (Runkel 1949), Luce's shape factor (Luce 1970), flexibility coefficient (Malan and Gerischer 1987), slenderness ratio (Malan and Gerischer 1987), solids factor (Barefoot et al. 1964), and wall coverage ratio (Hudson et al. 1998). These indices have also been used for fast-growing tree species, such as Acacia species (Yahya et al. 2010) and Eucalyptus species (Hudson et al. 1998, Ona et al. 2001, Ohshima et al. 2005, Dutt and Tyagi 2011, Pirralho et al. 2014). Other fast-growing but unutilized tree species are members of the genus Macaranga (family Euphorbiaceae), which is naturally distributed in Thailand, Malaysia, New Guinea, Singapore, and Indonesia (Sosef et al. 1998). Species such as M. penangensis and M. lowii are mainly found in primary forests with lower disturbance levels, but many other Macaranga species are pioneer species that grow in secondary forests with medium to high disturbance levels (Slik et al. 2003). For example, Slik et al. (2003) pointed out that burned forests are mainly populated by Macaranga species. In Indonesia, Macaranga trees are commonly found in secondary forests that have regenerated naturally after shifting cultivation. However, little information is available regarding the wood properties and anatomical characteristics of this genus (Killmann 1990, Sosef et al. 1998, Ogata et al. 2008). The main objective of this study was to explore the potential utilization of the wood resources from unutilized fast-growing tree species found in secondary forests in Indonesia. In this paper, wood properties and anatomical characteristics were investigated for two Macaranga species (M. bancana (Miq.) Müll. Arg. and M. pearsonii Merr.) growing naturally in secondary forests in Central Kalimantan, Indonesia. The wood properties were also evaluated in terms of pulp and paper qualities to explore the possibility of using these wood resources as alternative raw materials for pulp and paper production.

Materials

Wood samples were collected from six Macaranga trees. According to Slik et al. (2003), these two species are pioneer species found in secondary forests with medium to high disturbance levels. Table 1 shows the stem diameter and tree height of the sample trees. Core samples (5 mm in diameter) for determining the basic density and anatomical characteristics were collected at breast height from each tree using an increment borer (Haglöf) (Fig. 1).

Basic density and anatomical characteristics

Core samples were cut into small segments at 1 cm intervals, from pith to bark, to determine the radial variations in basic density and anatomical characteristics (Fig. 1). Basic density was determined by measuring the green volume of each 1 cm core segment by the water displacement method and then oven-drying the segments at 105℃ to a constant weight, taken as the oven-dry weight. Basic density was calculated by dividing the oven-dry weight by the green volume. Transverse sections of the core samples, 20 µm in thickness, were prepared with a sliding microtome (ROM-380, Yamatokohki) and then stained with safranin, dehydrated in a graded ethanol series, cleared in xylene, and mounted on glass slides.
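The basic density determination described above is a simple ratio of oven-dry mass to green volume. A minimal sketch of that calculation over a radial series of core segments (the segment masses and volumes below are illustrative placeholders, not measured data):

```python
import numpy as np

def basic_density(oven_dry_weight_g, green_volume_cm3):
    """Basic density of a core segment: oven-dry weight divided by green
    volume (g cm^-3). Green volume comes from water displacement;
    oven-dry weight from drying at 105 degrees C to constant weight."""
    return np.asarray(oven_dry_weight_g) / np.asarray(green_volume_cm3)

# Placeholder values for 1 cm core segments from pith to bark
# (illustrative only, not measured data)
dry_g = np.array([0.055, 0.058, 0.061, 0.063])
green_cm3 = np.array([0.196, 0.197, 0.198, 0.196])  # ~5 mm diameter, 1 cm long
print(np.round(basic_density(dry_g, green_cm3), 3))  # radial density profile
```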
Photomicrographs taken with a digital camera (E-P3, Olympus) mounted on a microscope (BX51, Olympus) were used for measuring vessel diameter, fiber diameter, and fiber wall thickness, and the images were examined using image analysis software (ImageJ, National Institutes of Health) (Fig. 2). The vessel diameter was measured for 30 vessels, and fiber diameter and fiber wall thickness were measured for 50 fibers at each radial position. Small wood blocks (1×1×5 mm) were prepared for measuring vessel element length and fiber length and were macerated with Schulze's solution (6 g potassium chlorate in 100 mL 35% nitric acid). At each radial position, 30 vessels and 50 fibers were measured using a microprojector (V12, Nikon) and a digital caliper (CD-30C, Mitutoyo).

Wood properties related to pulp quality

The pulp and paper properties were evaluated by calculating the following wood properties related to pulp quality: Runkel ratio (Runkel 1949), Luce's shape factor (Luce 1970), flexibility coefficient (Malan and Gerischer 1987), slenderness ratio (Malan and Gerischer 1987), solids factor (Barefoot et al. 1964), and wall coverage ratio (Hudson et al. 1998). These properties were calculated from the fiber morphologies determined by the method described above. The calculation formulas are listed in Table 2.

Lignin content

The lignin content was determined by the acetyl bromide method (Iiyama and Wallis 1988). Small wood samples were prepared from the core samples with a sliding microtome. Each small wood sample (5 mg oven-dry weight) was extracted with a 95% ethanol-toluene mixture (1:2, v/v) in a Soxhlet extractor for 6 hours. The extracted samples were put into 15 mL test tubes containing 5 mL 25% acetyl bromide in acetic acid and 0.2 mL 70% perchloric acid and heated at 70℃ for 30 min in a block heater (MG-2200, EYELA). This reaction mixture was added to a mixture of 10 mL 2 M aqueous NaOH and 20 mL acetic acid, and the volume was adjusted to 100 mL with acetic acid. The absorbance at 280 nm was measured with a spectrophotometer (V-650, JASCO). The lignin content was calculated by the following equation:

Lignin content (%) = 100 × (As − Ab) × V × (20.09 × W)⁻¹

where As and Ab are the absorbances at 280 nm for the sample and blank, respectively, V is the volume of the measurement solution, and W is the oven-dry weight of the sample.

Basic density

The mean values for the basic density of M. bancana and M. pearsonii wood ranged from 0.29 to 0.31 g cm⁻³ and from 0.23 to 0.31 g cm⁻³, respectively (Table 3). The previously reported range of basic densities of wood from Macaranga species was about 0.30 to 0.45 g cm⁻³ (Killmann 1990, Suzuki 1999a, Ogata et al. 2008, Chin et al. 2013). Killmann (1990) reported a basic density for M. hosei wood of 0.27 to 0.34 g cm⁻³. Our results for the mean basic density of the two Macaranga species studied here were therefore similar to those reported for the other Macaranga species. However, the basic densities obtained here (Table 3) were lower than those reported for A. mangium and Eucalyptus species used for pulpwood production. Therefore, paper produced from M. bancana and M. pearsonii woods may have some advantages in terms of strength and sheet density compared to that from Acacia and Eucalyptus woods. However, the overall pulp yield might be lower.

Anatomical characteristics

The mean values for the anatomical characteristics of the two Macaranga species studied here are listed in Table 3. The anatomical values obtained for M. bancana and M. pearsonii were similar to those reported by Ogata et al. (2008).
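The indices in Table 2 follow the standard formulations of the cited sources. The sketch below assumes those standard definitions, with fiber diameter D, wall thickness w, lumen diameter l = D − 2w, and fiber length L; the input values are illustrative numbers chosen within the ranges reported in Table 3, not species means.

```python
def pulp_indices(D: float, w: float, L: float) -> dict:
    """Fiber-morphology indices related to pulp and paper quality.

    D: fiber diameter (um), w: fiber wall thickness (um), L: fiber
    length (um). Standard definitions attributed to Runkel (1949),
    Luce (1970), Barefoot et al. (1964), Malan and Gerischer (1987),
    and Hudson et al. (1998)."""
    l = D - 2.0 * w  # lumen diameter
    return {
        "runkel_ratio": 2.0 * w / l,
        "luce_shape_factor": (D**2 - l**2) / (D**2 + l**2),
        "flexibility_coefficient": l / D,
        "slenderness_ratio": L / D,
        "solids_factor_um3": (D**2 - l**2) * L,
        "wall_coverage_ratio": 2.0 * w / D,
    }

# Illustrative values within the ranges reported in Table 3
for name, val in pulp_indices(D=28.0, w=1.0, L=1_600.0).items():
    print(f"{name}: {val:,.3f}")
```

With these inputs the sketch reproduces the orders of magnitude reported in Table 5 below: a Runkel ratio under 0.1, a flexibility coefficient near 0.93, a slenderness ratio near 57, and a solids factor on the order of 170×10³ µm³.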
Wood with long, large-diameter vessel elements produces paper prone to vessel picking, in which vessel elements are picked from the surface of the paper during the printing process and deposited on the printing surface (Hudson et al. 1998, Drew and Pammenter 2006). Therefore, wood with short, small-diameter vessel elements is preferable for paper production. The reported mean vessel diameters were 136 µm for A. mangium (Nugroho et al. 2012), 120 µm for E. camaldulensis, 157 µm for E. globulus (Ona et al. 2001), and 156 µm for E. tereticornis (Sharma et al. 2005), while the mean vessel element lengths were 0.24 mm for A. auriculiformis (Chowdhury et al. 2009), 0.31 mm for E. tereticornis (Sharma et al. 2005), 0.22 mm for E. camaldulensis, and 0.19 mm for E. maculata (Pirralho et al. 2014). As shown in Table 3, the vessel diameters and vessel element lengths of the two Macaranga species studied here were relatively larger and longer than those reported for Acacia and Eucalyptus species. These vessel morphology results indicate the possibility of a relatively higher occurrence of vessel picking in paper made from the two Macaranga species when compared to paper made from Acacia and Eucalyptus species.

Lignin content

Previously reported lignin contents include values for an Acacia hybrid (Yahya et al. 2010), 29.3% for E. grandis (Dutt and Tyagi 2011), 27.5% for E. regnans (Iiyama and Wallis 1988), and 33.2% for E. urophylla (Dutt and Tyagi 2011), with the average lignin content of commercial pulpwood from fast-growing trees ranging from about 25 to 30%. As shown in Table 4, the lignin content of the two Macaranga species studied here (27.2% for M. bancana and 28.0% for M. pearsonii) was similar to or somewhat lower than values previously reported for other fast-growing tree species used for pulpwood production. The lignin content therefore indicates that the wood from M. bancana and M. pearsonii has characteristics for pulpwood production similar to those of the fast-growing Acacia and Eucalyptus species currently used commercially.

Wood properties related to pulp quality

The Runkel ratio is related to suitability for papermaking: fibers with a Runkel ratio of less than 1.0 are suitable for use as pulp (Runkel 1949). Fibers with a high Runkel ratio are stiffer and form bulkier paper with a lower bonded area when compared to fibers with a low Runkel ratio (Ashori and Nourbakhsh 2009). A lower Runkel ratio also indicates that the fibers easily collapse to form paper with good strength properties (Istikowati et al. 2016). The mean Runkel ratios in the two species studied here were less than 0.1 (Table 5), suggesting that the fibers from both species would produce a good-quality paper. The mean values of Luce's shape factor were 0.08 and 0.09 for the two species studied here (Table 5). Luce's shape factor is an index of resistance to beating in the pulp, so a low value indicates decreased resistance to beating in papermaking (Luce 1970). Pirralho et al. (2014) reported that Luce's shape factor ranged from 0.39 to 0.74 in several Eucalyptus species. Ohshima et al. (2005) also reported mean values of Luce's shape factor of 0.37 for E. camaldulensis and 0.42 for E. globulus. The values for Luce's shape factor in M. bancana and M. pearsonii were therefore considerably lower than those reported for Eucalyptus species. The flexibility coefficient is related to paper strength (Malan and Gerischer 1987, Ashori and Nourbakhsh 2009, Yahya et al. 2010, Pirralho et al. 2014).
Ashori and Nourbakhsh (2009) reported that the flexibility coefficient expresses the potential of the fiber to collapse during beating or during drying of the paper web. The collapsed fibers then provide a greater bonding area and therefore a stronger paper. In addition, Moriya (1967) reported that the flexibility coefficient was positively related to paper strength indices, such as the burst factor and tear factor. The reported values for the flexibility coefficient ranged from 0.37 to 0.65 in several Eucalyptus species (Pirralho et al. 2014) and were 0.70 and 0.72 in E. camaldulensis and E. globulus, respectively (Ona et al. 2001). In the present study, the mean values for the flexibility coefficient were 0.93 for M. bancana and 0.92 for M. pearsonii (Table 5). The mean values for the slenderness ratio were 58.7 in M. bancana and 60.8 in M. pearsonii (Table 5). The slenderness ratio is related to the tearing strength and folding endurance of paper (Malan and Gerischer 1987, Yahya et al. 2010): a high ratio indicates a better formed and well-bonded paper (Ashori and Nourbakhsh 2009). Previously, Pirralho et al. (2014) reported values for the slenderness ratio ranging from 39.4 to 48.4 for several Eucalyptus species. Ohshima et al. (2005) also reported ratios of 50.5 to 56.5 and 57.7 to 59.9 in 14-year-old E. camaldulensis and E. globulus, respectively. The values for the two Macaranga species studied here were similar to or slightly higher than these previously reported values. Ona et al. (2001) reported values for the solids factor of 46×10³ µm³ and 91.2×10³ µm³ for 14-year-old E. camaldulensis and E. globulus, respectively. In addition, they found a significant negative relationship between the solids factor and sheet density. The mean values for the solids factor were 167×10³ µm³ for M. bancana and 182×10³ µm³ for M. pearsonii (Table 5), suggesting that the sheet density of paper produced from these Macaranga species might be lower than that produced from Eucalyptus species. The wall coverage ratio is an index for bending resistance (Hudson et al. 1998) and is related to fiber flexibility (Amidon 1981). A material with a wall coverage ratio less than 0.4 is considered to be good pulpwood (Kami parupu gijutsu kyokai 1969). In the present study, the mean values for the wall coverage ratio for M. bancana and M. pearsonii ranged from 0.07 to 0.08 and from 0.07 to 0.10, respectively (Table 5). CONCLUDING REMARKS The basic density, anatomical characteristics, and lignin content were investigated for wood from Macaranga bancana and M. pearsonii trees growing naturally in secondary forests of Central Kalimantan, Indonesia, in order to determine the usefulness of these trees as wood resources for pulpwood production. The mean values for basic density, anatomical characteristics, and lignin content were within the ranges of previously reported values for other Macaranga species. When the wood properties related to pulp quality were compared with those of Acacia and Eucalyptus species currently used for commercial pulpwood, both M. bancana and M. pearsonii showed better properties (Table 6), although these Macaranga species have lower basic density, longer vessel elements, larger vessel diameters, and higher solids factors. Therefore, the wood from these two Macaranga species could produce paper with higher strength properties, but with lower pulp yield, a higher likelihood of vessel picking, and lower sheet density, compared with papers currently made commercially from other fast-growing trees.
In addition, for kraft pulp production, mixing these Macaranga woods with other commercial pulpwood could compensate for the low pulp yield from the Macaranga wood. ACKNOWLEDGEMENTS This study was financially supported by Strategic Funds for the Promotion of Science and Technology of the Japan Science and Technology Agency (project title: Creation of a Paradigm for the Sustainable Use of Tropical Rainforest with Intensive Forest Management and Advanced Utilization of Forest Resources).
Educators' Views on Using Humanoid Robots With Autistic Learners in Special Education Settings in England Researchers, industry, and practitioners are increasingly interested in the potential of social robots in education for learners on the autism spectrum. In this study, we conducted semi-structured interviews and focus groups with educators in England to gain their perspectives on the potential use of humanoid robots with autistic pupils, eliciting ideas, and specific examples of potential use. Understanding educator views is essential, because they are key decision-makers for the adoption of robots and would directly facilitate future use with pupils. Educators were provided with several example images (e.g., NAO, KASPAR, Milo), but did not directly interact with robots or receive information on current technical capabilities. The goal was for educators to respond to the general concept of humanoid robots as an educational tool, rather than to focus on the existing uses or behaviour of a particular robot. Thirty-one autism education staff participated, representing a range of special education settings and age groups as well as multiple professional roles (e.g., teachers, teaching assistants, speech, and language therapists). Thematic analysis of the interview transcripts identified four themes: Engagingness of robots, Predictability and consistency, Roles of robots in autism education, and Need for children to interact with people, not robots. Although almost all interviewees were receptive toward using humanoid robots in the classroom, they were not uncritically approving. Rather, they perceived future robot use as likely posing a series of complex cost-benefit trade-offs over time. For example, they felt that a highly motivating, predictable social robot might increase children's readiness to learn in the classroom, but it could also prevent children from engaging fully with other people or activities. Educator views also assumed that skills learned with a robot would generalise, and that robots' predictability is beneficial for autistic children—claims that need further supporting evidence. These interview results offer many points of guidance to the HRI research community about how humanoid robots could meet the specific needs of autistic learners, as well as identifying issues that will need to be resolved for robots to be both acceptable and successfully deployed in special education contexts.
Keywords: education, special education, schools, teachers, autism, children, humanoid robots, social robots INTRODUCTION Robotic systems targeted toward people on the autism spectrum, especially children, are a growing subfield of social robotics and human-robot interaction (HRI) research. Autism is a lifelong neurodevelopmental condition or spectrum of related conditions that affects the way a person interacts with others and experiences the world around them (American Psychiatric Association, 2013). Many autistic individuals also have additional difficulties with spoken language and/or intellectual disability, as well as co-occurring mental health problems, especially anxiety and attentional difficulties-all of which can involve complex, long-term support needs. In England, ∼120,000 children are documented as having autism as their primary form of special educational need and disability [SEND; (Department for Education, 2018)]. Of these, 28% of autistic children are educated in special schools and represent over a quarter of the total special school population. The children attending these schools often have complex needs, including an additional intellectual disability and/or limited-to-no spoken communication, and often require much higher levels of support from specialist teaching and allied-health staff than regular, mainstream schools can typically provide. These particular children are frequently overlooked by researchers (Tager-Flusberg and Kasari, 2013) but, along with the specialist staff that support them, represent two sizeable populations of potential robot users in England-and were thus the focus of the current investigation. Autistic children are thought to be especially interested in and motivated by robots, potentially related to the fact that they are interactive-but programmed and ultimately rule-based-devices.
Indeed, robot-based programmes are often cited to be potentially beneficial for this group in particular because they offer the possibility of fairly predictable and consistent interactions (e.g., Dautenhahn, 1999; Dautenhahn and Werry, 2004; Duquette et al., 2008; Rudovic et al., 2017; Straten et al., 2018). These are precisely the sort of interactions that autistic people are often said to favour (Pellicano and Burr, 2012; Lawson et al., 2014). The extant HRI literature suggests that autistic children may be highly engaged during robot interactions (Robins and Dautenhahn, 2006; Straten et al., 2018), and show spontaneous joint attention and other social behaviours that are often challenging for this group (Anzalone et al., 2014; Warren et al., 2015). Yet, existing research on social robotics for autism often constitutes proof-of-concept studies with small samples (n < 10), single rather than repeated robot-child interactions, and incomplete information about the autistic participants, making it more difficult to understand the potential applicability of the work as education or therapy [see reviews by (Diehl et al., 2012; Scassellati et al., 2012; Begum et al., 2016), for discussion]. Existing autism and HRI studies have predominantly studied children interacting with robots in lab-based settings (e.g., Salvador et al., 2015; Yun et al., 2016) or closely controlled, researcher-designed procedures that effectively re-create labs in schools (e.g., Kozima et al., 2007; Robins et al., 2012). Although there is much to be learned from studies in controlled lab-like settings, moving robots from the lab into the classroom (or "the wild"), where teachers apply the teaching programme unsupervised, is no straightforward task (Diehl et al., 2012; Huijnen et al., 2016). Embedding robots into existing autism contexts and pedagogical practices requires in-depth understanding of specific contexts and practices, and of the adult users who will support robot-based programmes. Understanding the views of these adults is therefore essential, as they are key decision-makers for the adoption of new technologies, and would be the ones to directly facilitate any future use of robots. Several studies have sought teachers and professionals' views to explore implementing robots within regular educational settings (Fridin and Belokopytov, 2014; Kennedy et al., 2016; Serholt et al., 2017; Cheng et al., 2018) but only a handful have done so within special education settings. Diep et al. (2015) interviewed six teachers from a Canadian school for children with multiple and complex needs about their perceptions of social robots, in relation to an anticipatory governance framework (Guston, 2014). Although their results make some reference to autistic learners, they do not primarily focus on this group. In a larger study, Hughes-Roberts and Brown (2015) conducted interviews and focus groups with 20 teachers in special (though not autism-specific) education settings in the UK, incorporating a demonstration of a humanoid robot, NAO. Teachers stressed sustained engagement as a key indicator of success for many of their SEND pupils, and thus considered facilitating engagement as a key robot requirement. They highlighted three teacher-proposed robot activities, which included adults facilitating one or more children's game-like interactions. Perceived barriers to adoption focused on technical factors, describing the need for simple, fast, versatile, and usable robot controls.
The only other limiting factor mentioned was the potential for robots to distract students from learning-at least while the robots were new. It was unclear, however, whether these educators considered, overall, robots to be relevant, appropriate, and feasible for their SEND settings and learners-and, most relevant to the current study, whether they might be especially useful for autistic learners. Huijnen et al. (2017) took a related approach, combining focus groups and co-creation sessions with autism stakeholders and professionals (including teachers and other school-based roles, all in the Netherlands) to develop 10 specific "intervention templates" for the humanoid robot, KASPAR. These included clear statements of goals, and explicitly mapped out the planned roles and "flow" of an interaction between a child, robot, and professional. This group discussed the role, requirements, and potential impact of the adult robot user in far more detail than any other study, ultimately "expect[ing] that the person operating KASPAR is a huge determiner of the success of the interaction and thereby of the intervention" (p. 3085). They also discussed characteristics or subgroups of autistic learners in relation to the suitability of robot use and, in a related paper, identified the potential educational roles that KASPAR could play, including those of a trainer, prompter, or mediator (Huijnen et al., 2019). The findings from Hughes-Roberts and Brown (2015) and Huijnen et al. (2016, 2017) suggest that many educators seem to be broadly receptive-albeit cautious-toward at least some purposes of robots in autism or special education [though see (Diep et al., 2015), for more negative or mixed sentiments]. Educator interviews provide a valuable starting point for understanding whether and how robots might be integrated into existing educational practices, and might transition into being teacher-(not researcher-)managed tools. Yet, these studies only give a partial picture of the information researchers need to know to work toward robot deployment with autistic learners within special education settings. This is for three key reasons. First, these learners' specific needs and the strategies used to support them can be very distinct from those of learners educated within mainstream settings (Eaves and Ho, 1997). Greater knowledge is needed about the utility of robot-based programmes for these particular children in their own specific, specialist contexts. Second, these and other existing studies have frequently asked educators to answer questions or discuss ideas in relation to demonstrations of existing robots (e.g., Hughes-Roberts and Brown, 2015; Coeckelbergh et al., 2016; Huijnen et al., 2016; as in Cheng et al., 2018). This approach can be useful if the goal is to generate or assess applications for those specific robots, but it is necessarily limiting with respect to discussing perceptions and applications of robots as a category of tools, or for generating novel use cases, as it primes participants to think of that specific robot when developing their ideas. Third, much existing research has either used surveys and questionnaires (e.g., Coeckelbergh et al., 2016; Kennedy et al., 2016; Cheng et al., 2018) to ask educators to respond to topics and ideas that have been pre-identified by researchers, or have effectively leveraged educators' expertise for solving particular design or pedagogical problems (e.g., Huijnen et al., 2016, 2017).
Educators' priorities and ideas about robotics might be different than those of researchers, but existing work seems to have given limited opportunities to explore these issues. The current study is part of the European Union funded DE-ENIGMA project (de-enigma.eu), in which teams with technical and autism education expertise are collaborating to explore the potential of humanoid robots as tools in autism education, particularly with respect to teaching social and emotional skills, and to develop real-time multimodal processing of autistic children's behaviour. One strand of the project sought to better understand current specialist autism education settings in England, i.e., the target users and context of use for DE-ENIGMA outputs. This paper reports Part B of a two-part interview study with autism educators. We focused on educators, rather than a wider range of autism stakeholders, because DE-ENIGMA's focus has been specifically on schools. Part A (reported in Ainger et al., Manuscript in Preparation) investigated autism educators' current goals and pedagogical practices. Part B, reported here, discussed the potential future use of robots. Our goal in Part B was to elicit educators' views and perspectives on the potential use of humanoid robots with autistic learners in special schools, to better understand the factors perceived to be important for deploying robots in these settings. We also focused on understanding educators' perceptions and suggested applications of humanoid robots as tools for teaching social and emotional skills, due to the focus on this topic within the DE-ENIGMA project. Unlike some previous studies that have asked educators to respond to ideas and topics pre-identified by researchers (e.g., in surveys and questionnaires), we used a semi-structured interview schedule, with researchers exploring participants' ideas in detail, following from fairly open questions. Participants and Educational Settings Thirty-one educators (female: n = 25) took part in individual semi-structured interviews or small focus groups, between December 2016 and January 2018. These educators were recruited via convenience sampling through existing community and personal contacts. All of our participants worked in specialist settings in England: 26 in special schools (n = 7, autism-specific; n = 18, general SEND), five in autism resource bases attached to a mainstream school, and one working across multiple SEND settings. Autistic children educated in special schools in England usually have a high degree of adult interaction and support throughout the school day. In special schools, classes are small (often 5-10 children), with a highly trained teacher and a team of teaching assistants, who often have less specialist training. There is further input from specialist allied health professionals, including speech and language therapists and occupational therapists. Consistent with this context, our participants reported working with learners on the autism spectrum in a variety of educational roles, including as a primary (n = 12) or secondary (n = 5) teacher, teachers working across multiple ages and/or school settings (n = 2), a teaching assistant (n = 2), a headteacher or deputy headteacher (n = 3), a speech and language therapist (n = 3), or an occupational therapist (n = 2). Many participants indicated more than one autism-related role and had worked with multiple age groups over time, from Early Years education (<5 years), up to age 18-19 years. 
They varied widely in their level of experience, ranging from <1 to 18 years' experience in their current education setting (M = 4.7 years, SD = 4.1) (see Supplementary Table 1 for participant details). Procedure Fourteen participants (female: n = 11) completed individual, semi-structured interviews in a quiet room at the university or school, and 17 participants took part in one of three focus groups (female: n = 14) in participating schools (two groups contained six participants, one contained five), facilitated by a researcher (see Supplementary Table 1). Part A of the interview study (Ainger et al., Manuscript in Preparation) focused on current educational contexts and practices, including participants' aspirations for their autistic students, their views on how social and emotional skills are currently taught within classrooms, curricula and supports used in their setting, and uses of technology (see Supplementary Table 2). To introduce the discussion of humanoid robots in Part B, the focus of the current study, participants viewed six example images of existing robots (Milo, KASPAR, NAO, Flobi, PARLO, and Pepper). They were not given any further information about these particular robots, their current capabilities, or examples of use and were encouraged not to be concerned about issues of technical feasibility. Instead, they were asked to consider the potential uses of humanoid robots for autistic children's learning, including potential benefits and concerns (see Supplementary Table 2). While the interview did not explicitly ask about respondents' prior experience with or knowledge of robotics, almost all educators stated that they had no prior experience or knowledge of robots. The exceptions were one educator working with older students, who reported using commercially available Bee-Bot® robots to teach science and programming, and some educators who had seen previous demonstrations of a humanoid robot (Milo) in connection with the DE-ENIGMA project. The protocol was approved by UCL Institute of Education Research Ethics Committee (REC857). All participants gave written informed consent to the interviews, including audio recording, in accordance with the Declaration of Helsinki. The individual semi-structured interviews lasted 30-54 min in total (M = 40 min) and focus groups lasted 52-78 min (M = 62 min). The robotics-focused questions (Part B) lasted 5-12 min in individual interviews, equating to 14-31% of the total time (M = 8.5 min, 20%), and the robot section of focus groups lasted 15-18 min, or 24-35% of total discussion time (M = 17 min, 29%). Thematic Analysis Audio-recordings were transcribed verbatim. The robot interview data were analysed using thematic analysis (Braun and Clarke, 2006), which included familiarisation with the data; generating initial codes; generating, reviewing, defining, and naming themes; and compiling this report. We adopted an inductive approach (i.e., without integrating the themes within any pre-existing coding scheme or preconceptions of the researchers) within an essentialist framework (to report the experiences, meanings, and reality of the participants). Two authors (AA and EP) independently familiarised themselves with the data and liaised several times to review the themes and subthemes, focusing on semantic features of the data, resolving discrepancies and deciding the final definitions of themes and subthemes. Analysis was thus iterative and reflexive in nature.
Participants' responses to Part A of the interviews, on current educational goals and practices, were analysed and reported separately (Ainger et al., Manuscript in Preparation). RESULTS We identified four themes in educators' interviews (see Figure 1 for summary of themes and subthemes). Throughout, educator quotes are attributed via participant ID numbers. Theme 1: Autistic Children Are Likely to Find Robots Engaging Participants stressed the importance of engagement and motivation for learning, and anticipated that the autistic learners in their settings would be "so interested" in and motivated by humanoid robots, potentially more motivated than when interacting with adult educators, or non-technical activities. One explained: "I think if the robot's doing it [modelling behaviours], it's more captivating than just us as a person. This is a toy that plays back essentially, it's engaging" [101]. Educators also felt that this engagement could have a positive impact on their readiness to learn: "They would be really happy to work with it for longer periods of time, much longer than usual because, let's be honest, a piece of paper and a worksheet, it's not as exciting as a humanoid robot can be" [011]. Participants reported that, for some children, the attraction of a humanoid robot might be sufficient to encourage them to engage in otherwise challenging social interactions: "engagement is a big key to the social barriers that children may face, and if they're able to engage and experience some of those interactive activities, which they avoid at all cost in other settings. . . I really think [a robot] could support the social skills" [004]. Yet, robot attractiveness and engagement were not perceived as wholly positive. Respondents often discussed this characteristic alongside potential drawbacks, including concerns "about the extent we're going to use the robots. . . when we're talking about autistic children, we need to be very careful with something [not] to become an obsession" [011]. Another educator commented: "particularly with the younger ones with autism, we're trying to make them think that people are amazing. . . so all the teachers in the sessions try to become the most exciting thing in the room" [105]. For some children, educators further felt that access to a highly attractive robot could conflict with overarching educational goals to help autistic learners attend to and understand other people (see also Theme 4B). Theme 2: Robots Offer Predictable, Consistent Interactions; Children Know What to Expect Educators in this study expected humanoid robots to be "consistent" and "obviously predictable" compared to people, who "behave in all sorts of different manners and ways" [015]. One educator summed it up: "Robots, unlike humans, they will always be the same. Their tone of voice will always be the same, their inflection will always be the same, the body language is always the same. They're very predictable, like if you say a certain thing, it will say a certain thing back to you. So I think with kids with autism, they love that kind of thing, predictability" [014]. Overwhelmingly, they saw predictability as a potential benefit for their students but, as in Theme 1, they frequently discussed this benefit alongside less-positive implications. 
Subtheme 2A: Predictability Is Understandable and Non-threatening Educators emphasised autistic children's difficulty in making sense of other people's often-unpredictable behaviour: "this is a struggle, they cannot predict people but a robot is quite predictable with its reactions" [017]. Robots were perceived to be "easier" for children because "they know what to expect" [010] and could help them to "predict what might happen" [015]. Educators often talked about the importance of their students feeling safe and secure, and thought that "a robot like that would be something safe for my students, safe to interact, safe to communicate. . . they wouldn't feel threatened" [008]. One specific, anticipated benefit of a robot's predictability was that children might feel more at ease interacting with robots, relative to how they feel in other school activities or human interactions: "These children might respond to the robots better than the way they respond to other people because they might predict their reaction. So, for example, if they know that when they say 'happy' he smiles, it could be less scary for them" [002]. Some educators also felt that this benefit could have a positive impact on their learning: "Many of my students won't push themselves harder because they are afraid of making a mistake. Maybe if a robot like that would exist in my classroom, they wouldn't feel so intimidated or threatened from the teacher's authority and they would be able to try different things and that would help them progress and develop in different aspects" [008]. Subtheme 2B: Consistency Could Support Learning Respondents also highlighted the possible benefits of a robot's consistency or "sameness, " particularly in its visual appearance and manner. One educator remarked: "We do have different people coming in as supply teachers or supply TAs [teaching assistants] for the day and, if some of the students do not like the way someone is dressed or smells or talks to them, they won't communicate with them. But a robot like that will have the same specific characteristics every single day and that's something that would be very useful for my students. They will know that this robot would look exactly the same every day and they will be able to build a trust with the robot and communicate more" [008]. Another respondent suggested using the robot for helping to focus their attention on academic learning due to their unchanging manner and appearance: "[autistic students] can only concentrate to the words that the robot says. When we [staff] used to teach them, they could concentrate on everything else on us, like the way we move our hands, the way that our hair is today. So I think a robot could actually attract their interest on a specific thing that we want them to learn" [005]. Educators also used "consistency" to refer to a concept sometimes described in the autism-robotics literature as repeatability: a robot could repeat usually-variable social behaviour (e.g., a facial expression) over and over, helping autistic children to begin identifying patterns and associating meanings with the behaviour. "The challenges of face-to-face and eye contact and response to facial expression and understanding somebody's facial expressions are so inconsistent that, with a robot, [autistic children] can start to learn what those consistencies are and it becomes much easier for them to respond to them, rather than a human facial expression, which could mean all kinds of things. I think with a robot they learn very quickly. 
. . [they] may start to associate meaning with some of those facial expressions and recognise those in others and maybe seek some of those communicative responses" [004]. Educators also felt that robotic consistency might be particularly advantageous if applied to classroom interventions that require consistency and rule following, such as the Picture Exchange Communication System (PECS; Bondy and Frost, 1994), a widely-used alternative/augmentative communication system. Indeed, they felt that a robot might deliver such an intervention with more fidelity than a human teacher: "The PECS system is very definite and it's very, very rule based, but as humans, there's distractions and that means the delivery of this rule-based training we often get wrong. Robots would do it consistently so that a child, an autistic child working with a robot that's programmed to deliver training only in that specific way following that specific algorithm, [the child is] going to respond much better because they're getting a consistent response. So I think you'd have better outcomes if robots are teaching autistic kids certain protocols" [013]. Subtheme 2C: But the World Is Fast-Paced and Unpredictable Educators repeatedly highlighted that, unlike the expected behaviour of robots, both humans and life are unpredictable, and that one key educational goal was to support children in learning to deal with this uncertainty. Educators were concerned that predictable and consistent robots would potentially hinder children's progress in this regard: "[technology], largely speaking, you know, does what they want it to do. What we want them to understand is that the world is unpredictable and the world has huge variety in it, and we want them to be able to respond quite flexibly to things, as well as follow somebody else's agenda" [201]. Educators noted that, while a robot might not "mind how long it takes for a child to do anything. . . it could be really deskilling for the child because you don't have all the time in the world with a robot waiting for you when you're an adult, like you do have to just go on the bus and swipe your [bus pass], you do just have to transition" [103]. Transitioning between activities and/or settings can be an area of particular difficulty for autistic children. Our participants also felt that, while children might learn more easily or feel more comfortable with a highly predictable robot than when learning with a person, that type of learning could be counter-productive in the long run because it does not support skill generalisation: "I don't know, maybe it's going to be too predictable for them, and then how will they generalise when they actually have to interact with actual people. So maybe by teaching them this predictability, it's not that easy to help them generalise it" [010]. Some educators reported that a robot could provide a "good base" for teaching simple social skills but warned, "if our goal is to teach kids social skills and interaction and how to interact into the world and the community, then that's not through robots because at the end of the day, our community and our real world are not made of robots. So it's very important that we phase out a bit and then have more human contact" [014]. Theme 3: Roles of Robots in Autism Education Educators' examples of how robots could potentially be used varied widely depending on their settings and the profile of their learners. Nevertheless, there were several key commonalities across the interviews. 
Subtheme 3A: "It Is Not a Toy": Robot Use Must Be Planned and Evaluated Educators agreed that robots are "not a toy." Rather, any use of social robots in their settings would need to be planned by teachers, "really thinking carefully, 'How do I use it? Is it appropriate?'" because a robot "might not be appropriate for every single child" [203]. Some framed the need for planning in relation to their past experiences with iPads in class. Like robots, iPads were perceived to be attractive, flexible technologies but, according to educators, were often introduced without clear goals, creating knock-on problems in which autistic learners might "see an iPad or a technological device as something that is mainly a toy. They can develop some obsessive behaviours or they will be repeatedly asking for an iPad without completing the work" [008]. One interviewee neatly summarised: "I don't think [robots] should end up being used like iPads, just for fun and just as a toy. I think they, when you use them, should have a very clear target for why you're using it and for a very clear amount of time and with a purpose" [011]. Indeed, educators emphasised that educational planning would therefore need to consider whether the robot was "appropriate for every single child... really just thinking carefully, like everything we do here, 'Oh is that child ready?' and to really teach something specific, not thinking just putting them in the same room with the robot and then leave and think they'll know everything" [204]. Another focus group agreed: "It wouldn't have to be like, 'okay, now we're learning the social skills next time the robot is coming up' but I would look at each child and see like, 'okay, how am I going to use it with that learner' and then find a time and a setting that feels appropriate" [302]. One respondent further suggested that planning to introduce robots or any new tool must incorporate evaluation, perhaps especially if teachers have high expectations and perceive the new tool as a "[scheme] that they believe will work and will fix everything." She noted, "We say, 'oh yes, try that, that might work,' and there's nobody assessing as to whether or not it is working. We need a baseline check to start with and then we need to check whether or not it's worked at the end of the intervention. Interventions are incredibly expensive so therefore you have got to have the mindset that you're going to look to see whether that intervention has worked" [007]. Subtheme 3B: Robots Should Not Be One-Size-Fits-All; They Must Be Personalised In autism education, "personalisation" is a fundamental task in which educators choose, adapt (and often invent) tools and strategies "that are catered to that child" [301]. The educators we spoke to clearly expected that the same type of fine-grained child-level personalisation would be necessary and "programmable" with robots, in addition to choices about the types of learning activities different students may do: "[robots have] got to be based on the likes and dislikes of the child. . . the adjustments would be, you know, that [the robots are] programmed to do a variety of different things" [013]. "If I have a very verbal student who just needs to practise reciprocal conversation or needs to practise its tone of voice or practise identification of feelings and expressions, then I'd program the robot for that.
But then if I have only that one robot but then I want to use it with a different kid, who's non-verbal, doesn't like interacting with people at all, then I would have the robot programmed to not say anything, to not maybe do any sudden movements. I would program it depending on what level the student is or what social skill I want to work on" [014]. One respondent agreed wholeheartedly with the importance of technology personalisation, but questioned how well teacher-implemented robot personalisation would work in practice, based on their current experiences with a dyslexia-focused app, Wordshark (https://www.wordshark.co.uk/). She described how this program "can be tailor made to fit the particular child and quite often teachers don't use that tailor-made bit. They just think, 'oh yeah, Wordshark, Wordshark is supposedly very good so let's use it,' and they're not using it in the way that the manufacturers intended" [007]. She also pointed out that these issues around personalisation and correct use can be exacerbated by school-level decisions around technology and training, in which institutions "invest in one particular member of staff, 'here you are, you're the expert in this' and then either that trend is not cascaded down, or that person then leaves and the technology is left behind and nobody really knows how to use it." Subtheme 3C: Robots Can Take on Some Adult Classroom Roles, but They Are Not Teachers In their discussions, interviewees' suggested robot roles reflect the types of routine support that staff members offer autistic children throughout the day, including "to guide them, to give them ideas, and maybe even to prompt them or to praise them" [008], "especially the higher ability ones, who when I leave them to work independently, they lose track of what they're doing" [009]. Others also felt that educators "could use it as a tool as a part of the group, so the robot could almost form part of the group or it could be used as, it might lead the session or the group" [302]. Yet, while the interviewees suggested that robots could usefully offer some types of support and facilitation currently provided by various adult staff, other discussions made clear that the robot was not seen as a potential teacher. Educators emphasised that "the adult always needs to be in control with what's happening" [303], especially with regard to planning and goal-setting. Where some respondents indicated that robots could be adaptively responding to children, these comments were always made within the context of supporting educational or social goals already identified by teachers. There was no discussion of future humanoid robots "assessing" or identifying children's needs. Beyond issues of planning and control, respondents pointed out that special education teachers are trained in a distinct set of skills and strategies that they need to support their learners. Educators were concerned that reliance on a robot may both deprive autistic learners of the benefit of those skills and (over time) detract from staff members' ability to exercise those skills. One focus group participant explained: "Part of our skills we have as special needs educators is that we're able to empathise, and use lots of creative strategies, to the point where you understand why someone finds it challenging to transition and hopefully don't find it so frustrating anymore. I think it's important to swap around as a team as well, not just leave it to a robot" [102]. This "professional deskilling" concern was shared.
Another educator noted their own lack of robot experience and training, explaining that their "main concern is whether I would be able to use it appropriately and I wouldn't lose other aspects of my teaching. For example, I wouldn't want to rely too much on the robot to communicate with my students or to help my students access the knowledge" [008]. Subtheme 3D: Robots as Interaction Partners Even before being explicitly asked about possible applications of humanoid robots to social and emotional skills teaching, respondents spontaneously suggested social applications and roles for the robot. As they had explained earlier in the interviews (Ainger et al., Manuscript in Preparation), "the most important goal is to help them progress with their social skills" [010]. Teachers believed that attractive robots might act as social partners, motivating children to work on inherently challenging social and communication skills that are already targeted in existing class activities, such as turn-taking in activities ("You're waiting for the robot to finish talking and then it's your turn to talk, so it's like turn taking, you know, how to have a conversation with somebody" [011]) and conversations ("like having kids just learn general conversations like teach them to say, 'hi, my name is A, what's your name? How are you feeling today?' Like just have them practise conversations, have them practise answering questions but also having the kids practise coming up with questions themselves" [014]). Educators specifically highlighted the role a robot could play in understanding how children's own behaviour affects others-one of "the biggest thing[s] for our learners" [302]. Another interviewee concurred that some autistic learners "cannot see how the way that they're behaving affects other people. So this would be a nice thing to use the robots for. . . [learners] could perhaps see how their behaviour was affecting somebody else" [007]. Other respondents gave specific examples of how they might work on this concept, using the robot, including "a programme for how to make the robot happy today. . . The programme might ask for some steps that the child has to do like feeding or giving water or going for a walk or holding hands or playing a game, whatever makes a robot happy" [017]. Another offered: "I just think of like a robot crying and then having like props of tissues or whatever, you know, and then making my children try to calm him down. . . care for the robot as well, you know, when he says that he's angry or he's got a cut in his wrist or something, I think they really could connect with that. I think that could be a great tool actually" [015]. Educators also suggested that understanding cause-and-effect with the robot could also be used to go beyond grasping event relationships, to "build that empathy and understanding [of] other people. So, the child is angry and might be pulling or shaking the robot or hitting the robot, that the robot might be able to respond to that in a way that it's communicating to the child how those actions are making him feel" [304]. Respondents were not universally approving of using the robot to teach social communication. One respondent was receptive to the idea of robots in general, saying "with the right software or the right purpose, it could be awesome," but was emphatic that its uses should not include anything "related to emotions or behaviour management or any patronising sort of thing" and "nothing like engaging in social skills or emotional stuff" [016].
This same respondent had expressed particular concern about the robot's capacity to meaningfully render complex human behaviour, and to respond appropriately to autistic children. Theme 4: Children Ultimately Need to Interact With People, Not Robots While they expressed interest and cautious optimism about the use of humanoid robots in autism education, interviewees were also very clear that robots were perceived to have potential and acceptability primarily as "stepping stones" to fostering human-human interaction. Subtheme 4A: Robots Supporting Progression Toward Human-Human Interaction Respondents either implicitly or explicitly indicated that working with a robot in a school context would be a transitory, middle phase between two different types of human-human interaction. Educators felt that they would first need to introduce the robot "in a familiar space, with trust and familiar adults that can say, it's okay" [301]. Many autistic children are highly anxious about all new people and activities, and staff suggested addressing this issue using existing educational strategies such as "a social story about it, [showing] pictures beforehand, [explaining] what's going to happen with the robot, when the robot will be coming" [301] (see Gray, 1994, on social stories). These steps, which can "build up almost the story of this robot, how it's coming here, and when it arrives then the pupils will probably be more-shall we say, prepared for its arrival" [101], are useful for any child's interaction with a robot, but especially so for autistic children, who require additional preparation to adapt to novel objects and events in their environment. Educators then described how children might work with the robot on skills or activities over time, again potentially supported by some degree of adult guidance: "that's one of our targets, especially in my class, is getting kids to talk to one another. So that could be almost the first step, rather than talking to an adult, you're talking to the robot" [305]. At a later point, children might transition away from work with the robot, applying those skills in interaction with peers, adults, or the community: "You can practice having conversations, you can have the robot opposite you and you can set certain rules and you can first practise with robots before you move on to adults" [011]. Respondents suggested that humanoid robots might be particularly successful at supporting social learning and later generalisation, because "the fact that it is human-like might help them to associate the robot with human behaviour." Another explained with reference to the robot image examples provided in the interviews/focus groups: "I prefer the ones that look more like a human. Most importantly, it's going to be like it's a real boy, it's a real-life example. They would consider the rest like a toy but this [humanoid robot] might be actually an example" [010]. Other educators felt the opposite, that human-like robot appearance and behaviour could be confusing and create problems: "I think that will be my main concern, you know, how to explain to the child that this is only a robot, it doesn't have feelings, and it's different than mum and dad or friends and teachers" [015]. Another agreed that "we don't want them to start thinking this is a human, 'this is my friend' or 'It's the same as my peers'" [204].
Others thought children's understanding of robot-human differences would be dependent on their age and cognitive ability, and one respondent flatly dismissed these concerns, maintaining that to "someone who has autism, a robot is a robot, even if it looks like a person" [016]. Subtheme 4B: "You Don't Want Them to Connect Too Much to the Robot" Educators expressed concern that children might have "too much" interaction with a humanoid robot, in various ways. Some perceived time spent interacting with the robot as directly detracting from time spent with people: "with my kids, you know, [my concern] would just be maybe about the amount of time they would be engaging with it and making sure that they're not always engaging with the robot and they're engaging with other children" [302]. Our participants were also worried about children's emotional investment in the robot. They felt certain that autistic learners could trust and emotionally connect with a robot-perhaps more so than with a person: "You don't want them to connect too much to the robot, that then it's almost like an imaginary friend, like that they rely so heavily on this robot that then they don't socialise" [303]. Another predicted: "they will become too dependent, they will prefer to be with the robot than be with mum or be with sibling and interact with friends. I would be just scared that they will get too attached. I would rather see my children interacting and playing with me or with each other than with the robot" [015]. Suggested applications where children would "build up" from robot interactions to human interactions were repeatedly positioned as a way to balance the potential benefits of supportive, reciprocal robot interactions with the risk of these overshadowing existing relationships. One participant summed this up: "I feel that a robot will work more or less in the same way as our students. There would be a common ground to communicate and share feelings and emotions, a better way to express those emotions instead of interacting with an adult, or their peers. And I'm not saying necessarily to interact just with the robot because that would lose their communication part with, the other human beings in the classroom, with the adults or with their peers. But I think that would be the first step for them to start expressing their feelings and emotions and then it would be easier for them to involve other human beings in the classroom. . . [in] their everyday lives and showing their emotions and communicating their needs" [008]. Subtheme 4C: Robots May Not Convey-or Be Able to Process-Human Complexity Educators repeatedly noted the complexity of human behaviour, and were concerned that humanoid robots' behaviour would lack nuance and variation, particularly for social communication: "You could teach a robot to do this and that but not everyone does it the same way. One person when they're angry might cross their arms but some people might tap their foot. So human behaviour is so erratic and unpredictable and everybody's behaviour for whatever emotion is different" [001]. Educators felt that this lack of variation would limit the robot's potential with regard to what it could teach: "With autistic kids, certainly they could mimic [the robots] but because they could mimic them, they would be in risk of learning one expression for one feeling and that's not right 'cause the diversity of emotions is so wide and the way we adjust and the way we process emotions is so different" [016]. 
As with the mixed implications of robot predictability and consistency (Theme 2), educators felt that a robot that is programmed to-or is physically limited to-showing a social behaviour in only one way might potentially do autistic children a disservice by not preparing them to understand the true range of human behaviour. They also described how a real, two-way exchange of feeling would be missing: "Social interaction is emotional for both sides, so it's something more than you just get with the robot who is just there, he's predictable. Human relationships are much more complex than the robot I think can show" [104]. Other concerns focused on how the underlying technology would not be able to adequately cope with-and adapt to-the diversity and unpredictability of autistic learners' behaviour: "Even if our students are very structured and predictable, they can also be unpredictable and I don't know if a robot could be able to adjust to those things" [013]. Additionally, "I doubt that a robot could recognise the different ways a person with autism could express [the] same emotions. I think it would be hard to design a software for that" [016]. DISCUSSION In this study, educators were provided with minimal information about what humanoid robots "are like" or their current or future uses to avoid biasing educators' reflections toward specific, existing examples. Educators were therefore free to project their own ideas of whether, and in what ways, future humanoid robots might contribute to autism education. This approach differs from some recent practitioner studies, where participants were introduced to specific robots, or were asked to solve specific problems (e.g., whether KASPAR could add value to a particular learning domain; Huijnen et al., 2016). Overall, the current respondents were open to discussing humanoid robots within autism education contexts. They expressed a willingness to find out more about them, or to try interacting with them for themselves to see what their capabilities might be. These respondents from autism education settings shared many basic perceptions of robots with both the mainstream, UK-based educators in Kennedy et al. (2016), including robots as having "simplistic interactions" and being "primarily seen as a scripted, reactive machine" (p. 5), and with the Canada-based special educators in Diep et al. (2015), who felt that robots might "[provide] structure and repetitiveness in a consistent fashion" (p. 2). Yet, the same qualities that our participants saw as potentially so promising for meeting the needs of autistic learners were perceived as obstacles to adoption by the Kennedy et al. (2016) mainstream sample (see also Serholt et al., 2017); an illustration that "educators," "autistic children" and "schools" are not homogenous groups and will have different needs-which need to be fully understood to inform future robotics work. Our respondents' openness to discussing future robot use did not equate to unqualified endorsement, however. Where educators predicted that robots could benefit their learners, these predictions were both conditional and carefully circumscribed: robots may be beneficial, if used in a certain way, and if certain measures are in place. These circumscriptions consistently position proposed future robot use within established educational goals and supports.
Educator responses also revealed a shared prediction that any future robot use would pose a series of complex cost-benefit trade-offs: if a robot is appealing and motivating, it may become a liability if children engage with it to the exclusion of other interactions; a predictable robot could support short-term learning goals, but might then interfere with children's longer-term capabilities to cope with a mutable world. As part of their initial consideration of whether robots belong in autism education, teachers were already looking at the implications of robots across a child's school career, or their lifespan. Such predicted trade-offs must be addressed by carefully planning robot use, within existing practices and within individual learners' pre-existing goals (subtheme 3A). Autism specialists in Huijnen et al. (2017) made similar comments on the imperativeness of planning robot use, though did not discuss its longer-term implications and trade-offs as did the current participants. These perceived benefits and trade-offs have significant implications for the autism-robotics field, and will be discussed in turn below. Robots Are Novel, but Not Different From Existing Tools Across all of the interview prompts, educators discussed humanoid robots in a remarkably similar way. Interviewees proposed robot uses that supported existing curricular goals, and volunteered a range of established educational strategies that could be applied to introduce robots and support their use. Suggested robot activities and roles built on existing classwork (e.g., practicing turn-taking in a small group) and staff roles. Respondents' emphases on cause-and-effect and turn-taking, plus the specification that adults must be present to support robot use, echo the teacher-proposed robot learning activities in Hughes-Roberts and Brown (2015) and indicate that social skills practice with robots has wider relevance for special education populations. Humanoid robots are a novel technology to autism educators, and one for which they can propose possible applications. However, the current interviewees did not have an expectation of robots affording completely new educational goals, but rather, of robots representing a potentially powerful tool to pursue existing goals. Overall, humanoid robots were not perceived as being fundamentally different from current, widespread technologies, such as tablets. Autism specialists interviewed on their existing iPad use in King et al. (2017) described comparable patterns of use to those that our respondents envisioned for robots, "attempting to integrate tablets into the standard instructional methods that they were already using" (p. 9). To the current respondents, humanoid robots could be fully compatible with current autism education practices, if they can support key longer-term priorities (see Generalisation and Effectiveness: Challenges to Educational Robot Adoption?). This perceived instructional compatibility does not negate the desire for specialist training about robot use, and for that training to be distributed across school staff. Respondents in Huijnen et al. (2017) and King et al. (2017) made similar points about KASPAR and iPads respectively: they wanted training both on how to operate the devices and how to make the most of them pedagogically. As with any educational tool, educators indicated that humanoid robots should be one component or phase of educational activity that is carefully planned to integrate into wider practices; participants in Huijnen et al.
Lesson planning, introducing the robot, and, eventually, transitioning to human interaction were envisioned as being planned and managed by teachers. At least some teachers also seemed to envision taking responsibility for programming robots, or otherwise adapting them to individual learners (see Personalisation, Content, and Teachers-as-Programmers). Respondents' examples of potential robot use implied that some degree of autonomous behaviour would be acceptable and useful, such as robots being able to respond to children in an ongoing activity, to detect when children need prompting, or to offer praise. In Huijnen et al. (2016), participants expressed similar preferences for "semiautonomous" robot operation with autistic learners, with specific reference to the existing KASPAR platform. However, some current interviewees raised the concern that robotic technology may not be well-equipped to autonomously interpret and respond to autistic children's variable behaviour. Even if robots do not demand new ways of working, interviewees still identified areas of desired improvement over existing practices around technology use in their schools. They clearly had mixed experiences with iPads in particular, as devices that could be too engaging, and specifically referenced them when emphasising the need for careful lesson planning around robot use. Once again, there is close alignment between these respondents' views and those reported in King et al. (2017), in which educators acknowledged "numerous challenges" of iPads such as "perseveration," yet retained "an overall optimism about tablet use. They were aware of the incredible motivation tablets provided for [autistic children] and realised their potential across several areas" (p. 8). One area in which the current results differed from other teacher studies on robots or iPads was the degree of concern over children becoming too emotionally attached, or robots potentially detracting from children's peer, family, and staff relationships. This is more specific than concerns over the amount of use, and also seems distinct from concerns about technology isolating autistic children (e.g., King et al., 2017). This may be one area in which humanoid robots are perceived as special, able to facilitate social relationships with autistic children in a way that other devices may not. However, as with other robot characteristics, human-ness and social capacity were also perceived as pedagogically important (subthemes 3D, 4A). These concerns about overly close and important social relationships with robots are diametrically opposed to some of the Canadian special educators' opinions in Diep et al. (2015), where "face-to-face interaction was seen as an important task they felt the robot could not provide" and robots "cannot perform the task of providing emotional comfort or communication" (p. 2). These divergent views may indicate both differences of opinion between groups of educators and a shift in views of robots over time (data from Diep et al. were collected from six teachers in 2012) as technology becomes more sophisticated and is increasingly publicised.

Generalisation and Effectiveness: Challenges to Educational Robot Adoption?
When asked to discuss potential applications of humanoid robots, educators consistently talked about them as a "stepping stone" to learning, between an introduction that is carefully managed by school staff and a supported transition away from the robot, toward applying new skills with human partners. Endorsing this basic three-stage pattern of robot use appeared to counteract some respondents' concerns about the possibility of children becoming overly reliant on robots, or interacting with them at the expense of classmates and families (subtheme 4B), and made robots more ethically acceptable. The stepping stone pattern also relies on educators' special skills and knowledge of children. Participants in Huijnen et al. (2017) perceived this same factor as critical to the robot's success, and also linked it to the potential for generalisability, especially in Wizard-of-Oz interfaces with direct and fine-grained adult control. A child could practice transfer even within robot interactions by working with different staff, or in different locations. The "stepping stone" strategy (see also Vygotsky, 1978; or "social bridge" in Hughes-Roberts and Brown, 2015; Huijnen et al., 2016) assumes that children would successfully generalise skills from a robot interaction context to a human one, after sufficient practice. Yet, supporting autistic children to generalise, or transfer, their skills from the lab/intervention setting to a more real-world context is notoriously difficult (e.g., Schreibman et al., 2015). Concepts such as the "therapy register" (e.g., Johnston, 1988; Yoder et al., 2006) capture the issue of autistic children successfully learning and applying skills in one setting (e.g., speech and language therapy), but struggling to apply them in other relevant settings and situations (e.g., at home). Several studies that have specifically investigated autistic children generalising skills from technological contexts have not been particularly promising (e.g., Wainer and Ingersoll, 2011; Wass and Porayska-Pomsta, 2014; Whyte et al., 2015). With respect to technology-based autism tools, McCleery (2015) points out that there has been very limited direct study of near transfer (i.e., skill transfer to another related task) and far transfer (i.e., skill transfer to other domains or naturalistic interaction contexts). The existing research has focused predominantly on screen-based technologies, over a wide range of ages and ability profiles, but not on social robots. More research is needed to test specifically whether robot-based activities can support near and far transfer of skills, and for which robots, activities, and subgroups of autistic learners (see section Conclusion). Following Huijnen et al. (2017), perhaps the role of adults in robot-based interventions, and in supporting successful transfer, should also be more overtly defined. For educators to see humanoid robots as potentially valuable and ethically acceptable tools, future research should focus on providing evidence of robots consistently supporting skill transfer into "real contexts." The interviewees' examples of potential future robot use also make a second critical assumption: that robots can actually teach autistic children new skills, particularly through implicit instruction. As with generalisation, this is not a settled question.
Numerous social robotics studies have tested the efficacy of robots (i.e., whether a process can produce an intended result in a highly controlled setting) in teaching autistic children specific, isolated skills such as point-following (e.g., David et al., 2018). Yet there are relatively few, if any, studies of robots' teaching effectiveness in non-lab contexts (though see Scassellati et al., 2018), and methodological issues mean many HRI studies do not provide clinically useful evidence (see Begum et al., 2016). Many of the skills that these educators wish to teach are also more complex than those in existing studies, with murkier criteria for success (e.g., a child understanding how her actions affect another person). Assuming that robots could facilitate skill transfer and show effectiveness in educational contexts, one outstanding question is whether robots could offer sufficient added value (vs. other technological/educational tools) to compensate for their current expense, fragility, and complexity.

Personalisation, Content, and Teachers-as-Programmers

Strikingly, none of the educators made any reference to any kind of "robot app store," or to otherwise buying or accessing prepackaged curricula for robots, as they may already do with tablets or with some autism interventions. Instead, they repeatedly highlighted that successful robot use would need personalisation or adaptation of teacher-planned activities, especially given the enormous diversity of behaviours, preferences and traits of autistic learners. Directly or indirectly, respondents indicated that they (or people in teaching roles) should be the ones to implement whatever robot personalisation is required, with some explicitly explaining this in terms of programming (subtheme 3B). In both Hughes-Roberts and Brown (2015) and Huijnen et al. (2017), participants also stressed the need to personalise activities and robot behaviours (e.g., speech) to individual learners, suggesting that teachers would have responsibility over personalisation within the classroom, and even during the course of an interaction. Yet, technical complexity and the need for expertise were perceived as significant practical barriers to robot adoption. One participant in a leadership role described existing problems with teachers not using the personalisation capacities of existing technologies, such as apps, due to lack of training or time constraints. Others were concerned that technology expertise and training may be deliberately limited to single "experts," and thus not easily "cascaded" through an entire teaching team. Participants in other studies agree: Hughes-Roberts and Brown's (2015) interviewees raised similar requirements for "the teacher [to] manipulate the robot without needing external support," warning that "if it takes too long to set up the robot or deliver a lesson... [teachers] won't use (it)" (p. 52). Participants in Huijnen et al. (2016) cited the ability to use software to create interaction scenarios themselves, without specialist technical support, as a particular strength of KASPAR. These views and concerns highlight a clear deployment challenge for robot developers and for educators: if the type of flexible robot that educators envision requires extensive training or technical knowledge, it may struggle to gain traction in schools because of expertise bottlenecks, or overly complex, time-consuming procedures.

What Type of Tools Are Robots? Educator Views vs. Current Research
The current findings suggest that autism educators at special schools in England have notably different expectations and priorities for humanoid robots than many existing HRI research projects, though they share many points of agreement with other SEND and autism educators (Hughes-Roberts and Brown, 2015; Huijnen et al., 2017; though see Diep et al., 2015) and autism specialists working with other technologies (King et al., 2017). Educators expected that if they could access humanoid robots in the future, these would be flexible tools for them and their teams. They would be able to plan lessons using the same robot to work on different goals with individual learners or small groups, depending on need. This "flexible tool" view also agrees with a recent survey of UK-based teachers in regular, mainstream schools, where the second most popular proposed use of robots in schools was as a "versatile tool for the teacher, used in many situations" (Kennedy et al., 2016, Figure 6). Yet, many existing autism-robotics and educational robotics research projects do not appear to be working toward a "flexible tools" endpoint. There are some clear practical reasons for that, including the difficulty of demonstrating feasibility and efficacy for a tool that could be used in almost any way, or of investigating learning gains when every participant may have unique targets. Existing proof-of-concept and psychological experimentation work with robots (see section Introduction) often has basic science goals that add to the autism-robotics knowledge base, and has focused on the needs of child users rather than the needs of adult users who may operate robot systems. While the KASPAR research programme (e.g., Robins and Dautenhahn, 2017) has worked on iteratively developing and evaluating domain-specific robot-based lessons over time and has created customisation software for end-users to develop new learning scenarios, this capability does not appear to be well-known or well-documented compared to other aspects of the project (though see Huijnen et al., 2017). There are also several examples of packaged robot-based or robot-delivered content. US-based Robokind manufactures humanoid robots, but has also developed and sells the "robots4autism" curriculum for autistic learners (https://robots4autism.com/). Scassellati et al. (2018) developed a month-long home-based social communication intervention for school-aged autistic children, using an autonomous robot. While both robots4autism and the Scassellati et al. (2018) system can present content adaptively to different children, neither offers the degree of flexibility and type of personalisation that educators within autism-specific special education settings seem to envision (e.g., programming the robot to use particular phrasing). At present, the robotics industry may be offering something closer to educators' desired flexible use and to the "single, simple point of control" that Hughes-Roberts and Brown suggested (2015, p. 52). There are several tablet-based controls for the commercially available robot NAO, such as the "AskNAO Tablet" app (Softbank and ERM Robotique, https://www.asknaotablet.com/), which offers a range of controls, from push-button selection of pre-programmed actions to integration with a powerful desktop program (Choregraphe) for programming new robot behaviours. They also have a companion blocks-based visual programming language, AskNAO Blockly.
Also using NAO, the EU-funded DREAM project developed a simplified, tablet-controlled version of their original autonomous system, DREAM Lite (Mazel and Matu, 2019), which therapists in Romania found fairly easy to learn and use, though they also requested further simplification (Cao et al., 2019). In addition to the contributions made by controlled robot experiments and specific teaching programmes, it would be a much-needed contribution for HRI and Human-Computer Interaction researchers and the commercial robotics industry to collaborate with educators on developing or modifying robot programming/control platforms to be both usable and secure.

LIMITATIONS

This study is not without limitations. First, given the convenience sampling of participants, we cannot be sure that our findings reflect the views of autism educators in all special schools across England, or of educators working with autistic students in mainstream schools (in which the majority of autistic students are educated; Department for Education, 2018). Nevertheless, given the current interviewees' expertise in working with autistic students, particularly those with high support needs, they are likely to have provided particularly informed and nuanced views on the potential of robots as educational tools, as our findings attest. A second key limitation is that the interviews prioritised the concerns of the larger DE-ENIGMA project in asking specifically about humanoid robots. Our respondents may have had different views, and suggested other uses, for animal-like robots such as Keepon (Kozima et al., 2007), or non-biomimetic robots. It is unclear whether the consensus present in the current dataset, such as using robots as "stepping stones" to human interaction, would also be present if discussing other robots. For the same reason, the interviews also specifically prompted respondents to consider applications for social and emotional skills teaching, but did not prompt them about academic or other applications, somewhat skewing the dataset in terms of the types of educational activities discussed.

CONCLUSION

The findings of this study show multiple, strong points of agreement with how related participant groups (e.g., Hughes-Roberts and Brown, 2015; Huijnen et al., 2016, 2017, 2019) have conceptualised robots as potential tools for autism education. Importantly, our educators were not uncritically approving of the use of robots in the classroom (see also Serholt et al., 2017, for similar views from mainstream educators). Rather, they carefully outlined specific use-cases and circumstances in which robots were predicted to be beneficial (e.g., as "stepping stones" to social interaction), and conditions that would need to be met to ensure their adoption in the classroom, including integration with educational curricula, and the capacity to personalise robots to meet the specific needs of individual autistic learners. The findings suggest several promising avenues for future research. First, educators repeatedly highlighted the idea, prevalent in the HRI literature, that robots' predictability and consistency of behaviour should benefit autistic learners in particular (e.g., Rudovic et al., 2017; Straten et al., 2018); it should reduce demands on them, put them at ease, and potentially facilitate learning.
These claims are logical based on the diagnostic features of autism and current educational practices that aim to offer children predictability and structure at school (e.g., Mesibov and Shea, 2010), as well as theories of autistic perception and information processing (e.g., Pellicano and Burr, 2012; Lawson et al., 2014). However, they have not been rigorously operationalised and evaluated at a behavioural level. Research is required to test these widely held beliefs about the benefits of robot predictability and exactly how it may affect children in learning contexts. Second, the capacity of humanoid robots to support autistic children in developing transferable, generalisable skills is not currently supported by clear research evidence. Given the centrality of educator views that robots need to be a stepping stone to human-human interaction, investigating skill transfer should be an urgent priority. Further generalisation studies might also test educators' beliefs, as expressed herein, that a humanoid robot might better teach, or support transfer of, social skills than would other robot morphologies. These questions are not only the domain of autism education researchers; they should also concern robotics researchers. Based on the current research, robots for autism education, no matter how appealing or user-friendly, would not meet educators' and children's needs if they did not consistently support skill transfer. Robots that only facilitate learning gains within robot-based activities (i.e., training effects) are unlikely to be ethically or financially justifiable for educators or the broader autism community. Educator interview studies are a valuable source of information for robotics researchers and industry about the needs of child and adult users, but are not in themselves sufficient to bridge the "deployment gap" between preliminary, lab-based research and the vision of robots as educational tools. Huijnen et al. describe this gap perfectly, writing: "For socially interactive robots to actually make a difference to the lives of children with ASD and their carers, they have to find their way out from case studies with 'standalone' robots in robotics labs to... education environments as part of daily activities/therapies. Being effective in eliciting a certain target behaviour of a particular child in a lab environment, will not automatically ensure... adoption of use by professionals in the field" (2016, p. 446). Greater engagement with educators, and with other key stakeholders including autistic children themselves, during design, implementation, and evaluation should help to ensure that the resulting robotics systems and programmes are relevant to autistic learners and those who support them, sufficiently tailored to the realities of their everyday learning contexts, and consistent with their values (e.g., Lloyd and White, 2011). Such participatory processes are being championed across autism research (Nicolaidis et al., 2011; Pellicano and Stears, 2011; Fletcher-Watson et al., 2019), but especially within technology-related autism research (Frauenberger et al., 2011; Porayska-Pomsta et al., 2012; Brosnan et al., 2016). The children's interaction design community can offer useful examples and methodological guidance for undertaking participatory technology research with educators and children, including children on the autism spectrum (e.g., Frauenberger et al., 2013).
In advocating for HRI researchers to engage more fully with autism education practitioners while planning, developing, and evaluating robotic tools, we realise that this could pose a substantial change to many established ways of working, and that fully co-produced research might not be possible on many projects. Yet stakeholder participation in research, beyond being a passive participant or subject, can take many forms, including as advisors, as consultants, or as full decision-making partners throughout a project. The risks of designing robots that do not consider stakeholders' views, needs and contexts could be far-reaching for research and industry, especially given the costs of developing and deploying robots. The current findings highlight that there will be no one-size-fits-all design "solution" for robotics in autism education, and that current "solutions" may pose later challenges for autistic children. Such future work therefore needs to involve key stakeholders in the design and implementation process (see also Serholt et al., 2017), designing with educators, parents and autistic children, rather than to, on, or for them, to ensure that this work has a direct and sustained impact on those who need it most. This process will require beginning from a point of rigorously co-investigating the assumed and predicted benefits of robotics for autistic children, and balancing these against potential interpersonal, developmental, and resource costs. We envision that robot design driven by technical innovation will be increasingly combined with, or shaped by, approaches that prioritise the needs and values of users.

DATA AVAILABILITY STATEMENT

The datasets generated for this study will not be made publicly available because participants did not consent to future re-use of their interview data by other researchers.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the UCL Institute of Education Research Ethics Committee (REC857). The participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

VC, SP, BS, TT, and EP devised and piloted the interview schedule. EA, SM, and AA recruited and interviewed participants. AA and EP analysed the data. AA and EP drafted the manuscript. All authors commented on and edited the manuscript prior to submission.
Experiences of civilian nurses in triage during the Iran-Iraq War: An oral history

Purpose: Nurses played a critical role in performing triage during the Iran-Iraq War, yet their experiences of triage have not been discussed. The current study therefore aimed to investigate the triage experiences of civilian nurses during the Iran-Iraq War.
Methods: The oral history method and in-depth interviews were used to collect data on the nurses' experiences of triage.
Results: Four themes were extracted from the data: the development of triage, the challenging environment for performing triage, the development of mobile triage teams, and the challenges of triaging chemical victims.
Conclusion: Triage is an important skill for nurses in managing critical situations such as disasters and wars, and nurses must be competent in performing it. Involvement in critical situations helps nurses learn and gain experience in managing unexpected events.

Introduction

The prolonged Iran-Iraq War (1980-1988) resulted in military and civilian casualties and is documented as a major landmark of the second half of the 20th century. 1 The crucial and ultimate goals in war are the preservation of life, caring for the victims, and returning the greatest possible number of wounded soldiers. 2 In the Iran-Iraq War, triage occurred at every level of care for victims: it started with a rescuer (Emdadgar) on the battlefield, continued in emergency tents, emergency camps, emergency field hospitals, and operating rooms, and ended with transfer to general hospitals in safe cities. Nurses provided care by performing triage in war zones, and resuscitation, an essential procedure in triage, was performed frequently by nurses. Triage is a dynamic process of prioritizing care and treatment for the wounded. 3-6 The quality of triage improved in austere environments such as the First and Second World Wars and the Korean, Vietnam, Falklands, and Persian Gulf Wars. It was clearly demonstrated that early assessment, prompt resuscitation, and fast patient transfer significantly help to reduce mortality rates in military hospitals and on battlefields. The mortality rate of soldiers fell from 5% during World War II to 1% by the end of the Vietnam War. 7 Triage has traditionally been performed by medics and nurses in battles and mass casualty events, 8,9 and it continues in hospital emergency departments. 10 However, performing triage differs between disasters and hospital settings. In the Iran-Iraq War, triage was performed in relief posts, field emergency units, and hospitals during chemical agent attacks. 11 During a disaster, the goal of triage is to save as many people as possible without prioritizing who has the best chance of survival. 7 In the Iran-Iraq War, because of the high number of chemical injuries, triage was used in relief posts, hospitals, and field emergency units, and it was administered differently from the usual methods. 10 Upon exiting the combat zone, the injured were evaluated, and for those who were critically wounded, resuscitation was performed, starting with intravenous catheterization. If they required immediate surgery, they were transferred to emergency units with operating rooms established on the battlefield. The individuals were then transferred to professional medical units behind the frontline, if needed.
Because the war zone was vast and military nurses were insufficient in number, civilian nurses participated in the war and gained valuable experience in performing triage. The civilian nurses had no experience of performing triage in a war before deploying to the war zones. There is thus a paucity of knowledge about their triage performance in the Iran-Iraq War. This study therefore aimed to investigate the lived experiences of civilian nurses related to performing triage during the Iran-Iraq War.

Design

This study aimed to investigate the lived experiences of nurses in triage during the Iran-Iraq War. Oral history was chosen to gather data from the civilian nurses who participated in the war. Oral history is a systematic approach for the collection of first-hand data and an analytic framework. 12,13 Oral history can be used as a "… source of objective information and filling gaps left by existing documentation". 13 Others use oral history as a means of creating social history for those who do not have opportunities to voice themselves. 14 Although over 30 years have passed since the war ended, the Iranian nurses who helped save soldiers on the frontline have rarely, if ever, told their stories and challenges regarding triage. Oral history is therefore an appropriate approach for investigating the civilian nurses' experience.

Data collection

Semi-structured interviews were used to elicit the participants' experiences. As there was no list of nurses who served in the Iran-Iraq War in military and non-military agencies, the snowball sampling method 15 was applied to recruit participants. Civilian volunteer nurses (registered and student) who were able to recall their memories and had experience of performing triage in the war were included in the study. The final sample comprised 16 civilian nurses; the demographic data are listed in Table 1. The participants' narratives were gathered through semi-structured interviews. Diaries, personal documents, photos, and other available evidence were used to aid recollection and to cross-check participants' claims. Informed consent was obtained from all participants before the interviews. All participants were interviewed for one or two sessions, depending on their triage information. The interviews ranged from 45 to 90 min, with an average of 60 min per session. After the data obtained through each interview were collected and analyzed, successive respondents suggested by the previous participants were selected. This helped the researchers to extend the range, depth, and scope of the information obtained. Some of the core questions were: "Would you mind describing your responsibilities in the frontline?", "Would you mind explaining the tasks that were done for the injured?", "Would you compare the initial and final days of war in terms of triage?" and "What else do you want to tell me about the triage?" Further, explanatory questions 12 were used to encourage the participants to elaborate their stories, such as "Why did that happen?" and "How did it relate to other events?" Judgmental questions 12 were also used to provide the opportunity for participants to talk about the "big picture" of events that positively and negatively influenced their professional practice and attitudes. All interviews were recorded with a voice recorder and transcribed for data analysis.

Data analysis

The data analysis was based on the four-stage method of oral history. 16
In the first step, the initial codes were extracted from each interview separately. The audiotapes of the interviews were transcribed, and significant words, phrases, sentences, or paragraphs were highlighted as initial codes. Subcategories were then formed from the initial codes, the subcategories formed categories, and finally the narrative themes were created from the categories. Data collection proceeded until data saturation was reached, that is, until no new information about the research questions emerged during analysis.

Rigor

The scientific rigor and trustworthiness of the data in historical research were assessed using the criteria of credibility, dependability, confirmability, and transferability. 17 Credibility was achieved through investigating the participants' culture and prolonged engagement between the researcher and the participants; triangulation of the data by requesting other evidence such as photos and diaries; returning transcripts to some of the interviewees to check the accuracy of the texts and our interpretations; and debriefing sessions between the researcher and the project supervisor to develop ideas and interpretations. Dependability was maintained by asking a colleague to transcribe and analyze the interviews. In addition, the researcher used an external audit and bracketing to achieve confirmability. The transferability of the data is limited to nurses from Isfahan, although we attempted to find a sample with the greatest possible diversity.

Ethical consideration

Ethical approval was obtained from the Ethics Committee of Isfahan University of Medical Sciences (thesis number: 389295). The nurses who were willing to participate in the study signed a consent form. They had the right to participate or to decline to be interviewed at any time during the study. Each participant's name was replaced with a number, and anonymity was guaranteed.

Results

After analysis of the data, four themes were extracted from the interviews: the development of triage, the challenging environment for performing triage, the development of mobile triage teams, and the challenges of triaging chemical victims.

Development of triage

Organizing the medical staff was difficult at the beginning of the war because the medical centers were unprepared to deploy trained staff. At the beginning of the war, volunteer civilians who were deployed to the war helped the wounded at the frontlines. The wounded were transferred to medical centers in the safe area, though triage was not applied at that time. As the war continued, medical staff were deployed to the combat zones from other parts of the country.

Table 1. The basic information of participants.
Variable | Results
Mean age in the war | 20.9 years
Mean age at the time of interview | 40.9 years
Registered nurses in the war | 5%
Student nurses in the war | 75%
Nurses with prior experience in clinical skills | 35%

The primary medical centers were established around the combat zones over the first two years to provide first-aid medical services to the wounded. Although triage was not systematized during the first two years, the medical staff performed triage skills for the severely wounded soldiers. Participant 16 narrated: … Triage was meaningless at that time. But it was being done imprecisely. At the entrance, there was a large hall where the uncategorized wounded soldiers were brought. Then, they were divided based on the physical examinations and medical aids were given.
As the war went on and field hospitals were established near the frontlines, triage became more advanced. Nurses, physicians, surgeons, and anesthetists were available at the emergency centers of the field hospitals, and they became experienced and more competent in performing triage. To perform triage, the wounded soldiers were first examined by paramedics. Intravenous catheterization was provided, if necessary, and the airway was secured. They were then transported to the medical centers by ambulance. On arrival at the medical centers, the injured were reassessed, and medical interventions such as intubation, airway opening, and control of bleeding were provided to save them. The severely wounded were transferred directly from the frontlines to the field hospitals. In the field hospitals, triage was performed first by nurses because of a shortage of physicians. After advanced interventions were provided, the victims were transferred to a hospital in a safe zone by ambulance or helicopter. The majority of the severely injured soldiers were resuscitated and moved to the next level of specialized clinical centers. All the medical centers at the frontlines were able to provide procedures such as intubation, chest tube insertion, tracheotomy, and gastric and bladder drainage. After intubation and ventilation with an Ambu bag (manual resuscitator), the soldiers with multiple trauma were transferred to the rear lines. 18 Participant 14 narrated his experience of triage as: … The triage done over there was different from what is described in books. We did triage based on the survival of the patients. If he was going to expire, we did the tasks for him. The first stage of our triage was saving the lives of the severely injured. We used to choose the patients who needed surgeries. For example, the wounds in the neck and abdominal areas were a priority for us. The clinical services were provided for those with internal bleeding. The vascular cases were regarded too. If we had a broken hand, we would brace it and dispatch the patient. At the second stage, triage was performed based on the availability of transporting vehicles (ambulance, buses with or without seats, helicopter …).

Challenging environment to perform triage

Given the high load of wounded, especially in critical situations, it was difficult to prioritize them. Performing triage was a difficult duty for the nurses in the war. Nonetheless, they learned how to manage the victims, and the mortality rate decreased considerably during the last years of the war. Consistent with existing documents, during the first two years of the war approximately 12.5% of the injured were operated on in less than 8 h; this time was reduced to 4 h over the following six years (1983-1988). The average time to transfer the victims to a hospital was 12 h during the first two years of the war (1980-1982), and it was reduced to 7 h during the following six years. 19 Triage varied across the stages of the war. For the last six years of the war, services for the critically wounded were sped up through triage despite staff shortages. However, saving as many soldiers as possible was extremely challenging for the nurses, who were allocated among the other medical groups. Participant 1 narrated this experience: …. The nurses were not so familiar with the tasks, but they gradually got to know them.
I exactly remember that we were so inexperienced that at first we couldn't do a simple stitching, but we sometimes had to place a chest tube. The nurses would get involved in emergencies and had to do things they hadn't faced before, and this led them to gain more experience in tasks such as resuscitation. The other challenge in performing triage was carrying out many interventions simultaneously despite staff shortages. Many of the critically wounded needed prolonged manual ventilation after cardiopulmonary resuscitation (CPR), and there were not enough nurses to continue the CPR for all the injured soldiers. Spending more time on the injured with little chance of survival was extremely challenging while many others waited for treatment. The investigation and analysis of the interviews with the nurses showed that a number of the wounded died due to inadequate staffing, high workload, and delayed triage and services. Participant 14 narrated: Lack of treatment utilities and the large number of casualties had overloaded the nurses. For instance, resuscitation with an Ambu bag keeps a nurse busy for too long, making him unavailable to help casualties in severe condition.

Development of mobile triage teams

Mobile resuscitation teams of physicians and nurses were developed to increase successful retrieval of the critically wounded. Primary care, such as IV catheterization, airway opening, intubation, and even chest tube placement and control of bleeding, was performed in the triage line before arrival at the emergency units. As the war continued, professional field hospitals with advanced medical equipment were established, which decreased the mortality rate of the wounded soldiers. Participant 3 narrated: …. Mobile triage teams were one of the successes in this war. Due to the huge number of wounded soldiers in front of field hospitals and increasing death rates among the critically wounded, the emergency mobile teams were established to promote the caring process and save the soldiers' lives. Participant 8 added: … And the emergency teams (mobile triage teams) consisted of medical staff and nurses. They were organized beforehand and composed of surgeons, assistant surgeons, nurses, and assistant nurses. The teams' summoning was sudden, that is, they would contact us on the phone and place an immediate request for a team to be sent to a specified destination in the war zone, and our response was very rapid.

Challenges of triaging chemical victims

The climax of the evolution in medical services occurred in the second four years of the war. Owing to the enemy's wide use of chemical agents, which resulted in great numbers of wounded, the nurses encountered many challenges in this field when performing triage. In the first years of the war, the nurses and other medical staff had only shallow knowledge of chemical agents and their treatment. As the war was prolonged and the enemy's use of chemical agents became more frequent, the number of soldiers affected by these agents increased. As a result, establishing emergency units to respond to chemical agents and deploying trained nurses to deliver specialized care were urgently required. To perform triage, the medical staff first sought to recognize the type of intoxication from the specific symptoms associated with each gas or chemical agent. This initial recognition was very important, a point also emphasized in the interview results.
In the second four years of the war, triage of the wounded in the emergency and recovery rooms was performed using professional methods, in which the nurses played essential roles in performing triage and treating the chemically injured soldiers. Based on the severity of symptoms after contamination by chemical agents, the injured soldiers were divided into four groups. For nerve agent intoxication, group A included those with altered levels of consciousness or in a coma. Group B was completely conscious and standing, but had symptoms such as asthma, coughing, nausea, vomiting, miosis, and blurred vision. Group C enjoyed a rather good general condition; they did not have systemic intoxication or vomiting, but suffered from miosis, blurred vision, and asthma. Group D was in a good general condition and did not require treatment. However, they thought they were sick and were actually the major problem in the chemical emergency: they disrupted the treatment of the other groups, and there were a large number of them. 20 The kind of medical activity differed according to these groups. All the intoxicated soldiers were asked to hand over their contaminated clothes and equipment after entering the emergency unit, and these were destroyed after collection. They then went to the bathroom for decontamination. Those who were in a coma or suffered severe muscular weakness were transferred directly by stretcher, with their contaminated clothes, to the ICU, where decontamination and treatment were performed simultaneously. 20 Participant 13 mentioned: The number of chemical casualties is too high after a chemical bombardment. We learned that we should categorize the casualties based on their condition in order to save the lives of those in severe condition and in urgent need of help. This was quite successful. Because the convalescent home, which was designed and used especially in this war and had more beds, was close to the combat lines, a large number of the injured soldiers with light contamination returned to the combat lines after initial treatment and a brief rest. Participant 15 remembered the scene when he looked after chemically contaminated soldiers: … In the case of the Majnun Island (an area in Iraq), many injured soldiers needed intubation. We did intubation for most of them at 3 am; their lungs gurgled, and they definitely needed intubation. Two of them died because of severe secretions. Especially at the very first moments, the injured went into coma. The patients needed intensive care at the initial stages, and the nurses were competent. In chemical emergencies, after triage was performed and the injuries were categorized, different treatments were provided. Medical procedures such as opening the airway, suctioning the secretions, placing patients in the lateral position, and injecting the antidotes, if necessary, were prioritized. Administration of oxygen was continued from admission until recovery. When the vital signs were stabilized, the patients were sent on to the specialized centers. Participant 5 explained how members of their group became specialized in the treatment of chemical warfare casualties: At first we didn't have enough information on how to cope with war chemical contaminations. But by being in contact with the casualties, we gradually developed a good knowledge, such that when new casualties arrived in the hospital we could identify which chemical agent could have contaminated the person's body.
Once sure about the cause, we would carry out the relevant protocol.

Discussion

In the Iran-Iraq War, the majority of healthcare providers were nurses. They applied triage, though they were not competent in it at first. In field hospitals, triage was valued more and performed better because of the presence of many nurses and different medical specialties. One of the findings of the present study was the development of triage. The professional medical forces were insufficient in number and initially unable to provide timely medical services to the critically wounded. The large number of injured and dead created a challenging imbalance between health needs and healthcare providers. 20 The major problem at the beginning of the war was the lack of coordination among forces. Military nurses were capable of dealing with difficult and unpredictable situations and of caring for victims independently, even without the presence of physicians; the civilian nurses, however, were not initially competent to organize a critical situation. 21 These results are in line with Schmedake's study, which considers war conditions unpredictable, with many civilians and military personnel dying because of inadequate medical staff and equipment. 22 In addition, Blaz et al. 23 state that civilian and military forces encounter many injuries and threats during combat. Although immediate attention to the wounded and preparedness to help them did not exist at the beginning of the war, these were gradually attained through the deployment of trained forces. The challenging environment for performing triage was another theme of the current research. Environmental conditions play a major role in the implementation of triage. 24,25 There was no infrastructure to facilitate medical care, especially triage. A lack of equipped facilities, a shortage of well-trained nurses, and crowds of victims during combat made triage difficult. On nights when military operations took place, for example, performing triage in the dark was difficult, and finding and assessing the injured soldiers was very hard. Moreover, because the Iraqi forces' attack at the start of the war was unexpected, offering successful medical services was impossible until medical units and professional forces were established. The results of the present study support Goniewicz's 26 report that the degree of success in serving the wounded increased as the medical forces gained experience. Developing primary medical units and drawing on varied experience culminated in the development of professional medical units and efficient forces in war. The development of mobile teams of nurses and physicians to perform triage and care for the critically wounded was another finding of the present research. The mobile teams were a major step toward saving the lives of the injured through timely classification of the critically wounded and delivery of emergency care. Despite the large number of wounded, the interventions of the mobile teams were effective in decreasing the morbidity and mortality rates among the soldiers. The mobile teams' performance showed that they were successful in achieving the purpose of triage, which is the rapid recognition of the critically wounded and the provision of care for them. 7 The results of our study support Gierson et al., who state that a team of specialists, civilian nurses, and military nurses acts more successfully in treating the wounded. 27 The results are also in line with the findings on the use of mobile teams to save victims of the Nepal earthquake. 28
Another finding of the present research was the challenges of triaging chemical victims. Dealing with a large number of chemically wounded soldiers was a new challenge for the nurses. The psychological effect of the chemical agent attacks was panic; most of the medical staff and the victims did not know how to deal with that crisis. A huge number of soldiers crowded in front of the medical units to receive care, yet many of them did not need any care, making it difficult to deliver care and treatment to those who were truly affected. To overcome the problem and reduce mortality rates and lifelong complications, the medical teams developed mass casualty triage in the medical units to help the victims. The organized program of triage was performed by the nurses in the chemical emergency units and in the convalescent home. Working in the centers for treating the chemically injured increased the nurses' ability to enhance the performance of medical care. 29 To decrease morbidity and mortality rates, victims who had been exposed to chemical agents received critical care promptly. 30 In agreement with these results, Culley et al. 31 show that in chemical crises there is an imbalance between the number of wounded and the medical facilities. Many of the wounded do not need special medical attention; recognizing those who are in a critical situation and need prompt medical attention is very important. The triage was successful in saving the lives of many wounded and chemically injured. The nurses were able to offer an effective medical service despite their small numbers and inadequate medical facilities. No study was found that contradicts the findings of the current study; most studies support our findings to varying extents. The study's limitations included lack of access to military documents and military nurses. Further study within the military forces is recommended to reveal other aspects of the health services provided to the victims.

Conclusion and recommendations

Triage is an unforgettable practice in war, one that has saved the lives of many soldiers. Although the nurses were not competent in performing triage in the early years of the Iran-Iraq War, as the war continued and trained nurses became available, they managed critical situations relatively well through triage. Unexpected events such as war may happen anywhere; thus, medical staff and nurses should always be ready to help victims. Continuing education on crisis management should be offered to nurses and the public. Awareness of the importance of triage and experience in performing it enhance nurses' self-confidence, satisfaction, and professional competence, and lead to lower morbidity and mortality rates. Nurses' experiences can also be beneficial in managing similar events. The lessons and experiences the nurses gained from triage could be helpful not only for nursing education, nursing practice, and crisis management but also for other healthcare groups in civilian and military services. Triage should also be updated in line with high-tech advances and public needs.
Traumatic Closed Proximal Muscle Rupture of the Biceps Brachii in Military Paratrooper

Traumatic closed proximal muscle rupture of the biceps brachii has been infrequently cited in the medical literature. Early reports of this injury derived from the US military during parachute jumps, and it may comprise >4% of injuries at altitude. The mechanism is a direct blow to the upper extremity by static lines. We report a case of traumatic closed proximal rupture of the biceps brachii in a healthy 25-year-old military paratrooper. He was managed with primary surgical repair, and after three years of follow-up, the patient has excellent functional results.

Introduction

Traumatic closed proximal muscle rupture of the biceps brachii has been infrequently cited in the medical literature. This injury is chiefly associated with military static line parachute jumps, in which the paratrooper orients the static line incorrectly around his arm at the onset of the jump, causing a direct blunt force on the biceps brachii [1]. Moreover, the United States of America is the only country that has reported this specific static line injury [2]. We report a case of traumatic closed proximal rupture of the biceps brachii in a healthy 25-year-old military paratrooper.

Case Report

A 25-year-old right-handed paratrooper presented to the orthopedic emergency department after feeling a sudden sharp pain in his right upper extremity during a parachute jump. On physical examination, there was pain around the right shoulder and upper arm. No open trauma was observed. Extensive ecchymosis and edema were present due to subcutaneous hemorrhage, spreading into the forearm. The muscle defect was palpable, and a Popeye deformity was noted, as shown in Figure 1. Additionally, forearm supination and elbow flexion strength were evaluated in comparison with the uninjured limb; both decreased motor strength and weakness in elbow flexion were noted. Manual strength testing revealed 4/5 strength of elbow flexion and supination. Neurovascular examination of the affected upper extremity was negative. Plain radiographs did not reveal bony pathology. Magnetic resonance imaging (MRI) of the right upper limb disclosed the proximal rupture of the biceps brachii at the musculotendinous junction (Figures 2(a) and 2(b)). Because the patient was both active and young, operative treatment was chosen in order to diminish the deformity and restore strength; the latter plays a prominent role in a highly demanding occupation. The surgical intervention was performed 10 days after the injury. Under general anesthesia, the patient was placed in the beach chair position, and an anterior approach was preferred. A 10-12 cm lazy "S" incision was made, with a proximal medial edge on the deltopectoral groove and ending with a distal midline edge on the lateral aspect of the humerus, about halfway down its shaft. When the subcutaneous fascia was opened, a large hematoma of roughly 300 cc was evacuated. As a result, better visualization and mobilization of the muscle were achieved. Damaged tissue was then carefully removed, and the muscle belly ends were thoroughly inspected. On inspection, the long head of the biceps brachii was almost completely ruptured beneath the myotendinous junction, and the short head was completely transected within the intramuscular substance (Figure 3).
The musculocutaneous nerve was identified in the interval between the biceps and the brachialis muscles and carefully protected. The gap was then bridged with locking intramuscular nonabsorbable sutures (number 2, ETHIBOND EXCEL braided polyester on a tapered needle) (Figure 4). During that process, the elbow was flexed to 90° and the forearm was positioned in neutral. With the upper extremity in that position, the defect was diminished and the surgical repair was facilitated. Lastly, the wound was closed in layers. Postoperatively, a posterior splint with the elbow at 90° was used for a period of 4 weeks. When the splint was removed, gravity-based active range-of-motion (ROM) exercises were started. The DASH score when the splint was removed was 64.2. At 8 weeks, physiotherapy was begun, emphasizing strengthening exercises. Altogether, 20 sessions were completed, and reevaluation followed after one month; the DASH score was 16.7. The patient was able to fully return to his former activities within six months, at which point the DASH score was 0.0. The next follow-up was scheduled one year after the injury; thereafter, it was recommended that he be examined once a year. The last follow-up, which took place 3 years after the injury, revealed excellent functional results with a full return of strength, as well as satisfactory cosmetic results (Figures 5(a) and 5(b)). The patient did not mention any difficulties in his work or everyday life activities.

Discussion

Closed proximal ruptures of the biceps brachii muscle are rare incidents, and they usually involve the long head of the biceps. Early reports of this injury derived from the US military during parachute jumps, and it may comprise >4% of injuries at altitude [3]. The most common mechanism is a direct blow to the upper extremity by static lines. It is claimed that the static line becomes misrouted under the arm at the time of exit from the aircraft, which in turn results in blunt trauma to the biceps belly [4]. In our case, the jump platform was a C-130 military aircraft. The parachute used was an MC1-1C maneuverable parachute assembly, designed by the United States Army in 1988 for military static line airborne operations. This type of injury occurs in the exit phase. As the paratroopers approach the door, they hold the static line in their right hand in the case of exiting the right door (as in our case) or in their left hand in the case of exiting the left door. When they reach the door, the paratroopers hand off the static line to the jumpmaster and forcefully adduct their arm. Outside of military parachutists, biceps muscle belly tears have only been reported as individual cases [7-9]. Imaging may include ultrasonography and magnetic resonance imaging (MRI). The latter seems to be useful for the detection and characterization of this injury, while ultrasound has been reported as an adjunct to MRI in both diagnosis and determination of injury severity [4]. In this case, MRI of the right upper limb revealed the injury, confirming its essential role in diagnosis. Treatment can be either operative or nonoperative. Early reports suggested operative exploration and primary repair [10]. Currently, there is a scarcity of randomized, controlled studies to support standardized operative or nonoperative treatment, and few treatment comparison studies have been performed (Table 1). Heckman and Levine compared operative repair vs. early percutaneous hematoma evacuation and immobilization, but no significant difference in clinical outcomes was found [1].
On the contrary, Kragh and Basamania demonstrated better results with operative repair in terms of strength, appearance, and patient satisfaction [6]. Although no comprehensive data exist to support operative over nonoperative treatment, the present case was treated surgically, with excellent postoperative results. With regard to postoperative immobilization and rehabilitation, recommendations have varied, including elbow immobilization at ≥90° of flexion for 3 days to 3 weeks postoperatively, followed by the use of a dynamic splint for early motion [6, 7]. Although in this case a posterior splint with the elbow at 90° was used for 4 weeks, the rehabilitation program proceeded without problems and produced good functional outcomes in the long term. Finally, the duration of follow-up in most studies was short, and in some the outcome was omitted entirely [4].

In conclusion, closed proximal tears of the biceps brachii are a rare injury, and military static line parachute jumps appear to be the predominant mechanism. Orthopaedic surgeons must therefore be aware of this clinical entity, especially if they work in military hospitals. This case supports the long-term benefit of surgical repair and contributes to the medical literature guiding treatment.

Consent

Consent was obtained.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
2019-09-17T02:45:54.063Z
2019-08-29T00:00:00.000
{ "year": 2019, "sha1": "c88c67e3e25a96f635f9fed9ca14973fde79da22", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2019/3472729", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "97c11844a604d3452c5c01254c846b80a1599db4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245105175
pes2o/s2orc
v3-fos-license
Adapting sustainability and energy efficiency principles to architectural education: A conceptual model proposal for the design studio sequence

Architectural education is the first step in the professional career of an architect. It has strong connections with the profession itself regarding technological trends and the needs of society. Therefore, emerging challenges and developments in the world of architecture, such as environmental problems and sustainability issues, need to be addressed by educational programs in architecture. In addition, the Sustainable Development Goals (SDG) Program of the United Nations places an important responsibility on the shoulders of architectural educators. The SDGs include both architectural and educational goals, such as quality education, affordable and clean energy, sustainable cities and communities, and climate action. Accordingly, architectural education needs to be shaped in a way that responds to the requirements of contemporary global society. The design studio is the heart and core of architectural education. It is the place where all the theoretical and technical knowledge and skills gained in other courses become useful for students developing design ideas and products. Moreover, design studios are not isolated environments: they form a series of courses through consecutive repetition throughout the continuum of architectural education, and thus need to be treated as a developing sequence. It is therefore important and valuable that the structure of the design studio sequence is improved and updated with suitable revisions toward the emerging needs of the profession and society, such as the adaptation of sustainability principles, reflecting the dynamic character of architectural education itself. This paper presents a conceptual model proposal for the design studio sequence for the adaptation of sustainability principles to architectural education.

Introduction

Since ancient times, the practice of architecture and its products have been affected by the environment in which they exist. These effects can be physical, involving conditions such as the availability of materials, topography and geography, and climate, as well as psychological, social, and cultural. Rapoport [1] notes that the form transformed into a structure is determined by cultural factors, and natural conditions have only a limited effect in this process. As a result of all these factors and conditions, the concept of architectural identity emerges. According to Hacıhasanoğlu [2], architectural identity, as a sub-component of cultural and urban identity systems, interacts with city and conservation plans, architectural styles, architectural languages, building and environmental policies, materials and technology, and behaviors and attitudes toward the environment. All architectural products that have emerged throughout human history have revealed an identity belonging to their time, geography, and culture, while also contributing to the formation of culture. This cycle continued without exception from the earliest ages until the twentieth century. In the early 1900s, changes began to emerge in the world order as a result of industrialization: mass production took the place of handwork, and gigantic factories replaced small workshops.
The rapid development of technology and communication not only allowed events and developments to spread around the world in a short time, but also contributed to the replacement of traditional architectural movements by innovative trends such as modern architecture, which are based on technology and industrialization. Modern architecture changed the course of architectural history both through its intellectual infrastructure and its formal expression, and prompted the first step in reconsidering the concept of "architectural identity". Architecture is a profession that benefits from technology, follows the economy, and is affected by political events; the profession and the education of architecture are therefore both shaped by emerging developments. With the globalization of technology, information reaching everywhere in a short time and being obtained more easily, architectural products have started to resemble each other [3]. After the 1950s, architecture around the world began to be practiced on a basis of technology and globalism.

An important result of this development is the increasing amount of resources and energy required by architectural and construction activities. The energy-consuming structures required by growing populations and rising comfort standards, together with the technological systems used in buildings, have exceeded sustainable limits, especially since the late 20th century. According to the World Energy Council, global energy demand will continue to increase until 2030 [18]. Today the construction sector and its components, the most important application area of architecture, hold the largest share in the world's total energy use: 50% of the energy consumed worldwide, 50% of the greenhouse gases that cause global warming, 24% of air pollution, and 50% of CFC and HCFC emissions are due to building-related activities [4]. Therefore, any action within architectural practice can be expected to have a major impact on energy and environmental issues. Consequently, serious approaches to mitigating energy and resource use have been developed within the practice of architecture, especially over the last 40 years.

Sustainability is an urgent theme for humanity. The responsibility of architecture for sustainability and energy efficiency is also emphasized in the Sustainable Development Goals Program of the United Nations, considering that many of the goals are directly or indirectly related to architecture [8]. Goal 7 (affordable and clean energy) and Goal 11 (sustainable cities and communities), in particular, fall under the responsibility of the architecture profession. To contribute to the achievement of these goals and to provide more sustainable built environments, the architecture profession needs to fulfill its responsibilities toward society. First, however, these responsibilities and the strategies for overcoming energy and resource problems must be clarified, and architects need to be made aware of the situation. Considering the strong connection between the profession and the education of architecture, each affecting and being affected by the other, this awareness is most effectively built during education. Consequently, increasing environmental problems and sustainability issues need to be incorporated into architectural education from the outset, so that education can help the profession stay up to date.
Sustainability and architectural education

Education is a powerful agent that raises awareness of emerging developments and contributes to social change [12]. The profession of architecture is a special field that differs from other professions in its prominent features at the education stage. Unlike other fields of science, architectural education requires students to work with all their senses and emotions; the ability of the mind, eye, and hand to work together should be developed [5]. According to Cook, the most enjoyable and at the same time most disturbing aspect of architecture is its open-endedness, a mixture of measurable and immeasurable features [6]. The transfer of knowledge and experience in architecture can only be carried out soundly through mutual interaction between educator and student.

The rapid development and change seen in today's world of architecture require a correspondingly rapid change in architectural education. It is therefore an important step for institutions providing architectural education to design their programs and contents so that they can be changed and updated frequently, ensuring the continuity of education and its competence in meeting professional needs. It can be assumed that environmental and energy problems, and therefore the concepts of energy efficiency and sustainability in design, will retain the importance they have today and will become even more critical in the coming years. Energy efficient design is thus not a temporary architectural trend, but a design approach that should be permanently embedded in architectural practice. The mandatory requirements of sustainability must be a core issue in the formation of professional competence [13]. Training architects who will implement this approach is only possible if they encounter and apply it during their education. Architectural education bears great responsibility for training young architects who are aware of environmental problems and capable of solving them [11]. Yet architectural education is failing to respond to the rapid changes in the world [9]. Architects should have more knowledge, competence, and awareness of energy efficiency and sustainability than anyone else. The UIA [14] states that architectural education is expected to prepare architects to bear responsibility for the health, safety, welfare, and cultural interests of the public and for the sustainability of the built environment. Accordingly, sustainability principles need to be implemented in the architectural education curriculum as fundamental elements. Concerns about the sustainability of the built environment and of society are important aspects of architectural education [15]. However, this is a complex process involving theory- and practice-based interdisciplinary knowledge [10]. Into which phases and which sections of the curriculum these principles should be inserted is a question of curriculum design: the curriculum needs to be revised and developed by adapting sustainability principles at the relevant and suitable points.
Contemporary architectural education can be divided into four main categories: (1) courses that form the theoretical and conceptual foundations of architecture; (2) courses that teach the concepts constituting its technical infrastructure; (3) courses that develop presentation and expression techniques; and (4) studio courses, in which the lessons learned in the other three categories are brought together and applied [7]. The implementation of sustainability principles may therefore happen in various courses or course groups, but as the heart and core of architectural education, the design studio is the most suitable place to create connections between design issues and sustainability.

The importance of the design studio

A common aspect of architectural education across institutions is its advocacy of learning by practicing, and studios are the places where this practice happens most effectively. The design studio is the environment where students learn various ways of designing and nourish their creativity [16], and where they learn different aspects of design education such as visualization and architectural thinking [17]. While theoretical, technical, or practical information is given to students in all courses, the studio is where all this information is brought together and the acquired skill is transformed into a product. Studio courses carry the greatest weight in the curriculum of every architectural education institution. Continuing regularly from the first semester of architectural education to the last, studio courses are the most important factor shaping the student's view and style of architecture throughout his or her education.

In this context, the content of architectural design or studio courses should be examined and prepared very carefully, both as single courses and as a series. While planning the curriculum as a whole, the general structure of the architectural project series should be established, and the scope and subjects of the successive studios should be structured within a certain context. It is important that the topics, themes, and systems in question are not repetitive but complement each other. The individual contents of the project courses should also be created in accordance with the general framework, including the information and practices the student will need at his or her level. Since the project course environment is one where the lecturer and the students are in direct contact, it is also important that these courses are designed in a way that allows personalization.

In terms of the energy efficient design approach, studios stand out as the most suitable areas of application. It should be among the objectives of design courses that many different topics, from the basic principles of energy efficient design to the latest technological applications and technical details, are handled in the studios, and that the student integrates various applications providing energy efficiency into the design. Which energy efficient application will be used in which periods, in what kinds of projects, and how it will be adapted to the design are also important issues to be decided during the design of the studio sequence and the preparation of the curriculum.
Design studio courses are the most suitable parts of architectural education for the adaptation of sustainability principles. However, the studio courses need to be considered as a complete sequence from the first to the last year of education, instead of producing independent solutions for each studio separately. The coordination of the studios is important to prevent repetitive, irrelevant, or incomplete attempts.

A conceptual model proposal for the design studio sequence

Adapting sustainability and energy efficiency principles to architectural education is an important issue for the future of the architecture profession, and the implementation needs to be executed firmly. Design studios are appropriate courses for the first steps of a revision of this kind because they are flexible and open to customization. Each design studio is unique and has its own character and requirements; within the frame of the curriculum, however, all design studios must be connected with each other within a context. This study treats the design studio sequence as a continuous structure and proposes a holistic conceptual model for the whole sequence; the definitions for each design studio are therefore open to interpretation.

Details of the proposed model

The following sections present the specifications for each design studio from the first to the last year of education.

First semester design studio

The design studio in the first semester is mostly considered an introduction to architectural practice. It is the students' first design experience, so the structure of the studio needs to be kept simple but effective. The architectural program of the design problem shall be very simple, consisting of a single space or volume with only a natural environment in the surroundings. The size of the construction area might be up to 50 square meters, and the emphasized sustainability elements shall be orientation and relationships with the land and topography.

Second semester design studio

In the second semester, students are still in their first year and their introduction to architecture continues. In the design studio, the basic definitions of design shall be emphasized. The architectural program shall be simple, with a single function, and the context shall be a natural or rural environment with low density. The structure can be between 50 and 200 square meters, and the sustainability applications can focus on the relations between indoor and outdoor spaces and on passive systems such as natural ventilation and daylight use.
Third semester design studio

In the third semester, the design studio becomes relatively more complicated. The architectural program is still low in complexity, with a basic function such as residential or commercial use. The environmental context shall be an urban area with low density, and the area of construction can be between 200 and 500 square meters. The sustainability applications emphasized in this studio are basic insulation and indoor environmental quality issues.

Fourth semester design studio

The architectural program gets more complicated in the fourth semester, combining two interconnected functions such as working and residing. The building size shall be between 500 and 1000 square meters, and the surrounding area shall be an urban space with average density. The sustainability applications proposed for this studio are artificial lighting, artificial ventilation, and recycled materials.

Fifth semester design studio

The fifth semester design studio deals with a more complicated design problem, such as a local public building in a high-density urban area. The total construction area can be between 1000 and 2000 square meters, and the related energy efficiency concerns focus on green roofs and rainwater management.

Sixth semester design studio

In the sixth semester, the architectural program becomes complex through a mixed-use building, most likely consisting of multiple floors. The environment around the building may be a special urban area, probably one with historic value. The total construction area is 2000 to 3000 square meters, and the sustainability applications focus on acoustic solutions, wastewater management, and solid waste management.

Seventh semester design studio

The seventh semester design studio deals with a complicated design problem with a unique function that requires specialization, such as education or healthcare. The context is urban with high density, and the total construction area is between 3000 and 6000 square meters. The related energy efficiency applications focus on alternative and sustainable energy production and on energy efficient façade design.

Eighth semester design studio

The eighth semester is the last of the proposed architectural curriculum, and the design studio shall reflect the advanced level the students have reached by the end of their education. The architectural program represents a public building with multiple functions. The context consists of a high-density urban environment with special features, and the total required construction area is between 6000 and 10000 square meters. The sustainability applications in this design studio are smart building systems, life-cycle cost analysis and optimization, and sustainable construction methods.

The proposed design studio sequence is continuous, and each studio builds upon and develops the one before it (see Table 1). The architectural program starts with a single volume and becomes more complicated in the subsequent studios, the environmental context becomes denser and more complex, and the constructed area grows gradually each semester. Accordingly, applications of sustainability and energy efficiency also start from simple and basic ones in the first semesters and become more technical, complicated, and technology-based in the advanced semesters. Consequently, the proposed design studio sequence covers a wide spectrum of sustainability and energy efficiency approaches.
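Table 1. The proposed design studio sequence, compiled from the studio descriptions above.

Semester | Architectural program | Context | Construction area (m2) | Sustainability focus
1 | Single space or volume | Natural environment | up to 50 | Orientation; relationship with land and topography
2 | Single function | Natural/rural, low density | 50-200 | Indoor-outdoor relations; passive systems (natural ventilation, daylight)
3 | Basic function (residential or commercial) | Urban, low density | 200-500 | Insulation; indoor environmental quality
4 | Two interconnected functions | Urban, average density | 500-1000 | Artificial lighting; artificial ventilation; recycled materials
5 | Local public building | Urban, high density | 1000-2000 | Green roofs; rainwater management
6 | Mixed-use, multiple floors | Special urban area (historic value) | 2000-3000 | Acoustic solutions; wastewater and solid waste management
7 | Specialized function (education, healthcare) | Urban, high density | 3000-6000 | Sustainable energy production; energy efficient façade design
8 | Public building with multiple functions | Urban, high density, special features | 6000-10000 | Smart building systems; life-cycle cost analysis and optimization; sustainable construction methods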
However, it must not be forgotten that the design studio courses do not stand alone in the curriculum; they must be supported with theoretical and technical courses that answer the questions raised in the corresponding design studios.

Conclusion

Architectural education is a multidimensional and multicomponent process. Energy efficient design has recently started to be included in educational programs as an important component, and it is predicted that this situation will not change in the future. In this context, it is an undeniable fact that architectural education should be redesigned to include an energy efficient design approach. To ensure the continuity of education, it is important that this redesign is carried out not as a single-step action, but as a planned process spanning a certain period of time. In addition, taking into account factors that depend on place and time will help this process work soundly. Existing architectural, educational, and social conditions have caused the process of adapting energy efficient design principles to architectural education to progress slowly and without determination. However, it is possible to accelerate this process and make it sound and holistic through appropriate measures and practices.

Using the design studio sequence as the focal point for implementing sustainability principles in architectural education is an effective approach. However, this operation requires expertise and strong collaboration among curriculum designers. Design studio educators, students, and all other stakeholders of the design studio need to be part of this process, in order to eventually arrive at a structure that responds both to the needs of students and educators and to the requirements of the emerging professional environment.

The proposed model is conceptual and creates a general frame for the design studio sequence. For more concrete implementations, local factors affecting the architecture and education of the region or country must be considered, and revisions and improvements for better adaptation may be required. This study focuses purely on the design studio sequence; revisions in other technical and theoretical courses are not considered. Although this study provides substantial information about the possibilities for improving the architectural education curriculum, for better and more holistic results all courses must be considered as a whole. The interaction between design studios and other courses would ensure a more effective result in terms of the desired objectives regarding the implementation of sustainability principles in architectural education.

This paper and similar studies aiming to connect architectural education and sustainability are important and beneficial for framing the problem. However, to be more effective and sufficient, such studies need to grow in number and become more grounded through experience-based case studies, together with other empirical and statistical investigations of the same or similar research subjects.
2021-12-12T17:02:23.592Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "06f2b8e55d0617eab46d3cc275377d6d8a477d1e", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/105/e3sconf_gesd2021_01070.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b963c7e40441713987fc5180e898f8ccab9409cd", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [] }